Planet Russell


Planet Debian - Bits from Debian: DebConf20 online closes

DebConf20 group photo

On Saturday 29 August 2020, the annual Debian Developers and Contributors Conference came to a close.

DebConf20 was held online for the first time, due to the coronavirus disease (COVID-19) pandemic.

All of the sessions were streamed, with a variety of ways of participating: via IRC messaging, online collaborative text documents, and video conferencing meeting rooms.

With more than 850 attendees from 80 different countries and a total of over 100 event talks, discussion sessions, Birds of a Feather (BoF) gatherings and other activities, DebConf20 was a great success.

When it became clear that DebConf20 was going to be an online-only event, the DebConf video team spent much time over the following months adapting, improving, and in some cases writing from scratch, the technology that would be required to make an online DebConf possible. After lessons learned from the MiniDebConfOnline in late May, some adjustments were made, and the team eventually settled on a setup involving Jitsi, OBS, Voctomix, SReview, nginx, Etherpad, and a newly written web-based frontend for Voctomix as the various elements of the stack.

All components of the video infrastructure are free software, and the whole setup is configured through the video team's public Ansible repository.

The DebConf20 schedule included two tracks in languages other than English: the Spanish-language MiniConf, with eight talks over two days, and the Malayalam-language MiniConf, with nine talks over three days. Ad-hoc activities, introduced by attendees over the course of the entire conference, were also possible, and were streamed and recorded. There were also several team gatherings to sprint on certain Debian development areas.

Between talks, the video stream showed the usual sponsor slides on loop, as well as some additional clips, including photos from previous DebConfs, fun facts about Debian, and short shout-out videos sent by attendees to communicate with their Debian friends.

For those who were not able to participate, most of the talks and sessions are already available through the Debian meetings archive website, and the remaining ones will appear there in the coming days.

The DebConf20 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf21 is planned to be held in Haifa, Israel, in August or September.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, the Welcome team and the Community team) were available to help participants get the best experience at the conference and find solutions to any issues that arose. See the Code of Conduct page on the DebConf20 website for more details.

Debian thanks the numerous sponsors whose commitment made DebConf20 possible, particularly our Platinum Sponsors: Lenovo, Infomaniak, Google and Amazon Web Services (AWS).

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from

About Lenovo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

About Infomaniak

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

About Amazon Web Services (AWS)

Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies.

Contact Information

For further information, please visit the DebConf20 web page at or send mail to


Planet Debian - Andrew Cater: Just coming to the end of Debconf 20 2020 - and a preview.

 One more talk from Jon "maddog" Hall and then the closing wrap up. This has been a blast: I've enjoyed it a lot and it's made me more enthusiastic than I have been for a long time.

 So once more with thanks to the video team
It's almost the end of this year's DebConf dream
As we all break up, there's a favour to ask
Stay safe - wear a smile - and where needed, a mask

We'll not take you to task; it's a favour we ask
Stay safe - wear a smile - and where needed, a mask

Haifa, Pristina or Kochi - we'll see how that lands
There's not much left to tidy up - wash your own hands
So now if you'll join us in virtual beer
We'll bring this to a close - and we'll meet up next year

So now if you'll join us - let us all raise a cheer
To Debconf21 - and we'll see you next year

Planet Debian - Russ Allbery: PGP::Sign 1.02

This is another test-only release of my module for manipulating PGP signatures in Perl. I'm trying to get the CPAN testing failures down to a dull roar. This iteration fixes some testing issues with systems that have only GnuPG v1 and tries to handle systems whose gpg is GnuPG v2 but is older than 2.1.12 and therefore doesn't have the --pinentry-mode flag that GnuPG uses to suppress password prompting.

I handled the latter by skipping the tests if the gpg on the user's PATH was too old. I'm not certain this is the best approach, although it makes the CPAN automated testing more useful for me, since the module will not work without special configuration on those systems. On the other hand, if someone is installing it to point to some other GnuPG binary on the system at runtime, failing the installation because their system gpg is too old seems wrong, and the test failure doesn't indicate a bug in the module.
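
The version gate described here can be sketched (in Python rather than Perl, purely as an illustration; the function names are hypothetical) by parsing `gpg --version` output and comparing against 2.1.12, the release that introduced --pinentry-mode:

```python
import re

# GnuPG 2.1.12 introduced --pinentry-mode, which is used to suppress
# password prompting; anything older (including all of GnuPG v1)
# will reject the flag.
MIN_PINENTRY_VERSION = (2, 1, 12)

def gpg_version(version_output):
    """Extract (major, minor, patch) from `gpg --version` output, or None."""
    match = re.search(r"gpg \(GnuPG\) (\d+)\.(\d+)\.(\d+)", version_output)
    if match is None:
        return None
    return tuple(int(part) for part in match.groups())

def supports_pinentry_mode(version_output):
    """True if this gpg is new enough for --pinentry-mode."""
    version = gpg_version(version_output)
    return version is not None and version >= MIN_PINENTRY_VERSION

print(supports_pinentry_mode("gpg (GnuPG) 2.2.19\nlibgcrypt 1.8.5"))  # True
print(supports_pinentry_mode("gpg (GnuPG) 1.4.23"))                   # False
```

In the real module one would feed this the captured output of whichever gpg binary is on the PATH; skipping the password-prompt tests then becomes a one-line check.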

Essentially, I'm missing richer test metadata in the Perl ecosystem. I want to be able to declare a dependency on a non-Perl system binary, but of course Perl has no mechanism to do that.

I thought about trying to deal with the Windows failures due to missing IPC::Run features (redirecting high-numbered file descriptors) on the Windows platform in a similar way, but decided in that case I do want the tests to fail because PGP::Sign will never work on that platform regardless of the runtime configuration. Here too I spent some time searching for some way to indicate with Module::Build that the module doesn't work on Windows, and came up empty. This seems to be a gap in Perl's module distribution ecosystem.

In any case, hopefully this release will clean up the remaining test failures on Linux and BSD systems, and I can move on to work on the Big Eight signing key, which was the motivating application for these releases.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

Cryptogram - Friday Squid Blogging: How Squid Survive Freezing, Oxygen-Deprived Waters

Lots of interesting genetic details.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.


Planet Debian - Jelmer Vernooij: Debian Janitor: The Slow Trickle from Git Repositories to the Debian Archive

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Last week’s blog post documented how there are now over 30,000 lintian issues that have been fixed in git packaging repositories by the Janitor.

It's important to note that any fixes from the Janitor that make it into a Git packaging repository will also need to be uploaded to the Debian archive. This currently requires that a Debian packager clones the repository and builds and uploads the package.

Until a change makes it into the archive, users of Debian will unfortunately not see the benefits of improvements made by the Janitor.

82% of the 30,000 changes from the Janitor that have made it into a Git repository have not yet been uploaded, although changes do slowly trickle in as maintainers make other changes to packages and upload them along with the lintian fixes from the Janitor. This is not just true for changes from the Janitor, but for all sorts of other smaller improvements as well.

However, the process of cloning and building git repositories and uploading the resulting packages to the Debian archive is fairly time-consuming – and it’s probably not worth the time of developers to follow up every change from the Janitor with a labour-intensive upload to the archive.

It would be great if it was easier to trigger uploads from git commits. Projects like tag2upload will hopefully help, and make it more likely that changes end up in the Debian archive.

The majority of packages do get at least one new source version upload per release, so most changes will eventually make it into the archive.

For more information about the Janitor's lintian-fixes efforts, see the landing page

Sam Varghese - Managing a relationship is hard work

For many years, Australia has been trading with China, apparently in the belief that one can do business with a country for yonks without expecting the development of some sense of obligation. The attitude has been that China needs Australian resources and the relationship needs to go no further than the transfer of sand dug out of Australia and sent to China.

Those in Beijing, obviously, haven’t seen the exchange this way. There has been an expectation that there would be some obligation for the relationship to go further than just the impersonal exchange of goods for money. Australia, in true colonial fashion, has expected China to know its place and keep its distance.

This is similar to the attitude the Americans took when they pushed for China’s admission to the World Trade Organisation: all they wanted was a means of getting rid of their manufacturing so their industries could grow richer and an understanding that China would agree to go along with the American diktat to change as needed to keep the US on top of the trading world.

But then you cannot invite a man into your house for a dinner party and insist that he eat only bread. Once inside, he is free to choose what he wants to consume. It appears that the Americans do not understand this simple rule.

Both Australia and the US have forgotten they are dealing with the oldest civilisation in the world. A culture that plays the long waiting game. The Americans read the situation completely wrong for the last 70 years, assuming initially that the Kuomintang would come out on top and that the Communists would be vanquished. In the interim, the Americans obtained most of the money used for the early development of their country by selling opium to the Chinese.

China has not forgotten that humiliation.

There was never a thought given to the very likely event that China would one day want to assert itself and ask to be treated as an equal. Which is what is happening now. Both Australia and the US are feigning surprise and acting as though they are completely innocent in this exercise.

Fast forward to 2020 when the Americans and the Australians are both on the warpath, asserting that China is acting aggressively and trying to intimidate Australia while refusing to bow to American demands that it behave as it is expected to. There are complaints about Chinese demands for technology transfers, completely ignoring the fact that a developing country can ask for such transfers under WTO rules.

There are allegations of IP theft by the Americans, completely forgetting that they stole IP from Britain in the early days of the colonies; the name Samuel Slater should ring a bell in this context. Many educated Americans have themselves written about Slater.

Racism is one trait that defines the Australian approach to China. The Asian nation has been expected to confine itself to trade and never ask for more. And Australia, in condescending fashion, has lauded its approach, never understanding that it is seen as an American lapdog and no more. China has been waiting for the day when it can level scores.

It is difficult to comprehend why Australia genuflects before the US. There has been an attitude of veneration going back to the time of Harold Holt who is well known for his “All the way with LBJ” line, referring to the fact that Australian soldiers would be sent to Vietnam to serve as cannon fodder for the Americans and would, in short, do anything as long as the US decided so. Exactly what fight Australia had with Vietnam is not clear.

At that stage, there was no seminal action by the US that had put the fear of God into Australia; this came later, in 1975, when the CIA manipulated Australian politics and influenced the sacking of prime minister Gough Whitlam by the governor-general, Sir John Kerr. There is still resistance from Australian officialdom and its toadies to this version of events, but the evidence is incontrovertible; Australian journalist Guy Rundle has written two wonderful accounts of how the toppling took place.

Whitlam’s sins? Well, he had cracked down on the Australian Security Intelligence Organisation, an agency that spied on Australians and conveyed information to the CIA, when he discovered that it was keeping tabs on politicians. His attorney-general, Lionel Murphy, even ordered the Australian Federal Police to raid the ASIO, a major affront to the Americans who did not like their client being treated this way.

Whitlam also hinted that he would not renew a treaty for the Americans to continue using a base at Pine Gap as a surveillance centre. This centre was offered to the US, with the rent being one peppercorn for 99 years.

Of course, this was pure insolence coming from a country which the Americans — as they have with many other nations — treated as a vassal state and one only existing to do their bidding. So Whitlam was thrown out.

On China, too, Australia has served the role of American lapdog. In recent days, the Australian Prime Minister Scott Morrison has made statements attacking China soon after he has been in touch with the American leadership. In other words, the Americans are using Australia to provoke China. It’s shameful to be used in this manner, but then once a bootlicker, always a bootlicker.

Australia’s subservience to the US is so great that it even co-opted an American official, former US Secretary of Homeland Security Kirstjen Nielsen, to play a role in developing a cyber security strategy. There are a large number of better qualified people in the country who could do a much better job than Nielsen, who is a politician and not a technically qualified individual. But the slave mentality has always been there and will remain.

Krebs on Security - Sendgrid Under Siege from Hacked Accounts

Email service provider Sendgrid is grappling with an unusually large number of customer accounts whose passwords have been cracked, sold to spammers, and abused for sending phishing and email malware attacks. Sendgrid’s parent company Twilio says it is working on a plan to require multi-factor authentication for all of its customers, but that solution may not come fast enough for organizations having trouble dealing with the fallout in the meantime.

Image: Wikipedia

Many companies use Sendgrid to communicate with their customers via email, or else pay marketing firms to do that on their behalf using Sendgrid’s systems. Sendgrid takes steps to validate that new customers are legitimate businesses, and that emails sent through its platform carry the proper digital signatures that other companies can use to validate that the messages have been authorized by its customers.
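
Those digital signatures are DKIM headers; receiving systems verify them against the d= domain in the signature, which is how mail sent through Sendgrid's infrastructure inherits its reputation. A minimal sketch, using Python's stdlib email parser on a fabricated message (all addresses, the selector and the tag values here are invented):

```python
from email import message_from_string

# A fabricated message for illustration only.
raw = """From: promo@customer.example
To: user@recipient.example
DKIM-Signature: v=1; a=rsa-sha256; d=sendgrid.net; s=smtpapi;
 bh=FAKEBODYHASH=; b=FAKESIGNATURE=
Subject: Hello

(message body)
"""

msg = message_from_string(raw)

# DKIM-Signature is a folded tag=value list; the d= tag names the
# signing domain that a receiver verifies the signature against.
tags = dict(
    part.strip().split("=", 1)
    for part in msg["DKIM-Signature"].replace("\n", "").split(";")
    if "=" in part
)
print(tags["d"])  # sendgrid.net
```

A real verifier would fetch the selector's public key from DNS and check the b= signature; this only shows where the signing domain lives in the header.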

But this also means when a Sendgrid customer account gets hacked and used to send malware or phishing scams, the threat is particularly acute because a large number of organizations allow email from Sendgrid’s systems to sail through their spam-filtering systems.

To make matters worse, links included in emails sent through Sendgrid are obfuscated (mainly for tracking deliverability and other metrics), so it is not immediately clear to recipients where on the Internet they will be taken when they click.
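
The obfuscation works by wrapping the destination inside a redirect URL on the provider's own domain. A toy sketch (the tracking host and parameter names here are hypothetical, not Sendgrid's actual scheme):

```python
from urllib.parse import parse_qs, urlencode, urlsplit

# Hypothetical tracking endpoint for illustration.
TRACKING_HOST = "https://click.example-esp.net/track"

def wrap_link(destination, message_id):
    """Rewrite a destination URL into an opaque tracking link."""
    return TRACKING_HOST + "?" + urlencode({"mid": message_id, "url": destination})

def unwrap_link(tracking_url):
    """Recover the destination, as the tracker's redirect would."""
    return parse_qs(urlsplit(tracking_url).query)["url"][0]

wrapped = wrap_link("https://shop.customer.example/sale", "msg-001")
print(wrapped)               # only the tracker's domain is visible
print(unwrap_link(wrapped))  # the real destination
```

The recipient's mail client shows only the tracker's hostname; the real destination is revealed only when the tracking server issues its redirect.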

Dealing with compromised customer accounts is a constant challenge for any organization doing business online today, and certainly Sendgrid is not the only email marketing platform dealing with this problem. But according to multiple emails from readers, recent threads on several anti-spam discussion lists, and interviews with people in the anti-spam community, over the past few months there has been a marked increase in malicious, phishous and outright spammy email being blasted out via Sendgrid’s servers.

Rob McEwen is CEO of, an anti-spam firm whose data on junk email trends are used to improve the spam-blocking technologies deployed by several Fortune 100 companies. McEwen said no other email service provider has come close to generating the volume of spam that’s been emanating from Sendgrid accounts lately.

“As far as the nasty criminal phishes and viruses, I think there’s not even a close second in terms of how bad it’s been with Sendgrid over the past few months,” he said.

Trying to filter out bad emails coming from a major email provider that so many legitimate companies rely upon to reach their customers can be a dicey business. If you filter the emails too aggressively you end up with an unacceptable number of “false positives,” i.e., benign or even desirable emails that get flagged as spam and sent to the junk folder or blocked altogether.

But McEwen said the incidence of malicious spam coming from Sendgrid has gotten so bad that he recently launched a new anti-spam block list specifically to filter out email from Sendgrid accounts that have been known to be blasting large volumes of junk or malicious email.

“Before I implemented this in my own filtering system a week ago, I was getting three to four phone calls or stern emails a week from angry customers wondering why these malicious emails were getting through to their inboxes,” McEwen said. “And I just am not seeing anything this egregious in terms of viruses and spams from the other email service providers.”

In an interview with KrebsOnSecurity, Sendgrid parent firm Twilio acknowledged the company had recently seen an increase in compromised customer accounts being abused for spam. While Sendgrid does allow customers to use multi-factor authentication (also known as two-factor authentication or 2FA), this protection is not mandatory.

But Twilio Chief Security Officer Steve Pugh said the company is working on changes that would require customers to use some form of 2FA in addition to usernames and passwords.

“Twilio believes that requiring 2FA for customer accounts is the right thing to do, and we’re working towards that end,” Pugh said. “2FA has proven to be a powerful tool in securing communications channels. This is part of the reason we acquired Authy and created a line of account security products and services. Twilio, like other platforms, is forming a plan on how to better secure our customers’ accounts through native technologies such as Authy and additional account level controls to mitigate known attack vectors.”

Requiring customers to use some form of 2FA would go a long way toward neutralizing the underground market for compromised Sendgrid accounts, which are sold by a variety of cybercriminals who specialize in gaining access to accounts by targeting users who re-use the same passwords across multiple websites.
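
The one-time codes behind app-based 2FA (including Authy's TOTP mode) come from a small, well-specified HMAC construction. A stdlib-only sketch of RFC 4226 HOTP and its RFC 6238 time-based variant:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32, at=None, step=30):
    """RFC 6238 time-based variant, as used by authenticator apps."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    return hotp(key, counter)

# RFC 4226 test vector: ASCII key "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the server and the app share only this secret and the current time, a stolen password alone is no longer enough to log in, which is what would cut off the account-cracking trade described above.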

One such individual, who goes by the handle “Kromatix” on several forums, is currently selling access to more than 400 compromised Sendgrid user accounts. The pricing attached to each account is based on volume of email it can send in a given month. Accounts that can send up to 40,000 emails a month go for $15, whereas those capable of blasting 10 million missives a month sell for $400.

“I have a large supply of cracked Sendgrid accounts that can be used to generate an API key which you can then plug into your mailer of choice and send massive amounts of emails with ensured delivery,” Kromatix wrote in an Aug. 23 sales thread. “Sendgrid servers maintain a very good reputation with [email service providers] so your content becomes much more likely to get into the inbox so long as your setup is correct.”

Neil Schwartzman, executive director of the anti-spam group CAUCE, said Sendgrid’s 2FA plans are long overdue, noting that the company bought Authy back in 2015.

"Single-factor authentication for a company like this in 2020 is just ludicrous given the potential damage and malicious content we’re seeing," Schwartzman said.

“I understand that it’s a task to invoke 2FA, and given the volume of customers Sendgrid has that’s something to consider because there’s going to be a lot of customer overhead involved,” he continued. “But it’s not like your bank, social media account, email and plenty of other places online don’t already insist on it.”

Schwartzman said if Twilio doesn’t act quickly enough to fix the problem on its end, the major email providers of the world (think Google, Microsoft and Apple) — and their various machine-learning anti-spam algorithms — may do it for them.

“There is a tipping point after which receiving firms start to lose patience and start to more aggressively filter this stuff,” he said. “If seeing a Sendgrid email according to machine learning becomes a sign of abuse, trust me the machines will make the decisions even if the people don’t.”

Planet Debian - Dirk Eddelbuettel: anytime 0.3.9: More Minor Maintenance

A new minor release of the anytime package arrived on CRAN yesterday. This is the twentieth release, but sadly we seem to be spinning our wheels just accommodating CRAN (which the last two or three releases focused on). Code and functionality remain mature and stable, of course.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string as well as accommodating different formats in one input vector. See the anytime page, or the GitHub repo, for a few examples.
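
anytime's no-format-string behaviour can be roughly approximated in Python (an illustrative sketch, not the package's actual C++ heuristics; the format list here is far shorter than anytime's):

```python
from datetime import datetime

# A short candidate list tried in order; the real anytime package
# ships a much longer, carefully ordered list implemented in C++.
FORMATS = [
    "%Y-%m-%d %H:%M:%S",
    "%Y-%m-%d",
    "%d %b %Y",
    "%m/%d/%Y",
    "%Y%m%d",
]

def anydate(values):
    """Parse a heterogeneous vector of dates without a format string."""
    parsed = []
    for value in values:
        for fmt in FORMATS:
            try:
                parsed.append(datetime.strptime(str(value).strip(), fmt))
                break
            except ValueError:
                continue
        else:
            parsed.append(None)  # unparseable, akin to anytime's NA
    return parsed

print(anydate(["2020-08-26", "26 Aug 2020", 20200826]))
```

The essential trick is the same: try formats per element, so one input vector can mix styles.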

This release once again has to play catchup with CRAN as r-devel now changes how tzone propagates when we do as.POSIXct(as.POSIXlt(Sys.time())) — which is now no longer “equal” to as.POSIXct(Sys.time()) even for a fixed, stored Sys.time() call result. Probably for the better, but an issue for now, so we … effectively just reduced test coverage. Call it “progress”.

The full list of changes follows.

Changes in anytime version 0.3.9 (2020-08-26)

  • Skip one test file that is impossible to run across different CRAN setups, and life is definitely too short for these games.

  • Change remaining http:// to https:// because, well, you know.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker off the GitHub repo can be used for questions and comments.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram - US Postal Service Files Blockchain Voting Patent

The US Postal Service has filed a patent on a blockchain voting method:

Abstract: A voting system can use the security of blockchain and the mail to provide a reliable voting system. A registered voter receives a computer readable code in the mail and confirms identity and confirms correct ballot information in an election. The system separates voter identification and votes to ensure vote anonymity, and stores votes on a distributed ledger in a blockchain.

I wasn't going to bother blogging this, but I've received enough emails about it that I should comment.

As is pretty much always the case, blockchain adds nothing. The security of this system has nothing to do with blockchain, and would be better off without it. For voting in particular, blockchain adds to the insecurity. Matt Blaze is most succinct on that point:

Why is blockchain voting a dumb idea?

Glad you asked.

For starters:

  • It doesn't solve any problems civil elections actually have.
  • It's basically incompatible with "software independence", considered an essential property.
  • It can make ballot secrecy difficult or impossible.
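
The point that blockchain adds nothing is easy to make concrete: the tamper-evidence a ledger provides is just a hash chain, a few lines of code, while everything elections actually need (eligibility, secrecy, coercion resistance) lies elsewhere. A toy sketch:

```python
import hashlib
import json

def chain_append(chain, record):
    """Append a record, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def chain_valid(chain):
    """Detect tampering with past entries (and nothing more than that)."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
chain_append(ledger, {"ballot": "encrypted-blob-1"})
chain_append(ledger, {"ballot": "encrypted-blob-2"})
print(chain_valid(ledger))  # True
ledger[0]["record"] = {"ballot": "altered"}
print(chain_valid(ledger))  # False
```

Note that whoever operates the ledger can simply recompute every hash after altering a record; tampering is evident only to someone holding an earlier copy of the chain, and none of this touches the harder problems listed above.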

Both Ben Adida and Matthew Green have written longer pieces on blockchain and voting.

News articles.

Planet Debian - Bits from Debian: DebConf20 welcomes its sponsors!

DebConf20 logo

DebConf20 is taking place online, from 23 August to 29 August 2020. It is the 21st Debian conference, and organizers and participants are working hard together at creating interesting and fruitful events.

We would like to warmly welcome the 17 sponsors of DebConf20, and introduce them to you.

We have four Platinum sponsors.

Our first Platinum sponsor is Lenovo. As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

Our next Platinum sponsor is Infomaniak. Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

Google is our third Platinum sponsor. Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner.

Amazon Web Services (AWS) is our fourth Platinum sponsor. Amazon Web Services is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies.

Our Gold sponsors are Deepin, the Matanel Foundation, Collabora, and HRT.

Deepin is a Chinese commercial company focusing on the development and service of Linux-based operating systems. They also lead research and development of the Deepin Debian derivative.

The Matanel Foundation operates in Israel, as its first concern is to preserve the cohesion of a society and a nation plagued by divisions. The Matanel Foundation also works in Europe, in Africa and in South America.

Collabora is a global consultancy delivering Open Source software solutions to the commercial world. In addition to offering solutions to clients, Collabora's engineers and developers actively contribute to many Open Source projects.

Hudson-Trading is a company led by mathematicians, computer scientists, statisticians, physicists and engineers. They research and develop automated trading algorithms using advanced mathematical techniques.

Our Silver sponsors are:

  • Linux Professional Institute, the global certification standard and career support organization for open source professionals;
  • Civil Infrastructure Platform, a collaborative project hosted by the Linux Foundation, establishing an open source “base layer” of industrial grade software;
  • Ubuntu, the Operating System delivered by Canonical;
  • Roche, a major international pharmaceutical provider and research company dedicated to personalized healthcare.

Bronze sponsors: IBM, MySQL, Univention.

And finally, our Supporter level sponsors, ISG.EE and Pengwin.

Thanks to all our sponsors for their support! Their contributions make it possible for a large number of Debian contributors from all over the globe to work together, help and learn from each other in DebConf20.

Participating in DebConf20 online

The 21st Debian Conference is being held online, due to COVID-19, from August 23 to 29, 2020. Talks, discussions, panels and other activities run from 10:00 to 01:00 UTC. Visit the DebConf20 website at to learn about the complete schedule, watch the live streaming and join the different communication channels for participating in the conference.

Planet Debian - Raphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2020

A Debian LTS logo Like each month, albeit a bit later due to vacation, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 249.25 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 18.0h (out of 14h assigned and 6h from June), and gave back 2h to the pool.
  • Adrian Bunk did 16.0h (out of 25.25h assigned), thus carrying over 9.25h to August.
  • Ben Hutchings did 5h (out of 20h assigned), and gave back the remaining 15h.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 60h (out of 5.75h assigned and 54.25h from June).
  • Holger Levsen spent 10h (out of 10h assigned) for managing LTS and ELTS contributors.
  • Markus Koschany did 15h (out of 25.25h assigned), thus carrying over 10.25h to August.
  • Mike Gabriel did nothing (out of 8h assigned), thus is carrying over 8h for August.
  • Ola Lundqvist did 3h (out of 12h assigned and 7h from June), thus carrying over 16h to August.
  • Roberto C. Sánchez did 26.5h (out of 25.25h assigned and 1.25h from June).
  • Sylvain Beucler did 25.25h (out of 25.25h assigned).
  • Thorsten Alteholz did 25.25h (out of 25.25h assigned).
  • Utkarsh Gupta did 25.25h (out of 25.25h assigned).

Evolution of the situation

July was our first month of Stretch LTS! Given this is our fourth LTS release, we anticipated a smooth transition, and it seems everything indeed went very well. Many thanks to the members of the Debian ftpmaster, security, release and publicity teams who helped us make this happen!
Stretch LTS began on July 18th 2020, after the 13th and final Stretch point release, and is currently scheduled to end on June 30th 2022.

Last month, we asked you to participate in a survey and we got 1764 submissions, which is pretty awesome. Thank you very much for participating! Right now we are still busy crunching the results, but we already shared some early analysis during the DebConf LTS BoF this week.

The security tracker currently lists 54 packages with a known CVE and the dla-needed.txt file has 52 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Worse Than FailureError'd: Don't Leave This Page

"My Kindle showed me this for the entire time I read this book. Luckily, page 31 is really exciting!" writes Hans H.


Tim wrote, "Thanks JustPark, I'd love to verify my account! How about that button?"


"I almost managed to uninstall Viber, or did I?" writes Simon T.


Marco wrote, "All I wanted to do was to post a one-time payment on a reputable cloud provider. Now I'm just confused."


Brinio H. wrote, "Somehow I expected my muscles to feel more sore after walking over 382 light-years on one day."


"Here we have PowerBI failing to dispel the perception that 'Business Intelligence' is an oxymoron," writes Craig.



Planet DebianReproducible Builds (diffoscope): diffoscope 158 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 158. This version includes the following changes:

* Improve PGP support:
  - Support extracting of files within PGP signed data.
    (Closes: reproducible-builds/diffoscope#214)
  - pgpdump(1) can successfully parse some unrelated, non-PGP binary files,
    so check that the parsed output contains something remotely sensible
    before identifying it as a PGP file.
* Don't use Python's repr(...)-style output in "Calling external command"
  logging output.
* Correct a typo of "output" in an internal comment.

You can find out more by visiting the project homepage.


Krebs on SecurityConfessions of an ID Theft Kingpin, Part II

Yesterday’s piece told the tale of Hieu Minh Ngo, a hacker the U.S. Secret Service described as someone who caused more material financial harm to more Americans than any other convicted cybercriminal. Ngo was recently deported back to his home country after serving more than seven years in prison for running multiple identity theft services. He now says he wants to use his experience to convince other cybercriminals to use their skills for good. Here’s a look at what happened after he got busted.

Hieu Minh Ngo, 29, in a recent photo.

Part I of this series ended with Ngo in handcuffs after disembarking a flight from his native Vietnam to Guam, where he believed he was going to meet another cybercriminal who’d promised to hook him up with the mother of all consumer data caches.

Ngo had been making more than $125,000 a month reselling ill-gotten access to some of the biggest data brokers on the planet. But the Secret Service discovered his various accounts at these data brokers and had them shut down one by one. Ngo became obsessed with restarting his business and maintaining his previous income. By this time, his ID theft services had earned roughly USD $3 million.

As this was going on, Secret Service agents used an intermediary to trick Ngo into thinking he’d trodden on the turf of another cybercriminal. From Part I:

The Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” the Secret Service’s Matt O’Neill recalled.

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

In an interview with KrebsOnSecurity, Ngo said he spent about two months in a Guam jail awaiting transfer to the United States. A month passed before he was allowed a 10-minute phone call to his family to explain what he’d gotten himself into.

“This was a very tough time,” Ngo said. “They were so sad and they were crying a lot.”

First stop on his prosecution tour was New Jersey, where he ultimately pleaded guilty to hacking into MicroBilt, the first of several data brokers whose consumer databases would power different iterations of his identity theft service over the years.

Next came New Hampshire, where another guilty plea forced him to testify in three different trials against identity thieves who had used his services for years. Among them was Lance Ealy, a serial ID thief from Dayton, Ohio who used Ngo’s service to purchase more than 350 “fullz” — a term used to describe a package of everything one would need to steal someone’s identity, including their Social Security number, mother’s maiden name, birth date, address, phone number, email address, bank account information and passwords.

Ealy used Ngo’s service primarily to conduct tax refund fraud with the U.S. Internal Revenue Service (IRS), claiming huge refunds in the names of ID theft victims who first learned of the fraud when they went to file their taxes and found someone else had beat them to it.

Ngo’s cooperation with the government ultimately led to 20 arrests, with a dozen of those defendants lured into the open by O’Neill and other Secret Service agents posing as Ngo.

The Secret Service had difficulty pinning down the exact amount of financial damage inflicted by Ngo’s various ID theft services over the years, primarily because those services only kept records of what customers searched for — not which records they purchased.

But based on the records they did have, the government estimated that Ngo’s service enabled approximately $1.1 billion in new account fraud at banks and retailers throughout the United States, and roughly $64 million in tax refund fraud with the states and the IRS.

“We interviewed a number of Ngo’s customers, who were pretty open about why they were using his services,” O’Neill said. “Many of them told us the same thing: Buying identities was so much better for them than stolen payment card data, because card data could be used once or twice before it was no good to them anymore. But identities could be used over and over again for years.”

O’Neill said he still marvels at the fact that Ngo’s name is practically unknown when compared to the world’s most infamous credit card thieves, some of whom were responsible for stealing hundreds of millions of cards from big box retail merchants.

“I don’t know of anyone who has come close to causing more material harm than Ngo did to the average American,” O’Neill said. “But most people have probably never heard of him.”

Ngo said he wasn’t surprised that his services were responsible for so much financial damage. But he was utterly unprepared to hear about the human toll. Throughout the court proceedings, Ngo sat through story after dreadful story of how his work had ruined the financial lives of people harmed by his services.

“When I was running the service, I didn’t really care because I didn’t know my customers and I didn’t know much about what they were doing with it,” Ngo said. “But during my case, the federal court received like 13,000 letters from victims who complained they lost their houses, jobs, or could no longer afford to buy a home or maintain their financial life because of me. That made me feel really bad, and I realized I’d been a terrible person.”

Even as he bounced from one federal detention facility to the next, Ngo always seemed to encounter ID theft victims wherever he went, including prison guards, healthcare workers and counselors.

“When I was in jail at Beaumont, Texas, I talked to one of the correctional officers there who shared with me a story about her friend who lost her identity and then lost everything after that,” Ngo recalled. “Her whole life fell apart. I don’t know if that lady was one of my victims, but that story made me feel sick. I know now that what I was doing was just evil.”

Ngo’s former ID theft service usearching[.]info.

The Vietnamese hacker was released from prison a few months ago, and is now finishing up a mandatory three-week COVID-19 quarantine in a government-run facility near Ho Chi Minh City. In the final months of his detention, Ngo started reading everything he could get his hands on about computer and Internet security, and even authored a lengthy guide written for the average Internet user with advice about how to avoid getting hacked or becoming the victim of identity theft.

Ngo said while he would like to one day get a job working in some cybersecurity role, he’s in no hurry to do so. He’s already had at least one job offer in Vietnam, but he turned it down. He says he’s not ready to work yet, but is looking forward to spending time with his family — and specifically with his dad, who was recently diagnosed with Stage 4 cancer.

Longer term, Ngo says, he wants to mentor young people and help guide them on the right path, and away from cybercrime. He’s been brutally honest about his crimes and the destruction he’s caused. His LinkedIn profile states up front that he’s a convicted cybercriminal.

“I hope my work can help to change the minds of somebody, and if at least one person can change and turn to do good, I’m happy,” Ngo said. “It’s time for me to do something right, to give back to the world, because I know I can do something like this.”

Still, the recidivism rate among cybercriminals tends to be extremely high, and it would be easy for him to slip back into his old ways. After all, few people know as well as he does how best to exploit access to identity data.

O’Neill said he believes Ngo probably will keep his nose clean. But he added that Ngo’s service if it existed today probably would be even more successful and lucrative given the sheer number of scammers involved in using stolen identity data to defraud states and the federal government out of pandemic assistance loans and unemployment insurance benefits.

“It doesn’t appear he’s looking to get back into that life of crime,” O’Neill said. “But I firmly believe the people doing fraudulent small business loans and unemployment claims cut their teeth on his website. He was definitely the new coin of the realm.”

Ngo maintains he has zero interest in doing anything that might send him back to prison.

“Prison is a difficult place, but it gave me time to think about my life and my choices,” he said. “I am committing myself to do good and be better every day. I now know that money is just a part of life. It’s not everything and it can’t bring you true happiness. I hope those cybercriminals out there can learn from my experience. I hope they stop what they are doing and instead use their skills to help make the world better.”

CryptogramCory Doctorow on The Age of Surveillance Capitalism

Cory Doctorow has written an extended rebuttal of The Age of Surveillance Capitalism by Shoshana Zuboff. He summarized the argument on Twitter.

Shorter summary: it's not the surveillance part, it's the fact that these companies are monopolies.

I think it's both. Surveillance capitalism has some unique properties that make it particularly unethical and incompatible with a free society, and Zuboff makes them clear in her book. But the current acceptance of monopolies in our society is also extremely damaging -- which Doctorow makes clear.

Worse Than FailureCodeSOD: Win By Being Last

I’m going to open with just one line, just one line from Megan D, before we dig into the story:

public static boolean comparePasswords(char[] password1, char[] password2)

A long time ago, someone wrote a Java 1.4 application. It’s all about getting data out of data files, like CSVs and Excel and XML, and getting it into a database, where it can then be turned into plots and reports. Currently, it has two customers, but boy, there’s a lot of technology invested in it, so the pointy-hairs decided that it needed to be updated so they could sell it to new customers.

The developers played a game of “Not It!” and Megan lost. It wasn’t hard to see why no one wanted to touch this code. The UI section was implemented in code generated by an Eclipse plugin that no longer exists. There was UI code which wasn’t implemented that way, but there were no code paths that actually showed it. The project didn’t have one “do everything” class of utilities- it had many of them.

The real magic was in the data handling. All the data got converted into strings before going into the database, and data got pulled back out as lists of strings: one string per row, prepended with the number of columns in that row. The string would get split up and converted back into the actual real datatypes.

Getting back to our sample line above, Megan adds:

No restrictions on any data in the database, or even input cleaning - little Bobby Tables would have a field day. There are so many issues that the fact that passwords are plaintext barely even registers as a problem.

A common convention used in the database layer is “loop and compare”. Want to check if a username exists in the database? SELECT username FROM users WHERE username = 'someuser', loop across the results, and if the username in the result set matches 'someuser', set a flag to true (set it to false otherwise). Return the flag. And if you're wondering why they need to look at each row instead of just seeing a non-zero number of matches, so am I.
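As described, "loop and compare" might look something like the following sketch. This is a hypothetical reconstruction, not the application's actual code: the class and method names are invented, and a plain list of usernames stands in for the JDBC result set.

```java
import java.util.List;

public class LoopAndCompare {
    // The "loop and compare" pattern: the WHERE clause has already
    // filtered the rows, yet each row is redundantly re-checked on the
    // client side, flipping a flag instead of just counting matches.
    static boolean userExists(List<String> resultRows, String someUser) {
        boolean found = false;
        for (String username : resultRows) {
            if (username.equals(someUser)) {
                found = true;
            } else {
                found = false; // hope a non-matching row never comes last...
            }
        }
        return found;
    }
}
```

Note that with the "set it to false otherwise" branch included, this sketch already carries the same latent bug as the password comparison below: only the last row examined decides the result.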

Usernames are not unique, but the username/group combination should be.

Similarly, if you’re logging in, it uses a “loop and compare”. Find all the rows for users with that username. Then, find all the groups for that username. Loop across all the groups and check if any of them match the user trying to log in. Then loop across all the stored plaintext passwords and see if they match.

But that raises the question: how do you tell if two strings match? Just use an equality comparison? Or a .equals? Of course not.

We use “loop and compare” on sequences of rows, so we should also use “loop and compare” on sequences of characters. What could be wrong with that?

  /**
   * Compares two given char arrays for equality.
   * @param password1
   *          The first password to compare.
   * @param password2
   *          The second password to compare.
   * @return True if the passwords are equal false otherwise.
   */
  public static boolean comparePasswords(char[] password1, char[] password2)
  {
    // assume false until prove otherwise
    boolean aSameFlag = false;
    if (password1 != null && password2 != null)
    {
      if (password1.length == password2.length)
      {
        for (int aIndex = 0; aIndex < password1.length; aIndex++)
        {
          aSameFlag = password1[aIndex] == password2[aIndex];
        }
      }
    }
    return aSameFlag;
  }

If the passwords are both non-null, if they’re both the same length, compare them one character at a time. For each character, set the aSameFlag to true if they match, false if they don’t.

Return the aSameFlag.

The end result of this is that only the last letter matters, so from the perspective of this code, there’s no difference between the word “ship” and a more accurate way to describe this code.
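For contrast, here is what a correct version could look like using nothing beyond the Java standard library. This is our sketch, not the application's eventual fix; the class name is invented.

```java
import java.security.MessageDigest;
import java.util.Arrays;

public class PasswordCompare {
    // The straightforward fix: Arrays.equals compares every element
    // (and the lengths), instead of letting the last character win.
    public static boolean comparePasswords(char[] password1, char[] password2) {
        return password1 != null && password2 != null
                && Arrays.equals(password1, password2);
    }

    // For secrets, a timing-safe variant: MessageDigest.isEqual does not
    // short-circuit on the first mismatch, so the comparison time leaks
    // nothing about where the passwords diverge.
    public static boolean comparePasswordsConstantTime(char[] p1, char[] p2) {
        if (p1 == null || p2 == null) {
            return false;
        }
        return MessageDigest.isEqual(
                new String(p1).getBytes(), new String(p2).getBytes());
    }
}
```

Unlike the original, this version correctly rejects "grip" versus "ship", two words that the buggy loop would have called equal because only their final characters are compared.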



Planet DebianDirk Eddelbuettel: #29: Easy, Reliable, Fast Linux CRAN Binaries via BSPM

Welcome to the 29th post in the randomly repeating R recommendations series, or R4 for short. Our last post, #28, introduced RSPM, and just before that we also talked in #27 about binary installations on Ubuntu (which was also a T4 video). This post was written jointly with Iñaki Ucar and is mainly about the work we have done with his bspm package.


CRAN has been a cornerstone of the success of R in recent years. As a well-maintained repository with stringent quality control, it ensures users have access to the highest-quality statistical / analytical software that “just works”. Users on Windows and macOS also benefit from faster installation via pre-compiled binary packages.

Linux users generally install from source, which can be more tedious and, often, much slower. Those who know where to look have had access to (at least some) binaries for years as well (and one of us blogged and vlogged about this at length). Debian users get close to 1000 CRAN and BioConductor packages (and, true to Debian form, for well over a dozen hardware platforms). Michael Rutter maintains a PPA with 4600 binaries for three different Ubuntu flavors (see c2d4u4.0+). More recently, Fedora joined the party with 16000 (!!) binaries, essentially all of CRAN, via a Copr repository (see iucar/cran).

The buzz currently is however with RSPM, a new package manager by RStudio. An audacious project, it provides binaries for several Linux distributions and releases. It has already been tested in many RStudio Cloud sessions (including with some of our students) as well as some CI integrations.

RSPM cuts “across” and takes the breadth of CRAN across several Linux distributions, bringing installation of pre-built CRAN packages as binaries under their normal CRAN package names. Another nice touch is the integration with install.packages(): these binaries are installed in a way that is natural for R users—but as binaries. It is however entirely disconnected from the system package management. This means that the installation of a package requiring an external library may “succeed” and still fail, as a required library simply cannot be pulled in directly by RSPM.

So what is needed is a combination. We want binaries that are aware of their system dependencies but accessible directly from R just like RSPM offers it. Enter BSPM—the Bridge to System Package Manager package (also on CRAN).

The first illustration (using Ubuntu 18.04) shows RSPM on the left, and BSPM on the right, both installing the graphics package Cairo (and both using custom Rocker containers).

This fails for RSPM as no binary is present and a source build fails for the familiar lack of a -dev package. It proceeds just fine on the right under BSPM.

A second illustration shows once again RSPM on the left, and BSPM on the right (this time on Fedora), both installing the units package without a required system dependency.

The binary installation of units succeeds in both cases, but the missing system dependency (the UDUNITS2 library) is brought in only by BSPM. Consequently, the package loads fine under BSPM but fails to load under RSPM.


To conclude, highlights of BSPM are:

  • direct installation of binary packages from R via R commands under their normal CRAN names (just like RSPM);
  • full integration with the system package manager, delivering system installations (improving upon RSPM);
  • full dependency resolution for R packages, including system requirements (improving upon RSPM).

This offers easy, reliable, fast installation of R packages, and we invite you to pick all three. We recommend usage with either Ubuntu with the 4.6k packages via the Rutter PPA, or Fedora via the even more complete Copr repository (which already includes a specially-tailored version of BSPM called CoprManager).

We hope this short note whets your appetite to learn more about bspm (which is itself on CRAN) and the two sets of Rocker containers shown. The rocker/r-rspm container comes in two flavours for Ubuntu 18.04 and 20.04. Similarly, the rocker/r-bspm container comes in the same two flavours for Ubuntu 18.04 and 20.04, as well as in a Debian testing variant.

Feedback is appreciated at the bspm or rocker issue trackers.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityConfessions of an ID Theft Kingpin, Part I

At the height of his cybercriminal career, the hacker known as “Hieupc” was earning $125,000 a month running a bustling identity theft service that siphoned consumer dossiers from some of the world’s top data brokers. That is, until his greed and ambition played straight into an elaborate snare set by the U.S. Secret Service. Now, after more than seven years in prison Hieupc is back in his home country and hoping to convince other would-be cybercrooks to use their computer skills for good.

Hieu Minh Ngo, in his teens.

For several years beginning around 2010, a lone teenager in Vietnam named Hieu Minh Ngo ran one of the Internet’s most profitable and popular services for selling “fullz,” stolen identity records that included a consumer’s name, date of birth, Social Security number and email and physical address.

Ngo got his treasure trove of consumer data by hacking and social engineering his way into a string of major data brokers. By the time the Secret Service caught up with him in 2013, he’d made over $3 million selling fullz data to identity thieves and organized crime rings operating throughout the United States.

Matt O’Neill is the Secret Service agent who in February 2013 successfully executed a scheme to lure Ngo out of Vietnam and into Guam, where the young hacker was arrested and sent to the mainland U.S. to face prosecution. O’Neill now heads the agency’s Global Investigative Operations Center, which supports investigations into transnational organized criminal groups.

O’Neill said he opened the investigation into Ngo’s identity theft business after reading about it in a 2011 KrebsOnSecurity story, “How Much is Your Identity Worth?” According to O’Neill, what’s remarkable about Ngo is that to this day his name is virtually unknown among the pantheon of infamous convicted cybercriminals, the majority of whom were busted for trafficking in huge quantities of stolen credit cards.

Ngo’s businesses enabled an entire generation of cybercriminals to commit an estimated $1 billion worth of new account fraud, and to sully the credit histories of countless Americans in the process.

“I don’t know of any other cybercriminal who has caused more material financial harm to more Americans than Ngo,” O’Neill told KrebsOnSecurity. “He was selling the personal information on more than 200 million Americans and allowing anyone to buy it for pennies apiece.”

Freshly released from the U.S. prison system and deported back to Vietnam, Ngo is currently finishing up a mandatory three-week COVID-19 quarantine at a government-run facility. He contacted KrebsOnSecurity from inside this facility with the stated aim of telling his little-known story, and to warn others away from following in his footsteps.


Ten years ago, then 19-year-old hacker Ngo was a regular on the Vietnamese-language computer hacking forums. Ngo says he came from a middle-class family that owned an electronics store, and that his parents bought him a computer when he was around 12 years old. From then on out, he was hooked.

In his late teens, he traveled to New Zealand to study English at a university there. By that time, he was already an administrator of several dark web hacker forums, and between his studies he discovered a vulnerability in the school’s network that exposed payment card data.

“I did contact the IT technician there to fix it, but nobody cared so I hacked the whole system,” Ngo recalled. “Then I used the same vulnerability to hack other websites. I was stealing lots of credit cards.”

Ngo said he decided to use the card data to buy concert and event tickets from Ticketmaster, and then sell the tickets at a New Zealand auction site called TradeMe. The university later learned of the intrusion and Ngo’s role in it, and the Auckland police got involved. Ngo’s travel visa was not renewed after his first semester ended, and in retribution he attacked the university’s site, shutting it down for at least two days.

Ngo said he started taking classes again back in Vietnam, but soon found he was spending most of his time on cybercrime forums.

“I went from hacking for fun to hacking for profits when I saw how easy it was to make money stealing customer databases,” Ngo said. “I was hanging out with some of my friends from the underground forums and we talked about planning a new criminal activity.”

“My friends said doing credit cards and bank information is very dangerous, so I started thinking about selling identities,” Ngo continued. “At first I thought well, it’s just information, maybe it’s not that bad because it’s not related to bank accounts directly. But I was wrong, and the money I started making very fast just blinded me to a lot of things.”


His first big target was a consumer credit reporting company in New Jersey called MicroBilt.

“I was hacking into their platform and stealing their customer database so I could use their customer logins to access their [consumer] databases,” Ngo said. “I was in their systems for almost a year without them knowing.”

Very soon after gaining access to MicroBilt, Ngo says, he stood up Superget[.]info, a website that advertised the sale of individual consumer records. Ngo said initially his service was quite manual, requiring customers to request specific states or consumers they wanted information on, and he would conduct the lookups by hand.

Ngo’s former identity theft service, superget[.]info

“I was trying to get more records at once, but the speed of our Internet in Vietnam then was very slow,” Ngo recalled. “I couldn’t download it because the database was so huge. So I just manually search for whoever need identities.”

But Ngo would soon work out how to use more powerful servers in the United States to automate the collection of larger amounts of consumer data from MicroBilt’s systems, and from other data brokers. As I wrote of Ngo’s service back in November 2011:

“Superget lets users search for specific individuals by name, city, and state. Each “credit” costs USD$1, and a successful hit on a Social Security number or date of birth costs 3 credits each. The more credits you buy, the cheaper the searches are per credit: Six credits cost $4.99; 35 credits cost $20.99, and $100.99 buys you 230 credits. Customers with special needs can avail themselves of the “reseller plan,” which promises 1,500 credits for $500.99, and 3,500 credits for $1000.99.

“Our Databases are updated EVERY DAY,” the site’s owner enthuses. “About 99% nearly 100% US people could be found, more than any sites on the internet now.”

Ngo’s intrusion into MicroBilt eventually was detected, and the company kicked him out of their systems. But he says he got back in using another vulnerability.

“I was hacking them and it was back and forth for months,” Ngo said. “They would discover [my accounts] and fix it, and I would discover a new vulnerability and hack them again.”


This game of cat and mouse continued until Ngo found a much more reliable and stable source of consumer data: A U.S. based company called Court Ventures, which aggregated public records from court documents. Ngo wasn’t interested in the data collected by Court Ventures, but rather in its data sharing agreement with a third-party data broker called U.S. Info Search, which had access to far more sensitive consumer records.

Using forged documents and more than a few lies, Ngo was able to convince Court Ventures that he was a private investigator based in the United States.

“At first [when] I sign up they asked for some documents to verify,” Ngo said. “So I just used some skill about social engineering and went through the security check.”

Then, in March 2012, something even more remarkable happened: Court Ventures was purchased by Experian, one of the big three major consumer credit bureaus in the United States. And for nine months after the acquisition, Ngo was able to maintain his access.

“After that, the database was under control by Experian,” he said. “I was paying Experian good money, thousands of dollars a month.”

Whether anyone at Experian ever performed due diligence on the accounts grandfathered in from Court Ventures is unclear. But it wouldn’t have taken a rocket surgeon to figure out that this particular customer was up to something fishy.

For one thing, Ngo paid the monthly invoices for his customers’ data requests using wire transfers from a multitude of banks around the world, but mostly from new accounts at financial institutions in China, Malaysia and Singapore.

O’Neill said Ngo’s identity theft website generated tens of thousands of queries each month. For example, the first invoice Court Ventures sent Ngo in December 2010 was for 60,000 queries. By the time Experian acquired the company, Ngo’s service had attracted more than 1,400 regular customers, and was averaging 160,000 monthly queries.

More importantly, Ngo’s profit margins were enormous.

“His service was quite the racket,” he said. “Court Ventures charged him 14 cents per lookup, but he charged his customers about $1 for each query.”

By this time, O’Neill and his fellow Secret Service agents had served dozens of subpoenas tied to Ngo’s identity theft service, including one that granted them access to the email account he used to communicate with customers and administer his site. The agents discovered several emails from Ngo instructing an accomplice to pay Experian using wire transfers from different Asian banks.


Working with the Secret Service, Experian quickly zeroed in on Ngo’s accounts and shut them down. Aware of an opportunity here, the Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” O’Neill recalled.

The U.K. cybercriminal, acting at the behest of the Secret Service and U.K. authorities, told Ngo that if he wanted to maintain his access, he could agree to meet up in person. But Ngo didn’t immediately bite on the offer.

Instead, he weaseled his way into another huge data store. In much the same way he’d gained access to Court Ventures, Ngo got an account at a company called TLO, another data broker that sells access to extremely detailed and sensitive information on most Americans.

TLO’s service is accessible to law enforcement agencies and to a limited number of vetted professionals who can demonstrate they have a lawful reason to access such information. In 2014, TLO was acquired by Trans Union, one of the other three big U.S. consumer credit reporting bureaus.

And for a short time, Ngo used his access to TLO to power a new iteration of his business — an identity theft service rebranded as usearching[.]info. This site also pulled consumer data from a payday loan company that Ngo hacked into, as documented in my Sept. 2012 story, ID Theft Service Tied to Payday Loan Sites. Ngo said the hacked payday loans site gave him instant access to roughly 1,000 new fullz records each day.

Ngo’s former ID theft service usearching[.]info.


By this time, Ngo was a multi-millionaire: His various sites and reselling agreements with three Russian-language cybercriminal stores online had earned him more than USD $3 million. He told his parents his money came from helping companies develop websites, and even used some of his ill-gotten gains to pay off the family’s debts (its electronics business had gone belly up, and a family member had borrowed but never paid back a significant sum of money).

But mostly, Ngo said, he spent his money on frivolous things, although he says he’s never touched drugs or alcohol.

“I spent it on vacations and cars and a lot of other stupid stuff,” he said.

When TLO locked Ngo out of his account there, the Secret Service used it as another opportunity for their cybercriminal mouthpiece in the U.K. to turn the screws on Ngo yet again.

“He told Ngo he’d locked him out again, and that he could do this all day long,” O’Neill said. “And if he truly wanted lasting access to all of these places he used to have access to, he would agree to meet and form a more secure partnership.”

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. Ngo says he understood at the time that Guam is an unincorporated territory of the United States, but that he discounted the chances that this was all some kind of elaborate law enforcement sting operation.

“I was so desperate to have a stable database, and I got blinded by greed and started acting crazy without thinking,” Ngo said. “Lots of people told me ‘Don’t go!,’ but I told them I have to try and see what’s going on.”

But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

This is Part I of a multi-part series. Check back tomorrow (Aug. 27) for Part II, which will examine what investigators learned following Ngo’s arrest, and delve into his more recent effort to right the wrongs he’s done.

Planet DebianAndrew Cater: The Debconf20 song

The DebConf 20 song - a sea shanty - to the tune of "Fathom the bowl"

Here's to DebConf 20, the brightest and best
Now it's this year's orga team getting no rest
We're not met in Haifa - it's all doom and gloom
And I'm sat like a lifer here trapped in my room

I'm sat in my room, it's all doom and gloom
And I'm sat at my keyboard here trapped in my room

Now there's IRC rooms and there's jitsi and all
But no fun conversations as we meet in the hall
No hugs for old friends, no shared wine and cheese
Just shared indigestion as we take our ease

I'm sat in my room, it's all doom and gloom
And I'm sat with three screens around me in my room

But there's people to chat to, and faces we know
And new things to learn and we're all on the go
Algo en espanol - there's no cause for alarm
An Indic track showcasing Malayalam

I'm sat in my room, it's all doom and gloom
And I'm sat with my Thinkpads and cats in my room

With webcams and buffering, with lag and delay
It's as well that there's Debconf time all through the day
The effects of tiredness are hard to foresee
For the Debian clocks all are timezone UTC

I'm sat in my room, it's all doom and gloom
And I'll sing out of tune as I'm sat in my room

There's no social drinking, there's no games of Mao
Keeping social distance, we can't think quite how
This year is still friendly though minus some fun
We'll catch up next year when we'll all get some sun

I'm sat in my room, it's all doom and gloom
I'm sat with my friends around here in my room

There's loopy@debconf and snippets and such
To cheer us all up, sure, it doesn't take much
For we're all one big family, though we each code alone
And we sometimes switch off or just complain and moan

I'm sat in my room, it's all doom and gloom
And there's space for us all in the debconf chat room

This is my first DebConf - hope it won't be my last
And we'll meet up somewhere when this COVID is past
To all who have done this - we deserve the credit
Now if you'll excuse me - I've web pages to edit

I'm sat in my room, it's not all doom and gloom
And we're met as one Debian here in my room


CryptogramAmazon Supplier Fraud

Interesting story of an Amazon supplier fraud:

According to the indictment, the brothers swapped ASINs for items Amazon ordered to send large quantities of different goods instead. In one instance, Amazon ordered 12 canisters of disinfectant spray costing $94.03. The defendants allegedly shipped 7,000 toothbrushes costing $94.03 each, using the code for the disinfectant spray, and later billed Amazon for over $650,000.

In another instance, Amazon ordered a single bottle of designer perfume for $289.78. In response, according to the indictment, the defendants sent 927 plastic beard trimmers costing $289.79 each, using the ASIN for the perfume. Prosecutors say the brothers frequently shipped and charged Amazon for more than 10,000 units of an item when it had requested fewer than 100. Once Amazon detected the fraud and shut down their accounts, the brothers allegedly tried to open new ones using fake names, different email addresses, and VPNs to obscure their identity.
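The indictment's totals are easy to sanity-check. A quick back-of-the-envelope calculation (ours, not the prosecutors'), using the figures quoted above:

```python
# 7,000 toothbrushes billed at the disinfectant spray's $94.03 unit price:
toothbrush_total = 7000 * 94.03
print(round(toothbrush_total, 2))  # 658210.0 -- "over $650,000", as charged

# 927 beard trimmers billed at the perfume's price:
trimmer_total = 927 * 289.79
print(round(trimmer_total, 2))  # 268635.33
```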

It all worked because Amazon is so huge that everything is automated.

Worse Than FailureCodeSOD: Where to Insert This

If you run a business of any size, you need some sort of resource-management/planning software. Really small businesses use Excel. Medium businesses use Excel. Enterprises use Excel. But in addition to that, the large businesses also pay through the nose for a gigantic ERP system, like Oracle or SAP, that they can wire up to Excel.

Small and medium businesses can’t afford an ERP, but they might want to purchase a management package in the niche realm of “SMB software”- small and medium business software. Much like their larger cousins, these SMB tools have… a different idea of code quality.

Cassandra’s company had deployed such a product, and with it came a slew of tickets. The performance was bad. There were bugs everywhere. While the company provided support, Cassandra’s IT team was expected to also do some diagnosing.

While digging around in one nasty performance problem, Cassandra found that one button in the application would generate and execute this block of SQL code using a SQLCommand object in C#.

DECLARE @tmp TABLE (Id uniqueidentifier)

--{ Dynamic single insert statements, may be in the hundreds. }

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)
    INSERT INTO SomeTable (PK, SomeDate) SELECT Id, getdate() as SomeDate FROM @tmp 
    UPDATE st
        SET SomeDate = getdate()
        FROM @tmp t
        LEFT JOIN SomeTable AS st ON t.Id = st.PK AND SomeDate = NULL

At its core, the purpose of this is to take a temp-table full of rows and perform an “upsert” for all of them: insert if a record with that key doesn’t exist, update if a record with that key does. Now, this is clearly SQL Server code, and SQL Server’s MERGE statement handles exactly this case.

But okay, maybe they’re trying to be as database agnostic as possible, and don’t want to use something that, while widely supported, has some dialect differences across databases. Fine, but there’s another problem here.

Whoever built this understood that in SQL Server land, cursors are frowned upon, so they didn’t want to iterate across every row. But here’s their problem: some of the records may exist, some of them may not, so they need to check that.

As you saw, this was their approach:

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)

This is wrong. This will be true only if none of the rows in the dynamically generated INSERT statements exist in the base table. If some of the rows exist and some don’t, you aren’t going to get the results you were expecting, because this code only goes down one branch: it either inserts or updates.
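To see why the single IF branch fails, here's a toy Python model of the batch logic (ours, not the vendor's code; the dict stands in for SomeTable, keyed by PK):

```python
def broken_upsert(base, tmp_ids):
    """Mimics the SQL above: one IF around the whole batch.

    If *any* id from @tmp already exists in the base table, the INSERT
    branch is skipped for the entire batch, and genuinely new rows are
    silently dropped.
    """
    if not any(i in base for i in tmp_ids):   # IF NOT EXISTS (...)
        for i in tmp_ids:                     # INSERT ... FROM @tmp
            base[i] = "new-date"
    for i in tmp_ids:                         # UPDATE ... LEFT JOIN
        if i in base and base[i] is None:
            base[i] = "new-date"
    return base

# "a" already exists; "b" is new -- and never gets inserted:
result = broken_upsert({"a": None}, ["a", "b"])
print("b" in result)  # False
```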

There are other things wrong with this code. For example, SomeDate = NULL is going to have different behavior based on whether the ANSI_NULLS database flag is OFF (in which case it works), or ON (in which case it doesn’t). There’s a whole lot of caveats about whether you set it at the database level, on the connection string, during your session, but in Cassandra’s example, ANSI_NULLS was ON at the time this ran, so that also didn’t work.

There are other weird choices and performance problems with this code, but the important thing is that this code doesn’t work. This is in a shipped product, installed by over 4,000 businesses (the vendor is quite happy to cite that number in their marketing materials). And it ships with code that can’t work.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianAlexandre Viau: Setting up Nightscout using MongoDB Atlas

Nightscout is an Open Source web-based CGM (Continuous Glucose Monitor) that allows multiple caregivers to remotely view a patient’s glucose data in real time.

It is often deployed by non-technical people for their own use. The traditional method used a MongoDB Addon on Heroku that is now deprecated. I have sent patches to adapt the documentation on the relevant projects:

This app is life-changing. Some Nightscout users may be impatient, so I am writing this blog post to guide them in the meantime.

Setting up Nightscout

If you want to setup Nightscout from scratch using MongoDB Atlas, please follow this modified guide.

However, note that you will have to make one modification to the steps in the guide. At the start of Step 4, you will need to go to this repository instead: it is my own version of Nightscout, and it contains small modifications that will allow you to set it up with MongoDB Atlas easily.

I will keep this blog post updated as I receive feedback. Come back here for more instructions.

Planet DebianJacob Adams: Get A Command Line

How to access a command line on your laptop without installing Linux.

Linux is great, and I recommend trying it out, whether on real hardware or in a virtual machine, to anyone interested in Computer Science.

However, the process can be quite involved, and sometimes you don’t want to change your whole operating system or sort out installing virtual machines.

Fortunately, these days you can try out one of Linux’s greatest features, the command line, without going through all that hassle.


If you have a Mac, all you need to do is open the Terminal app, which is usually found under /Applications/Utilities. Note that macOS now defaults to zsh instead of bash, which is usually the default shell on Linux. This shouldn’t matter much, but it’s something you should be aware of.
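A quick way to confirm which shell you're in, and that the basics behave the same in either (these are generic commands, nothing macOS-specific):

```shell
# Your login shell, e.g. /bin/zsh on recent macOS, /bin/bash on most Linux:
echo "Login shell: ${SHELL:-unknown}"

# zsh and bash agree on day-to-day syntax like loops:
for x in 1 2 3; do echo "count $x"; done
```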


On Windows, things are much more complex. There’s always Powershell, but if you want a true Unix shell experience like you’d get on Linux, you’ll need to install the Windows Subsystem for Linux. This allows you to run Linux programs on your Windows 10 computer.

This boils down to opening Powershell (open the start menu, search for “powershell”) as an administrator (right-click, then “Run as Administrator,” then click “Yes” or enter an administrator’s password when the UAC prompt appears). In this new Powershell window, you need to run the following command to enable WSL:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

When this command executes successfully, you should see output reporting that the operation completed successfully.

After this output appears, you’ll need to reboot your machine before you can continue.

Once your machine is rebooted, you need to install a Linux distribution. Different communities and companies ship their own versions of Linux and the various services and utilities required to use it, and these versions are called “distributions.”

If you’re not sure which one to use, I would recommend Ubuntu, as it was the distribution first integrated into WSL, and it’s a common distribution for new users.

After installing your chosen distribution, you’ll need to perform the first-time setup. You’ll just need to run it, as it is now installed as a program on your computer, and then it will walk you through the setup process, which requires you to create a new user account for your Linux distribution.

This does not need to be the same as your Windows username and password, and it’s probably safer if it isn’t. You’ll need to remember that password for running administrative commands with sudo.
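Once you're at the prompt, a few harmless commands to try first (generic Unix commands, nothing WSL-specific):

```shell
pwd                               # print the current directory
ls -la                            # list all files, including hidden ones
uname -a                          # show kernel and OS information
echo "hello from the command line"
```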


Kevin RuddAFR: How Mitch Hooke Axed the Mining Tax and Climate Action

Published by The Australian Financial Review on 25 August 2020

The Australian political arena is full of reinventions.

Tony Abbott has gone from pushing emissions cuts under the Paris climate agreement to demanding Australia withdraw from the treaty altogether. And Scott Morrison, who accused Labor of presiding over “crippling” debt, now binges on wasteful debt-fuelled spending that makes our government’s stimulus look like a rounding error.

However, neither of these metamorphoses comes close to the transformation of Mitch Hooke, the former Minerals Council chief and conservative political operative, who now pretends he is a lifelong evangelist of carbon pricing.

Writing in The Australian Financial Review, (Ken Henry got it wrong on climate wars, mining tax on August 11) Hooke said he supported emissions trading throughout the mid-2000s until my government came to power in 2007.

I then supposedly “trashed that consensus” by using the proceeds of a carbon price to compensate motorists, low-income households and trade-exposed industries.

How dreadful to help those most impacted by a carbon price! The very point of an emissions trading scheme is that it can change consumers’ behaviour without making people on low to middle incomes worse off. That’s why you increase the price of emissions-intensive goods and services (relative to less polluting alternatives) then give that money back to people through the tax or benefits system so they’re no worse off. But they are then able to choose a more climate-friendly product.

The alternative is the government just pockets the cash – thereby defeating the entire purpose of a market-based scheme. Obviously this is pure rocket science for Mitch.

Hooke also seems to have forgotten that such compensation was not only appropriate, but it was exactly what Malcolm Turnbull was demanding in exchange for Liberal support for our proposal in the Senate. Without it, any emissions trading scheme would be a non-starter.

When that deal was tested in the Liberal party room, it was defeated by a single vote. Even so, enough Liberal senators crossed the floor to give the Green political party the balance of power.

Showing their true colours, Bob Brown’s senators sided with Tony Abbott and Barnaby Joyce to kill the legislation. The Green party has, to this day, been unable to adequately explain its decision to voters. If they hadn’t, Australia would now be 10 years down the path of steady decarbonisation.

For Hooke, the reality is that he never wanted an emissions trading scheme if he could avoid one. But rather than state this outright, he just insists on impossible preconditions. As for Hooke’s most beloved Howard government, John Winston would in all probability have gone even further than Labor in compensating people affected by his own proposed emissions trading scheme, given Howard’s legendary ability to bake middle-class welfare into any national budget. Just ask Peter Costello.

Hooke has, like Abbott, been one of the most destructive voices in Australian national climate change action. He also expresses zero remorse for his deceptive campaign of misinformation, in partnership with those wonderful corporate citizens at Rio, targeting my government’s efforts to introduce a profits-based tax for minerals, mirroring the petroleum resource rent tax implemented by the Hawke government in the 1980s.

Our Resource Super Profits Tax would have funded new infrastructure to address looming capacity constraints affecting the sector as well as an across-the-board company tax cut to 28 per cent. Most importantly it sought to fairly spread the proceeds of mining profits when they vastly exceeded the industry norms – such as during commodity price booms – with the broader Australian public. Lest we forget, they actually own those resources. Rio just rents them.

In response, Hooke and his mates at Rio and BHP accumulated a $90 million war chest and $22.2 million of shareholders’ funds were poured into a political advertising campaign over six weeks.

Another $1.9 million was tipped into Liberal and National party coffers to keep conservative politicians on side. All to keep Rio and BHP happy, while ignoring the deep structural interests of the rest of our mining sector, many of whom supported our proposal.

At their height, Hooke’s television ads were screening around 33 times per day on free-to-air channels. Claims the tax would be a “hand grenade” to retirement savings were blasted by the Australian Institute of Superannuation Trustees which referred the “irresponsible” and “scaremongering” campaign to regulators.

This was not an exercise in public debate to refine aspects of the tax’s design; it was a systematic effort to use the wealth of two multinational mining companies to bludgeon the government into submission.

And when Gillard and Swan capitulated as the first act of their new government, they essentially turned over the drafting pen to Hooke to write a new rent tax that collected almost zero revenue.

The industry, however, was far from unified. Fortescue Metals Group chairman Andrew “Twiggy” Forrest understood what we were trying to achieve, having circumvented Hooke’s spin machine to deal directly with my resources minister Martin Ferguson.

We ultimately agreed that Forrest would stand alongside me and pledge to support the tax. The next day, Gillard and Swan struck. And Hooke has been a happy man ever since, even though Australia is the poorer for it.

It doesn’t matter where you sit on the political spectrum, everyone involved in public debate should hope that they’ve helped to improve the lives of ordinary people.

That is not Hooke’s legacy. Nor his interest. However much he may now seek to rationalise his conduct, Hooke’s stock and trade was brutal, destructive politics in direct service of BHP, Rio and the carbon lobby.

He was paid handsomely to thwart climate change action and ensure wealthy multinationals didn’t pay a dollar more in tax than was absolutely necessary. He succeeded. But I’m not sure his grandchildren will be all that proud of his destructive record.

Congratulations, Mitch.

The post AFR: How Mitch Hooke Axed the Mining Tax and Climate Action appeared first on Kevin Rudd.

Planet DebianJonas Meurer: cryptsetup-suspend

Introducing cryptsetup-suspend

Today, we're introducing cryptsetup-suspend, whose job is to protect the content of your harddrives while the system is sleeping.


  • You can lock your encrypted harddrives during suspend mode by installing cryptsetup-suspend
  • For cryptsetup-suspend to work properly, at least Linux kernel 5.6 is required
  • We hope that in a bright future, everything will be available out-of-the-box in Debian and its derivatives





Table of contents

What does this mean and why should you care about it?

If you don't use full-disk encryption, don't read any further. Instead, think about what will happen if you lose your notebook on the train and a random person picks it up and browses through all your personal pictures, e-mails, and tax records. Then encrypt your system and come back.

If you believe full-disk encryption is necessary, you might know that it only protects you while your machine is powered off. Once you turn on the machine and decrypt your harddrive, your encryption key stays in RAM, where it can potentially be extracted by malicious software or physical access. Even if these attacks are non-trivial, they're enough to worry about: if an attacker is able to extract your disk encryption keys from memory, they can read the contents of your disk.

Sadly, in 2020, we hardly power off our laptops anymore. The sleep mode, also known as "suspend mode", is just too convenient. Just close the lid to freeze the system state and lift it anytime later to continue. Well, convenience usually comes with a cost: during suspend mode, your system memory is kept powered, and all your data - including your encryption keys - stays there, waiting to be extracted by a malicious person. Unfortunately, there are practical attacks to extract data from powered memory.

Cryptsetup-suspend expands the protection of your full-disk encryption to all those times when your computer sleeps in suspend mode. Cryptsetup-suspend utilizes the suspend feature of LUKS volumes and integrates it with your Debian system. Encryption keys are evicted from memory before suspend mode and the volumes have to be re-opened after resuming - potentially prompting for the required passphrases.

By now, we have a working prototype, which we want to introduce today. We have done quite a bit of testing, both on virtualized and bare-metal Debian and Ubuntu systems, with and without a graphical stack, so we dare to unseal and set free the project and ask you - the community - to test, review, criticize and give feedback.

Here's a screencast of cryptsetup-suspend in action:

State of the implementation: where are we?

If you're interested in the technical details, here's how cryptsetup-suspend works internally. It basically consists of three parts:


  1. cryptsetup-suspend: A C program that takes a list of LUKS devices as arguments, suspends them via luksSuspend and suspends the system afterwards. Also, it tries to reserve some memory for decryption, which we'll explain below.
  2. cryptsetup-suspend-wrapper: A shell wrapper script which works the following way:
    1. Extract the initramfs into a ramfs
    2. Run (systemd) pre-suspend scripts, stop udev, freeze almost all cgroups
    3. Chroot into the ramfs and run cryptsetup-suspend
    4. Resume initramfs devices inside chroot after resume
    5. Resume non-initramfs devices outside chroot
    6. Thaw cgroups, start udev, run (systemd) post-suspend scripts
    7. Unmount the ramfs
  3. A systemd unit drop-in file overriding the ExecStart property of systemd-suspend.service so that it invokes the script cryptsetup-suspend-wrapper.

Reusing large parts of the existing cryptsetup-initramfs implementation has some positive side-effects: Out-of-the-box, we support all LUKS block device setups that have been supported by the Debian cryptsetup packages before.

Freezing most processes/cgroups is necessary to prevent possible race-conditions and dead-locks after the system resumes. Processes will try to access data on the locked/suspended block devices eventually leading to buffer overflows and data loss.

Technical challenges and caveats

  • Dead-locks at suspend: In order to prevent possible dead-locks between suspending the encrypted LUKS disks and suspending the system, we have to tell the Linux kernel to not sync() before going to sleep. A corresponding patch got accepted upstream in Linux 5.6. See section What about the kernel patch? below for details.
  • Race conditions at resume: Likewise, there's a risk of race conditions between resuming the system and unlocking the encrypted LUKS disks. We went with freezing as many processes as possible as a countermeasure. See the last part of section State of the implementation: where are we? for details.
  • Memory management: Memory management is definitely a challenge. Unlocking disks might require a lot of memory (if key derivation function is argon2i) and the swap device most likely is locked at that time. See section All that matters to me is the memories! below for details.
  • systemd dependency: Our implementation depends on systemd. It uses a unit drop-in file for systemd-suspend.service to hook into the system suspend process and depends on systemd's cgroup management to freeze and thaw processes. If you're using a different init system, sorry, you're currently out of luck.

What about the kernel patch?

The problem is simple: the Linux kernel suspend implementation enforces a final filesystem sync() before the system goes to sleep in order to prevent potential data loss. While that's sensible in most scenarios, it may result in dead-locks in our situation, since the block device that holds the filesystem is already suspended: the sync() call will block forever as it waits for the suspended block device. So we need a way to conditionally disable this sync() call in the Linux kernel suspend code. That's what our patch does, by introducing a run-time switch at /sys/power/sync_on_suspend, but it only got accepted into the Linux kernel recently and was first released with Linux kernel 5.6.

Since release 4.3, the Linux kernel at least provides a build-time flag to disable the sync(): CONFIG_SUSPEND_SKIP_SYNC (that was called SUSPEND_SKIP_SYNC first and renamed to CONFIG_SUSPEND_SKIP_SYNC in kernel release 4.9). Enabling this flag at build-time protects you against the dead locks perfectly well. But while that works on an individual basis, it's a non-option for the distribution Linux kernel defaults. In most cases you still want the sync() to happen, except if you have user-space code that takes care of the sync() just before suspending your system - just like our cryptsetup-suspend implementation does.
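To check whether your running kernel carries the run-time switch described above, a sketch like this works (the sysfs path is the one our patch introduces; on kernels older than 5.6 it simply won't exist):

```shell
if [ -f /sys/power/sync_on_suspend ]; then
    # 1 (the default): sync filesystems before suspend; 0: skip the sync
    cat /sys/power/sync_on_suspend
else
    echo "no /sys/power/sync_on_suspend: kernel older than 5.6 (or patch missing)"
fi
```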

So in order to properly test cryptsetup-suspend, you're strongly advised to run Linux kernel 5.6 or newer. Fortunately, Linux 5.6 is available in buster-backports thanks to the Debian Kernel Team.

All that matters to me is the memories!

One of the tricky parts is memory management. Since version 2, LUKS uses argon2i as its default key derivation function. Argon2i is a memory-hard hash function, and LUKS2 assigns the minimum of half of your system's memory or 1 GB to unlocking your device. While this is usually unproblematic during system boot - there's not much in the system memory anyway - it can become problematic when suspending. When cryptsetup tries to unlock a device and wants 1 GB of memory for this, but everything is already occupied by your browser and video player, there are only two options:

  1. Kill a process to free some memory
  2. Move some of the data from memory to swap space

The first option is certainly not what you expect when suspending your system. The second option is impossible, because swap is located on your harddrive which we have locked before. Our current solution is to allocate the memory after freezing the other processes, but before locking the disks. At this time, the system can still move data to swap, but it won't be accessed anymore. We then release the memory just in time for cryptsetup to claim it again. The implementation of this is still subject to change.
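The LUKS2 default described above can be sketched as follows (a hypothetical helper for illustration, not cryptsetup's actual API):

```python
def argon2i_memory_kib(total_ram_kib: int) -> int:
    """Default LUKS2 argon2i memory cost, as described above:
    the minimum of half the system's RAM and 1 GiB."""
    one_gib_kib = 1024 * 1024  # 1 GiB expressed in KiB
    return min(total_ram_kib // 2, one_gib_kib)

# On an 8 GiB machine, unlocking wants the full 1 GiB cap:
print(argon2i_memory_kib(8 * 1024 * 1024))  # 1048576
# On a 1 GiB machine, it wants half of RAM instead:
print(argon2i_memory_kib(1024 * 1024))      # 524288
```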


What's missing: A proper user interface

As mentioned before, we consider cryptsetup-suspend usable, but it certainly still has bugs and shortcomings. The most obvious one is the lack of a proper user interface. Currently, we switch over to a tty command-line interface to prompt for passphrases when unlocking the LUKS devices. It would certainly be better to replace this with a graphical user interface later, probably by using plymouth or something similar. Unfortunately, it seems rather impossible to spawn a real graphical environment just for the passphrase prompt: that would mean loading the full graphical stack into the ramfs, raising the required amount of memory significantly. Lack of memory is currently our biggest concern and source of trouble.

We'd definitely appreciate hearing your ideas on how to improve the user experience here.

Let's get practical: how to use

TL;DR: On Debian Bullseye (Testing), all you need to do is to install the cryptsetup-suspend package from experimental. It's not necessary to upgrade the other cryptsetup packages. On Debian Buster, cryptsetup packages from backports are required.

  1. First, be sure that you're running Linux kernel 5.6 or newer. For Buster systems, it's available in buster-backports.
  2. Second, if you're on Debian Buster, install the cryptsetup 2:2.3.3-2~bpo10+1 packages from buster-backports.
  3. Third, install the cryptsetup-suspend package from experimental. Beware that cryptsetup-suspend depends on cryptsetup-initramfs (>= 2:2.3.3-1~). Either you need the cryptsetup packages from testing/unstable, or the backports from buster-backports.
  4. Now that you have the cryptsetup-suspend package installed, everything should be in place: Just send your system to sleep. It should switch to a virtual text terminal before going to sleep, ask for a passphrase to unlock your encrypted disk(s) after resume and switch back to your former working environment (most likely your graphical desktop environment) afterwards.

Security considerations

Suspending LUKS devices basically means to remove the corresponding encryption keys from system memory. This protects against all sort of attacks trying to read them from there, e.g. cold-boot attacks. But, cryptsetup-suspend only protects the encryption keys of your LUKS devices. Most likely there's more sensitive data in system memory, like all kinds of private keys (e.g. OpenPGP, OpenSSH) or documents with sensitive content.

We hope that the community will help improve this situation by providing useful pre-/post-suspend scripts. A positive example is KeePassXC, which is able to lock itself when going into suspend mode.

Feedback and Comments

We'd be more than happy to learn about your thoughts on cryptsetup-suspend. For specific issues, don't hesitate to open a bug report against cryptsetup-suspend. You can also reach us via mail - see the next section for contact addresses. Last but not least, comments below the blog post work as well.


  • Tim (tim at
  • Jonas (jonas at

LongNowThe Alchemical Brothers: Brian Eno & Roger Eno Interviewed

Long Now co-founder Brian Eno on time, music, and contextuality in a recent interview, rhyming on Gregory Bateson’s definition of information as “a difference that makes a difference”:

If a Martian came to Earth and you played her a late Beethoven String Quartet and then another written by a first-year music student, it is unlikely that she would a) understand what the point of listening to them was at all, and b) be able to distinguish between them.

What this makes clear is that most of the listening experience is constructed in our heads. The ‘beauty’ we hear in a piece of music isn’t something intrinsic and immutable – like, say, the atomic weight of a metal is intrinsic – but is a product of our perception interacting with that group of sounds in a particular historical context. You hear the music in relation to all the other experiences you’ve had of listening to music, not in a vacuum. This piece you are listening to right now is the latest sentence in a lifelong conversation you’ve been having. What you are hearing is the way it differs from, or conforms to, the rest of that experience. The magic is in our alertness to novelty, our attraction to familiarity, and the alchemy between the two.

The idea that music is somehow eternal, outside of our interaction with it, is easily disproven. When I lived for a few months in Bangkok I went to the Chinese Opera, just because it was such a mystery to me. I had no idea what the other people in the audience were getting excited by. Sometimes they’d all leap up from their chairs and cheer and clap at a point that, to me, was effectively identical to every other point in the performance. I didn’t understand the language, and didn’t know what the conversation had been up to that point. There could be no magic other than the cheap thrill of exoticism.

So those poor deluded missionaries who dragged gramophones into darkest Africa because they thought the experience of listening to Bach would somehow ‘civilise the natives’ were wrong in just about every way possible: in thinking that ‘the natives’ were uncivilised, in not recognising that they had their own music, and in assuming that our Western music was culturally detachable and transplantable – that it somehow carried within it the seeds of civilisation. This cultural arrogance has been attached to classical music ever since it lost its primacy as the popular centre of the Western musical universe, as though the soundtrack of the Austro-Hungarian Empire in the 19th Century was somehow automatically universal and superior.

Google AdsenseAdSense Reports Technical Lead Manager

The new AdSense reporting is live

CryptogramIdentifying People by Their Browsing Histories

Interesting paper: "Replication: Why We Still Can't Browse in Peace: On the Uniqueness and Reidentifiability of Web Browsing Histories":

We examine the threat to individuals' privacy based on the feasibility of reidentifying users through distinctive profiles of their browsing history visible to websites and third parties. This work replicates and extends the 2012 paper Why Johnny Can't Browse in Peace: On the Uniqueness of Web Browsing History Patterns. The original work demonstrated that browsing profiles are highly distinctive and stable. We reproduce those results and extend the original work to detail the privacy risk posed by the aggregation of browsing histories. Our dataset consists of two weeks of browsing data from ~52,000 Firefox users. Our work replicates the original paper's core findings by identifying 48,919 distinct browsing profiles, of which 99% are unique. High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users that browsed 150 or more distinct domains. Finally, we observe numerous third parties pervasive enough to gather web histories sufficient to leverage browsing history as an identifier.

One of the authors of the original study comments on the replication.

Worse Than FailureCodeSOD: Wait a Minute

Hanna's co-worker implemented a new service, got it deployed, and then left for vacation someplace where there's no phones or Internet. So, of course, Hanna gets a call from one of the operations folks: "That new service your team deployed keeps crashing on startup, but there's nothing in the log."

Hanna took it on herself to check into the VB.Net code.

Public Class Service
    Private mContinue As Boolean = True
    Private mServiceException As System.Exception = Nothing
    Private mAppSettings As AppSettings

    '// ... snip ... //

    Private Sub DoWork()
        Try
            Dim aboutNowOld As String = ""
            Dim starttime As String = DateTime.Now.AddSeconds(5).ToString("HH:mm")
            While mContinue
                Threading.Thread.Sleep(1000)
                Dim aboutnow As String = DateTime.Now.ToString("HH:mm")
                If starttime = aboutnow And aboutnow <> aboutNowOld Then
                    '// ... snip ... //
                    starttime = DateTime.Now.AddMinutes(mAppSettings.pollingInterval).ToString("HH:mm")
                End If
                aboutNowOld = aboutnow
            End While
        Catch ex As Exception
            mServiceException = ex
        End Try
        If mServiceException IsNot Nothing Then
            EventLog.WriteEntry(mServiceException.ToString, Diagnostics.EventLogEntryType.Error)
            Throw mServiceException
        End If
    End Sub
End Class

Presumably whatever causes the crash is behind one of those "snip"s, but Hanna didn't include that information. Instead, let's focus on our unique way to idle.

First, we pick our starttime to be the minute 5 seconds into the future. Then we enter our work loop. Sleep for one second, and then check which minute we're on. If that minute is our starttime and this loop hasn't run during this minute before, we can get into our actual work (snipped), and then calculate the next starttime, based on our app settings.

If there are any exceptions, we break the loop, log and re-throw it- but don't do that from the exception handler. No, we store the exception in a member variable and then if it IsNot Nothing we log it out.

Hanna writes: "After seeing this I gave up immediately before I caused a time paradox. Guess we'll have to wait till she's back from the future to fix it."

It's not quite a paradox, but it's certainly far more complex than it ever needs to be. First, we have the stringly-typed date handling. That's just awful. Then, we have the once-per-second polling, but we expect pollingInterval to be in minutes. But AddMinutes takes doubles, so it could be seconds, expressed as fractional minutes. But wait, if we know how long we want to wait between executions, couldn't we just Sleep that long? Why poll every second? Does this job absolutely have to run in the first second of every minute? Even if it does, we could easily calculate that sleep time with reasonable accuracy if we actually looked at the seconds portion of the current time.
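The arithmetic that paragraph gestures at is tiny. A sketch in shell (not the original VB.Net service) of sleeping until the next minute boundary instead of waking up every second:

```shell
# Compute how many seconds remain until the next minute boundary,
# then sleep exactly that long instead of polling once per second.
now_s=$(date +%S)                      # current seconds portion, e.g. "07"
sleep_s=$(( (60 - 10#$now_s) % 60 ))   # 10# avoids octal parsing of "08"/"09"
echo "next minute boundary in ${sleep_s}s"
# sleep "$sleep_s"   # then do the work, then sleep the polling interval
```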

The developer who wrote this saw the problem of "execute this code once every polling interval" and just called it a day.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Rondam RamblingsThey knew. They still know.

Never forget what conservatives were saying about Donald Trump before he cowed them into submission. (Sorry about the tiny size of the embedded video. That's the default that Blogger gave me and I can't figure out how to adjust the size. If it bothers you, click on the link above to see the original.)

Rondam RamblingsRepublicans officially endorse a Trump dictatorship

The Republican party has formally decided not to adopt a platform this year, instead passing a resolution that says essentially, "we will support whatever the Dear Leader says".  Since the resolution calls out the media for its biased reporting, I will quote the resolution here in its entirety, with the salient portions highlighted: WHEREAS, The Republican National Committee (RNC) has

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 14)

Here’s part fourteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Planet DebianJonathan Dowland: Out of control (21 minutes of madness mix)

Chemical Brothers — Out Of Control (21 Minutes of Madness remix) Copy 1143 of 1999

Also known as The Secret Psychedelic Mix. I picked this up last year. It was issued to promote the 20th anniversary re-issue of the parent album "Surrender". I remember liking this song back when it came out. At that time I didn't know who the guest singer was — Bernard Sumner — and if I had it wouldn't mean anything to me.

This is a pretty good mix. There's nothing "extra" in the mix, really, it's the same elements as the original 7 minute version, for 21 minutes this time, with perhaps some more production elements (more dubby stuff) but it doesn't seem to overstay its welcome.

Planet DebianJonathan Carter: DebConf 20 Sessions

DebConf20 is happening from 23 August to 29 August. The full schedule is available on the DebConf20 website.

I’m preparing (or helping to prepare) 3 sessions for this DebConf. I wish I had the time for more, but with my current time constraints, even preparing for these sessions took some careful planning!

Bits from the DPL

Time: Aug 24 (Mon): 16:00 UTC.

The traditional DebConf talk from the DPL, where we take a look at the state of the Debian project and where we’re heading. This talk is pre-recorded, but there will be a few minutes after the talk for questions.

Leadership in Debian BOF/Panel

Time: Aug 27 (Thu): 18:00 UTC.

In this session, we will host a panel of people who hold (or who have held) leadership positions within Debian.

We’ll go through a few questions for the panel and then continue with open questions and discussion.

Local Teams

Time: Aug 29 (Sat): 19:00 UTC.

We already have a number of large and very successful Debian Local Groups (Debian France, Debian Brazil and Debian Taiwan, just to name a few), but what can we do to help support upcoming local groups or help spark interest in more parts of the world?

In this BoF, we’ll discuss the possibility of setting up a local group support team or a new delegation that will keep track of local teams, manage budgets and get new local teams bootstrapped.


DiceKeys is a physical mechanism for creating and storing a 192-bit key. The idea is that you roll a special set of twenty-five dice, put them into a plastic jig, and then use an app to convert those dice into a key. You can then use that key for a variety of purposes, and regenerate it from the dice if you need to.
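The regeneration property comes from the derivation being deterministic. As a toy sketch only (NOT DiceKeys' actual dice encoding or key derivation), hashing a canonical transcription of the dice always yields the same 192-bit value:

```shell
# Toy illustration — not the real DiceKeys derivation.
dice="3A1F5C2E4B6D..."   # hypothetical transcription of the 25 dice faces
key=$(printf '%s' "$dice" | sha256sum | cut -c1-48)  # 192 bits = 48 hex chars
echo "derived key: $key"
```

Re-running the derivation on the same transcription reproduces the same key, which is exactly why the boxed-up dice can serve as an offline backup.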

This week Stuart Schechter, a computer scientist at the University of California, Berkeley, is launching DiceKeys, a simple kit for physically generating a single super-secure key that can serve as the basis for creating all the most important passwords in your life for years or even decades to come. With little more than a plastic contraption that looks a bit like a Boggle set and an accompanying web app to scan the resulting dice roll, DiceKeys creates a highly random, mathematically unguessable key. You can then use that key to derive master passwords for password managers, as the seed to create a U2F key for two-factor authentication, or even as the secret key for cryptocurrency wallets. Perhaps most importantly, the box of dice is designed to serve as a permanent, offline key to regenerate that master password, crypto key, or U2F token if it gets lost, forgotten, or broken.


Schechter is also building a separate app that will integrate with DiceKeys to allow users to write a DiceKeys-generated key to their U2F two-factor authentication token. Currently the app works only with the open-source SoloKey U2F token, but Schechter hopes to expand it to be compatible with more commonly used U2F tokens before DiceKeys ship out. The same API that allows that integration with his U2F token app will also allow cryptocurrency wallet developers to integrate their wallets with DiceKeys, so that with a compatible wallet app, DiceKeys can generate the cryptographic key that protects your crypto coins too.

Here's the DiceKeys website and app. Here's a short video demo. Here's a longer SOUPS talk.

Preorder a set here.

Note: I am an adviser on the project.

Another news article. Slashdot thread. Hacker News thread. Reddit thread.

Planet DebianSven Hoexter: google cloud buster images without python 2


I have to stand corrected. noahm@ wrote me, because the Debian Cloud Image maintainer only ever included python explicitly in Azure images. The most likely explanation for the change in the Google images is that Google just ported the last parts of their own software to python 3, and subsequently removed python 2.

With some relief one can just conclude it's only our own fault that we did not build our own images, which include all our own dependencies. Take it as a reminder to always build your own images. Always. Be it VMs or docker. Build your own image.

Original Post

Fun in the morning, we realized that the Debian Cloud image builds dropped python 2 and that propagated to the Google provided Debian/buster images. So in case you use something like ansible, and so far assumed python 2 as the default interpreter, and installed additional python 2 modules to support ansible modules, you now have to either install python 2 again or just move to python 3k.

We just try to suffer it through now, and set interpreter_python = auto in our ansible.cfg to anticipate the new default behaviour, which is planned for ansible 2.12. See also
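For reference, the setting mentioned above lives in the [defaults] section of ansible.cfg:

```ini
[defaults]
# Pick the interpreter per host (python3 where available); this becomes
# the default behaviour in ansible 2.12.
interpreter_python = auto
```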

Another lesson to learn here: The GCE Debian stable images are not stable. Blends in nicely with this rant, though it's not 100% a Google Cloud foul this time.

Worse Than FailureCodeSOD: Sudon't

There are a few WTFs in today's story. Let's get the first one out of the way: Jan S downloaded a shell script and ran it as root, without reading it. Now, let's be fair, that's honestly a pretty mild WTF; we've all done something similar, and popular software tools still tell you to install them with a curl … | sh, and then sudo themselves extra permissions in the script.

The software being installed in this case is a tool for accessing Bitlocker encrypted drives from Linux. And the real WTF for this one is the install script, which we'll dig into in a moment. This is not, however, some small scale open source project thrown together by hobbyists, but instead released by Initech's "Data Recovery" business. In this case, this is the open source core of a larger data recovery product- if you're willing to muck around with low level commands and configs, you can do it for free, but if you want a vaguely usable UI, get ready to pony up $40.

With that in mind, let's take a look at the script. We're going to do this in chunks, because nearly everything is wrong. You might think I'm exaggerating, but here's the first two lines of the script:

#!/bin/bash
home_dir="/home/"${USER}"/initech.bitlocker"

That is not how you find out the user's home directory. We'll usually use ${HOME}, or since the shebang tells us this is definitely bash, we could just use ~. Jan also points out that while a username probably shouldn't have a space, it's possible, and since the ${USER} isn't in quotes, this breaks in that case.
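The breakage is easy to demonstrate. A short sketch (with a hypothetical path) of how the unquoted expansion word-splits, and of the ${HOME} alternative:

```shell
# A hypothetical home path containing a space, as for the username "Jan S"
dir="/home/Jan S/initech.bitlocker"
set -- $dir                       # unquoted expansion: word splitting applies
echo "unquoted -> $# arguments"   # mkdir ${dir} would receive two paths
set -- "$dir"                     # quoted expansion: stays a single argument
echo "quoted   -> $# arguments"
# And the portable way to find the home directory in the first place:
home_dir="${HOME}/initech.bitlocker"
echo "$home_dir"
```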

echo ${home_dir}
install_dir=$1
if [ ! -d "${install_dir}" ]; then
    install_dir=${home_dir}
    if [ ! -d "${install_dir}" ]; then
        echo "create dir : "${install_dir}
        mkdir ${install_dir}

Who wants indentation in their scripts? And if a script supports arguments, should we tell the user about it? Of course not! Just check to see if they supplied an argument, and if they did, we'll treat that as the install directory.

As a bonus, the mkdir line protects people like Jan who run this script as root, at least if their home directory is /root, which is common. When it tries to mkdir /home/root/initech.bitlocker, the script fails there.

echo "Install software to ${install_dir}"
cp -rf ./* ${install_dir}"/"

Once again, the person who wrote this script doesn't seem to understand what the double quotes in Bash are for, but the real magic is the next segment:

echo "Copy runtime environment ..."
sudo cp -f ./ /usr/lib/
sudo cp -f ./ /usr/lib64
sudo cp -f ./ /usr/lib/
sudo cp -f ./ /usr/lib64

Did you have libssl already installed in your system? Well now you have this version! Hope that's okay for you. We like our version of libssl and libcrypto so much we're copying them into your library directories twice. They probably meant to copy libcrypto and libssl to both lib and lib64, but messed up.

Well, that is assuming you already have a lib64 directory, because if you don't, you now have a lib64 file which contains the data from

This is the installer for a piece of software which has been released as part of a product that Initech wants to sell, and they don't successfully install it.

sudo ln -s ${install_dir}/mount.bitlocker /usr/bin/mount.bitlocker
sudo ln -s ${install_dir}/bitlocker.creator /usr/bin/create.bitlocker
sudo ln -s ${install_dir}/ /usr/bin/
sudo ln -s ${install_dir}/ /usr/bin/initech.bitlocker.mount
sudo ln -s ${install_dir}/ /usr/bin/initech.bitlocker

Hey, here's an install step with no critical mistakes, assuming that no other package or tool has tried to claim those names in /usr/bin, which is probably true (Jan actually checked this using dpkg -S … to see if any packages wanted to use that path).

source /etc/os-release
case $ID in
debian|ubuntu|devuan)
    echo "Installing dependent package - curl ..."
    sudo apt-get install curl -y
    echo "Installing dependent package - openssl ..."
    sudo apt-get install openssl -y
    echo "Installing dependent package - fuse ..."
    sudo apt-get install fuse -y
    echo "Installing dependent package - gksu ..."
    sudo apt-get install gksu -y
    ;;

Here's the first branch of our case. They've learned to indent. They've chosen to slap the -y flag on all the apt-get commands, which means the user isn't going to get a choice about installing these packages, which is mildly annoying. It's also worth noting that sourceing /etc/os-release can be considered harmful, but clearly "not doing harm" isn't high on this script's agenda.

centos|fedora|rhel)
    yumdnf="yum"
    if test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0; then
        yumdnf="dnf"
    fi
    echo "Installing dependent package - curl ..."
    sudo $yumdnf install -y curl
    echo "Installing dependent package - openssl ..."
    sudo $yumdnf install -y openssl
    echo "Installing dependent package - fuse ..."
    sudo $yumdnf install -y fuse3-libs.x86_64
    ;;

So, maybe they just don't think if supports additional indentation? They indent the case fine. I'm not sure what their thinking is.

Speaking of if, look closely at that version check: test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0.

Now, this is almost clever. If your Linux version number uses decimal values, like 18.04, you can't do a simple if [ "$VERSION_ID" -ge 22 ]…: you'd get an integer expression expected error. So using bc does make sense…ish. It would be good to check if, y'know, bc were actually installed- it probably is, but you don't know- and it might be better to actually think about the purpose of the check.

They don't actually care what version of Redhat Linux you're running. What they're checking is if your version uses yum for package management, or its successor dnf. A more reliable check would be to simply see if dnf is a valid command, and if not, fallback to yum.
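That more reliable check could look something like this (a sketch, not part of the original installer):

```shell
# Prefer dnf when it exists, fall back to yum; no version parsing, no bc.
if command -v dnf >/dev/null 2>&1; then
    yumdnf="dnf"
else
    yumdnf="yum"
fi
echo "package manager: $yumdnf"
```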

Let's finish out the case statement:

*)
    exit 1
    ;;
esac

So if your system doesn't use an apt based package manager or a yum/dnf based package manager, this just bails at this point. No error message, just an error number. You know it failed, and you don't know why, and it failed after copying a bunch of crap around your system.

So first it mostly installs itself, then it checks to see if it can successfully install all of its dependencies. And if it fails, does it clean up the changes it made? You better believe it doesn't!

echo ""
echo "Initech BitLocker Loader has been installed to "${install_dir}" successfully."
echo "Run initech.bitlocker --help to learn more about Initech BitLocker Loader"

This is a pretty optimistic statement, and while yes, it has theoretically been installed to ${install_dir}, assuming that we've gotten this far, it's really installed to your /usr/bin directory.

The real extra super-special puzzle to me is that it interfaces with your package manager to install dependencies. But it also installs its own versions of libcrypto and libssl, which don't come from your package manager. Ignoring the fact that it probably installs them into the wrong places, it seems bad. Suspicious, bad, and troubling.

Jan didn't send us the uninstall script, and honestly, I assume there isn't one. But if there is one, you know it probably tries to do rm -rf /${SOME_VAR_THAT_MIGHT_BE_EMPTY} somewhere in there. Which, in consideration, is probably the safest way to uninstall this software anyway.
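For what it's worth, bash can at least turn that classic empty-variable disaster into a hard error. A sketch using a hypothetical INITECH_DIR variable:

```shell
# ${VAR:?message} aborts if VAR is unset or empty, so the rm can never
# collapse into "rm -rf /". Run in a subshell so the demo shell survives.
( rm -rf "/opt/${INITECH_DIR:?refusing to uninstall with empty INITECH_DIR}" ) \
    2>/dev/null && echo "removed" || echo "refused to run"
```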

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianUlrike Uhlig: Code reviews: from nitpicking to cooperation

After we gave our talk at DebConf 20, Doing things together, there were 5 minutes left for the live Q&A. Pollo asked a question that I think is interesting and deserves a longer answer: How can we still have a good code review process without making it a "you need to be perfect" scenario? I often find picky code reviews help me write better code.

I find it useful to first disentangle what code reviews are good for, how we do them, why we do them that way, and how we can potentially improve processes.

What are code reviews good for?

Code review and peer review are great methods for cooperation aiming at:

  • Ensuring that the code works as intended
  • Ensuring that the task was fully accomplished and no detail left out
  • Ensuring that no security issues have been introduced
  • Making sure the code fits the practices of the team and is understandable and maintainable by others
  • Sharing insights, transferring knowledge between code author and code reviewer
  • Helping the code author to learn to write better code

Looking at this list, the last point seems to be more like a nice side effect of all the other points. :)

How do code reviews happen in our communities?

It seems to be a common assumption that code reviews are—and have to be—picky and perfectionist. To me, this does not actually seem to be a necessity to accomplish the above mentioned goals. We might want to work with precision—a quality which is different from perfection. Perfection can hardly be a goal: perfection does not exist.

Perfectionist dynamics can lead to failing to call something "good enough" or "done". Sometimes, a disproportionate amount of time is invested in writing (several) code reviews for minor issues. In some cases, strong perfectionist dynamics of a reviewer can create a feeling of never being good enough along with a loss of self esteem for otherwise skilled code authors.

When do we cross the line?

When going from cooperation, precision, and learning to write better code, to nitpicking, we are crossing a line: nitpicking means to pedantically search for others' faults. For example, I once got one of my Git commits at work criticized merely for its commit message that was said to be "ugly" because I "use[d] the same word twice" in it.

When we are nitpicking, we might not give feedback in an appreciative, cooperative way, we become fault finders instead. From there it's a short way to operating on the level of blame.

Are you nitpicking to help or are you nitpicking to prove something? Motivations matter.

How can we improve code reviewing?

When we did something wrong, we can do better next time. When we are told that we are wrong, the underlying assumption is that we cannot change (See Brené Brown, The difference between blame and shame). We can learn to go beyond blame.

Negative feedback rarely leads to improvement if the environment in which it happens lacks general appreciation and confirmation. We can learn to give helpful feedback. It might be harder to create an appreciative environment in which negative feedback is a possibility for growth. One can think of it like of a relationship: in a healthy relationship we can tell each other when something does not work and work it out—because we regularly experience that we respect, value, and support each other.

To be able to work precisely, we need guidelines, tools, and time. It's not possible to work with precision if we are in a hurry, burnt out, or working under a permanent state of exception. The same is true for receiving picky feedback.

On DebConf's IRC channel, after our talk, marvil07 said: On picky code reviews, something that I find useful is automation on code reviews; i.e. when a bot is stating a list of indentation/style errors it feels less personal, and also saves time to humans to provide more insightful changes. Indeed, we can set up routines that do automatic fault checking (linting). We can set up coding guidelines. We can define what we call "done" or "good enough".

We can negotiate with each other how we would like code to be reviewed. For example, one could agree that a particularly perfectionist reviewer should point out only functional faults. They can spare their time and refrain from writing lengthy reviews about minor esthetic issues that have never made it into a guideline. If necessary, author and reviewer can talk about what can be improved on the long term during a retrospective. Or, on the contrary, one could explicitly ask for a particularly detailed review including all sorts of esthetic issues to learn the best practices of a team applied to one's own code.

In summary: let's not lose sight of what code reviews are good for, let's have a clear definition of "done", let's not confuse precision with perfection, let's create appreciative work environments, and negotiate with each other how reviews are made.

I'm sure you will come up with more ideas. Please do not hesitate to share them!

Planet DebianNorbert Preining: Social Equality and Free Software – BoF at DebConf20

Shortly after yesterday’s start of the Debian Conference 2020, I had the honor to participate in a BoF on social equality in free software, led by the OSI vice president and head of the FOSSASIA community, Hong Phuc Dang. The group of discussants consisted of OSS representatives from a wide variety of countries (India, Indonesia, China, Hong Kong, Germany, Vietnam, Singapore, Japan).

After a short introduction by Hong Phuc we turned to a self-introduction and “what is equality for me” round. This brought up already a wide variety of issues that need to be addressed if we want to counter inequality in free software (culture differences, language barriers, internet connection, access to services, onboarding difficulties, political restrictions, …).

Unfortunately, on-air time was rather restricted, but even after the DebConf related streaming time slot was finished, we continued discussing problems and possible approaches for another two hours. We have agreed to continue our collaboration and meetings in the hope that we, in particular the FOSSASIA community, can support those in need to counter inequality.

Concluding, I have to say I am very happy to be part of the FOSSASIA community – where real diversity is lived and everyone strives for and tries to increase social equality. In the DebConf IRC chat I was asked why at FOSSASIA we have about a 50:50 ratio between women and men, in contrast to the usual 10:90 predominant in most software communities including Debian. For me this boils down to many reasons, one being competent female leadership: Hong Phuc is inspiring and competent to a degree I haven’t seen in anyone else. Another reason is of course that software development is, especially in developing countries, one of the few “escape pods” for any gender, and thus fully embraced by normally underrepresented groups. Finally, but this is a typical chicken-and-egg problem, the FOSSASIA community is not doing any specific gender politics, but simply remains open and friendly to everyone. I think Debian, and in particular the diversity movement in Debian, can learn a lot from the FOSSASIA community. At the end we are all striving for more equality in our projects and in the realm of free software as a whole!

Thanks again for all the participants for the very inspiring discussion, and I am looking forward to our next meetings!

Planet DebianArnaud Rebillout: Send emails from your terminal with msmtp

In this tutorial, we'll configure everything needed to send emails from the terminal. We'll use msmtp, a lightweight SMTP client. For the sake of the example, we'll use a GMail account, but any other email provider can do. Your OS is expected to be Debian, as usual on this blog, although it doesn't really matter. We will also see how to store the credentials for the email account in the system keyring. And finally, we'll go the extra mile, and see how to configure various command-line utilities so that they automatically use msmtp to send emails. Even better, we'll make msmtp the default email sender, to actually avoid configuring these utilities one by one.


Strong prerequisites (if you don't recognize yourself here, you probably landed on the wrong page):

  • You run Linux on your computer (let's assume a Debian-like distro).
  • You want to send emails from your terminal.

Weak prerequisites (if your setup doesn't match those points exactly, that's fine, you can still read on):

  • Your email account is a GMail one.
  • Your desktop environment is GNOME.

GMail account setup

For a GMail account, there's a bit of configuration to do. For other email providers, I have no idea, maybe you can just skip this part, or maybe you will have to go through a similar procedure.

If you want an external program (msmtp in this case) to talk to the GMail servers on your behalf, and send emails, you can't just use your usual GMail password. Instead, GMail requires you to generate so-called app passwords, one for each application that needs to access your GMail account.

This approach has several advantages:

  • it will basically work, GMail won't block you because it thinks that you're trying to sign in from an unknown device, a weird location or whatever.
  • your main GMail password remains secret, you won't have to write it down in any configuration file or anywhere else.
  • you can change your main GMail password, no breakage, apps will still work as each of them uses its own password.
  • you can revoke an app password anytime, without impacting anything else.

So app passwords are a good idea, it just requires a bit of work to set it up. Let's see what it takes.

First, 2-Step Verification must be enabled on your GMail account. Visit, and if that's not the case, enable it. You'll need to authorize all of your devices (computer(s), phone(s) and so on), and it can be a bit tedious, granted. But you only have to do it once in a lifetime, and after it's done, you're left with a more secure account, so it's not that bad, right?

Enabling the 2-Step Verification will unlock the feature we need: App passwords. Visit, and under "Signing in to Google", click "App passwords", and generate one. An app password is a 16 characters string, something like qwertyuiopqwerty. It's supposed to be used from only one place, ie. from ONE application that is installed on ONE device. That's why it's common to give it a name of the form application@device, so in our case it could be msmtp@laptop, but really it's free form, choose whatever name suits you, as long as it makes sense to you.

So let's give a name to this app password, write it down for now, and we're done with the GMail config.

Send your first email

Time to get started with msmtp.

First thing first, installation, trivial:

sudo apt install msmtp

Let's try to send an email. At this point, we did not create any configuration file for msmtp yet, so we have to provide every details on the command line.

# Write a dummy email
cat << EOF > message.txt
From: YOUR_LOGIN@gmail.com
To: DEST_ADDRESS@example.com
Subject: Cafe Sua Da

Iced-coffee with condensed milk
EOF

# Send it
cat message.txt | msmtp \
    --auth=on --tls=on \
    --host smtp.gmail.com \
    --port 587 \
    --user YOUR_LOGIN \
    --read-envelope-from \
    --read-recipients

# msmtp prompts you for your password:
# this is where the app password goes!

Obviously, in this example you should replace the uppercase words with the real thing, that is, your email login, and real email addresses.

Also, let me insist, you must enter the app password that was generated previously, not your real GMail password.

And it should work already, this email should have been sent and received by now.

So let me explain quickly what happened here.

In the file message.txt, we provided From: (the email address of the person sending the email) and To: (the destination email address). Then we asked msmtp to re-use those values to set the envelope of the email with --read-envelope-from and --read-recipients.

What about the other parameters?

  • --auth=on because we want to authenticate with the server.
  • --tls=on because we want to make sure that the communication with the server is encrypted.
  • --host and --port tell msmtp where to find the server. If you don't use GMail, adjust these accordingly.
  • --user is obviously your GMail username.

For more details, you should refer to the msmtp documentation.
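For the curious, these flags boil down to a standard SMTP submission over STARTTLS. Here is a rough sketch of the same exchange using Python's smtplib; the host, login, addresses and password are placeholders, not real values:

```python
# A rough sketch of the SMTP submission that the msmtp flags above
# configure, using Python's standard smtplib and email modules.
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    """Assemble a message equivalent to our message.txt."""
    msg = EmailMessage()
    msg["From"] = sender          # picked up by --read-envelope-from
    msg["To"] = recipient         # picked up by --read-recipients
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send(msg, host, port=587, user=None, password=None):
    """--auth=on --tls=on --host ... --port 587 --user ..., in smtplib terms."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()             # --tls=on (STARTTLS on port 587)
        smtp.login(user, password)  # --auth=on; the app password goes here
        smtp.send_message(msg)      # envelope derived from From:/To: headers
```

Calling send(msg, "SMTP_HOST", 587, "YOUR_LOGIN", "YOUR_APP_PASSWORD") would perform the same submission as the msmtp command above; as before, the uppercase words are placeholders.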

Write a configuration file

So we could send an email; that's cool already.

However, the command to do that was a bit long, and we don't want to juggle all these arguments every time we send an email. So let's write all of that down into a configuration file.

msmtp supports two locations: ~/.msmtprc and ~/.config/msmtp/config, at your preference. In this tutorial we'll use ~/.msmtprc for brevity:

cat << 'EOF' > ~/.msmtprc
defaults
tls on

account gmail
auth on
port 587

account default : gmail
EOF

And for a quick explanation:

  • under defaults are the default values for all the following accounts.
  • under account are the settings specific to this account, until another account line is found.
  • finally, the last line defines which account is the default.
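To illustrate that layering, here is a toy parser — hypothetical code, not how msmtp actually does it — that applies the defaults / account / account default logic to a configuration string:

```python
# Toy illustration of msmtp's config layering: settings under
# "defaults" are inherited by every subsequent "account" section,
# and "account default : NAME" picks the default account.
def parse_msmtprc(text):
    defaults, accounts, default_name = {}, {}, None
    current = defaults
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                              # skip blanks and comments
        if line == "defaults":
            current = defaults
        elif line.startswith("account default :"):
            default_name = line.split(":", 1)[1].strip()
        elif line.startswith("account "):
            name = line.split(None, 1)[1]
            accounts[name] = dict(defaults)       # inherit the defaults
            current = accounts[name]
        else:
            key, _, value = line.partition(" ")
            current[key] = value.strip()
    return accounts, default_name
```

Feeding it the configuration above yields a "gmail" account that combines the shared tls setting with its own auth and port values.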

All in all it's pretty simple, and it's becoming easier to send an email:

# Write a dummy email. Note that the
# header 'From:' is no longer needed,
# it's already in '~/.msmtprc'.
cat << 'EOF' > message.txt
Subject: Flat White

The milky way for coffee
EOF

# Send it
cat message.txt | msmtp \
    --account default \
    --read-recipients

Actually, --account default is not needed, as it's the default anyway if you don't provide a --account argument. Furthermore, --read-recipients can be shortened to -t. So we can make it really short now:

msmtp -t < message.txt
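In case you wonder what -t does exactly: it tells msmtp to read the envelope recipients from the message headers. The same logic can be sketched with Python's email module:

```python
# What the sendmail-style -t (--read-recipients) flag does: the
# envelope recipients come from the To:/Cc:/Bcc: headers of the
# message itself rather than from the command line.
from email import message_from_string
from email.utils import getaddresses

def recipients_from_headers(raw_message):
    msg = message_from_string(raw_message)
    headers = (msg.get_all("To", []) +
               msg.get_all("Cc", []) +
               msg.get_all("Bcc", []))
    # getaddresses() splits "Name <addr>" pairs; keep only the addresses
    return [addr for _name, addr in getaddresses(headers)]
```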

At this point, life is good! Except for one thing maybe: we still have to type the password every time we send an email. Surely it must be possible to avoid that annoyance...

Store your password in the system keyring

For this part, we'll make use of the libsecret tool to store the password in the system keyring via the Secret Service API. It means that your desktop environment should implement the Secret Service specification, which is the case for both GNOME and KDE.

Note that GNOME provides Seahorse to have a look at your secrets, KDE has the KDE Wallet. There's also KeePassXC, which I have only heard of but never used. I guess it can be your password manager of choice if you use neither GNOME nor KDE.

For those running an up-to-date Debian unstable, you should have msmtp >= 1.8.11-2, and you're all good to go. For those having an older version than that however, you will have to install the package msmtp-gnome in order to have msmtp built with libsecret support. Note that this package depends on seahorse, hence it pulls in a good part of the GNOME stack when you install it. For those not running GNOME, that's unfortunate. All of this was discussed and fixed in #962689.

Alright! So let's just make sure that the libsecret tools are installed:

sudo apt install libsecret-tools

And now we can store our password in the system keyring with this command:

secret-tool store --label msmtp \
    host \
    service smtp \
    user YOUR_LOGIN
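Under the hood, the password is stored as a secret with a set of lookup attributes (host, service, user), and it is retrieved by matching those attributes. A toy model of that matching — purely illustrative, not the actual Secret Service API:

```python
# Toy model of the attribute matching performed when a secret stored
# with `secret-tool store ... host X service smtp user Y` is looked
# up again later by a client such as msmtp.
class Keyring:
    def __init__(self):
        self._items = []  # list of (attributes, secret) pairs

    def store(self, secret, **attributes):
        self._items.append((attributes, secret))

    def lookup(self, **attributes):
        # Return the first secret whose stored attributes all match.
        for attrs, secret in self._items:
            if all(attrs.get(k) == v for k, v in attributes.items()):
                return secret
        return None
```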

If this looks a bit too magic, and you want something more visual, you can actually fire up a GUI like seahorse (for GNOME users), or kwalletmanager5 (for KDE users), and then you will see what passwords are stored in there.

Here's a screenshot of Seahorse, with a msmtp password stored:

seahorse with msmtp password

Let's try to send an email again:

msmtp -t < message.txt

No need for a password anymore, msmtp got it from the system keyring!

For more details on how msmtp handles passwords, and to see what other methods are supported, refer to the extensive documentation.

Use-cases and integration

Let's go over a few use-cases, situations where you might end up sending emails from the command-line, and what configuration is required to make it work with msmtp.

Git Send-Email

Sending emails with git is a common workflow for some projects, like the Linux kernel. How does git send-email actually send emails? From the git-send-email manual page:

the built-in default is to search for sendmail in /usr/sbin, /usr/lib and $PATH if such program is available

It is possible to override this default though:

[...] Alternatively it can specify a full pathname of a sendmail-like program instead; the program must support the -i option.

So in order to use msmtp here, you'd add a snippet like that to your ~/.gitconfig file:

[sendemail]
    smtpserver = /usr/bin/msmtp

For a full guide, you can also refer to

Debian developer tools

Tools like bts or reportbug are also good examples of command-line tools that need to send emails.

From the bts manual page:

Specify the sendmail command [...] Default is /usr/sbin/sendmail.

So if you want bts to send emails with msmtp instead of sendmail, you must use bts --sendmail='/usr/bin/msmtp -t'.

Note that bts also loads settings from the file /etc/devscripts.conf and ~/.devscripts, so you could also set BTS_SENDMAIL_COMMAND='/usr/bin/msmtp -t' in one of those files.

From the reportbug manual page:

Specify an alternate MTA, instead of /usr/sbin/sendmail (the default).

In order to use msmtp here, you'd write reportbug --mta=/usr/bin/msmtp.

Note that reportbug reads its settings from /etc/reportbug.conf and ~/.reportbugrc, so you could as well set mta /usr/bin/msmtp in one of those files.

So who is this sendmail again?

By now, you probably noticed that sendmail seems to be considered the default tool for the job, the "traditional" command that has been around for ages.

Rather than configuring every tool to use something other than sendmail, wouldn't it be simpler to replace sendmail with msmtp altogether? For example, create a symlink that points to msmtp, something like ln -sr /usr/bin/msmtp /usr/sbin/sendmail, so that msmtp acts as a drop-in replacement for sendmail, and there's nothing else to configure?

The answer is yes, kind of. Actually, the first msmtp feature listed on the homepage is "Sendmail compatible interface (command line options and exit codes)". So msmtp being a drop-in replacement for sendmail seems to be the intent.

However, you should refrain from creating or modifying anything in /usr, as it's the territory of the package manager, apt. Any change in /usr might be overwritten by apt the next time you run an upgrade or install new packages.

In the case of msmtp, there is actually a package named msmtp-mta that will create this symlink for you. So if you really want a definitive replacement for sendmail, there you go:

sudo apt install msmtp-mta

From this point, sendmail is now a symlink /usr/sbin/sendmail → /usr/bin/msmtp, and there's no need to configure git, bts, reportbug or any other tool that would rely on sendmail. Everything should work "out of the box".


I hope that you enjoyed reading this article! If you have any comment, feel free to send me a short email, preferably from your terminal!


Planet DebianEnrico Zini: Doing things /together/

Here are the slides of mine and Ulrike's talk Doing things /together/.

Our thoughts about cooperation aspects of doing things together.

Sometimes in Debian we do work together with others, and sometimes we are a number of people who work alone, and happen to all upload their work in the same place.

In times when we have needed to take important decisions together, this distinction has become crucial, and some of us might have found that we were not as good at cooperation as we would have thought.

This talk is intended for everyone who is part of a larger community. We will show concepts and tools that we think could help understand and shape cooperation.

Video of the talk:

The slides have extensive notes: you can use ViewNotes in LibreOffice Impress to see them.

Here are the Inkscape sources for the graphs:

Here are links to resources quoted in the talk:

In the Q&A, pollo asked:

How can we still have a good code review process without making it a "you need to be perfect" scenario? I often find picky code reviews help me write better code.

Ulrike wrote a more detailed answer: Code reviews: from nitpicking to cooperation

Planet DebianVincent Bernat: Zero-Touch Provisioning for Cisco IOS

The official documentation on how to automatically upgrade and configure a Cisco switch running IOS on first boot, such as a Cisco Catalyst 2960-X Series switch, is scarce on details. This note explains how to configure the ISC DHCP Server for this purpose.

When booting for the first time, Cisco IOS sends a DHCP request on all ports:

Dynamic Host Configuration Protocol (Discover)
    Message type: Boot Request (1)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x0000117c
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address:
    Your (client) IP address:
    Next server IP address:
    Relay agent IP address:
    Client MAC address: Cisco_6c:12:c0 (b4:14:89:6c:12:c0)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name not given
    Magic cookie: DHCP
    Option: (53) DHCP Message Type (Discover)
    Option: (57) Maximum DHCP Message Size
    Option: (61) Client identifier
        Length: 25
        Type: 0
        Client Identifier: cisco-b414.896c.12c0-Vl1
    Option: (55) Parameter Request List
        Length: 12
        Parameter Request List Item: (1) Subnet Mask
        Parameter Request List Item: (66) TFTP Server Name
        Parameter Request List Item: (6) Domain Name Server
        Parameter Request List Item: (15) Domain Name
        Parameter Request List Item: (44) NetBIOS over TCP/IP Name Server
        Parameter Request List Item: (3) Router
        Parameter Request List Item: (67) Bootfile name
        Parameter Request List Item: (12) Host Name
        Parameter Request List Item: (33) Static Route
        Parameter Request List Item: (150) TFTP Server Address
        Parameter Request List Item: (43) Vendor-Specific Information
        Parameter Request List Item: (125) V-I Vendor-specific Information
    Option: (255) End

It requests a number of options, including the Bootfile name option 67, the TFTP server address option 150 and the Vendor-Identifying Vendor-Specific Information Option 125—or VIVSO. Option 67 provides the name of the configuration file located on the TFTP server identified by option 150. Option 125 includes the name of the file describing the Cisco IOS image to use to upgrade the switch. This file only contains the name of the tarball embedding the image.1

Configuring the ISC DHCP Server to answer with the TFTP server address and the name of the configuration file is simple enough:

filename "";
option tftp-server-address;

However, if you want to also provide the image for upgrade, you have to specify a hexadecimal-encoded string:2

option vivso 00:00:00:09:24:05:22:63:32:39:36:30:2d:6c:61:6e:62:61:73:65:6b:39:2d:74:61:72:2e:31:35:30:2d:32:2e:53:45:31:31:2e:74:78:74;
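Rather than pasting a hexadecimal string, you can compute it. The following sketch — illustrative code, not part of any DHCP tooling — builds the same payload from the layout described in footnote 2: a 4-byte enterprise number (9 for Cisco), the data length, the auto-update sub-option (5), the sub-option length, and finally the filename:

```python
# Building the Cisco option 125 (VIVSO) payload programmatically,
# following the layout: enterprise number | data length |
# sub-option 5 | sub-option length | filename.
import struct

def cisco_vivso(filename):
    name = filename.encode("ascii")
    suboption = bytes([5, len(name)]) + name             # sub-option 5
    return struct.pack("!I", 9) + bytes([len(suboption)]) + suboption

payload = cisco_vivso("c2960-lanbasek9-tar.150-2.SE11.txt")
hex_string = ":".join("%02x" % b for b in payload)       # dhcpd syntax
```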

Having a large hexadecimal-encoded string inside a configuration file is quite unsatisfying. Instead, the ISC DHCP Server allows you to express this information in a more readable way using the option space statement:

# Create option space for Cisco and encapsulate it in VIVSO/vendor space
option space cisco code width 1 length width 1;
option code 5 = text;
option code 9 = encapsulate cisco;

# Image description for Cisco IOS ZTP
option = "c2960-lanbasek9-tar.150-2.SE11.txt";

# Workaround for VIVSO option 125 not being sent
option vendor.iana code 0 = string;
option vendor.iana = 01:01:01;

Without the workaround mentioned in the last block, the ISC DHCP Server would not send back option 125. With such a configuration, it returns the following answer, including a harmless additional enterprise 0 encapsulated into option 125:

Dynamic Host Configuration Protocol (Offer)
    Message type: Boot Reply (2)
    Hardware type: Ethernet (0x01)
    Hardware address length: 6
    Hops: 0
    Transaction ID: 0x0000117c
    Seconds elapsed: 0
    Bootp flags: 0x8000, Broadcast flag (Broadcast)
    Client IP address:
    Your (client) IP address:
    Next server IP address:
    Relay agent IP address:
    Client MAC address: Cisco_6c:12:c0 (b4:14:89:6c:12:c0)
    Client hardware address padding: 00000000000000000000
    Server host name not given
    Boot file name:
    Magic cookie: DHCP
    Option: (53) DHCP Message Type (Offer)
    Option: (54) DHCP Server Identifier (
    Option: (51) IP Address Lease Time
    Option: (1) Subnet Mask (
    Option: (6) Domain Name Server
    Option: (3) Router
    Option: (150) TFTP Server Address
        Length: 4
        TFTP Server Address:
    Option: (125) V-I Vendor-specific Information
        Length: 49
        Enterprise: Reserved (0)
        Enterprise: ciscoSystems (9)
            Length: 36
            Option 125 Suboption: 5
                Length: 34
                Data: 63323936302d6c616e626173656b392d7461722e3135302d…
    Option: (255) End

  1. The reason for this indirection still puzzles me. I suppose it could be because updating the image name directly in option 125 is quite a hassle. ↩︎

  2. It contains the following information:

    • 0x00000009: Cisco’s Enterprise Number,
    • 0x24: length of the enclosed data,
    • 0x05: Cisco’s auto-update sub-option,
    • 0x22: length of the sub-option data, and
    • filename of the image description (c2960-lanbasek9-tar.150-2.SE11.txt).


Planet DebianPhilipp Kern: Self-service buildd givebacks now use Salsa auth

As client certificates are on the way out and Debian's SSO solution is effectively not maintained any longer, I switched self-service buildd givebacks over to Salsa authentication. It lives again at For authorization you still need to be in the "debian" group for now, i.e. be a regular Debian member.

For convenience the package status web interface now features an additional column "Actions" with generated "giveback" links.

Please remember to file bugs if you give builds back because of flakiness of the package rather than the infrastructure and resist the temptation to use this excessively to let your package migrate. We do not want to end up with packages that require multiple givebacks to actually build in stable, as that would hold up both security and stable updates needlessly and complicate development.


Planet DebianNorbert Preining: Converting html to mp4

Such an obvious problem: convert a piece of html/js/css, often with animations, to a video (mp4 or similar). We were confronted with this problem for the TUG 2020 online conference. Searching the internet mostly turned up web services, some of them quite expensive. In the end (I will give a short history below) it turned out to be rather simple.

The key is to use timesnap, a tool to take screenshots from web pages. It is actively maintained, and internally uses puppeteer, which in turn uses Google Chrome browser headless. This also means that rendering quality is very high.

So having an html file available, with all the necessary assets, either online or local, one simply creates enough single screenshots per second so that they can be assembled later on into a video with ffmpeg.

In our case, we wanted our leaders to last 10 seconds before the actual presentation video starts. I decided to render at 30fps, which left me with this simple invocation:

timesnap Leader.html --viewport=1920,1080 --fps=30 --duration=10 --output-pattern="leader-%03d.png"

followed by conversion of the various png images to an mp4:

ffmpeg -r 30 -f image2 -s 1920x1080 -i leader-%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p leader.mp4

The -r is the fps, so it needs to agree with the --fps above. The --viewport and -s values should also agree. -crf is the video quality, and -pix_fmt the pixel format.
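Since the --fps/-r and --viewport/-s values must stay in sync, one way to avoid drift is to derive both command lines from a single set of parameters. A small helper, purely illustrative:

```python
# Derive both the timesnap and the ffmpeg command lines from one set
# of values, so --fps/-r and --viewport/-s can never disagree.
def build_commands(page, width, height, fps, duration, basename):
    pattern = f"{basename}-%03d.png"
    timesnap = (f"timesnap {page} --viewport={width},{height} "
                f"--fps={fps} --duration={duration} "
                f'--output-pattern="{pattern}"')
    ffmpeg = (f"ffmpeg -r {fps} -f image2 -s {width}x{height} "
              f"-i {pattern} -vcodec libx264 -crf 25 "
              f"-pix_fmt yuv420p {basename}.mp4")
    return timesnap, ffmpeg
```

build_commands("Leader.html", 1920, 1080, 30, 10, "leader") reproduces the two invocations above.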

With that very simple and quick invocation a nice leader video was ready!


It was actually more complicated than usual. For similar problems, it usually takes me about 5 minutes of googling and a bit of scripting, but this time it was a long way. Simply searching for “convert html to mp4” doesn’t give much but web services, often paid for. At some point I came up with the idea to use Electron, which led me to Electron Recorder; it looked promising, but didn’t work.

A bit more searching led me to PhantomJS, which is not developed anymore, but there was some explanation of how to dump frames using phantomjs and merge them using ffmpeg, very similar to the above. Unfortunately, the rendering of the html page by phantomjs was broken, and thus not usable.

Thus I ventured off into searching for alternatives of PhantomJS, which brought me to puppeteer, and from there it wasn’t too long a way that pointed me at timesnap.

It still surprises me that such a basic task is not well documented, so hopefully this page helps some users.

Planet DebianJelmer Vernooij: Debian Janitor: > 60,000 Lintian Issues Automatically Fixed

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Scheduling Lintian Fixes

To determine which packages to process, the Janitor looks at the import of lintian output across the archive that is available in UDD [1]. It prioritizes the packages with the most, and most severe, issues that it has fixers for.

Once a package is selected, the Janitor clones the packaging repository and runs lintian-brush on it. Lintian-brush provides a framework for applying a set of “fixers” to a package: it runs each fixer in a pristine version of the repository and handles most of the heavy lifting.

The Inner Workings of a Fixer

Each fixer is just an executable which gets run in a clean checkout of the package, and can make changes there. Most of the fixers are written in Python or shell, but they can be in any language.

The contract for fixers is pretty simple:

  • If the fixer exits with non-zero, the changes are reverted and the fixer is considered to have failed
  • If it exits with zero and made changes, then it should write a summary of its changes to standard out

If a fixer is uncertain about the changes it has made, it should report so on standard output using a pseudo-header. By default, lintian-brush will discard any changes with uncertainty but if you are running it locally you can still apply them by specifying --uncertain.

The summary message on standard out will be used for the commit message and (possibly) the changelog message, if the package doesn’t use gbp dch.
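The contract above can be sketched as a small driver — hypothetical code, not the actual lintian-brush implementation:

```python
# A hypothetical driver for the fixer contract: run a fixer executable
# in a checkout, treat a non-zero exit as failure (the caller reverts
# the tree), and use standard output as the change summary.
import subprocess

def run_fixer(argv, checkout_dir):
    result = subprocess.run(argv, cwd=checkout_dir,
                            capture_output=True, text=True)
    if result.returncode != 0:
        return None                           # failed: changes get reverted
    return result.stdout.strip() or None      # summary for the commit message
```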

Example Fixer

Let’s look at an example. The package priority “extra” is deprecated since Debian Policy 4.0.1 (released August 2 017) – see Policy 2.5 "Priorities". Instead, most packages should use the “optional” priority.

Lintian will warn when a package uses the deprecated “extra” value for the “Priority” field – the associated tag is priority-extra-is-replaced-by-priority-optional. Lintian-brush has a fixer script that can automatically replace “extra” with “optional”.

On systems that have lintian-brush installed, the source for the fixer lives in /usr/share/lintian-brush/fixers/, but here is a copy of it for reference:


from debmutate.control import ControlEditor
from lintian_brush.fixer import report_result, fixed_lintian_tag

with ControlEditor() as updater:
    for para in updater.paragraphs:
        if para.get("Priority") == "extra":
            para["Priority"] = "optional"
            fixed_lintian_tag(
                para, 'priority-extra-is-replaced-by-priority-optional')

report_result("Change priority extra to priority optional.")

This fixer is written in Python and uses the debmutate library to easily modify control files while preserving formatting — or back out if it is not possible to preserve formatting.

All the current fixers come with tests, e.g. for this particular fixer the tests can be found here:

For more details on writing new fixers, see the README for lintian-brush.

For more details on debugging them, see the manual page.

Successes by fixer

Here is a list of the fixers currently available, with the number of successful merges/pushes per fixer:

Lintian Tag Previously merged/pushed Ready but not yet merged/pushed
uses-debhelper-compat-file 4906 4161
upstream-metadata-file-is-missing 4281 3841
package-uses-old-debhelper-compat-version 4256 3617
upstream-metadata-missing-bug-tracking 2438 2995
out-of-date-standards-version 2062 2936
upstream-metadata-missing-repository 1936 2987
trailing-whitespace 1720 2295
insecure-copyright-format-uri 1791 1093
package-uses-deprecated-debhelper-compat-version 1391 1287
vcs-obsolete-in-debian-infrastructure 872 782
homepage-field-uses-insecure-uri 527 1111
vcs-field-not-canonical 850 655
debian-changelog-has-wrong-day-of-week 224 376
debian-watch-uses-insecure-uri 314 242
useless-autoreconf-build-depends 112 428
priority-extra-is-replaced-by-priority-optional 315 194
debian-rules-contains-unnecessary-get-orig-source-target 35 428
tab-in-license-text 125 320
debian-changelog-line-too-long 186 190
debian-rules-sets-dpkg-architecture-variable 69 166
debian-rules-uses-unnecessary-dh-argument 42 182
package-lacks-versioned-build-depends-on-debhelper 125 95
unversioned-copyright-format-uri 43 136
package-needs-versioned-debhelper-build-depends 127 50
binary-control-field-duplicates-source 34 134
renamed-tag 73 69
vcs-field-uses-insecure-uri 14 109
uses-deprecated-adttmp 13 91
debug-symbol-migration-possibly-complete 12 88
copyright-refers-to-symlink-license 51 48
debian-control-has-unusual-field-spacing 33 66
old-source-override-location 32 62
out-of-date-copyright-format 20 62
public-upstream-key-not-minimal 43 30
older-source-format 17 54
custom-compression-in-debian-source-options 12 57
copyright-refers-to-versionless-license-file 29 39
tab-in-licence-text 33 31
global-files-wildcard-not-first-paragraph-in-dep5-copyright 28 33
out-of-date-copyright-format-uri 9 50
field-name-typo-dep5-copyright 29 29
copyright-does-not-refer-to-common-license-file 13 42
debhelper-but-no-misc-depends 9 45
debian-watch-file-is-missing 11 41
debian-control-has-obsolete-dbg-package 8 40
possible-missing-colon-in-closes 31 13
unnecessary-testsuite-autopkgtest-field 32 9
missing-debian-source-format 7 33
debhelper-tools-from-autotools-dev-are-deprecated 9 29
vcs-field-mismatch 8 29
debian-changelog-file-contains-obsolete-user-emacs-setting 33 0
patch-file-present-but-not-mentioned-in-series 24 9
copyright-refers-to-versionless-license-file 22 9
debian-control-has-empty-field 25 6
missing-build-dependency-for-dh-addon 10 20
obsolete-field-in-dep5-copyright 15 13
xs-testsuite-field-in-debian-control 20 7
ancient-python-version-field 13 12
unnecessary-team-upload 19 5
misspelled-closes-bug 6 16
field-name-typo-in-dep5-copyright 1 20
transitional-package-not-oldlibs-optional 4 17
maintainer-script-without-set-e 9 11
dh-clean-k-is-deprecated 4 14
no-dh-sequencer 14 4
missing-vcs-browser-field 5 12
space-in-std-shortname-in-dep5-copyright 6 10
xc-package-type-in-debian-control 4 11
debian-rules-missing-recommended-target 4 10
desktop-entry-contains-encoding-key 1 13
build-depends-on-obsolete-package 4 9
license-file-listed-in-debian-copyright 1 12
missing-built-using-field-for-golang-package 9 4
unused-license-paragraph-in-dep5-copyright 4 7
missing-build-dependency-for-dh_command 6 4
comma-separated-files-in-dep5-copyright 3 6
systemd-service-file-refers-to-var-run 4 5
copyright-not-using-common-license-for-apache2 3 5
debian-tests-control-autodep8-is-obsolete 2 6
dh-quilt-addon-but-quilt-source-format 2 6
no-homepage-field 3 5
font-packge-not-multi-arch-foreign 1 6
homepage-in-binary-package 1 4
vcs-field-bitrotted 1 3
built-using-field-on-arch-all-package 2 1
copyright-should-refer-to-common-license-file-for-apache-2 1 2
debian-pyversions-is-obsolete 3 0
debian-watch-file-uses-deprecated-githubredir 1 1
executable-desktop-file 1 1
skip-systemd-native-flag-missing-pre-depends 1 1
vcs-field-uses-not-recommended-uri-format 1 1
init.d-script-needs-depends-on-lsb-base 1 0
maintainer-also-in-uploaders 1 0
public-upstream-keys-in-multiple-locations 1 0
wrong-debian-qa-group-name 1 0
Total 29656 32209


[1] Temporarily unavailable due to Debian bug #960156 – but the Janitor is relying on historical data

For more information about the Janitor's lintian-fixes efforts, see the landing page


CryptogramFriday Squid Blogging: Rhode Island's State Appetizer Is Calamari

Rhode Island has an official state appetizer, and it's calamari. Who knew?

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityFBI, CISA Echo Warnings on ‘Vishing’ Threat

The Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) on Thursday issued a joint alert to warn about the growing threat from voice phishing or “vishing” attacks targeting companies. The advisory came less than 24 hours after KrebsOnSecurity published an in-depth look at a crime group offering a service that people can hire to steal VPN credentials and other sensitive data from employees working remotely during the Coronavirus pandemic.

“The COVID-19 pandemic has resulted in a mass shift to working from home, resulting in increased use of corporate virtual private networks (VPNs) and elimination of in-person verification,” the alert reads. “In mid-July 2020, cybercriminals started a vishing campaign—gaining access to employee tools at multiple companies with indiscriminate targeting — with the end goal of monetizing the access.”

As noted in Wednesday’s story, the agencies said the phishing sites set up by the attackers tend to include hyphens, the target company’s name, and certain words — such as “support,” “ticket,” and “employee.” The perpetrators focus on social engineering new hires at the targeted company, and impersonate staff at the target company’s IT helpdesk.

The joint FBI/CISA alert (PDF) says the vishing gang also compiles dossiers on employees at the specific companies using mass scraping of public profiles on social media platforms, recruiter and marketing tools, publicly available background check services, and open-source research. From the alert:

“Actors first began using unattributed Voice over Internet Protocol (VoIP) numbers to call targeted employees on their personal cellphones, and later began incorporating spoofed numbers of other offices and employees in the victim company. The actors used social engineering techniques and, in some cases, posed as members of the victim company’s IT help desk, using their knowledge of the employee’s personally identifiable information—including name, position, duration at company, and home address—to gain the trust of the targeted employee.”

“The actors then convinced the targeted employee that a new VPN link would be sent and required their login, including any 2FA [2-factor authentication] or OTP [one-time passwords]. The actor logged the information provided by the employee and used it in real-time to gain access to corporate tools using the employee’s account.”

The alert notes that in some cases the unsuspecting employees approved the 2FA or OTP prompt, either accidentally or believing it was the result of the earlier access granted to the help desk impersonator. In other cases, the attackers were able to intercept the one-time codes by targeting the employee with SIM swapping, which involves social engineering people at mobile phone companies into giving them control of the target’s phone number.

The agencies said crooks use the vished VPN credentials to mine the victim company databases for their customers’ personal information to leverage in other attacks.

“The actors then used the employee access to conduct further research on victims, and/or to fraudulently obtain funds using varying methods dependent on the platform being accessed,” the alert reads. “The monetizing method varied depending on the company but was highly aggressive with a tight timeline between the initial breach and the disruptive cashout scheme.”

The advisory includes a number of suggestions that companies can implement to help mitigate the threat from these vishing attacks, including:

• Restrict VPN connections to managed devices only, using mechanisms like hardware checks or installed certificates, so user input alone is not enough to access the corporate VPN.

• Restrict VPN access hours, where applicable, to mitigate access outside of allowed times.

• Employ domain monitoring to track the creation of, or changes to, corporate, brand-name domains.

• Actively scan and monitor web applications for unauthorized access, modification, and anomalous activities.

• Employ the principle of least privilege and implement software restriction policies or other controls; monitor authorized user accesses and usage.

• Consider using a formalized authentication process for employee-to-employee communications made over the public telephone network where a second factor is used to
authenticate the phone call before sensitive information can be discussed.

• Improve 2FA and OTP messaging to reduce confusion about employee authentication attempts.

• Verify web links do not have misspellings or contain the wrong domain.

• Bookmark the correct corporate VPN URL and do not visit alternative URLs on the sole basis of an inbound phone call.

• Be suspicious of unsolicited phone calls, visits, or email messages from unknown individuals claiming to be from a legitimate organization. Do not provide personal information or information about your organization, including its structure or networks, unless you are certain of a person’s authority to have the information. If possible, try to verify the caller’s identity directly with the company.

• If you receive a vishing call, document the phone number of the caller as well as the domain that the actor tried to send you to and relay this information to law enforcement.

• Limit the amount of personal information you post on social networking sites. The internet is a public resource; only post information you are comfortable with anyone seeing.

• Evaluate your settings: sites may change their options periodically, so review your security and privacy settings regularly to make sure that your choices are still appropriate.

CryptogramYet Another Biometric: Bioacoustic Signatures

Sound waves through the body are unique enough to be a biometric:

"Modeling allowed us to infer what structures or material features of the human body actually differentiated people," explains Joo Yong Sim, one of the ETRI researchers who conducted the study. "For example, we could see how the structure, size, and weight of the bones, as well as the stiffness of the joints, affect the bioacoustics spectrum."


Notably, the researchers were concerned that the accuracy of this approach could diminish with time, since the human body constantly changes its cells, matrices, and fluid content. To account for this, they acquired the acoustic data of participants at three separate intervals, each 30 days apart.

"We were very surprised that people's bioacoustics spectral pattern maintained well over time, despite the concern that the pattern would change greatly," says Sim. "These results suggest that the bioacoustics signature reflects more anatomical features than changes in water, body temperature, or biomolecule concentration in blood that change from day to day."

It's not great. A 97% accuracy is worse than fingerprints and iris scans, and while they were able to reproduce the biometric in a month it almost certainly changes as we age, gain and lose weight, and so on. Still, interesting.

Worse Than FailureError'd: Just a Suggestion

"Sure thing Google, I guess I'll change my language to... let's see...Ah, how about English?" writes Peter G.


Marcus K. wrote, "Breaking news: tt tttt tt,ttt!"


Tim Y. writes, "Nothing makes my day more than someone accidentally leaving testing mode enabled (and yes, the test number went through!)"


"I guess even thinning brows and psoriasis can turn political these days," Lawrence W. wrote.


Strahd I. writes, "It was evident at the time that King George VI should have gone and asked for a V12 instead."


"Well, gee, ZDNet, why do you think I enabled this setting in the first place?" Jeroen V. writes.


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianReproducible Builds (diffoscope): diffoscope 157 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 157. This version includes the following changes:

[ Chris Lamb ]

* Try ostensibly "data" files named .pgp against pgpdump to determine whether
  they are PGP files. (Closes: reproducible-builds/diffoscope#211)
* Don't raise an exception when we encounter XML files with "<!ENTITY>"
  declarations inside the DTD, or when a DTD or entity references an external
  resource. (Closes: reproducible-builds/diffoscope#212)
* Temporarily drop gnumeric from Build-Depends as it has been removed from
  testing due to Python 2.x deprecation. (Closes: #968742)
* Codebase changes:
  - Add support for multiple file extension matching; we previously supported
    only a single extension to match.
  - Move generation of debian/tests/control.tmp to an external script.
  - Move to our assert_diff helper entirely in the PGP tests.
  - Drop some unnecessary control flow, unnecessary dictionary comprehensions
    and some unused imports found via pylint.
* Include the filename in the "... not identified by any comparator"
  logging message.

You can find out more by visiting the project homepage.


Planet DebianBits from Debian: Lenovo, Infomaniak, Google and Amazon Web Services (AWS), Platinum Sponsors of DebConf20

We are very pleased to announce that Lenovo, Infomaniak, Google and Amazon Web Services (AWS) have committed to supporting DebConf20 as Platinum sponsors.


As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.


Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).


Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.


Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, the largest enterprises and leading government agencies.

With these commitments as Platinum Sponsors, Lenovo, Infomaniak, Google and Amazon Web Services are helping to make our annual conference possible, and directly supporting the progress of Debian and Free Software, strengthening the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much for your support of DebConf20!

Participating in DebConf20 online

The 21st Debian Conference is being held online, due to COVID-19, from August 23rd to 29th, 2020. There are 7 days of activities, running from 10:00 to 01:00 UTC. Visit the DebConf20 website to learn about the complete schedule, watch the live streaming, and join the different communication channels for participating in the conference.

Planet DebianRitesh Raj Sarraf: LUKS Headless Laptop

As we grow old, so do our computing machines. And just as we don't decommission ourselves, the same should go for our machines. They should be semi-retired, delegating major tasks to newer machines while still handling less demanding work: file servers, UPnP servers, et cetera.

It is common on Debian-installer-based derivatives, and elsewhere too, to use block encryption on Linux. With machines from this decade, we've pretty much always had CPU extensions for encryption.

So, as would be the usual case, all my laptops are block encrypted. But as they reach the semi-retired, headless phase of their lives, it becomes cumbersome to keep feeding them a password, along with all the logistics that involves. As such, I wanted to get rid of the password prompt entirely.

Then there's also the case of bad or faulty hardware, much of which can temporarily be brought back to working order by a reset, which usually means rebooting the machine. I still recollect the words of my Linux guru, Dhiren Raj Bhandari: many unexplainable errors can be resolved by just rebooting the machine. That was more than 20 years ago, in the prime era of Microsoft Windows, and the context back then was quite different, but some bits of that saying still apply today.

So I wanted my laptop, which had LUKS set up on two disks, to go password-less. I stumbled across a slightly dated article where the author achieved similar results with a keyscript, so the thing was clearly doable.

To my delight, Debian's cryptsetup has the best setup and documentation in place to do this, by simply adding key files:

rrs@lenovo:~$ dd if=/dev/random of=sda7.key bs=1 count=512
512+0 records in
512+0 records out
512 bytes copied, 0.00540209 s, 94.8 kB/s
19:19 ♒♒♒   ☺ 😄    

rrs@lenovo:~$ dd if=/dev/random of=sdb1.key bs=1 count=512
512+0 records in
512+0 records out
512 bytes copied, 0.00536747 s, 95.4 kB/s
19:20 ♒♒♒   ☺ 😄    

rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sda7 sda7.key 
[sudo] password for rrs: 
Enter any existing passphrase: 
No key available with this passphrase.
19:20 ♒♒♒    ☹ 😟=> 2  

rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sda7 sda7.key 
Enter any existing passphrase: 
19:20 ♒♒♒   ☺ 😄    

rrs@lenovo:~$ sudo cryptsetup luksAddKey /dev/sdb1 sdb1.key 
Enter any existing passphrase: 
19:21 ♒♒♒   ☺ 😄    

and there's nice integration in crypttab to ensure your keys propagate to the initramfs:

rrs@lenovo:~$ cat /etc/cryptsetup-initramfs/conf-hook 
# Configuration file for the cryptroot initramfs hook.

# The value of this variable is interpreted as a shell pattern.
# Matching key files from the crypttab(5) are included in the initramfs
# image.  The associated devices can then be unlocked without manual
# intervention.  (For instance if /etc/crypttab lists two key files
# /etc/keys/{root,swap}.key, you can set KEYFILE_PATTERN="/etc/keys/*.key"
# to add them to the initrd.)
# If KEYFILE_PATTERN if null or unset (default) then no key file is
# copied to the initramfs image.
# Note that the glob(7) is not expanded for crypttab(5) entries with a
# 'keyscript=' option.  In that case, the field is not treated as a file
# name but given as argument to the keyscript.
# WARNING: If the initramfs image is to include private key material,
# you'll want to create it with a restrictive umask in order to keep
# non-privileged users at bay.  For instance, set UMASK=0077 in
# /etc/initramfs-tools/initramfs.conf

19:44 ♒♒♒   ☺ 😄    
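As a sketch of how the pieces fit together (the device names, UUID placeholders and key paths below are illustrative, not my actual layout): list the key files in /etc/crypttab, point KEYFILE_PATTERN at them, and rebuild the initramfs.

```
# /etc/crypttab (UUIDs abbreviated/illustrative)
sda7_crypt UUID=<uuid-of-sda7> /etc/keys/sda7.key luks
sdb1_crypt UUID=<uuid-of-sdb1> /etc/keys/sdb1.key luks

# /etc/cryptsetup-initramfs/conf-hook
KEYFILE_PATTERN="/etc/keys/*.key"

# /etc/initramfs-tools/initramfs.conf (keep key material private)
UMASK=0077
```

followed by a `sudo update-initramfs -u` to regenerate the image with the keys included.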

The whole thing took me around 20-25 minutes, including drafting this post. From Retired Head and Password Prompt to Headless and Password-less. The beauty of Debian and FOSS.

LongNowPeople slept on comfy grass beds 200,000 years ago

The oldest beds known to science now date back nearly a quarter of a million years: traces of silicate from woven grasses found in the back of Border Cave (in South Africa, which has a nearly continuous record of occupation dating back to 200,000 BCE).

Ars Technica reports:

Most of the artifacts that survive from more than a few thousand years ago are made of stone and bone; even wooden tools are rare. That means we tend to think of the Paleolithic in terms of hard, sharp stone tools and the bones of butchered animals. Through that lens, life looks very harsh—perhaps even harsher than it really was. Most of the human experience is missing from the archaeological record, including creature comforts like soft, clean beds.

Given recent work on the epidemic of modern orthodontic issues caused in part by sleeping with “bad oral posture” due to too-soft bedding, it seems like the bed may be another frontier for paleo re-thinking of high-tech life. (See also the controversies over barefoot running, prehistoric diets, and countless other forms of atavism emerging from our future-shocked society.) When technological innovation shuffles the “pace layers” of human existence, changing the built environment faster than bodies can adapt, sometimes comfort’s micro-scale horizon undermines the longer, slower beat of health.

Another plus to making beds of grass is their disposability and integration with the rest of ancient life:

Besides being much softer than the cave floor, these ancient beds were probably surprisingly clean. Burning dirty bedding would have helped cut down on problems with bedbugs, lice, and fleas, not to mention unpleasant smells. [Paleoanthropologist Lyn] Wadley and her colleagues suggest that people at Border Cave may even have raked some extra ashes in from nearby hearths ‘to create a clean, odor-controlled base for bedding.’

And charcoal found in the bedding layers includes bits of an aromatic camphor bush; some modern African cultures use another closely related camphor bush in their plant bedding as an insect repellent. The ash may have helped, too; Wadley and her colleagues note that ‘several ethnographies report that ash repels crawling insects, which cannot easily move through the fine powder because it blocks their breathing and biting apparatus and eventually leaves them dehydrated.’

Finding beds as old as Homo sapiens itself revives the (not quite as old) debate about what makes us human. Defining our humanity as “artists” or “ritualists” seems to weave together modern definitions of technology and craft, ceremony and expression, just as early people wove together sedges for a place to sleep. At least, they are the evidence of a much more holistic, integrated way of life — one that found every possible synergy between day and night, cooking and sleeping:

Imagine that you’ve just burned your old, stale bedding and laid down a fresh layer of grass sheaves. They’re still springy and soft, and the ash beneath is still warm. You curl up and breathe in the tingly scent of camphor, reassured that the mosquitoes will let you sleep in peace. Nearby, a hearth fire crackles and pops, and you stretch your feet toward it to warm your toes. You nudge aside a sharp flake of flint from the blade you were making earlier in the day, then drift off to sleep.

CryptogramCopying a Key by Listening to It in Action

Researchers are using recordings of keys being used in locks to create copies.

Once they have a key-insertion audio file, SpiKey's inference software gets to work filtering the signal to reveal the strong, metallic clicks as key ridges hit the lock's pins [and you can hear those filtered clicks online here]. These clicks are vital to the inference analysis: the time between them allows the SpiKey software to compute the key's inter-ridge distances and what locksmiths call the "bitting depth" of those ridges: basically, how deeply they cut into the key shaft, or where they plateau out. If a key is inserted at a nonconstant speed, the analysis can be ruined, but the software can compensate for small speed variations.

The result of all this is that SpiKey software outputs the three most likely key designs that will fit the lock used in the audio file, reducing the potential search space from 330,000 keys to just three. "Given that the profile of the key is publicly available for commonly used [pin-tumbler lock] keys, we can 3D-print the keys for the inferred bitting codes, one of which will unlock the door," says Ramesh.
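The core timing-to-geometry step is simple enough to sketch (a toy illustration with made-up numbers, assuming a known constant insertion speed; SpiKey's actual filtering and inference are far more involved):

```python
def inter_ridge_distances(click_times_s, insertion_speed_mm_s):
    # Given the timestamps of the filtered metallic clicks and an
    # assumed constant insertion speed, the gap between successive
    # clicks maps directly to the distance between key ridges.
    # (The real pipeline also filters the audio and compensates
    # for small speed variations.)
    gaps = [t2 - t1 for t1, t2 in zip(click_times_s, click_times_s[1:])]
    return [g * insertion_speed_mm_s for g in gaps]

# Six clicks heard 20 ms apart at an assumed 125 mm/s insertion
# speed imply ridges spaced roughly 2.5 mm apart.
print(inter_ridge_distances([0.00, 0.02, 0.04, 0.06, 0.08, 0.10], 125.0))
```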

Worse Than FailureCodeSOD: A Backwards For

Aurelia is working on a project where some of the code comes from a client. In this case, it appears that the client has very good reasons for hiring an outside vendor to actually build the application.

Imagine you have some Java code which needs to take an array of integers and iterate across them in reverse, to concatenate a string. Oh, and you need to add one to each item as you do this.

You might be thinking about some combination of a map/reverse plus String.join operation, or maybe a for loop with an i-- style decrement.

I’m almost certain you aren’t thinking about this.

public String getResultString(int numResults) {
	StringBuffer sb = null;
	for (int result[] = getResults(numResults); numResults-- > 0;) {
		int i = result[numResults];
		if( i == 0){
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);
		} else {
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);
		}
	}
	return sb.toString();
}

I really, really want you to look at that for loop: for (int result[] = getResults(numResults); numResults-- > 0;)

Just look at that. It’s… not wrong. It’s not… bad. It’s just written by an alien wearing a human skin suit. Our initializer actually populates the array we’re going to iterate across. Our bounds check also contains the decrement operation. We don’t have a decrement clause.

Then, if i == 0 we’ll do the exact same thing as if i isn’t 0, since our if and else branches contain the same code.

Increment i, and store the result in j. Why we don’t use the ++i or some other variation to be in-line with our weird for loop, I don’t know. Maybe they were done showing off.

Then, if our StringBuffer is null, we create one, otherwise we append a ",". This is one solution to the concatenator’s comma problem. Again, it’s not wrong, it’s just… unusual.
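For comparison, the intended behavior (iterate in reverse, add one to each item, comma-join) is close to a one-liner in most languages; a quick Python sketch, not the client's code:

```python
def get_result_string(results):
    # Reverse the results, add one to each element, join with commas
    return ",".join(str(i + 1) for i in reversed(results))

print(get_result_string([1, 2, 3]))  # -> "4,3,2"
```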

But this brings us to the thing which is actually, objectively, honestly bad. The indenting.

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);

Look at that last line. Does that make you angry? Look more closely. Look for the curly brackets. Oh, you don’t see any? Very briefly, when I was looking at this code, I thought, “Wait, does this discard the first item?” No, it just eschews brackets and then indents wrong to make sure we’re nice and confused when we look at the code.

It should read:

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
			sb.append(j);
[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


CryptogramUsing Disinformation to Cause a Blackout

Interesting paper: "How weaponizing disinformation can bring down a city's power grid":

Abstract: Social media has made it possible to manipulate the masses via disinformation and fake news at an unprecedented scale. This is particularly alarming from a security perspective, as humans have proven to be one of the weakest links when protecting critical infrastructure in general, and the power grid in particular. Here, we consider an attack in which an adversary attempts to manipulate the behavior of energy consumers by sending fake discount notifications encouraging them to shift their consumption into the peak-demand period. Using Greater London as a case study, we show that such disinformation can indeed lead to unwitting consumers synchronizing their energy-usage patterns, and result in blackouts on a city-scale if the grid is heavily loaded. We then conduct surveys to assess the propensity of people to follow-through on such notifications and forward them to their friends. This allows us to model how the disinformation may propagate through social networks, potentially amplifying the attack impact. These findings demonstrate that in an era when disinformation can be weaponized, system vulnerabilities arise not only from the hardware and software of critical infrastructure, but also from the behavior of the consumers.
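The mechanism can be caricatured in a few lines (all numbers below are illustrative, not taken from the paper):

```python
def peak_load_mw(base_profile_mw, flexible_mw, compliance_rate, peak_hour):
    # Toy model of the attack: a fraction of flexible demand is
    # lured into a single peak hour by fake discount notifications,
    # raising the maximum load the grid must serve.
    profile = list(base_profile_mw)
    profile[peak_hour] += flexible_mw * compliance_rate
    return max(profile)

# Illustrative numbers only: a flat 30 GW load, 2 GW of flexible
# demand, and 10% of consumers complying at hour 18.
print(peak_load_mw([30_000.0] * 24, 2_000.0, 0.10, 18))  # -> 30200.0
```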

I'm not sure the attack is practical, but it's an interesting idea.

Krebs on SecurityVoice Phishers Targeting Corporate VPNs

The COVID-19 epidemic has brought a wave of email phishing attacks that try to trick work-at-home employees into giving away credentials needed to remotely access their employers’ networks. But one increasingly brazen group of crooks is taking your standard phishing attack to the next level, marketing a voice phishing service that uses a combination of one-on-one phone calls and custom phishing sites to steal VPN credentials from employees.

According to interviews with several sources, this hybrid phishing gang has a remarkably high success rate, and operates primarily through paid requests or “bounties,” where customers seeking access to specific companies or accounts can hire them to target employees working remotely at home.

And over the past six months, the criminals responsible have created dozens if not hundreds of phishing pages targeting some of the world’s biggest corporations. For now at least, they appear to be focusing primarily on companies in the financial, telecommunications and social media industries.

“For a number of reasons, this kind of attack is really effective,” said Allison Nixon, chief research officer at New York-based cyber investigations firm Unit 221B. “Because of the Coronavirus, we have all these major corporations that previously had entire warehouses full of people who are now working remotely. As a result the attack surface has just exploded.”


A typical engagement begins with a series of phone calls to employees working remotely at a targeted organization. The phishers will explain that they’re calling from the employer’s IT department to help troubleshoot issues with the company’s virtual private networking (VPN) technology.

The employee phishing page bofaticket[.]com. Image:

The goal is to convince the target either to divulge their credentials over the phone or to input them manually at a website set up by the attackers that mimics the organization’s corporate email or VPN portal.

Zack Allen is director of threat intelligence for ZeroFOX, a Baltimore-based company that helps customers detect and respond to risks found on social media and other digital channels. Allen has been working with Nixon and several dozen other researchers from various security firms to monitor the activities of this prolific phishing gang in a bid to disrupt their operations.

Allen said the attackers tend to focus on phishing new hires at targeted companies, and will often pose as new employees themselves working in the company’s IT division. To make that claim more believable, the phishers will create LinkedIn profiles and seek to connect those profiles with other employees from that same organization to support the illusion that the phony profile actually belongs to someone inside the targeted firm.

“They’ll say ‘Hey, I’m new to the company, but you can check me out on LinkedIn’ or Microsoft Teams or Slack, or whatever platform the company uses for internal communications,” Allen said. “There tends to be a lot of pretext in these conversations around the communications and work-from-home applications that companies are using. But eventually, they tell the employee they have to fix their VPN and can they please log into this website.”


The domains used for these pages often invoke the company’s name, followed or preceded by hyphenated terms such as “vpn,” “ticket,” “employee,” or “portal.” The phishing sites also may include working links to the organization’s other internal online resources to make the scheme seem more believable if a target starts hovering over links on the page.

Allen said a typical voice phishing or “vishing” attack by this group involves at least two perpetrators: One who is social engineering the target over the phone, and another co-conspirator who takes any credentials entered at the phishing page and quickly uses them to log in to the target company’s VPN platform in real-time.

Time is of the essence in these attacks because many companies that rely on VPNs for remote employee access also require employees to supply some type of multi-factor authentication in addition to a username and password — such as a one-time numeric code generated by a mobile app or text message. And in many cases, those codes are only good for a short duration — often measured in seconds or minutes.

But these vishers can easily sidestep that layer of protection, because their phishing pages simply request the one-time code as well.

A phishing page (helpdesk-att[.]com) targeting AT&T employees. Image:

Allen said it matters little to the attackers if the first few social engineering attempts fail. Most targeted employees are working from home or can be reached on a mobile device. If at first the attackers don’t succeed, they simply try again with a different employee.

And with each passing attempt, the phishers can glean important details from employees about the target’s operations, such as company-specific lingo used to describe its various online assets, or its corporate hierarchy.

Thus, each unsuccessful attempt actually teaches the fraudsters how to refine their social engineering approach with the next mark within the targeted organization, Nixon said.

“These guys are calling companies over and over, trying to learn how the corporation works from the inside,” she said.


All of the security researchers interviewed for this story said the phishing gang is pseudonymously registering their domains at just a handful of domain registrars that accept bitcoin, and that the crooks typically create just one domain per registrar account.

“They’ll do this because that way if one domain gets burned or taken down, they won’t lose the rest of their domains,” Allen said.

More importantly, the attackers are careful to do nothing with the phishing domain until they are ready to initiate a vishing call to a potential victim. And when the attack or call is complete, they disable the website tied to the domain.

This is key because many domain registrars will only respond to external requests to take down a phishing website if the site is live at the time of the abuse complaint. This requirement can stymie efforts by companies like ZeroFOX that focus on identifying newly-registered phishing domains before they can be used for fraud.

“They’ll only boot up the website and have it respond at the time of the attack,” Allen said. “And it’s super frustrating because if you file an abuse ticket with the registrar and say, ‘Please take this domain away because we’re 100 percent confident this site is going to be used for badness,’ they won’t do that if they don’t see an active attack going on. They’ll respond that according to their policies, the domain has to be a live phishing site for them to take it down. And these bad actors know that, and they’re exploiting that policy very effectively.”

A phishing page (github-ticket[.]com) aimed at siphoning credentials for a target organization’s access to the software development platform Github. Image:


Both Nixon and Allen said the object of these phishing attacks seems to be to gain access to as many internal company tools as possible, and to use those tools to seize control over digital assets that can quickly be turned into cash. Primarily, that includes any social media and email accounts, as well as associated financial instruments such as bank accounts and any cryptocurrencies.

Nixon said she and others in her research group believe the people behind these sophisticated vishing campaigns hail from a community of young men who have spent years learning how to social engineer employees at mobile phone companies and social media firms into giving up access to internal company tools.

Traditionally, the goal of these attacks has been gaining control over highly-prized social media accounts, which can sometimes fetch thousands of dollars when resold in the cybercrime underground. But this activity gradually has evolved toward more direct and aggressive monetization of such access.

On July 15, a number of high-profile Twitter accounts were used to tweet out a bitcoin scam that earned more than $100,000 in a few hours. According to Twitter, that attack succeeded because the perpetrators were able to social engineer several Twitter employees over the phone into giving away access to internal Twitter tools.

Nixon said it’s not clear whether any of the people involved in the Twitter compromise are associated with this vishing gang, but she noted that the group showed no signs of slacking off after federal authorities charged several people with taking part in the Twitter hack.

“A lot of people just shut their brains off when they hear the latest big hack wasn’t done by hackers in North Korea or Russia but instead some teenagers in the United States,” Nixon said. “When people hear it’s just teenagers involved, they tend to discount it. But the kinds of people responsible for these voice phishing attacks have now been doing this for several years. And unfortunately, they’ve gotten pretty advanced, and their operational security is much better now.”

A phishing page (vzw-employee[.]com) targeting employees of Verizon. Image: DomainTools


While it may seem amateurish or myopic for attackers who gain access to a Fortune 100 company’s internal systems to focus mainly on stealing bitcoin and social media accounts, that access — once established — can be re-used and re-sold to others in a variety of ways.

“These guys do intrusion work for hire, and will accept money for any purpose,” Nixon said. “This stuff can very quickly branch out to other purposes for hacking.”

For example, Allen said he suspects that once inside of a target company’s VPN, the attackers may try to add a new mobile device or phone number to the phished employee’s account as a way to generate additional one-time codes for future access by the phishers themselves or anyone else willing to pay for that access.

Nixon and Allen said the activities of this vishing gang have drawn the attention of U.S. federal authorities, who are growing concerned over indications that those responsible are starting to expand their operations to include criminal organizations overseas.

“What we see now is this group is really good on the intrusion part, and really weak on the cashout part,” Nixon said. “But they are learning how to maximize the gains from their activities. That’s going to require interactions with foreign gangs and learning how to do proper adult money laundering, and we’re already seeing signs that they’re growing up very quickly now.”


Many companies now make security awareness and training an integral part of their operations. Some firms even periodically send test phishing messages to their employees to gauge their awareness levels, and then require employees who miss the mark to undergo additional training.

Such precautions, while important and potentially helpful, may do little to combat these phone-based phishing attacks that tend to target new employees. Both Allen and Nixon — as well as others interviewed for this story who asked not to be named — said the weakest link in most corporate VPN security setups these days is the method relied upon for multi-factor authentication.

A U2F device made by Yubikey, plugged into the USB port on a computer.

One multi-factor option — physical security keys — appears to be immune to these sophisticated scams. The most commonly used security keys are inexpensive USB-based devices. A security key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB device and pressing a button on the device. The key works without the need for any special software drivers.

The allure of U2F devices for multi-factor authentication is that even if an employee who has enrolled a security key for authentication tries to log in at an impostor site, the company’s systems simply refuse to request the security key if the user isn’t on their employer’s legitimate website, and the login attempt fails. Thus, the second factor cannot be phished, either over the phone or Internet.
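The origin binding that makes this work can be sketched in a few lines (a deliberate simplification: the domain names are hypothetical, and an HMAC stands in for the device's real per-site ECDSA key pair):

```python
import hashlib
import hmac

def sign_assertion(device_secret, origin, challenge):
    # Schematic only: a real U2F key holds a per-site key pair and
    # signs with ECDSA. The essential property is the same, though:
    # the signature covers the origin the browser reports, so an
    # assertion produced on a look-alike phishing domain can never
    # verify against the legitimate site's origin.
    return hmac.new(device_secret, origin.encode() + challenge,
                    hashlib.sha256).digest()

secret = b"device-secret"
challenge = b"server-nonce"
legit = sign_assertion(secret, "https://vpn.example.com", challenge)
phished = sign_assertion(secret, "https://example-vpn-ticket.com", challenge)
print(legit != phished)  # -> True
```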

In July 2018, Google disclosed that it had not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical security keys in place of one-time codes.

Probably the most popular maker of security keys is Yubico, which sells a basic U2F Yubikey for $20. It offers regular USB versions as well as those made for devices that require USB-C connections, such as Apple’s newer Mac OS systems. Yubico also sells more expensive keys designed to work with mobile devices. [Full disclosure: Yubico was recently an advertiser on this site].

Nixon said many companies will likely balk at the price tag associated with equipping each employee with a physical security key. But she said as long as most employees continue to work remotely, this is probably a wise investment given the scale and aggressiveness of these voice phishing campaigns.

“The truth is some companies are in a lot of pain right now, and they’re having to put out fires while attackers are setting new fires,” she said. “Fixing this problem is not going to be simple, easy or cheap. And there are risks involved if you somehow screw up a bunch of employees accessing the VPN. But apparently these threat actors really hate Yubikey right now.”

Kevin RuddWashington Post: China’s thirst for coal is economically shortsighted and environmentally reckless

First published in the Washington Post on 19 August 2020

Carbon emissions have fallen in recent months as economies have been shut down and put into hibernation. But whether the world will emerge from the pandemic in a stronger or weaker position to tackle the climate crisis rests overwhelmingly on the decisions that China will take.

China, as part of its plans to restart its economy, has already approved the construction of new coal-fired power plants accounting for some 17 gigawatts of capacity this year, sending a collective shiver down the spines of environmentalists. This is more coal capacity than it approved in the previous two years combined, and the total now under development in China is larger than the remaining fleet operating in the United States.

At the same time, China has touted investments in so-called “new infrastructure,” such as electric-vehicle charging stations and rail upgrades, as integral to its economic recovery. But frankly, none of this will matter much if these new coal-fired power plants are built.

To be fair, the decisions to proceed with these coal projects largely rest in the hands of China’s provincial and regional governments and not in Beijing. However, this does not mean the central government has no power, nor that it won’t wear the reputational damage if the plants become a reality.

First, it is hard to see how China could meet one of its own commitments under the 2015 Paris climate agreement to peak its emissions by 2030 if these new plants are built. The pledge relies on China retiring much of its existing and relatively young coal fleet, which has been operational only for an average of 14 years. Bringing yet more coal capacity online now is therefore either economically shortsighted or environmentally reckless.

It would also put at risk the world’s collective long-term goal under the Paris agreement to keep temperature increases within 1.5 degrees Celsius, which the Intergovernmental Panel on Climate Change has said requires halving of global emissions between 2018 and 2030 and reaching net-zero emissions by the middle of the century.

It also is completely contrary to China’s own domestic interests, including President Xi Jinping’s desire to grow the economy, improve energy security and clean up the environment (or, as he says, to “make our skies blue again”).

But perhaps most importantly for the geopolitical hard heads in Beijing, it also risks unravelling the goodwill China has built up in recent years for staying the course on the fight against climate change in the face of the Trump administration’s retreat. This will especially be the case in the eyes of many vulnerable developing countries, including the world’s lowest-lying island nations that could face even greater risks if these plants are built.

For his part, former vice president Joe Biden has already got China’s thirst for coal in his sights. He speaks of the need for the United States to focus on how China is “exporting more dirty coal” through its support of carbon-intensive projects in its Belt and Road Initiative. Studies have found a Chinese role in more than 100 gigawatts of additional coal plants under construction across Asia and Africa, and even in Eastern Europe. It is hard to see how the first few months of a Biden administration would not make this an increasingly uncomfortable reality for Beijing at precisely the time the world would be welcoming with open arms the return of U.S. climate leadership.

As a new paper published by the Asia Society Policy Institute highlights, China’s decisions on coal will also be among the most closely watched as it finalizes its next five-year plan, due out in 2021, as well as its mid-century decarbonization strategy and enhancements to its Paris targets ahead of the 2021 United Nations Climate Change Conference in Glasgow, Scotland. And although China may also have an enormously positive story to tell — continuing to lead the world in the deployment of renewable energy in 2019 — it is China’s decisions on coal that will loom large.

(Photo: Gwendolyn Stansbury/IFPRI)

The post Washington Post: China’s thirst for coal is economically shortsighted and environmentally reckless appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Shallow Perspective

There are times where someone writes code which does nothing. There are times where someone writes code which does something, but nothing useful. This is one of those times.

Ray H was going through some JS code, and found this “useful” method.

mapRowData (data) {
  if (isNullOrUndefined(data)) return null;
  return => x);
}

Technically, this isn’t a “do nothing” method. It converts undefined values to null, and it returns a shallow copy of an array, assuming that you passed in an array.

The fact that it can return a null value or an array is one of those little nuisances that we accept, but probably should code around (without more context, it’s probably fine if this returned an empty array on bad inputs, for example).

But Ray adds: “Where this is used, it could just use the array data directly and get the same result.” Yes, it’s used in a handful of places, and in each of those places, there’s no functional difference between the original array and the shallow copy.
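To make Ray's observation concrete, here is the same pattern sketched in Python (the function and data names are ours, not from the original code):

```python
def map_row_data(data):
    # Mirrors the JS method: null/None in, None out; otherwise a shallow copy.
    if data is None:
        return None
    return [x for x in data]  # a new list, but the same element objects

rows = [{"id": 1}, {"id": 2}]
result = map_row_data(rows)

assert result == rows          # equal contents...
assert result is not rows      # ...in a brand-new list,
assert result[0] is rows[0]    # but the row objects themselves are shared
```

Because the elements are shared, mutating `result[0]` also mutates `rows[0]` — which is why the copy buys the callers nothing here.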

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Rondam RamblingsHere we go again

Here is a snapshot of the current map of temporary flight restrictions (TFRs) issued by the FAA across the western U.S.: Almost every one of those red shapes is a major fire burning.  Compare that to a similar snapshot taken two years ago at about this same time of year.  The regularity of these extreme heat and fire events is starting to get really scary.


LongNowA Tribute to Michael McElligott, creator of “Conversations at The Interval”

It is with great sadness that we share the news that our dear friend and colleague Michael McElligott is in hospice care. We want to take this moment to appreciate all that Michael has done for Long Now.

Most of the Long Now community knows Michael as the face of the Conversations at The Interval speaking series, which began in 02014 with the opening of Long Now’s Interval bar/cafe. But he did much more than host the talks. 

Michael had been a volunteer and associate of Long Now since 02006; he helped at events and Seminars, wrote for the blog and newsletter, and was a technical advisor. In 02013 he officially joined the staff to help raise funds for the construction of The Interval, run social media, and design and produce the Conversations at The Interval lecture series.

For the first five years of the series, each of the talks was painstakingly produced by Michael. This included finding speakers, developing the talk with the speakers, helping curate all the media associated with each talk, and oftentimes hosting the talks. Many of the production ideas explored in this series by Michael became adopted across other Long Now programs, and we are so thankful we got to work with him.

You can watch a playlist of all of Michael’s Interval talks here.

Planet DebianLisandro Damián Nicanor Pérez Meyer: Stepping down as Qt 6 maintainers

After quite some time maintaining Qt in Debian, both Dmitry Shachnev and I decided not to maintain Qt 6 when it's published (expected in December 2020). We will do our best to keep the Qt 5 codebase up and running.

We **love** Qt, but it's a huge codebase and requires time and build power, both things that we are currently lacking, so we decided it's time for us to step down and pass the torch. And a new major version seems the right point to do that.

We will be happy to review and/or sponsor other people's work or even occasionally do uploads, but we can't promise to do it regularly.

Some things we think potential Qt 6 maintainers should be familiar with are, of course, C++ packaging (especially symbols files) and CMake, as Qt 6 will be built with it.

We also encourage prospective maintainers to remove the source's -everywhere-src suffixes and just keep the base names as source package names: qtbase6, qtdeclarative6, etc.

It has been an interesting ride all these years, we really hope you enjoyed using Qt.

Thanks for everything,

Dmitry and Lisandro.

Note 20200818 12:12 ARST: I was asked if the move has anything to do with code quality or licensing. The answer is a huge no, Qt is a **great** project which we love. As stated before it's mostly about lack of free time to properly maintain it.


Planet DebianMolly de Blanc: Updates

We are currently working on a second draft of the Declaration of Digital Autonomy. We’re also working on some next steps, which I hadn’t really thought about existing before. Videos from GUADEC and HOPE are now online. We’ll be speaking at DebConf on August 29th.

I’ll be starting school soon, so I expect a lot of the content of what I’ll be writing (as well as the style) to shift a bit to reflect what I’m studying and how I’m expected to write for my program.

Kevin RuddMonocle 24 Radio: The Big Interview


The post Monocle 24 Radio: The Big Interview appeared first on Kevin Rudd.

CryptogramVaccine for Emotet Malware

Interesting story of a vaccine for the Emotet malware:

Through trial and error and thanks to subsequent Emotet updates that refined how the new persistence mechanism worked, Quinn was able to put together a tiny PowerShell script that exploited the registry key mechanism to crash Emotet itself.

The script, cleverly named EmoCrash, effectively scanned a user's computer and generated a correct -- but malformed -- Emotet registry key.

When Quinn tried to purposely infect a clean computer with Emotet, the malformed registry key triggered a buffer overflow in Emotet's code and crashed the malware, effectively preventing users from getting infected.

When Quinn ran EmoCrash on computers already infected with Emotet, the script would replace the good registry key with the malformed one, and when Emotet would re-check the registry key, the malware would crash as well, preventing infected hosts from communicating with the Emotet command-and-control server.


The Binary Defense team quickly realized that news about this discovery needed to be kept under complete secrecy, to prevent the Emotet gang from fixing its code, but they understood EmoCrash also needed to make its way into the hands of companies across the world.

Compared to many of today's major cybersecurity firms, all of which have decades of history behind them, Binary Defense was founded in 2014, and despite being one of the industry's up-and-comers, it doesn't yet have the influence and connections to get this done without news of its discovery leaking, either by accident or because of a jealous rival.

To get this done, Binary Defense worked with Team CYMRU, a company that has a decades-long history of organizing and participating in botnet takedowns.

Working behind the scenes, Team CYMRU made sure that EmoCrash made its way into the hands of national Computer Emergency Response Teams (CERTs), which then spread it to the companies in their respective jurisdictions.

According to James Shank, Chief Architect for Team CYMRU, the company has contacts with more than 125 national and regional CERT teams, and also manages a mailing list through which it distributes sensitive information to more than 6,000 members. Furthermore, Team CYMRU also runs a biweekly group dedicated to dealing with Emotet's latest shenanigans.

This broad and well-orchestrated effort has helped EmoCrash make its way around the globe over the course of the past six months.


Either by accident or by figuring out there was something wrong in its persistence mechanism, the Emotet gang did, eventually, change its entire persistence mechanism on Aug. 6 -- exactly six months after Quinn made his initial discovery.

EmoCrash may not be useful to anyone anymore, but for six months, this tiny PowerShell script helped organizations stay ahead of malware operations -- a truly rare sight in today's cyber-security field.

Kevin RuddABC Late Night Live: US-China Relations

17 AUGUST 2020

Main topic: Foreign Affairs article ‘Beware the Guns of August — in Asia’


Image: The USS Ronald Reagan steams through the San Bernardino Strait, July 3, 2020, crossing from the Philippine Sea into the South China Sea. (Navy Petty Officer 3rd Class Jason Tarleton)

The post ABC Late Night Live: US-China Relations appeared first on Kevin Rudd.

Planet DebianJonathan Dowland: Come Together

Primal Scream — Come Together

This one rarely returns to its proper place, instead living in the small pile of records permanently next to my turntable. I'm a late convert to Primal Scream: I first heard the 10 minute Andrew Weatherall mix of Come Together on Tom Robinson's 6Music show. It's a remarkable record, more so to think that it's quite hard, in isolation, to actually hear Primal Scream's contribution. This is very much Weatherall's track, and (to me, at least) it does a great job of encapsulating the house music explosion of the time.

It's interesting to hear Terry Farley's mix, partially because the band's contribution is more evident, so you can get a glimpse of the material that Weatherall had to work with.

RIP Andrew Weatherall, 1963-2020.

Worse Than FailureCodeSOD: Carbon Copy

I avoid writing software that needs to send emails. It's just annoying code to build, interfacing with mailservers is shockingly frustrating, and honestly, users don't tend to like the emails that my software tends to send. Once upon a time, it was a system which would tell them it was time to calibrate a scale, and the business requirements were basically "spam them like three times a day the week a scale comes due," which shockingly everyone hated.

But Krista inherited some code that sends email. The previous developer was a "senior", but probably could have had a little more supervision and maybe some mentoring on the C# language.

One commit added this method, for sending emails:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2) {
    try {
        if (String.IsNullOrEmpty(exportData.Email)) {
            WriteToLog("No email address - message not sent");
        } else {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC)) {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress) {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            send(mailMsg);
            mailMsg.Dispose();
        }
    } catch (Exception ex) {
        WriteToLog(ex.ToString());
    }
}

That's not so bad, as these things go, though one has to wonder about parameters like fileName1 and fileName2. Do they only ever send exactly two files? Well, maybe when this method was written, but a few commits later, an overloaded version gets added:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3) {
    try {
        if (String.IsNullOrEmpty(exportData.Email)) {
            WriteToLog("No email address - message not sent");
        } else {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC)) {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress) {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            if (File.Exists(fileName3)) mailMsg.Attachments.Add(new Attachment(fileName3));
            send(mailMsg);
            mailMsg.Dispose();
        }
    } catch (Exception ex) {
        WriteToLog(ex.ToString());
    }
}

And then, a few commits later, someone decided that they needed to send four files, sometimes.

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3, String fileName4) {
    try {
        if (String.IsNullOrEmpty(exportData.Email)) {
            WriteToLog("No email address - message not sent");
        } else {
            MailMessage mailMsg = new MailMessage();
            mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
            mailMsg.Subject = subject;
            mailMsg.Body = "Exported files attached";
            mailMsg.Priority = MailPriority.High;
            mailMsg.BodyEncoding = Encoding.ASCII;
            mailMsg.IsBodyHtml = true;
            if (!String.IsNullOrEmpty(exportData.EmailCC)) {
                string[] ccAddress = exportData.EmailCC.Split(';');
                foreach (string address in ccAddress) {
                    mailMsg.CC.Add(new MailAddress(address));
                }
            }
            if (File.Exists(fileName1)) mailMsg.Attachments.Add(new Attachment(fileName1));
            if (File.Exists(fileName2)) mailMsg.Attachments.Add(new Attachment(fileName2));
            if (File.Exists(fileName3)) mailMsg.Attachments.Add(new Attachment(fileName3));
            if (File.Exists(fileName4)) mailMsg.Attachments.Add(new Attachment(fileName4));
            send(mailMsg);
            mailMsg.Dispose();
        }
    } catch (Exception ex) {
        WriteToLog(ex.ToString());
    }
}

Each time someone discovered a new case where they wanted to include a different number of attachments, the previous developer copy/pasted the same code, with minor revisions.

Krista wrote a single version which used a paramarray, which replaced all of these versions (and any other possible versions), without changing the calling semantics.
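The article doesn't show Krista's fix, but its shape is easy to sketch: in C# that's a `params string[]` parameter. Here is the same idea in Python using `*args` (names are hypothetical, and the exists-check is stubbed as a `None` filter to keep the sketch small):

```python
def send_email(export_data, subject, *file_names):
    # One variadic parameter replaces fileName1..fileName4 (and any future count).
    # Positional call sites keep working unchanged, whatever the attachment count.
    attachments = [f for f in file_names if f is not None]
    # ... build the message, attach each file, send, dispose ...
    return attachments

# Old two-, three-, and four-file call sites all flow through one code path:
assert send_email({}, "Export", "a.csv", "b.csv") == ["a.csv", "b.csv"]
assert send_email({}, "Export", "a.csv", "b.csv", "c.csv", None) == ["a.csv", "b.csv", "c.csv"]
```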

Though the real WTF is probably still forcing the BodyEncoding to be ASCII at this point in time. There's a whole lot of assumptions about your dataset which are probably not true, or at least not reliably true.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


Rondam RamblingsIrit Gat, Ph.D. 25 November 1966 - 11 August 2020

With a heavy heart I bear witness to the untimely passing of Dr. Irit Gat last Tuesday at the age of 53.  Irit was the Dean of Behavioral and Social Sciences at Antelope Valley College in Lancaster, California.  She was also my younger sister.  She died peacefully of natural causes.  I am going to miss her.  A lot.  I'm going to miss her smile.  I'm going to miss the way she said "Hey bro" when we

Planet DebianIan Jackson: Doctrinal obstructiveness in Free Software

Any software system has underlying design principles, and any software project has process rules. But I seem to be seeing more often, a pathological pattern where abstract and shakily-grounded broad principles, and even contrived and sophistic objections, are used to block sensible changes.

Today I will go through an example in detail, before ending with a plea:

PostgreSQL query planner, WITH [MATERIALIZED] optimisation fence

Background history

PostgreSQL has a sophisticated query planner which usually gets the right answer. For good reasons, the pgsql project has resisted providing lots of knobs to control query planning. But there are a few ways to influence the query planner, for when the programmer knows more than the planner.

One of these is the use of a WITH common table expression. In pgsql versions prior to 12, the planner would first make a plan for the WITH clause; and then, it would make a plan for the second half, counting the WITH clause's likely output as a given. So WITH acts as an "optimisation fence".

This was documented in the manual - not entirely clearly, but a careful reading of the docs reveals this behaviour:

The WITH query will generally be evaluated as written, without suppression of rows that the parent query might discard afterwards.

Users (authors of applications which use PostgreSQL) have been using this technique for a long time.

New behaviour in PostgreSQL 12

In PostgreSQL 12 upstream were able to make the query planner more sophisticated. In particular, it is now often capable of looking "into" the WITH common table expression. Much of the time this will make things better and faster.

But if WITH was being used for its side-effect as an optimisation fence, this change will break things: queries that ran very quickly in earlier versions might now run very slowly. Helpfully, pgsql 12 still has a way to specify an optimisation fence: specifying WITH ... AS MATERIALIZED in the query.

So far so good.

Upgrade path for existing users of WITH fence

But what about the upgrade path for existing users of the WITH fence behaviour? Such users will have to update their queries to add AS MATERIALIZED. This is a small change. Having to update a query like this is part of routine software maintenance and not in itself very objectionable. However, this change cannot be made in advance because pgsql versions prior to 12 will reject the new syntax.

So the users are in a bit of a bind. The old query syntax can be unusably slow with the new database and the new syntax is rejected by the old database. Upgrading both the database and the application, in lockstep, is a flag day upgrade, which every good sysadmin will want to avoid.

A solution to this problem

Colin Watson proposed a very simple solution: make the earlier PostgreSQL versions accept the new MATERIALIZED syntax. This is correct since the new syntax specifies precisely the actual behaviour of the old databases. It has no deleterious effect on any users of older pgsql versions. It makes it possible to add the new syntax to the application, before doing the database upgrade, decoupling the two upgrades.

Colin Watson even provided an implementation of this proposal.

The solution is rejected by upstream

Unfortunately upstream did not accept this idea. You can read the whole thread yourself if you like. But in summary, the objections were (italic indicates literal quotes):

  • New features don't gain a backpatch. This is a project policy. Of course this is not a new feature, and if it is an exception should be made. This was explained clearly in the thread.
  • I'm not sure the "we don't want to upgrade application code at the same time as the database" is really tenable. This is quite an astonishing statement, particularly given the multiple users who said they wanted to do precisely that.
  • I think we could find cases where we caused worse breaks between major versions. Paraphrasing: "We've done worse things in the past so we should do this bad thing too". WTF?
  • One disadvantage is that this will increase confusion for users, who'll get used to the behavior on 12, and then they'll get confused on older releases. This seems highly contrived. Surely the number of people who are likely to suffer this confusion is tiny. Providing the new syntax in old versions (including of course the appropriate changes to the docs everywhere) might well make such confusion less rather than more likely.
  • [Poster is concerned about] 11.6 and up allowing a syntax that 11.0-11.5 don't. People are likely to write code relying on this and then be surprised when it doesn't work on a slightly older server. And, similarly: we'll then have a lot more behavior differences between minor releases. Again this seems a contrived and unconvincing objection. As that first poster even notes: Still, is that so much different from cases where we fix a bug that prevented some statement from working? No, it isn't.
  • if we started looking we'd find many changes every year that we could justify partially or completely back-porting on similar grounds ... we'll certainly screw it up sometimes. This is a slippery slope argument. But there is no slippery slope: in particular, the proposed change does not change any of the substantive database logic, and the upstream developers will hardly have any difficulty rejecting future more risky backport proposals.
  • if you insist on using the same code with pre-12 and post-12 releases, this should be achievable (at least in most cases) by using the "offset 0" trick. What? First I had heard of it but this in fact turns out to be true! Read more about this, below...

I find these extremely unconvincing, even taken together. Many of them are very unattractive things to hear one's upstream saying.

At best they are knee-jerk and inflexible application of very general principles. The authors of these objections seem to have lost sight of the fact that these principles have a purpose. When these kind of software principles work against their purposes, they should be revised, or exceptions made.

At worst, it looks like a collective effort to find reasons - any reasons, no matter how bad - not to make this change.

The OFFSET 0 trick

One of the responses in the thread mentions OFFSET 0. As part of writing the queries in the Xen Project CI system, and preparing for our system upgrade, I had carefully read the relevant pgsql documentation. This OFFSET 0 trick was new to me.

But, now that I know the answer, it is easy to provide the right search terms and find, for example, this answer on stackmumble. Apparently adding a no-op OFFSET 0 to the subquery defeats the pgsql 12 query planner's ability to see into the subquery.

I think OFFSET 0 is the better approach since it's more obviously a hack showing that something weird is going on, and it's unlikely we'll ever change the optimiser behaviour around OFFSET 0 ... whereas hopefully CTEs will become inlineable at some point. (CTEs became inlineable by default in PostgreSQL 12.)

So in fact there is a syntax for an optimisation fence that is accepted by both earlier and later PostgreSQL versions. It's even recommended by pgsql devs. It's just not documented, and is described by pgsql developers as a "hack". Astonishingly, the fact that it is a "hack" is given as a reason to use it!
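For reference, the three fence spellings discussed above look like this (table and column names are invented for illustration):

```sql
-- PostgreSQL <= 11: a CTE is always an optimisation fence, planned as written
WITH w AS (SELECT * FROM orders WHERE flagged)
SELECT * FROM w WHERE customer_id = 42;

-- PostgreSQL >= 12: the fence must now be requested explicitly
WITH w AS MATERIALIZED (SELECT * FROM orders WHERE flagged)
SELECT * FROM w WHERE customer_id = 42;

-- Accepted by both old and new versions: the undocumented no-op OFFSET 0 "hack"
SELECT * FROM (SELECT * FROM orders WHERE flagged OFFSET 0) AS w
WHERE customer_id = 42;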

Well, I have therefore deployed this "hack". No doubt it will stay in our codebase indefinitely.

Please don't be like that!

I could come up with a lot more examples of other projects that have exhibited similar arrogance. It is becoming a plague! But every example is contentious, and I don't really feel I need to annoy a dozen separate Free Software communities. So I won't make a laundry list of obstructiveness.

If you are an upstream software developer, or a distributor of software to users (eg, a distro maintainer), you have a lot of practical power. In theory it is Free Software so your users could just change it themselves. But for a user or downstream, carrying a patch is often an unsustainable amount of work and risk. Most of us have patches we would love to be running, but which we haven't even written because simply running a nonstandard build is too difficult, no matter how technically excellent our delta.

As an upstream, it is very easy to get into a mindset of defending your code's existing behaviour, and to turn your project's guidelines into inflexible rules. Constant exposure to users who make silly mistakes, and rudely ask for absurd changes, can lead to core project members feeling embattled.

But there is no need for an upstream to feel embattled! You have the vast majority of the power over the software, and over your project communication fora. Use that power consciously, for good.

I can't say that arrogance will hurt you in the short term. Users of software with obstructive upstreams do not have many good immediate options. But we do have longer-term choices: we can choose which software to use, and we can choose whether to try to help improve the software we use.

After reading Colin's experience, I am less likely to try to help improve the experience of other PostgreSQL users by contributing upstream. It doesn't seem like there would be any point. Indeed, instead of helping the PostgreSQL community I am now using them as an example of bad practice. I'm only half sorry about that.


CryptogramRobocall Results from a Telephony Honeypot

A group of researchers set up a telephony honeypot and tracked robocall behavior:

NCSU researchers said they ran 66,606 telephone lines between March 2019 and January 2020, during which time they said to have received 1,481,201 unsolicited calls -- even if they never made their phone numbers public via any source.

The research team said they usually received an unsolicited call every 8.42 days, but most of the robocall traffic came in sudden surges they called "storms" that happened at regular intervals, suggesting that robocallers operated using a tactic of short-burst and well-organized campaigns.

In total, the NCSU team said it tracked 650 storms over 11 months, with most storms being of the same size.

Research paper. USENIX talk. Slashdot thread.

Planet DebianNorbert Preining: KDE Apps 20.08 now available for Debian

KDE Apps bundle 20.08 has been released recently, and some of the packages are already updated in Debian/unstable. I have updated also all my packages to 20.08 and they are now available for x86_64, i586, and hopefully aarch64 (some issues remaining here still).

With the new release 20.08 I have also switched to versioned app repositories, so you need to update the apt sources directive. The new one is

deb ./

and similar for Testing.

Packages from the “other” repo that depend on apps, that is in particular Digikam, are currently being rebuilt and will be coinstallable soon.

Just to make sure, here is the full set of repositories I use on my computers:

deb ./
deb ./
deb ./
deb ./
deb ./


Worse Than FailureCodeSOD: Perls Can Change

Tomiko* inherited some web-scraping/indexing code from Dennis. The code started out just scanning candidate profiles for certain keywords, but grew, mutated, and eventually turned into something that also needed to download their CVs.

Now, Dennis was, as Tomiko puts it, "an interesting engineer". "Any agreed upon standard, he would aggressively oppose, and this can be seen in this code."

"This code" also happens to be in Perl, the "best" language for developers who don't like standards. And, it also happens to be connected to this infrastructure.

So let's start with the code, because this is the rare CodeSOD where the code itself isn't the WTF:

foreach my $n (0 .. @{$lines} - 1) {
    next if index($lines->[$n], 'RT::Spider::Deepweb::Controller::Factory->make(') == -1;

    # Don't let other cv_id survive.
    $lines->[$n] =~ s/,\s*cv_id\s*=>[^,)]+//;
    $lines->[$n] =~ s/,\s*cv_type\s*=>[^,)]+// if defined $cv_type;

    # Insert the new options.
    $lines->[$n] =~ s/\)/$opt)/;
}

Okay, so it's a pretty standard for-each loop. We skip lines if they contain… wait, that looks like a Perl expression- RT::Spider::Deepweb::Controller::Factory->make('? Well, let's hold onto that thought, but keep trucking on.

Next, we do a few find-and-replace operations to ensure that we Don't let other cv_id survive. I'm not really sure what exactly that's supposed to mean, but Tomiko says, "Dennis never wrote a single meaningful comment".

Well, the regexes are pretty standard character-salad expressions; ugly, but harmless. If you take this code in isolation, it's not good, but it doesn't look terrible. Except, there's that next if line. Why are we checking to see if the input data contains a Perl expression?

Because our input data is a Perl script. Dennis was… efficient. He already had code that would download the candidate profiles. Instead of adding new code to download CVs, instead of refactoring the existing code so that it was generic enough to download both, Dennis instead decided to load the profile code into memory, scan it with regexes, and then eval it.

As Tomiko says: "You can't get more Perl than that."


Krebs on SecurityMicrosoft Put Off Fixing Zero Day for 2 Years

A security flaw in the way Microsoft Windows guards users against malicious files was actively exploited in malware attacks for two years before last week, when Microsoft finally issued a software update to correct the problem.

One of the 120 security holes Microsoft fixed on Aug. 11’s Patch Tuesday was CVE-2020-1464, a problem with the way every supported version of Windows validates digital signatures for computer programs.

Code signing is the method of using a certificate-based digital signature to sign executable files and scripts in order to verify the author’s identity and ensure that the code has not been changed or corrupted since it was signed by the author.

Microsoft said an attacker could use this “spoofing vulnerability” to bypass security features intended to prevent improperly signed files from being loaded. Microsoft’s advisory makes no mention of security researchers having told the company about the flaw, which Microsoft acknowledged was actively being exploited.

In fact, CVE-2020-1464 was first spotted in attacks used in the wild back in August 2018. And several researchers informed Microsoft about the weakness over the past 18 months.

Bernardo Quintero is the manager at VirusTotal, a service owned by Google that scans any submitted files against dozens of antivirus services and displays the results. On Jan. 15, 2019, Quintero published a blog post outlining how Windows keeps the Authenticode signature valid after appending any content to the end of Windows Installer files (those ending in .MSI) signed by any software developer.

Quintero said this weakness would be particularly acute if an attacker were to use it to hide a malicious Java file (.jar). And, he said, this exact attack vector was indeed detected in a malware sample sent to VirusTotal.

“In short, an attacker can append a malicious JAR to a MSI file signed by a trusted software developer (like Microsoft Corporation, Google Inc. or any other well-known developer), and the resulting file can be renamed with the .jar extension and will have a valid signature according to Microsoft Windows,” Quintero wrote.

But according to Quintero, while Microsoft’s security team validated his findings, the company chose not to address the problem at the time.

“Microsoft has decided that it will not be fixing this issue in the current versions of Windows and agreed we are able to blog about this case and our findings publicly,” his blog post concluded.

Tal Be’ery, founder of Zengo, and Peleg Hadar, senior security researcher at SafeBreach Labs, penned a blog post on Sunday that pointed to a file uploaded to VirusTotal in August 2018 that abused the spoofing weakness, which has been dubbed GlueBall. The last time that August 2018 file was scanned at VirusTotal (Aug 14, 2020), it was detected as a malicious Java trojan by 28 of 59 antivirus programs.

More recently, others would likewise call attention to malware that abused the security weakness, including this post in June 2020 from the Security-in-bits blog.


Be’ery said the way Microsoft has handled the vulnerability report seems rather strange.

“It was very clear to everyone involved, Microsoft included, that GlueBall is indeed a valid vulnerability exploited in the wild,” he wrote. “Therefore, it is not clear why it was only patched now and not two years ago.”

Asked to comment on why it waited two years to patch a flaw that was actively being exploited to compromise the security of Windows computers, Microsoft dodged the question, saying Windows users who have applied the latest security updates are protected from this attack.

“A security update was released in August,” Microsoft said in a written statement sent to KrebsOnSecurity. “Customers who apply the update, or have automatic updates enabled, will be protected. We continue to encourage customers to turn on automatic updates to help ensure they are protected.”

Update, 12:45 a.m. ET: Corrected attribution on the June 2020 blog article about GlueBall exploits in the wild.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 13)

Here’s part thirteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Planet DebianArnaud Rebillout: Modify Vim syntax files for your taste

In this short how-to, we'll see how to make small modifications to a Vim syntax file, in order to change how a particular file format is highlighted. We'll go for a simple use-case: modify the Markdown syntax file, so that H1 and H2 headings (titles and subtitles, if you prefer) are displayed in bold. Of course, this won't be exactly as easy as expected, but no worries, we'll succeed in the end.

The calling

Let's start with a screenshot: how Vim displays Markdown files for me, someone who uses the GNOME terminal with the Solarized light theme.

Vim - Markdown file with original highlighting

I'm mostly happy with that, except for one or two little details. I'd like to have the titles displayed in bold, for example, so that they're easier to spot when I skim through a Markdown file. It seems like a simple thing to ask, so I hope there can be a simple solution.

The first steps

Let's learn the basics.

In Vim world, the rules to highlight file formats are defined in the directory /usr/share/vim/vim82/syntax (I bet you'll have to adjust this path depending on the version of Vim that is installed on your system).

And so, for the Markdown file format, the rules are defined in the file /usr/share/vim/vim82/syntax/markdown.vim.

The first thing we could do is to have a look at this file, try to make sense of it, and maybe start to make some modifications.

But wait a moment. You should know that modifying a system file is not a great idea. First because your changes will be lost as soon as an update kicks in and the package manager replaces this file by a new version. Second, because you will quickly forget what files you modified, and what were your modifications, and if you do that too much, you might experience what is called "maintenance headache" in the long run.

So instead, you DO NOT modify this file: you copy it into your personal Vim folder, more precisely into ~/.vim/syntax. Create this directory if it does not exist:

mkdir -p ~/.vim/syntax
cp /usr/share/vim/vim82/syntax/markdown.vim ~/.vim/syntax

The file in your personal folder takes precedence over the system file of the same name in /usr/share/vim/vim82/syntax/: it is a replacement for the existing syntax file. And so from now on, Vim uses the file ~/.vim/syntax/markdown.vim, and this is where we can make our modifications.

(And by the way, this is explained in the Vim faq-24.12)

And so, it's already nice to know all of that, but wait, there's even better.

There is another location of interest, and it is ~/.vim/after/syntax. You can drop syntax files in this directory, and these files are treated as additions to the existing syntax. So if you only want to make slight modifications, that's the way to go.

(And by the way, this is explained in the Vim faq-24.11)

So let's forget about a syntax replacement in ~/.vim/syntax/markdown.vim, and instead let's go for some syntax additions in ~/.vim/after/syntax/markdown.vim.

mkdir -p ~/.vim/after/syntax
touch ~/.vim/after/syntax/markdown.vim

Now, let's answer the initial question: how do we modify the highlighting rules for Markdown files, so that the titles are displayed in bold? First, we have to find where the rules that define the highlighting for titles are. Here they are, from the file /usr/share/vim/vim82/syntax/markdown.vim:

hi def link markdownH1 htmlH1
hi def link markdownH2 htmlH2
hi def link markdownH3 htmlH3

You should know that H1 means Heading 1, and so on, and so we want to make H1 and H2 bold. What we can see here is that the headings in the Markdown files are highlighted like the headings in HTML files, and this is obviously defined in the file /usr/share/vim/vim82/syntax/html.vim. So let's have a look into this file:

hi def link htmlH1 Title
hi def link htmlH2 htmlH1
hi def link htmlH3 htmlH2

Let's keep digging a bit. Where is Title defined? For those using the default color scheme like me, this is defined straight in the Vim source code, in the file src/highlight.c.

CENT("Title term=bold ctermfg=DarkMagenta",
     "Title term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta"),

And for those using custom color schemes, it might be defined in a file under /usr/share/vim/vim82/colors/.

Alright, so how do we override that? We can just define this kind of rules in our syntax additions file at ~/.vim/after/syntax/markdown.vim:

hi link markdownH1 markdownHxBold
hi link markdownH2 markdownHxBold
hi markdownHxBold  term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta cterm=bold

As you can see, the only addition we made, compared to what's defined in src/highlight.c, is cterm=bold. And that's already enough to achieve the initial goal, make the titles (ie. H1 and H2) bold. The result can be seen in the following screenshot:

Vim - Markdown file with modified highlighting

The rabbit hole

So we could stop right here, and life would be easy and good.

However, with this solution there's still something that is not perfect. We use the color DarkMagenta as defined in the default color scheme. What I didn't mention however, is that this is applicable for a light background. If you have a dark background though, dark magenta won't be easy to read.

Actually, if you look a bit more into src/highlight.c, you will see that the default color scheme comes in two variants, one for a light background, and one for a dark background.

And so the definition for Title for a dark background is as follows:

CENT("Title term=bold ctermfg=LightMagenta",
     "Title term=bold ctermfg=LightMagenta gui=bold guifg=Magenta"),

Hmmm, so how do we do that in our syntax file? How can we support both light and dark background, so that the color is right in both cases?

After a bit of research, and after looking at other syntax files, it seems that the solution is to check for the value of the background option, and so our syntax file becomes:

hi link markdownH1 markdownHxBold
hi link markdownH2 markdownHxBold
if &background == "light"
  hi markdownHxBold term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta cterm=bold
else
  hi markdownHxBold term=bold ctermfg=LightMagenta gui=bold guifg=Magenta cterm=bold
endif

In case you wonder, in Vim script you prefix Vim options with &, and so you get the value of the background option by writing &background. You can learn this kind of things in the Vim scripting cheatsheet.

And so, it's easy enough, except for one thing: it doesn't work. The headings always show up in DarkMagenta, even for a dark background.

This is why I called this paragraph "the rabbit hole", by the way.

So... Well after trying a few things, I noticed that in order to make it work, I would have to reload the syntax files with :syntax on.

At this point, the most likely explanation is that the background option is not set yet when the syntax files are loaded at startup, hence it needs to be reloaded manually afterward.

And after muuuuuuch research, I found out that it's actually possible to set a hook for when an option is modified. Meaning, it's possible to execute a function when the background option is modified. Quite cool actually.

And so, there it goes in my ~/.vimrc:

" Reload syntax when the background changes 
autocmd OptionSet background if exists("g:syntax_on") | syntax on | endif

For humans, this line reads as:

  1. when the background option is modified -- autocmd OptionSet background
  2. check if the syntax is on -- if exists("g:syntax_on")
  3. if that's the case, reload it -- syntax on

With that in place, my Markdown syntax overrides work for both dark and light background. Champagne!

The happy end

To finish, let me share my actual additions to the markdown.vim syntax. It makes H1 and H2 bold, along with their delimiters, and it also colors the inline code and the code blocks.

" H1 and H2 headings -> bold
hi link markdownH1 markdownHxBold
hi link markdownH2 markdownHxBold
" Heading delimiters (eg '#') and rules (eg '----', '====') -> bold
hi link markdownHeadingDelimiter markdownHxBold
hi link markdownRule markdownHxBold
" Code blocks and inline code -> highlighted
hi link markdownCode htmlH1

" The following test requires this addition to your vimrc:
" autocmd OptionSet background if exists("g:syntax_on") | syntax on | endif
if &background == "light"
  hi markdownHxBold term=bold ctermfg=DarkMagenta gui=bold guifg=Magenta cterm=bold
else
  hi markdownHxBold term=bold ctermfg=LightMagenta gui=bold guifg=Magenta cterm=bold
endif

And here's how it looks with a light background:

Vim - Markdown file with final highlighting (light)

And a dark background:

Vim - Markdown file with final highlighting (dark)

That's all. These are very small changes compared to the highlighting from the original syntax file, and now that we understand how it's supposed to be done, it's not much effort to achieve it.

It's just that finding the workaround to make it work for both light and dark background took forever, and leaves the usual, unanswered question: bug or feature?


Planet DebianEnrico Zini: Historical links

Saint Guinefort was a dog who lived in France in the 13th century, worshipped through history as a saint until less than a century ago. The recurrence is soon, on the 22nd of August.

Many think middle ages were about superstition, and generally a bad period. Black Death, COVID, and Why We Keep Telling the Myth of a Renaissance Golden Age and Bad Middle Ages tells a different, fascinating story.

Another fascinating medieval story is that of Christine de Pizan, author of The Book of the City of Ladies. This is a very good lecture about her (in Italian): Come pensava una donna nel Medioevo? 2 - Christine de Pizan. You can read some of her books at the Memory of the World library.

If you understand Italian, Alessandro Barbero gives fascinating lectures. You can find them indexed in a timeline, or on a map.

Still from around the middle ages, we get playing cards: see Playing Cards Around the World and Through the Ages.

If you want to go have a look in person, and you overshoot with your time machine, here's a convenient route planner for antique Roman roads.

View all historical links that I have shared.

Planet DebianBits from Debian: Debian turns 27!

Today is Debian's 27th anniversary. We recently wrote about some ideas to celebrate the DebianDay, you can join the party or organise something yourselves :-)

Today is also an opportunity for you to start or resume your contributions to Debian. For example, you can scratch your creative itch and suggest a wallpaper to be part of the artwork for the next release, have a look at the DebConf20 schedule and register to participate online (August 23rd to 29th, 2020), or put a Debian live image on a DVD or USB stick and give it to someone near you who hasn't discovered Debian yet.

Our favorite operating system is the result of all the work we do together. Thanks to everybody who has contributed in these 27 years, and happy birthday Debian!

Planet DebianAndrej Shadura: Useful FFmpeg commands for video editing

As a response to Antonio Terceiro’s blog post, I’m publishing some FFmpeg commands I’ve been using recently.

Embedding subtitles

Sometimes you have a video with subtitles in multiple languages and you don’t want to clutter the directory with a lot of similarly-named files — or maybe you want to be able to easily transfer the video and subtitles at once. In this case, it may be useful to embed to subtitles directly into the video container file.

# subtitles.srt is a placeholder for your subtitle file
ffmpeg -i video.mp4 -i subtitles.srt -map 0:v -map 0:a -c copy -map 1 \
        -c:s:0 mov_text -metadata:s:s:0 language="eng" video-out.mp4

This command recodes the subtitle file into a format appropriate for the MP4 container and embeds it with a metadata element telling the video player what language it is in. You can add multiple subtitles at once, or you can also transcode the audio to AAC while doing so (I found that a lot of Android devices can’t play Ogg Vorbis streams):

# subs-de.srt and subs-en.srt are placeholders for your subtitle files
ffmpeg -i video.mp4 -i subs-de.srt -i subs-en.srt -map 0:v -map 0:a \
        -c:v copy -c:a aac -map 1 -c:s:0 mov_text -metadata:s:s:0 language="deu" \
                           -map 2 -c:s:1 mov_text -metadata:s:s:1 language="eng" video-out.mp4

‘Hard’ subtitles

Sometimes you need to play the video with subtitles on devices not supporting them. In that case, it may be useful to ‘hardcode’ the subtitles directly into the video stream:

# subtitles.srt is a placeholder for your subtitle file
ffmpeg -i video.mp4 -vf subtitles=subtitles.srt video-out.mp4

Unfortunately, if you also want to apply more transformations to the video, it starts getting tricky, as the -vf option is no longer enough:

# subtitles.srt and video-out.mp4 are placeholder names
ffmpeg -i video.mp4 -i overlay.jpg -filter:a "volume=10" \
        -filter_complex '[0:v][1:v]overlay[outv];[outv]subtitles=subtitles.srt[outs]' \
        -map '[outs]' -map 0:a video-out.mp4

This command adds an overlay to the video stream (in my case I overlaid a full frame over the original video offering some explanations), increases the volume ten times and adds hard subtitles.

P.S. You can see the practical application of the above in this video with a head of one of the electoral commissions in Belarus forcing the members of the staff to manipulate the voting results. I transcribed the video in both Russian and English and encoded the English subtitles into the video.

LongNowKathryn Cooper’s Wildlife Movement Photography

Amazing wildlife photography by Kathryn Cooper reveals the brushwork of birds and their flocks through the sky, hidden by the quickness of the human eye.

“Staple Newk” by Kathryn Cooper.

Ever since Eadweard Muybridge’s pioneering photography of animal locomotion in 01877 and 01878 (including the notorious “horse shot by pistol” frames from an era less concerned with animal experiments), the trend has been to unpack our lived experience of movement into serial, successive frames. The movie camera smears one pace layer out across another, lets the eye scrub over one small moment.

“UFO” by Kathryn Cooper.

In contrast, time-lapse and long exposure camerawork implodes the arc of moments, an integral calculus that gathers the entire gesture. Cooper’s flock photography is less the autopsy of high-speed video and more the graceful ensō drawn by a zen master.

Learn More

LongNowPuzzling artifacts found at Europe’s oldest battlefield

Bronze-Age crime scene forensics: newly discovered artifacts only deepen the mystery of a 3,300-year-old battle. What archaeologists previously thought to be a local skirmish looks more and more like a regional conflict that drew combatants in from hundreds of kilometers away…but why?

Much like the total weirdness of the Ediacaran fauna of 580 million years ago, this oldest Bronze-Age battlefield is the earliest example of its kind in the record…and firsts are always difficult to analyze:

Among the stash are also three bronze cylinders that may have been fittings for bags or boxes designed to hold personal gear—unusual objects that until now have only been discovered hundreds of miles away in southern Germany and eastern France.

‘This was puzzling for us,’ says Thomas Terberger, an archaeologist at the University of Göttingen in Germany who helped launch the excavation at Tollense and co-authored the paper. To Terberger and his team, that lends credence to their theory that the battle wasn’t just a northern affair.

Anthony Harding, an archaeologist and Bronze Age specialist who was not involved with the research, is not convinced: ‘Why would a warrior be going round with a lot of scrap metal?’ he asks. To interpret the cache—which includes distinctly un-warlike metalworking gear—as belonging to warriors is ‘a bit far-fetched to me,’ he says.

Planet DebianSteinar H. Gunderson: Numbering Scrabble leaves

I've toyed a bit with Quackle, a Scrabble AI, recently. I don't have any particular use for it, but it's a fun exercise in optimization; unlike chess AIs, where everything is hyper-optimized and tuned to death, it seems Scrabble AIs still have some low-hanging fruit to pick, so I've been sending some patches.

One interesting sub-problem is that of looking up superleaves. A leave in Scrabble (not a leaf!) is what remains on your rack after you play a word, and that needs to be taken into account. If, for instance, you lay a (single-letter) word and are left with ACEHLP, that is great, because those tiles go really well together with almost everything else (as well as each other) and will give you a high chance of playing a bingo later. But if you're left with IIIIU, your next move will likely be to exchange tiles, so that's a low-scoring leave. (Of course, you'll never be able to choose between those two specific leaves, but you could easily have to choose between e.g. ERS and JUU. The former is great, the latter is hard to work with.)

Superleaves come from a table (I don't know how they were calculated in the first place), and there are roughly a million of them for English. Great! We can just stick them into a hash table, done deal. Back when I worked in Google, someone was only half-joking when they said “if you're at a Google interview and don't know the answer to the interview question, it's probably hash table”… but can we do better? In particular, std::unordered_map (Quackle is C++) isn't fantastic in most implementations, especially since we'd like to stay within the L2 cache if possible, so perhaps we could just replace the entire thing with a flat array? Also, well, it's an interesting problem in its own right.

For the array, we'd need a way (not involving a hash table!) to give each leave a position in the array, and calculate that position quickly. Note that this is distinct from enumerating all possible leaves, which is easy with some recursion; this is numbering one given leave.

So what we are after is a minimal perfect hash function for leaves, except just reimplementing one of those algorithms didn't appeal to me. Let's sum up some requirements:

  • We'd like to map each leave into a unique integer between 0 and 914,623 inclusive. (We could probably accept some holes if need be.) No two different leaves can map to the same integer, but we don't care about which goes where.
  • There are 27 different tiles; the 26 English letters and the blank.
  • There are 100 tiles in all; the tile distribution is known ahead of time. This means you cannot have e.g. a leave CCCCD, because there are only two Cs in the bag.
  • Leaves are 1 to 6 letters long.
  • Order does not matter; QU and UQ are the same.
  • Computation must be about as fast as computing the hash of the string.
  • Any tables involved should be small (think a couple hundred bytes) and fast to precompute, as we'd like to adapt the algorithm to any language and tile distribution.

For simplicity right now, let's say we fix the length of the leaves to six letters; it's trivial just to have six tables and append them, so this takes away some complexity and none of the generality.

Second, we can convert our “ordering does not matter” specification into imposing order on the rack. Simply sort the leave before the rest of the algorithm runs; Quackle already does this. In a sense, we've chosen a canonical representation for each leave, and forbid all others.

Even so, I fiddled with this problem for a while, and finding the right angle of attack wasn't immediately easy. My first thought was that this should be easy with some combinatorics; the first tile (call it a) can have a value 0..26, then the second one (b) can be perhaps 10..26 if the first tile is J, so that leave would be a + 27 * (b - 10) + 27 * 17 * (c - ...) + … it doesn't really work out. You end up with something that's uniquely decodable, which shows that there are no collisions, but there are many integers that don't map to anything, so you get a way too large array. (If you see it as a compression problem, some leaves get shorter bit strings than others since e.g. having the leave start with W means there's only one possible rack WWXYYZ, which isn't what we want here.)

I also considered “shapes” of racks, e.g. for three-letter racks, you can have either three different ones (1–1–1), one duplicated and then a different one (2–1), the other way around (1–2) or three equals (3). But it didn't seem to be going anywhere either, and the annoying fact that there's a max on each tile doesn't make the combinatorics solution any easier.

The insight that helped me eventually was a trivial one: If you can count, you can place! If you're in a queue in the supermarket, and you can see that there are five people in front of you, you know you're number six in the queue. This requires precise counting of subsets, though, and a good way of ordering those subsets. (We could probably do with overcounting in certain situations, but let's not go there.) So let's start with counting how many leaves there are (even though I already mentioned there are 914,624 :-) ) in all.

First, I'm going to turn the problem a bit on its head. Remember how we turned the “ordering doesn't matter” rule into a constraint; now we'll be doing the same thing for the “all Cs are the same, but there are only two of them” constraint. We'll pretend we've numbered all the tiles with some invisible ink; our C tiles are now called C1 and C2, our A tiles are A1..A9 and so on. We impose the rule that you cannot have an (n+1) tile in your leave unless you also have the n tile; so, you cannot have C2 unless you have C1, you cannot have U3 without both U1 and U2, and so on. (This means you cannot have A7, A8 or A9 at all, since leaves are not long enough, but we won't be using that.) Again, we've gone towards a more canonical representation, so now our job is to see how many ways we can pick out six tiles out of 100 with our two constraints.

This just screams dynamic programming, and the recursion is not difficult in this case. We'll create a function called N(T, L) that counts “how many leaves of length L can we create, if tile number T is the first tile?”. For L=1, the answer is obviously 1, so N(T, 1) = 1. For L > 1, we can define it recursively; for every T' > T (remember ordering!) that doesn't interfere with our canonicality constraint, we get N(T', L - 1) leaves, so just sum up those. Legal values for T' are easy to figure out; we can pick the one next to T (e.g. if T was E2, T' can be E3) and apart from that, only F1, G1, H1 and so on.

So the sum over all possible N(T, 6) (where T is the tile number for A1, B1, C1, etc.) will give us the number of different 6-letter leaves; or N(-1, 7) if you want. This is 737,311 six-letter leaves, 148,150 five-letter leaves and so on down to 27 one-letter leaves; in total, 914,624 if we allow any length.

Great, so that allowed us to count. Now for the numbering problem, which is fairly similar. The idea is that we'll impose a restriction on the leaves in the form of which tiles you're allowed to start on (which, due to the ordering, restricts which ones you're allowed to use), and then gradually loosen up that restriction. More restrictive leaves come first; the most restrictive choice is to start with W1, which gives us WWXYYZ as the most restricted leave, which is number 0. There's only one possibility with W (we can count that using our function N!), so we know that if we start with V1, our position has to be 1 or greater! And if we start with U1, we can call on our function again and see that there are 12 leaves that start with V1 or W1, so our position has to be at least 12.

This really gives the rest of the algorithm away. First, convert the leave into tile numbers (e.g. AEER becomes A1, E1, E2, R1, which are tiles number 0, 18, 19, 71). Then, for the first tile (T=0), count how many leaves can be made without allowing that tile, by summing N(T', 4) for all legal T' > 0. (The sums can be easily precomputed in a table.) Now we know where the numbering of the A... leaves starts, so we just need to figure out where the E.. sub-leaves start. So for the second tile, count how many sub-leaves can be made without allowing that tile, by summing N(T', 3) for all legal T' > 18. And so on. It's just one addition and a small table lookup per tile, which is fairly fast. Mission accomplished!
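Here is a self-contained Python sketch of this ranking step; the distribution table, the blank-first tile ordering, and all names are my reconstruction, not Quackle's actual code:

```python
from functools import lru_cache

# Standard English Scrabble distribution; '?' is the blank (ordering is an
# assumption on my part).
DISTRIBUTION = [('?', 2), ('A', 9), ('B', 2), ('C', 2), ('D', 4), ('E', 12),
                ('F', 2), ('G', 3), ('H', 2), ('I', 9), ('J', 1), ('K', 1),
                ('L', 4), ('M', 2), ('N', 6), ('O', 8), ('P', 2), ('Q', 1),
                ('R', 6), ('S', 4), ('T', 6), ('U', 4), ('V', 2), ('W', 2),
                ('X', 1), ('Y', 2), ('Z', 1)]

LETTER_INDEX = {letter: li for li, (letter, _) in enumerate(DISTRIBUTION)}
TILES = []   # tile number -> (letter index, copy number)
FIRST = []   # letter index -> tile number of that letter's first copy
for li, (letter, copies) in enumerate(DISTRIBUTION):
    FIRST.append(len(TILES))
    TILES.extend((li, copy) for copy in range(copies))

def legal_next(t):
    """Tiles that may follow tile t in a canonical leave: the next copy of
    the same letter, or the first copy of any later letter."""
    li, copy = TILES[t]
    nxt = [t + 1] if copy + 1 < DISTRIBUTION[li][1] else []
    return nxt + FIRST[li + 1:]

@lru_cache(maxsize=None)
def N(t, length):
    """How many canonical leaves of the given length start with tile t?"""
    if length == 1:
        return 1
    return sum(N(u, length - 1) for u in legal_next(t))

def rank(leave):
    """Unique number of a leave (a string, '?' = blank) among all leaves of
    the same length; more restricted leaves get lower numbers."""
    # Canonicalize: sort, then the k-th occurrence of a letter becomes that
    # letter's k-th numbered tile.
    seen, tiles = {}, []
    for ch in sorted(leave, key=LETTER_INDEX.get):
        k = seen.get(ch, 0)
        tiles.append(FIRST[LETTER_INDEX[ch]] + k)
        seen[ch] = k + 1
    pos, candidates = 0, FIRST
    for i, t in enumerate(tiles):
        # Count the leaves whose tile in this slot comes later than ours;
        # all of them precede us in the numbering.
        pos += sum(N(u, len(tiles) - i) for u in candidates if u > t)
        candidates = legal_next(t)
    return pos
```

Because every slot counts each more-restricted alternative exactly once, distinct leaves of the same length get distinct, consecutive numbers, and WWXYYZ, the most restricted six-tile leave in this ordering, comes out as number 0.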

Oh, and the most important optimization? Don't do the lookup twice…

Planet DebianGunnar Wolf: DebConf20 talk recorded

Following Antonio Terceiro’s post on tips for using ffmpeg for editing video, I will also share a bit of my experience producing my video for my session in DebConf20.

I recorded my talk today. As Terceiro mentioned, even though I’m used to speaking in front of my webcam (i.e. for my classes and some smaller conferences I’ve worked on during the COVID lockdown), it does feel a bit weird to present a live talk to… nobody :-|

OK, one step back. Why are we doing this? Because our hardworking friends of the DebConf20 video team recommended so. In order to minimize connectivity issues from the variety of speakers throughout the world, we were requested to pre-record the exposition part of our talks, send them to the video team (deadline: today 2020-08-16, in case you still owe yours!), and make sure to be present at the end of the talk for the Q&A session. Of course, for a 45 minute talk, I prepared a 30 minute presentation, saving time for said Q&A session.

Anyway, I used the excellent OBS Studio live video mixing/editing program (of course, Debian packages are available). This allowed me to set up several predefined views (combinations and layouts of the presentation, webcam, and maybe some other sources) and professionally and elegantly switch between them on the fly.

I am still a newbie with OBS, but I surely see it becoming a part of my day to day streaming. Of course, my setup still was obvious (me looking right every now and then to see or control OBS, as I work on a dual-monitor setup…)

Anyway, the experience was very good, much smoother and faster than what I usually have to do when editing video. But just as I was finishing thanking the (future) audience and closing the recording… I had to tell the camera, “oh, fuck!”

The button labeled “Start Recording”… Had not been pressed. So, did I just lose 30 minutes of my life, plus a half-decent delivered talk? No, fortunately not. I had previously been playing with OBS, and configured some things. The button I did press was “Start Streaming”.

So, my talk (swearing included, of course) was dutifully streamed over to my YouTube channel. It seems up to five people got a sneak preview as to what will my DebConf participation be (of course, I’ve de-listed the video). I pulled it with the always-handy youtube-dl, edited out my curses using kdenlive, and pushed it to the DebConf video server.

Oh, make sure you follow the advice for recording presentations. It has all the relevant advice, the settings you should use, and much more welcome information if you are new to this.

So… Next week, DebConf20! Be there or be square!


Planet DebianAntonio Terceiro: Useful ffmpeg commands for editing video

For DebConf20, we are recommending that speakers pre-record the presentation part of their talks, and will have live Q&A. We had a smaller online MiniDebConf a couple of months ago, where for instance I had connectivity issues during my talk, so even though it feels too artificial, I guess pre-recording can decrease by a lot the likelihood of a given talk going bad.

Paul Gevers and I submitted a short 20 min talk giving an update on autopkgtest and friends. We will provide the latest updates on autopkgtest, autodep8, and debci, and their integration with the Debian testing migration software, britney.

We agreed on a split of the content, each one recorded their part, and I offered to join them together. The logical chaining of the topics is such that we can't just concatenate the recordings, so we need to interlace our parts.

So I set out to do a full video editing work. I have done this before, although in a simpler way, for one of the MiniDebconfs we held in Curitiba. In that case, it was just cutting the noise at the beginning and the end of the recording, and adding beginning and finish screens with sponsors logos etc.

The first issue I noticed was that both our recordings had a decent amount of audio noise. To extract the audio track from the videos, I resorted to How can I extract audio from video with ffmpeg? on Stack Overflow:

ffmpeg -i input-video.avi -vn -acodec copy output-audio.aac

I then edited the audio with Audacity. I passed a noise reduction filter a couple of times, then applied a compressor filter on my track to amplify it, as Paul's already had a good volume. And those are my most advanced audio editing skills, which I acquired doing my own podcast.

I later realized I could have just muted the audio tracks of the original clips and aligned the noise-free audio with them, but I ended up creating new video files with the clean audio. Another member of the Stack Overflow family came to the rescue, in How to merge audio and video file in ffmpeg. To replace the audio stream, we can do something like this:

ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 output.mp4

Paul's recording had a 4:3 aspect ratio, while the requested format is 16:9. This late in the game, there was zero chance I would request him to redo the recording. So I decided to add those black bars on the side to make it the right aspect when showing full screen. And yet again the quickest answer I could find came from the Stack Overflow empire: ffmpeg: pillarbox 4:3 to 16:9:

ffmpeg -i "input43.mkv" -vf "scale=640x480,setsar=1,pad=854:480:107:0" [etc..]

The final editing was done with pitivi, which is what I have used before. I'm a very basic user, but I could do what I needed. It was basically splitting the clips at the right places, inserting the slides as images and aligning them with the video, and making our video appear small in the corner when presenting the slides.

P.S.: all the command lines presented here are examples, basically copied from the linked Q&As, and have to be adapted to your actual input and output formats.

Planet DebianSylvain Beucler: Planet upgrade logo

The system running Planet Debian was upgraded/reinstalled to Debian 10 "buster" :)
Documentation was updated.

Let me know if you notice any issues.

For the next upgrade, we'll have to decide whether to take over Planet Venus and upgrade it to Python 3, or migrate to another Planet software.
Suggestions/help welcome :)

Planet DebianJelmer Vernooij: Debian Janitor: 8,200 changes landed so far

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The bot has been submitting merge requests for about seven months now. The rollout has happened gradually across the Debian archive, and the bot is now enabled for all packages maintained on Salsa, GitLab, GitHub and Launchpad.

There are currently over 1,000 open merge requests, and close to 3,400 merge requests have been merged so far. Direct pushes are enabled for a number of large Debian teams, with about 5,000 direct pushes to date. That covers about 11,000 lintian tags of varying severities (about 75 different varieties) fixed across Debian.

Janitor pushes over time Janitor merges over time

For more information about the Janitor's lintian-fixes efforts, see the landing page.


CryptogramFriday Squid Blogging: Editing the Squid Genome

Scientists have edited the genome of the Doryteuthis pealeii squid with CRISPR. A first.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityMedical Debt Collection Firm R1 RCM Hit in Ransomware Attack

R1 RCM Inc. [NASDAQ:RCM], one of the nation’s largest medical debt collection companies, has been hit in a ransomware attack.

Formerly known as Accretive Health Inc., Chicago-based R1 RCM brought in revenues of $1.18 billion in 2019. The company has more than 19,000 employees and contracts with at least 750 healthcare organizations nationwide.

R1 RCM acknowledged taking down its systems in response to a ransomware attack, but otherwise declined to comment for this story.

The “RCM” portion of its name refers to “revenue cycle management,” an industry which tracks profits throughout the life cycle of each patient, including patient registration, insurance and benefit verification, medical treatment documentation, and bill preparation and collection from patients.

The company has access to a wealth of personal, financial and medical information on tens of millions of patients, including names, dates of birth, Social Security numbers, billing information and medical diagnostic data.

It’s unclear when the intruders first breached R1’s networks, but the ransomware was unleashed more than a week ago, right around the time the company was set to release its 2nd quarter financial results for 2020.

R1 RCM declined to discuss the strain of ransomware it is battling or how it was compromised. Sources close to the investigation tell KrebsOnSecurity the malware is known as Defray.

Defray was first spotted in 2017, and its purveyors have a history of specifically targeting companies in the healthcare space. According to Trend Micro, Defray usually is spread via booby-trapped Microsoft Office documents sent via email.

“The phishing emails the authors use are well-crafted,” Trend Micro wrote. For example, in an attack targeting a hospital, the phishing email was made to look like it came from a hospital IT manager, with the malicious files disguised as patient reports.

Email security company Proofpoint says the Defray ransomware is somewhat unusual in that it is typically deployed in small, targeted attacks as opposed to large-scale “spray and pray” email malware campaigns.

“It appears that Defray may be for the personal use of specific threat actors, making its continued distribution in small, targeted attacks more likely,” Proofpoint observed.

A recent report (PDF) from Corvus Insurance notes that ransomware attacks on companies in the healthcare industry have slowed in recent months, with some malware groups even dubiously pledging they would refrain from targeting these firms during the COVID-19 pandemic. But Corvus says that trend is likely to reverse in the second half of 2020 as the United States moves cautiously toward reopening.

Corvus found that while services that scan and filter incoming email for malicious threats can catch many ransomware lures, an estimated 75 percent of healthcare companies do not use this technology.

CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

CryptogramDrovorub Malware

The NSA and FBI have jointly disclosed Drovorub, a Russian malware suite that targets Linux.

Detailed advisory. Fact sheet. News articles. Reddit thread.

Planet DebianRussell Coker: Jitsi on Debian

I’ve just setup an instance of the Jitsi video-conference software for my local LUG. Here is an overview of how to set it up on Debian.

Firstly create a new virtual machine to run it. Jitsi is complex and has lots of inter-dependencies. Its packages want to help you by dragging in other packages and configuring them. This is great if you have a blank slate to start with, but if you already have one component installed and running then it can break things. It wants to configure the Prosody Jabber server and a web server, and my first attempt at an install failed when it tried to reconfigure the running instances of Prosody and Apache.

Here’s the upstream install docs [1]. They cover everything fairly well, but I’ll document the configuration I wanted (basic public server with password required to create a meeting).

Basic Installation

The first thing to do is to get a short DNS name. People will type it every time they connect and will thank you for making it short.

Using Certbot for certificates is best. It seems that you need them for more than just the main hostname.

apt install curl certbot
/usr/bin/letsencrypt certonly --standalone -d, -m
curl | gpg --dearmor > /etc/apt/jitsi-keyring.gpg
echo "deb [signed-by=/etc/apt/jitsi-keyring.gpg] stable/" > /etc/apt/sources.list.d/jitsi-stable.list
apt-get update
apt-get -y install jitsi-meet

When apt installs jitsi-meet and its dependencies you get asked many questions for configuring things. Most of it works well.

If you get the nginx certificate wrong or don’t have the full chain then phone clients will abort connections for no apparent reason. It seems that you need to edit the site file under /etc/nginx/sites-enabled/ to use the following ssl configuration:

ssl_certificate /etc/letsencrypt/live/;
ssl_certificate_key /etc/letsencrypt/live/;

Then you have to edit /etc/prosody/conf.d/ to use the following ssl configuration:

key = "/etc/letsencrypt/live/";
certificate = "/etc/letsencrypt/live/";

It seems that you need to have an /etc/hosts entry with the public IP address of your server and the names “ j”. Jitsi also appears to use the names “” but they aren’t required for a basic setup, I guess you could add them to /etc/hosts to avoid the possibility of strange errors due to it not finding an internal host name. There are optional features of Jitsi which require some of these names, but so far I’ve only used the basic functionality.

Access Control

This section describes how to restrict conference creation to authenticated users.

The secure-domain document [2] shows how to restrict access, but I’ll summarise the basics.

Edit /etc/prosody/conf.avail/ and use the following line in the main VirtualHost section:

        authentication = "internal_hashed"

Then add the following section:

VirtualHost ""
        authentication = "anonymous"
        c2s_require_encryption = false
        modules_enabled = {

Edit /etc/jitsi/meet/ and add the following line:

        anonymousdomain: '',

Edit /etc/jitsi/jicofo/ and add the following line:

Then run commands like the following to create new users who can create rooms:

prosodyctl register admin

Then restart most things (Prosody at least, maybe parts of Jitsi too), I rebooted the VM.

Now only the accounts you created on the Prosody server will be able to create new meetings. You should be able to add, delete, and change passwords for users via prosodyctl while it’s running once you have set this up.


Once I gave up on the idea of running Jitsi on the same server as anything else it wasn’t particularly difficult to set up. Some bits were a little fiddly and hopefully this post will be a useful resource for people who have trouble understanding the documentation. Generally it’s not difficult to install if it is the only thing running on a VM.

Planet DebianMarkus Koschany: My Free Software Activities in July 2020

Here is my monthly report (+ the first week in August) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Last month GCC 10 became the new default compiler for Debian 11 and compilation errors are now release critical. The change affected dozens of games in the archive but fortunately most of them are rather easy to fix and a quick workaround is available. I uploaded several packages with patches from Reiner Herrmann including blastem, freegish, gngb, phlipple, xaos, xboard, gamazons and freesweep. I could add to this list atomix, teg, neverball and biniax2. I am quite confident we can fix the rest of those FTBFS bugs before the freeze.
  • Finally freeorion 0.4.10 was released last month. Among new gameplay changes and bug fixes, freeorion’s Python 2 code was ported to Python 3.
  • Due to the ongoing Python 2 removal pygame-sdl2 in unstable could no longer be built from source and I had to upload the new Python 3 version from experimental. This in turn breaks renpy, a framework for developing visual-novel type games. At the moment it is uncertain if there will be a Python 3 version of renpy for Debian 11 in time while this issue is still being worked on upstream.
  • I uploaded a new upstream release of mgba, a Game Boy Advance emulator, for Ryan Tandy.

Debian Java


  • I fixed the GCC 10 FTBFS in iftop and packaged a new upstream release of osmo, a lean and lightweight personal organizer.
  • New versions of privacybadger, binaryen, wabt and most importantly ublock-origin are also available now. Since the new binary packages webext-ublock-origin-firefox and webext-ublock-origin-chromium were finally accepted into the archive, I am planning to package version 1.29.0 now.

Debian LTS

This was my 53rd month as a paid contributor and I have been paid to work 15 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2278-2. Issued a regression update for squid3. It was discovered that the patch for CVE-2019-12523 interrupted the communication between squid and icap or ecap services. The setup is most commonly used with clamav or similar antivirus scanners. I debugged the problem and created a new patch to address the error. In this process I also updated the patch for CVE-2019-12529 to use more code from Debian’s cryptographic nettle library. I also enabled the test suite by default now and corrected a failing test.
  • I have been working on fixing CVE-2020-15049 in squid3. The upstream patch for the 4.x series appears to be simple but to completely address the underlying problem, squid3 requires a backport of the new HttpHeader parsing code which has improved a lot over the last couple of years. The patch is complete but requires more testing. A new update will follow soon.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 26th month and I have been paid to work 13.25 hours on ELTS.

  • ELA-242-1. Issued a security update for tomcat7 fixing 1 CVE.
  • ELA-243-1. Issued a security update for tomcat8 fixing 1 CVE.
  • ELA-253-1. Issued a security update for imagemagick fixing 18 CVEs.
  • ELA-254-1. Issued a security update for libssh fixing 1 CVE.

Thanks for reading and see you next time.

LongNowHow to Be in Time

Photograph: Scott Thrift.

“We already have timepieces that show us how to be on time. These are timepieces that show us how to be in time.”

– Scott Thrift

Slow clocks are growing in popularity, perhaps as a tonic for or revolt against the historical trend of ever-faster timekeeping mechanisms.

Given that bell tower clocks were originally used to keep monastic observances of the sacred hours, it seems appropriate to restore some human agency in timing and give kairos back some of the territory it lost to the minute and second hands so long ago…

Scott Thrift’s three conceptual timepieces measure with only one hand each, counting 24-hour, one-month, and one-year cycles with each revolution. Not quite 10,000 years, but it’s a consumer-grade start.

“Right now we’re living in the long-term effects of short-term thinking. I don’t think it’s possible really for us to commonly think long term if the way that we tell time is with a short-term device that just shows the seconds, minutes, and hours. We’re precluded to seeing things in the short term.”

– Scott Thrift

Planet DebianJonathan Carter: bashtop, now in buster-backports

Recently, I discovered bashtop, yet another fancy top-like utility that’s mostly written in bash (it uses some python3-psutil and shells out to other common system utilities). I like its use of high-colour graphics, and despite being written in bash it’s not as resource-heavy as I would have expected; it's also quite snappy (even on a Raspberry Pi). While writing this post, I also discovered that the author of bashtop ported it to Python, and that the Python version is called bpytop (hmm, doesn’t quite have the same ring to it). It is even faster and less resource-intensive than the bash version (although I haven’t tried it yet, I guess I will soon…).

I set out to package it, but someone beat me to it. Since I’m also on the backports team these days, I went ahead and backported it for buster. So if you have backports enabled, you can now install it using “apt install bashtop -t buster-backports”.

Dylan Aïssi, who packaged bashtop in Debian, has already filed an ITP for bpytop, so we’ll soon have yet another top-like tool in our collection :-)

Planet DebianSven Hoexter: Retrieve WLAN PSK via nmcli

Note to myself so I do not have to figure this out every few months when I have to dig out a WLAN PSK from my existing configuration.

Step 1: Figure out the UUID of the network:

$ nmcli con show
NAME                  UUID                                  TYPE      DEVICE          
br-59d010130b86       d8672d3d-7cf6-484f-9ef8-e6ec3e73bef7  bridge    br-59d010130b86 
FRITZ!Box 7411        1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2  wifi      wlp4s0

Step 2: Request to view the PSK for this network based on the UUID

$ nmcli --show-secrets --fields 802-11-wireless-security.psk con show '1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2'
802-11-wireless-security.psk:           0815471123420511111

Planet DebianJonathan Dowland: Generic Haskell

When I did the work described earlier in template haskell, I also explored generic programming in Haskell to solve a particular problem. StrIoT is a program generator: it outputs source code, which may depend upon other modules, which need to be imported via declarations at the top of the source code files.

The data structure that StrIoT manipulates contains information about what modules are loaded to resolve the names that have been used in the input code, so we can walk that structure to automatically derive an import list. The generic programming tools I used for this are from Scrap Your Boilerplate (SYB), a module written to complement a paper of the same name. In this code snippet, everything and mkQ are from SYB:

extractNames :: Data a => a -> [Name]
extractNames = everything (++) (\a -> mkQ [] f a)
     where f = (:[])

The input must be any type which implements typeclass Data, as must all its members (and their members etc.): this holds for the Template Haskell Exp types. The output is a normal list of Names. The utility function f has a more specific type, Name -> [Name]. This is all that's needed to walk over the heterogeneous data structures and do something specific (f) when we encounter a Name.

Post-processing the Names to get a list of modules is simple:

 nub . catMaybes . map nameModule . concatMap extractNames

Unfortunately, there's a weird GHC behaviour relating to the module names for some Prelude functions that makes the above less useful in practice. For example, the Prelude function words :: String -> [String] can normally be used without an explicit import (since it's a Prelude function). However, once round-tripped through a Name, it becomes GHC.OldList.words. Attempting to import GHC.OldList fails in some contexts, because it's a hidden module or package. I've been meaning to investigate further and, if necessary, file a GHC bug about this.

For this reason I've scrapped all the above and gone with a different plan. We go back to requiring the user to specify their required import list explicitly. We then walk over the Exp data type prior to code generation and decanonicalize all the Names. I also use generic programming/SYB to do this:

unQualifyNames :: Exp -> Exp
unQualifyNames = everywhere (\a -> mkT f a)
     where f :: Name -> Name
           f n = if n == '(.)
            then mkName "."
            else (mkName . last . splitOn "." . pprint) n

I've had to special-case composition (.) since that code-point is also used as the delimiter between package, module and function. Otherwise this looks very similar to the earlier function, except using everywhere and mkT (make transformation) instead of everything and mkQ (make query).

Worse Than FailureError'd: New Cat Nullness

"Honest! If I could give you something that had a 'cat' in it, I would!" wrote Gordon P.


"You'd think Outlook would have told me sooner about these required updates," Carlos writes.


Colin writes, "Asking for a friend, does balsamic olive oil still have to be changed every 3,000 miles?"


"I was looking for Raspberry Pi 4 cases on my local when I stumbled upon a pretty standard, boring WTF. Desperate to find an actual picture of the case I was after, I changed to and I guess I got what I wanted," George wrote.


Kevin wrote, "Ah, I get it. Shiny and blinky ads are SO last decade. Real container advertisers nowadays get straight to the point!"


"I noticed this in the footer of an email from my apartment management company and well, I'm intrigued at the possibility of 'rewards'," wrote Peter C.


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianKeith Packard: picolibc-news

Picolibc Updates

I thought work on picolibc would slow down at some point, but I keep finding more things that need work. I spent a few weeks working in libm and then discovered some important memory allocation bugs in the last week that needed attention too.

Cleaning up the Picolibc Math Library

Picolibc uses the same math library sources as newlib, which includes code from a range of sources:

  • SunPro (Sun Microsystems). This forms the bulk of the common code for the math library, with copyright dates stretching back to 1993. This code is designed for processors with FPUs and uses 'float' for float functions and 'double' for double functions.

  • NetBSD. This is where the complex functions came from, with Copyright dates of 2007.

  • FreeBSD. fenv support for aarch64, arm and sparc

  • IBM. SPU processor support along with a number of stubs for long-double support where long double is the same as double

  • Various processor vendors have provided processor-specific code for exceptions and a few custom functions.

  • Szabolcs Nagy and Wilco Dijkstra (ARM). These two re-implemented some of the more important functions in 2017-2018 for both float and double using double precision arithmetic and fused multiply-add primitives to improve performance for systems with hardware double precision support.

The original SunPro math code had been split into two levels at some point:

  1. IEEE-754 functions. These offer pure IEEE-754 semantics, including return values and exceptions. They do not set the POSIX errno value. These are all prefixed with __ieee754_ and can be called directly by applications if desired.

  2. POSIX functions. These can offer POSIX semantics, including setting errno and returning expected values when errno is set.

New Code Sponsored by ARM

Szabolcs Nagy and Wilco Dijkstra's work in the last few years has been to improve the performance of some of the core math functions, which is much appreciated. They've adopted a more modern coding style (C99) and written faster code at the expense of a larger memory footprint.

One interesting choice was to use double computations for the float implementations of various functions. This makes these functions shorter and more accurate than versions done using float throughout. However, for machines which don't have HW double, this pulls in soft double code which adds considerable size to the resulting binary and slows down the computations, especially if the platform does support HW float.

The new code also takes advantage of HW fused-multiply-add instructions. Those offer more precision than a sequence of primitive instructions, and so the new code can be much shorter as a result.

The method used to detect whether the target machine supported fma operations was slightly broken on 32-bit ARM platforms, where those with 'float' fma acceleration but without 'double' fma acceleration would use the shorter code sequence, but with an emulated fma operation that used the less-precise sequence of operations, leading to significant reductions in the quality of the resulting math functions.

I fixed the double fma detection and then also added float fma detection along with implementations of float and double fma for ARM and RISC-V. Now both of those platforms get fma-enhanced math functions where available.

Errno Adventures

I'd submitted patches to newlib a while ago that aliased the regular math library names to the __ieee754_ functions when the library was configured to not set errno, which is pretty common for embedded environments where a shared errno is a pain anyways.

Note the use of the word “can” in the remark above about the old POSIX wrapper functions. That's because all of these functions are run-time switchable between “_IEEE_” and “_POSIX_” mode using the _LIB_VERSION global symbol. When left in the usual _IEEE_ mode, none of this extra code was ever executed, so these wrapper functions never did anything beyond what the underlying __ieee754_ functions did.

The new code by Nagy and Dijkstra changed how functions are structured to eliminate the underlying IEEE-754 api. These new functions use tail calls to various __math_ error reporting functions. Those can be configured at library build time to set errno or not, centralizing those decisions in a few functions.

The result of this combination of source material is that in the default configuration, some library functions (those written by Nagy and Dijkstra) would set errno and others (the old SunPro code) would not. To disable all errno references, the library would need to be compiled with a set of options, -D_IEEE_LIBM to disable errno in the SunPro code and -DWANT_ERRNO=0 to disable errno in the new code. To enable errno everywhere, you'd set -D_POSIX_MODE to make the default value for _LIB_VERSION be _POSIX_ instead of _IEEE_.

To clean all of this up, I removed the run-time _LIB_VERSION variable and made that compile-time. In combination with the earlier work to alias the __ieee754_ functions to the regular POSIX names when _IEEE_LIBM was defined this means that the old SunPro POSIX functions now only get used when _IEEE_LIBM is not defined, and in that case the _LIB_VERSION tests always force use of the errno setting code. In addition, I made the value of WANT_ERRNO depend on whether _IEEE_LIBM was defined, so now a single definition (-D_IEEE_LIBM) causes all of the errno handling from libm to be removed, independent of which code is in use.

As part of this work, I added a range of errno tests for the math functions to find places where the wrong errno value was being used.


As an alternative to errno, C also provides for IEEE-754 exceptions through the fenv functions. These have some significant advantages, including having independent bits for each exception type and having them accumulate instead of sharing errno with a huge range of other C library functions. Plus, they're generally implemented in hardware, so you get exceptions for both library functions and primitive operations.

Well, you should get exceptions everywhere, except that the GCC soft float libraries don't support them at all. So, errno can still be useful if you need to know what happened in your library functions when using soft floats.

Newlib has recently seen a spate of fenv support being added for various architectures, so I decided that it would be a good idea to add some tests. I added tests for both primitive operations, and then tests for library functions to check both exceptions and errno values. Oddly, this uncovered a range of minor mistakes in various math functions. Lots of these were mistakes in the SunPro POSIX wrapper functions where they modified the return values from the __ieee754_ implementations. Simply removing those value modifications fixed many of those errors.

Fixing Memory Allocator bugs

Picolibc inherits malloc code from newlib which offers two separate implementations, one big and fast, the other small and slow(er). Selecting between them is done while building the library, and as Picolibc is expected to be used on smaller systems, the small and slow one is the default.

Contributed by someone from ARM back in 2012/2013, nano-mallocr reminds me of the old V7 memory allocator. A linked list, sorted in address order, holds discontiguous chunks of available memory.

Allocation is done by searching for a large enough chunk in the list. The first one large enough is selected, and if it is larger than needed, it is split: the excess is left on the free list while the remainder is handed to the application. When the list doesn't have any chunk large enough, sbrk is called to get more memory.

Free operations involve walking the list and inserting the chunk in the right location, merging the freed memory with any immediately adjacent chunks to reduce fragmentation.

The size of each chunk is stored just before the first byte of memory used by the application, where it remains while the memory is in use and while on the free list. The free list is formed by pointers stored in the active area of the chunk, so the only overhead for chunks in use is the size field.

Something Something Padding

To deal with the vagaries of alignment, the original nano-mallocr code would allow for there to be 'padding' between the size field and the active memory area. The amount of padding could vary, depending on the alignment required for a particular chunk (in the case of memalign, that padding can be quite large). If present, nano-mallocr would store the padding value in the location immediately before the active area and distinguish that from a regular size field by a negative sign.

The whole padding thing seems mysterious to me -- why would it ever be needed when the allocator could simply create chunks that were aligned to the required value and a multiple of that value in size. The only use I could think of was for memalign; adding this padding field would allow for less over-allocation to find a suitable chunk. I didn't feel like this one (infrequent) use case was worth the extra complexity; it certainly caused me difficulty in reading the code.

A Few Bugs

In reviewing the code, I found a couple of easy-to-fix bugs.

  • calloc was not checking for overflow in multiplication. This is something I've only heard about in the last five or six years -- multiplying the size of each element by the number of elements can end up wrapping around to a small value which may actually succeed and cause the program to mis-behave.

  • realloc copied new_size bytes from the original location to the new location. If the new size was larger than the old, this would read off the end of the original allocation, potentially disclosing information from an adjacent allocation or walk off the end of physical memory and cause some hard fault.

Time For Testing

Once I had uncovered a few bugs in this code, I decided that it would be good to write a few tests to exercise the API. With the tests running on four architectures in nearly 60 variants, it seemed like I'd be able to uncover at least a few more failures:

  • Error tests. Allocate too much memory and make sure the correct errors were returned and that nothing obviously untoward happened.

  • Touch tests. Just call the allocator and validate the return values.

  • Stress test. Allocate lots of blocks, resize them and free them. Make sure, using 'mallinfo', that the malloc arena looked reasonable.

These new tests did find bugs. But not where I expected them. Which is why I'm so fond of testing.

GCC Optimizations

One of my tests was to call calloc and make sure it returned a chunk of memory that appeared to work or failed with a reasonable value. To my surprise, on aarch64, that test never finished. It worked elsewhere, but on that architecture it hung in the middle of calloc itself. Which looked like this:

void *
nano_calloc(malloc_size_t n, malloc_size_t elem)
{
    ptrdiff_t bytes;
    void *mem;

    if (__builtin_mul_overflow(n, elem, &bytes))
        return NULL;
    mem = nano_malloc(bytes);
    if (mem != NULL)
        memset(mem, 0, bytes);
    return mem;
}

Note the naming here -- nano_mallocr uses nano_ prefixes in the code, but then uses #defines to change their names to those expected in the ABI. (No, I don't understand why either). However, GCC sees the real names and has some idea of what these functions are supposed to do. In particular, the pattern:

foo = malloc(n);
if (foo) memset(foo, '\0', n);

is converted into a shorter and semantically equivalent:

foo = calloc(n, 1);

Alas, GCC doesn't take into account that this optimization is occurring inside of the implementation of calloc.

Another sequence of code looked like this:

chunk->size = foo;
nano_free((char *) chunk + CHUNK_OFFSET);

Well, GCC knows that the content of memory passed to free cannot affect the operation of the application, and so it converted this into:

nano_free((char *) chunk + CHUNK_OFFSET);

Remember that nano_mallocr stores the size of the chunk just before the active memory. In this case, nano_mallocr was splitting a large chunk into two pieces, setting the size of the left-over part and placing that on the free list. Failing to set that size value left whatever was there before for the size and usually resulted in the free list becoming quite corrupted.

Both of these problems can be corrected by compiling the code with a couple of GCC command-line switches (-fno-builtin-malloc and -fno-builtin-free).
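
As a build-rule sketch (the file name nano-mallocr.c is assumed), the workaround looks like:

```shell
# Build the allocator with GCC's malloc/free pattern recognition disabled,
# so calloc's own malloc+memset pair is not rewritten into a recursive
# calloc call, and the size store just before nano_free is not eliminated.
gcc -O2 -fno-builtin-malloc -fno-builtin-free -c nano-mallocr.c
```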

Reworking Malloc

Having spent this much time reading through the nano_mallocr code, I decided to just go through it and make it easier for me to read today, hoping that other people (which includes 'future me') will also find it a bit easier to follow. I picked a couple of things to focus on:

  1. All newly allocated memory should be cleared. This reduces information disclosure between whatever code freed the memory and whatever code is about to use the memory. Plus, it reduces the effect of un-initialized allocations as they now consistently get zeroed memory. Yes, this masks bugs. Yes, this goes slower. This change is dedicated to Kees Cook, but please blame me for it not him.

  2. Get rid of the 'Padding' notion. Every time I read this code it made my brain hurt. I doubt I'll get any smarter in the future.

  3. Realloc could use some love, improving its efficiency in common cases to reduce memory usage.

  4. Reworking linked list walking. nano_mallocr uses a singly-linked free list and open-codes all list walking. Normally, I'd switch to a library implementation to avoid introducing my own bugs, but in this fairly simple case, I think it's a reasonable compromise to open-code the list operations using some patterns I learned while working at MIT from Bob Scheifler.

  5. Discover necessary values, like padding and the limits of the memory space, from the environment rather than having them hard-coded.


To get rid of 'Padding' in malloc, I needed to make sure that every chunk was aligned and sized correctly. Remember that there is a header on every allocated chunk which is stored before the active memory which contains the size of the chunk. On 32-bit machines, that size is 4 bytes. If the machine requires allocations to be aligned on 8-byte boundaries (as might be the case for 'double' values), we're now going to force the alignment of the header to 8-bytes, wasting four bytes between the size field and the active memory.

Well, the existing nano_mallocr code also wastes those four bytes to store the 'padding' value. Using a consistent alignment for chunk starting addresses and chunk sizes has made the code a lot simpler and easier to reason about while not using extra memory for normal allocation. Except for memalign, which I'll cover in the next section.
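
The rounding rule behind this can be sketched in a few lines; `chunk_round` is an invented name, and the `MALLOC_ALIGN` value of 8 is an assumption covering most of the targets discussed here:

```c
#include <stddef.h>

/* Assumed alignment; the real value is discovered from the compiler. */
#define MALLOC_ALIGN ((size_t)8)

/* Round a requested size up to a multiple of MALLOC_ALIGN. With every
   chunk size a multiple of the alignment and the heap base aligned,
   every chunk start address stays aligned too, so no separate padding
   field is ever needed. */
size_t chunk_round(size_t size)
{
    return (size + MALLOC_ALIGN - 1) & ~(MALLOC_ALIGN - 1);
}
```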


The original nano_realloc function was as simple as possible:

mem = nano_malloc(new_size);
if (mem) {
    memcpy(mem, old, MIN(old_size, new_size));
    nano_free(old);
}
return mem;

However, this really performs badly when the application is growing a buffer while accumulating data. A couple of simple optimizations occurred to me:

  1. If there's a free chunk just after the original location, it could be merged to the existing block and avoid copying the data.

  2. If the original chunk is at the end of the heap, call sbrk() to increase the size of the chunk.

The second one seems like the more important case; in a small system, the buffer will probably land at the end of the heap at some point, at which point growing it to the size of available memory becomes quite efficient.

When shrinking the buffer, instead of allocating new space and copying, if there's enough space being freed for a new chunk, create one and add it to the free list.
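
That shrink test can be written as a predicate; `can_split_tail` and `MIN_CHUNK` are invented names, with the minimum computed for the host rather than a fixed 32-bit target:

```c
#include <stdbool.h>
#include <stddef.h>

/* Smallest space that can form a free chunk: a size field plus a
   'next' pointer for the free list. */
#define MIN_CHUNK (sizeof(size_t) + sizeof(void *))

/* Sketch of the shrink decision described above: only carve a new free
   chunk out of the tail when the freed space can actually hold one;
   otherwise the tail just stays attached as wasted space. */
bool can_split_tail(size_t old_size, size_t new_size)
{
    return new_size < old_size && old_size - new_size >= MIN_CHUNK;
}
```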

List Walking

Walking singly-linked lists seems like one of the first things we see when learning pointer manipulation in C:

for (element = head; element; element = element->next)
    do stuff ...

However, this becomes pretty complicated when 'do stuff' includes removing something from the list:

prev = NULL;
for (element = head; element; element = element->next) {
    if (found)
        break;
    prev = element;
}

if (prev != NULL)
    prev->next = element->next;
else
    head = element->next;

An extra variable, and a test to figure out how to re-link the list. Bob showed me a simpler way, which I'm sure many people are familiar with:

for (ptr = &head; (element = *ptr); ptr = &(element->next)) {
    if (found)
        break;
}

*ptr = element->next;

Insertion is similar, as you would expect:

for (ptr = &head; (element = *ptr); ptr = &(element->next)) {
    if (found)
        break;
}

new_element->next = element;
*ptr = new_element;

In terms of memory operations, it's the same -- each 'next' pointer is fetched exactly once and the list is re-linked by performing a single store. In terms of reading the code, once you've seen this pattern, getting rid of the extra variable and the conditionals around the list update makes it shorter and less prone to errors.
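
The pattern is easier to trust when it can be run; here is a self-contained sketch where `list_remove`, the `node` type, and a `value` match standing in for 'found' are all invented for illustration:

```c
#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* The pointer-to-pointer removal pattern: walk the chain of 'next'
   pointers themselves, so the head and interior cases are identical
   and re-linking is a single store -- no 'prev' variable needed. */
void list_remove(struct node **head, int value)
{
    struct node **ptr;
    struct node *element;

    for (ptr = head; (element = *ptr) != NULL; ptr = &element->next) {
        if (element->value == value) {
            *ptr = element->next;   /* single store re-links the list */
            return;
        }
    }
}
```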

In the nano_mallocr code, instead of using 'prev = NULL', it actually used 'prev = free_list', and the test for updating the head was 'prev == element', which really caught me unawares.

System Parameters

Any malloc implementation needs to know a couple of things about the system it's running on:

  1. Address space. The maximum range of possible addresses sets the limit on how large a block of memory might be allocated, and hence the size of the 'size' field. Fortunately, we've got the 'size_t' type for this, so we can just use that.

  2. Alignment requirements. These derive from the alignment requirements of the basic machine types, including pointers, integers and floating point numbers which are formed from a combination of machine requirements (some systems will fault if attempting to use memory with the wrong alignment) along with a compromise between memory usage and memory system performance.

I decided to let the system tell me the alignment necessary using a special type declaration and the 'offsetof' operation:

typedef struct {
    char c;
    union {
    void *p;
    double d;
    long long ll;
    size_t s;
    } u;
} align_t;

#define MALLOC_ALIGN        (offsetof(align_t, u))

Because C requires struct fields to be stored in order of declaration, the 'u' field would have to be after the 'c' field, and would have to be assigned an offset equal to the largest alignment necessary for any of its members. Testing on a range of machines yields the following alignment requirements:

Architecture    Alignment
x86_64          8
aarch64         8
arm             8
x86             4

So, I guess I could have just used a constant value of '8' and not worried about it, but using the compiler-provided value means that running picolibc on older architectures might save a bit of memory at no real cost in the code.

Now, the header containing the 'size' field can be aligned to this value, and all allocated blocks can be allocated in units of this value.


memalign, valloc and pvalloc all allocate memory with restrictions on the alignment of the base address and length. You'd think these would be simple -- allocate a large chunk, align within that chunk and return the address. However, they also all require that the address can be successfully passed to free. Which means that the allocator needs to do some tricks to make it all work. Essentially, you allocate 'lots' of memory and then arrange that any bytes at the head and tail of the allocation can be returned to the free list.

The tail part is easy; if it's large enough to form a free chunk (which must contain the size and a 'next' pointer for the free list), it can be split off. Otherwise, it just sits at the end of the allocation being wasted space.

The head part is a bit tricky when it's not large enough to form a free chunk. That's where the 'padding' business came in handy; that can be as small as a 'size_t' value, which (on 32-bit systems) is only four bytes.

Now that we're giving up trying to reason about 'padding', any extra block at the start must be big enough to hold a free block, which includes the size and a next pointer. On 32-bit systems, that's just 8 bytes which (for most of our targets) is the same as the alignment value we're using. On 32-bit systems that can use 4-byte alignment, and on 64-bit systems, it's possible that the alignment required by the application for memalign and the alignment of a chunk returned by malloc might be off by too small an amount to create a free chunk.

So, we just allocate a lot of extra space; enough so that we can create a block of size 'toosmall + align' at the start and create a free chunk of memory out of that.
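
The address arithmetic behind this head trimming is the usual power-of-two rounding; `align_up` is an invented helper name:

```c
#include <stddef.h>
#include <stdint.h>

/* Round an address up to the requested alignment (align must be a
   power of two). memalign can then pick the first aligned address
   whose gap from the chunk start is large enough to form a free chunk,
   and return the head and tail scraps to the free list. */
uintptr_t align_up(uintptr_t addr, size_t align)
{
    return (addr + align - 1) & ~((uintptr_t)align - 1);
}
```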

This works, and at least returns all of the unused memory back for other allocations.

Sending Patches Back to Newlib

I've sent the floating point fixes upstream to newlib where they've already landed on master. I've sent most of the malloc fixes, but I'm not sure they really care about seeing nano_mallocr refactored. If they do, I'll spend the time necessary to get the changes ported back to the newlib internal APIs and merged upstream.

Planet DebianJohn Goerzen: In Which COVID-19 Misinformation Leads To A Bunch of Graphs Made With Rust

A funny — and by funny, I mean sad — thing has happened. Recently the Kansas Department of Health and Environment (KDHE) has been analyzing data from the patchwork implementation of mask requirements in Kansas. They came to a conclusion that shouldn’t be surprising to anyone: masks help. They published a chart showing this. A right-wing propaganda publication got ahold of this, and claimed the numbers were “doctored” because there were two different Y-axes.

I set about to analyze the data myself from public sources, and produced graphs of various kinds using a single Y-axis and supporting the idea that the graphs were not, in fact, doctored. Here’s one graph that’s showing that:

In order to do that, I had imported COVID-19 data from various public sources. Many states in the US are large enough to have significant variation in COVID-19 conditions, and many of the sources people look at don’t show county-level data over time. I wanted to do that.

Eventually, I wrote covid19db, which ingests data from a number of public sources and generates a SQLite database file. Using Github Actions, this file is automatically updated every morning and available for download. Or, you can download the code and generate a database yourself locally.

Then, I wrote covid19ks, which generates various pretty graphs covering the data. These graphs, incidentally, turn out to highlight just how poorly the United States is doing compared to the rest of the industrialized world.

I hope that these resources, and especially covid19db, might be useful to others that would like to analyze the data. The code isn’t the prettiest since it was done in a hurry, but I think that functionally this is useful.

Planet DebianReproducible Builds (diffoscope): diffoscope 156 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 156. This version includes the following changes:

[ Chris Lamb ]
* Update PPU tests for compatibility with Free Pascal versions 3.2.0 or
  greater. (Closes: #968124)
* Emit a debug-level logging message when our ppudump(1) version does not
  match file header.
* Add and use an assert_diff helper that loads and compares a fixture output
  to avoid a bunch of test boilerplate.

[ Frazer Clews ]
* Apply some pylint suggestions to the codebase.

You can find out more by visiting the project homepage.


CryptogramThe NSA on the Risks of Exposing Location Data

The NSA has issued an advisory on the risks of location data.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

The document provides a list of mitigation strategies, including turning things off:

If it is critical that location is not revealed for a particular mission, consider the following recommendations:

  • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
  • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
  • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.

Of course, turning off your wireless devices is itself a signal that something is going on. It's hard to be clandestine in our always connected world.

News articles.

CryptogramUAE Hack and Leak Operations

Interesting paper on recent hack-and-leak operations attributed to the UAE:

Abstract: Four hack-and-leak operations in U.S. politics between 2016 and 2019, publicly attributed to the United Arab Emirates (UAE), Qatar, and Saudi Arabia, should be seen as the "simulation of scandal" ­-- deliberate attempts to direct moral judgement against their target. Although "hacking" tools enable easy access to secret information, they are a double-edged sword, as their discovery means the scandal becomes about the hack itself, not about the hacked information. There are wider consequences for cyber competition in situations of constraint where both sides are strategic partners, as in the case of the United States and its allies in the Persian Gulf.

Planet DebianSven Hoexter: An Average IT Org

Supply chain attacks are a known issue, and also lately there was a discussion around the relevance of reproducible builds. Looking in comparison at an average IT org doing something with the internet, I believe the pressing problem is neither supply chain attacks nor a lack of reproducible builds. The real problem is the amount of prefabricated binaries supplied by someone else, created in an unknown build environment with unknown tools, that the average IT org requires to do anything.

The Mess the World Runs on

By chance I had an opportunity to look at what some other people I know use, and here is the list I could compile by scratching just at the surface:

  • 80% of what HashiCorp releases. Vagrant, packer, nomad, terraform, just all of it. In the case of terraform of course with a bunch of providers and for Vagrant with machine images from the official registry.
  • Lots of ansible usecases, usually retrieved by pip.
  • Jenkins + a myriad of plugins from the Jenkins plugin registry.
  • All the tools/SDKs of a cloud provider du jour to interface with the Cloud. Mostly via 3rd party Debian repository.
  • docker (the repo for dockerd) and DockerHub
  • Mobile SDKs.
  • Kafka fetched somewhere from
  • Binary downloads from github. Many. Go and Rust make it possible.
  • Elastic, more or less the whole stack they offer via their Debian repo.
  • Postgres + the tools around it from the Debian repo.
  • because it's hard to keep up at times.
  • Maven Central.

Of course there are also all the script language repos - Python, Ruby, Node/Typescript - around as well.

Looking at myself, who's working in a different IT org but with a similar focus, I have the following lingering around on my for work laptop and retrieved it as a binary from a 3rd party:

  • dockerd from the docker repo
  • vscode from the microsoft repo
  • vivaldi from the vivaldi repo
  • Google Cloud SDK from the google repo
  • terraform + all the providers from hashicorp
  • govc from github
  • containerdiff from github(yes, by now included in Debian main)
  • github gh cli tool from github
  • wtfutil from github

Yes some of that is even non-free and might contain spyw^telemetry.

Takeaway I

By guessing based on the Pareto Principle, probably 80% of the software mentioned above is also open source software. But, and here we leave Pareto behind, close to none of it is built by the average IT org from source.

Why should the average IT org care about advanced issues like supply chain attacks on source code and mitigations, when it already gets into very hot water the day DockerHub closes down, HashiCorp moves from open core to full proprietary or Elastic decides to no longer offer free binary builds?

The reality out there seems to be that infrastructure of "modern" IT orgs is managed similar to the Windows 95 installation of my childhood. You just grab running binaries from somewhere and run them. The main difference seems to be that you no longer have the inconvenience of downloading a .xls from geocities you've to rename to .rar and that it's legal.

Takeaway II

In the end the binary supply is like a drug for the user, and somehow the Debian project is also just another dealer / middle man in this setup. There are probably a lot of open questions to think about in that context.

Are we the better dealer because we care about signed sources we retrieve from upstream and because we engage in reproducible build projects?

Are our own means of distributing binaries any better than a binary download from github via https with a manual checksum verification, or the Debian repo at

Is the approach of the BSD/Gentoo ports, where you have to compile at least some software from source, the better one?

Do I really want to know how some of the software is actually built?

Or some more candid ones like is gnutls a good choice for the https support in apt and how solid is the gnupg code base? Update: Regarding apt there seems to be some movement.

Kevin RuddBBC World: US-China Tensions

13 AUGUST 2020

Topics: Foreign Affairs article ‘Beware the Guns of August – in Asia’

Mike Embley
Beijing’s crackdown on Hong Kong’s democracy movement has attracted strong criticism, with both Washington and Beijing hitting key figures with sanctions and closing consulates in recent weeks. But that’s not the only issue where the two countries don’t see eye to eye: tensions have been escalating on a range of fronts, including the Chinese handling of the pandemic, the American decision to ban Huawei, and Washington’s allegations of human rights abuses against Uighur Muslims in Xinjiang. So where is all this heading? Let’s try and find out as we speak to Kevin Rudd, former Australian Prime Minister, of course, now the president of the Asia Society Policy Institute. Welcome, very good to talk to you. You’ve been very vocal about China’s attitudes to democracy in Hong Kong, and also the tit-for-tat sanctions between the US and China. Where do you think all this is heading?

Kevin Rudd
Well, if our prism for analysis is where does the US-China relationship go, the bottom line is we haven’t seen this relationship in such fundamental disrepair in about half a century. And as a result, whether it’s Hong Kong, or whether it’s Taiwan, or events unfolding in the South China Sea, this is pushing the relationship into greater and greater levels of crisis. What concerns those of us who study this professionally, and who know both systems of government reasonably well, both in Beijing and Washington, is that the probability of a crisis unfolding either in the Taiwan Straits or in the South China Sea is now growing. And the probability of escalation into a serious shooting match is now real. And the lesson of history is it’s very difficult to de-escalate under those circumstances.

Mike Embley
Yes, I think you’ve spoken in terms of the risk of a hot war, actual war between the US and China. Are you serious?

Kevin Rudd
I am serious, and I’ve not said this before. I’ve been a student of US-China relations for the last 35 years. And I take a genuinely sceptical approach to people who have sounded the alarms in previous periods of the relationship. But those of us who have observed this through the prism of history, I think, have got a responsibility to say to decision makers both in Washington and in Beijing right now: be careful what you wish for, because this is catapulting in a particular direction. When you look at the South China Sea in particular, there you have a huge amount of metal on metal, that is a large number of American ships and a large number of People’s Liberation Army Navy ships, and a similar number of aircraft. The rules of engagement, the standard operating procedures of these vessels are unbeknownst to the rest of us, and we’ve had near misses before. What I’m pointing to is that if we actually have a collision, or a sinking or a crash, what then ensues in terms of crisis management on both sides? When we last had this in 2001-2002 in the Bush administration, the state of the US-China relationship was pretty good. Right now, 20 years later, it is fundamentally appalling. That’s why many of us are deeply concerned, and are sounding this concern both to Beijing and Washington.

Mike Embley
And yet you know, of course, China is such a power economically and is making its presence felt in so many places in the world. There is a sense that really China can pretty much do what it wants. How do you avoid the kind of situation you’re describing?

Kevin Rudd
Well, the government in Beijing needs to understand the importance of restraint as well in terms of its own calculus of its own long term national interests. And that is, China’s current course of action across a range of fronts is in fact causing a massive international reaction against China now, unprecedented, again, against the measures of the last 40 or 50 years. You now have fundamental dislocations in the relationship not just with Washington, but with Canada, with Australia, with United Kingdom, with Japan, with the Republic of Korea, and a whole bunch of others as well, including those in various parts of continental Europe. And so therefore, looking at this from the prism of Beijing’s own interests, there are those in Beijing who will be raising the argument: are we pushing too far, too hard, too fast? And the responsibility of the rest of us is to say to that cautionary advice within Beijing, all power to your arm in restraining China from this course of action, but also in equal measure saying to our friends in Washington, particularly in a presidential election season, where Republicans and Democrats are seeking to outflank each other to the right on China strategy, that this is no time to engage in, shall we say, symbolic acts for a domestic political purpose in the United States presidential election context, which can have real national security consequences in Southeast Asia and then globally.

Mike Embley
Mr. Rudd, you say very clearly what you hope will happen and what you hope China will realize. What do you think actually will happen? Are you optimistic, in a nutshell, or pessimistic?

Kevin Rudd
The reason for me writing the piece I’ve just done in Foreign Affairs Magazine, which is entitled “Beware The Guns of August”, for those of us obviously familiar with what happened in August of 1914, is that on balance I am pessimistic that the political cultures in both capitals right now are fully seized of the risks that they are playing with on the high seas and over Taiwan as well. Hong Kong, the matters you were referring to before, frankly, add further to the deterioration of the surrounding political relationship between the two countries. But in terms of incendiary actions of a national security nature, it’s events in the Taiwan Straits and it’s events on the high seas in the South China Sea which are most likely to trigger this. And to answer your question directly: right now, until we see the other side of the US presidential election, I remain on balance concerned and pessimistic.

Mike Embley
Right. Kevin Rudd, thank you very much for talking to us.

Kevin Rudd
Good to be with you.

The post BBC World: US-China Tensions appeared first on Kevin Rudd.

Kevin RuddAustralian Jewish News: Michael Gawenda and ‘The Powerbroker’

With the late Shimon Peres in 2012.

This article was first published by The Australian Jewish News on 13 August 2020.

The factional manoeuvrings of Labor’s faceless men a decade ago are convoluted enough without demonstrable misrepresentations by authors like Michael Gawenda in his biography of Mark Leibler, The Powerbroker.

Gawenda claims my memoir, The PM Years, blames the leadership coup on Leibler’s hardline faction of Australia’s Israel lobby, “plotting” in secret with Julia Gillard – a vision of “extreme, verging on conspiratorial darkness”. This is utter fabrication on his part. My simple challenge to Gawenda is to specify where I make such claims. He can’t. If he’d bothered to call me before publishing, I would have told him so.

Let me be clear: I have never claimed, nor do I believe, that Leibler or AIJAC were involved in the coup. It was conceived and executed almost entirely by factional warlords who blamed me for stymieing their individual ambitions.

It’s true my relationship with Leibler was strained in 2010 after Mossad agents stole the identities of four Australians living in Israel. Using false passports, they slipped into Dubai to assassinate a Hamas operative. They broke our laws and breached our trust.

The Mossad also jeopardised the safety of every Australian who travels on our passports in the Middle East. Unless this stopped, any Australian would be under suspicion, exposing them to arbitrary detention or worse.

More shocking, this wasn’t their first offence. The Mossad explicitly promised to stop abusing Australian passports after an incident in 2003, in a memorandum kept secret to spare Israel embarrassment. It didn’t work. They reoffended because they thought Australia was weak and wouldn’t complain.

We needed a proportional response to jolt Israeli politicians to act, without fundamentally damaging our valued relationship. Australia’s diplomatic, national security and intelligence establishments were unanimous: we should expel the Mossad’s representative in Canberra. This would achieve our goal but make little practical difference to Australia-Israel cooperation. Every minister in the national security committee agreed, including Gillard.

But obdurate elements of Australia’s Israel lobby accused us of overreacting. How could we treat our friend Israel like this? How did we know it was them? Wasn’t this just the usual murky business of espionage? According to Leibler, Diaspora leaders should “not criticise any Israeli government when it comes to questions of Israeli security”. Any violation of law, domestic or international, is acceptable. Never mind every citizen’s duty to uphold our laws and protect Australian lives.

I invited Leibler and others to dinner at the Lodge to reassure them the affair, although significant, wouldn’t derail the relationship. I sat politely as Leibler berated me. Boasting of his connections, he wanted to personally arrange meetings with the Mossad to smooth things over. We had, of course, already done this.

Apropos of nothing, Leibler then leaned over and, in what seemed to me a slightly menacing manner, suggested Julia was “looking very good in the polls” and “a great friend of Israel”. This surprised me, not least because I believed, however foolishly, that my deputy was loyal.

Leibler’s denials are absorbed wholly by Gawenda, solely on the basis of his notes. Give us a break, Michael – why would Leibler record such behaviour? It’s also meaningless that others didn’t hear him since, as often happens at dinners, multiple conversations occur around the table. The truth is it did happen, hence why I recorded it in my book. I have no reason to invent such an anecdote.

In fairness to Gillard, her eagerness to befriend Leibler reflected the steepness of her climb on Israel. She emerged from organisations that historically antagonised Israel – the Socialist Left and Australian Union of Students – and often overcompensated by swinging further towards AIJAC than longstanding Labor policy allowed.

By contrast, my reputation was well established, untainted by the anti-Israel sentiment sometimes found on the political left. A lifelong supporter of Israel and security for its people, I defied Labor critics by proudly leading Parliament in praise of the Jewish State’s achievements. I have consistently denounced the BDS campaign targeting Israeli businesses, both in office and since. My government blocked numerous shipments of potential nuclear components to Iran, and commissioned legal advice on charging president Mahmoud Ahmadinejad with incitement to genocide against the Jewish people. I’m as proud of this record as I am of my longstanding support for a two-state solution.

I have never considered that unequivocal support for Israel means unequivocal support for the policies of the Netanyahu government. For example, the annexation plan in the West Bank would be disastrous for Israel’s future security and fundamentally breach international law – a view shared by UK Conservative PM Boris Johnson. Israel, like the Australian Jewish community, is not monolithic; my concerns are shared by ordinary Israelis as well as many members of the Knesset.

Michael Gawenda is free to criticise me for things I’ve said and done (ironically, as editor of The Age, he didn’t consider me left-wing enough!), but his assertions in this account are flatly untrue.

The post Australian Jewish News: Michael Gawenda and ‘The Powerbroker’ appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Don't Stop the Magic

Don’t you believe in magic strings and numbers being bad? From the perspective of readability and future maintenance, constants are better. We all know this is true, and we all know that it can sometimes go too far.

Douwe Kasemier has a co-worker that has taken that a little too far.

For example, they have a Java method with a signature like this:

Document addDocument(Action act, boolean createNotification);

The Action type contains information about what action to actually perform, but it will result in a Document. Sometimes this creates a notification, and sometimes it doesn’t.

Douwe’s co-worker was worried about the readability of addDocument(myAct, true) and addDocument(myAct, false), so they went ahead and added some constants:

    private static final boolean NO_NOTIFICATION = false;
    private static final boolean CREATE_NOTIFICATION = true;

Okay, now, I don’t love this, but it’s not the worst thing…

public Document doActionWithNotification(Action act) {
    return addDocument(act, CREATE_NOTIFICATION);
}

public Document doActionWithoutNotification(Action act) {
    return addDocument(act, NO_NOTIFICATION);
}

Okay, now we’re just getting silly. This is at least diminishing returns of readability, if not actively harmful to making the code clear.

    private static final int SIX = 6;
    private static final int FIVE = 5;

    public String findId(String path) {
      String[] folders = path.split("/");
      if (folders.length >= SIX && (folders[FIVE].startsWith(PREFIX_SR) || folders[FIVE].startsWith(PREFIX_BR))) {
          return folders[FIVE].substring(PREFIX_SR.length());
      }
      return null;
    }
Ah, there we go. The logical conclusion: constants for 5 and 6. And yet they didn’t feel the need to make a constant for "/"?

At least this is maintainable, so that when the value of FIVE changes, the method doesn’t need to change.
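If the goal really was readability, the fix is to name what the index *means*, not what it *is*. Here is a hypothetical rewrite (the PREFIX_SR and PREFIX_BR values are assumptions for illustration; the original article doesn't show them):

```java
// Hypothetical rewrite: constants carry meaning, not spelled-out digits.
public class PathIds {
    private static final String PREFIX_SR = "SR";   // assumed prefix value
    private static final String PREFIX_BR = "BR";   // assumed prefix value
    private static final int ID_SEGMENT = 5;        // path segment holding the id
    private static final int MIN_SEGMENTS = ID_SEGMENT + 1;

    public static String findId(String path) {
        String[] folders = path.split("/");
        if (folders.length >= MIN_SEGMENTS
                && (folders[ID_SEGMENT].startsWith(PREFIX_SR)
                    || folders[ID_SEGMENT].startsWith(PREFIX_BR))) {
            // strip the two-character prefix to get the bare id
            return folders[ID_SEGMENT].substring(PREFIX_SR.length());
        }
        return null;
    }
}
```

Now when the id moves to a different path segment, one constant changes, and the name still tells the truth.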

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


CryptogramSmart Lock Vulnerability

Yet another Internet-connected door lock is insecure:

Sold by retailers including Amazon, Walmart, and Home Depot, U-Tec's $139.99 UltraLoq is marketed as a "secure and versatile smart deadbolt that offers keyless entry via your Bluetooth-enabled smartphone and code."

Users can share temporary codes and 'Ekeys' to friends and guests for scheduled access, but according to Tripwire researcher Craig Young, a hacker able to sniff out the device's MAC address can help themselves to an access key, too.

UltraLoq eventually fixed the vulnerabilities, but not in a way that should give you any confidence that they know what they're doing.

EDITED TO ADD (8/12): More.

CryptogramCybercrime in the Age of COVID-19

The Cambridge Cybercrime Centre has a series of papers on cybercrime during the coronavirus pandemic.

EDITED TO ADD (8/12): Interpol report.

CryptogramTwitter Hacker Arrested

A 17-year-old Florida boy was arrested and charged with last week's Twitter hack.

News articles. Boing Boing post. Florida state attorney press release.

This is a developing story. Post any additional news in the comments.

EDITED TO ADD (8/1): Two others have been charged as well.

EDITED TO ADD (8/11): The online bail hearing was hacked.

Krebs on SecurityWhy & Where You Should Plant Your Flag

Several stories here have highlighted the importance of creating accounts online tied to your various identity, financial and communications services before identity thieves do it for you. This post examines some of the key places where everyone should plant their virtual flags.

As KrebsOnSecurity observed back in 2018, many people — particularly older folks — proudly declare they avoid using the Web to manage various accounts tied to their personal and financial data — including everything from utilities and mobile phones to retirement benefits and online banking services. From that story:

“The reasoning behind this strategy is as simple as it is alluring: What’s not put online can’t be hacked. But increasingly, adherents to this mantra are finding out the hard way that if you don’t plant your flag online, fraudsters and identity thieves may do it for you.”

“The crux of the problem is that while most types of customer accounts these days can be managed online, the process of tying one’s account number to a specific email address and/or mobile device typically involves supplying personal data that can easily be found or purchased online — such as Social Security numbers, birthdays and addresses.”

In short, although you may not be required to create online accounts to manage your affairs at your ISP, the U.S. Postal Service, the credit bureaus or the Social Security Administration, it’s a good idea to do so for several reasons.

Most importantly, the majority of the entities I’ll discuss here allow just one registrant per person/customer. Thus, even if you have no intention of using that account, establishing one will be far easier than trying to dislodge an impostor who gets there first using your identity data and an email address they control.

Also, the cost of planting your flag is virtually nil apart from your investment of time. In contrast, failing to plant one’s flag can allow ne’er-do-wells to create a great deal of mischief for you, whether it be misdirecting your service or benefits elsewhere, or canceling them altogether.

Before we dive into the list, a couple of important caveats. Adding multi-factor authentication (MFA) at these various providers (where available) and/or establishing a customer-specific personal identification number (PIN) also can help secure online access. For those who can’t be convinced to use a password manager, even writing down all of the account details and passwords on a slip of paper can be helpful, provided the document is secured in a safe place.

Perhaps the most important place to enable MFA is with your email accounts. Armed with access to your inbox, thieves can then reset the password for any other service or account that is tied to that email address.

People who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Secondly, guard the security of your mobile phone account as best you can (doing so might just save your life). The passwords for countless online services can be reset merely by entering a one-time code sent via text message to the phone number on file for the customer’s account.

And thanks to the increasing prevalence of a crime known as SIM swapping, thieves may be able to upend your personal and financial life simply by tricking someone at your mobile service provider into diverting your calls and texts to a device they control.

Most mobile providers offer customers the option of placing a PIN or secret passphrase on their accounts to lessen the likelihood of such attacks succeeding, but these protections also usually fail when the attackers are social engineering some $12-an-hour employee at a mobile phone store.

Your best option is to reduce your overall reliance on your phone number for added authentication at any online service. Many sites now offer MFA options that are app-based and not tied to your mobile service, and this is your best option for MFA wherever possible.


First and foremost, all U.S. residents should ensure they have accounts set up online at the three major credit bureaus — Equifax, Experian and Trans Union.

It’s important to remember that the questions these bureaus will ask to verify your identity are not terribly difficult for thieves to answer or guess just by referencing public records and/or perhaps your postings on social media.

You will need accounts at these bureaus if you wish to freeze your credit file. KrebsOnSecurity has for many years urged all readers to do just that, because freezing your file is the best way to prevent identity thieves from opening new lines of credit in your name. Parents and guardians also can now freeze the files of their dependents for free.

For more on what a freeze entails and how to place or thaw one, please see this post. Beyond the big three bureaus, Innovis is a distant fourth bureau that some entities use to check consumer creditworthiness. Fortunately, filing a freeze with Innovis likewise is free and relatively painless.

It’s also a good idea to notify a company called ChexSystems to keep an eye out for fraud committed in your name. Thousands of banks rely on ChexSystems to verify customers who are requesting new checking and savings accounts, and ChexSystems lets consumers place a security alert on their credit data to make it more difficult for ID thieves to fraudulently obtain checking and savings accounts. For more information on doing that with ChexSystems, see this link.

If you placed a freeze on your file at the major bureaus more than a few years ago but haven’t revisited the bureaus’ sites lately, it might be wise to do that soon. Following its epic 2017 data breach, Equifax reconfigured its systems to invalidate the freeze PINs it previously relied upon to unfreeze a file, effectively allowing anyone to bypass that PIN if they can glean a few personal details about you. Experian’s site also has undermined the security of the freeze PIN.

I mentioned planting your flag at the credit bureaus first because if you plan to freeze your credit files, it may be wise to do so after you have planted your flag at all the other places listed in this story. That’s because these other places may try to check your identity records at one or more of the bureaus, and having a freeze in place may interfere with that account creation.


I can’t tell you how many times people have proudly told me they don’t bank online, and prefer to manage all of their accounts the old fashioned way. I always respond that while this is totally okay, you still need to establish an online account for your financial providers because if you don’t someone may do it for you.

This goes doubly for any retirement and pension plans you may have. It’s a good idea for people with older relatives to help those individuals set up and manage online identities for their various accounts — even if those relatives never intend to access any of the accounts online.

This process is doubly important for parents and relatives who have just lost a spouse. When someone passes away, there’s often an obituary in the paper that offers a great deal of information about the deceased and any surviving family members, and identity thieves love to mine this information.


Whether you’re approaching retirement, middle-aged or just starting out in your career, you should establish an account online at the U.S. Social Security Administration. Maybe you don’t believe Social Security money will actually still be there when you retire, but chances are you’re nevertheless paying into the system now. Either way, the plant-your-flag rules still apply.

Ditto for the Internal Revenue Service. A few years back, ID thieves who specialize in perpetrating tax refund fraud were massively registering people at the IRS’s website to download key data from their prior years’ tax transcripts. While the IRS has improved its taxpayer validation and security measures since then, it’s a good idea to mark your territory here as well.

The same goes for your state’s Department of Motor Vehicles (DMV), which maintains an alarming amount of information about you whether you have an online account there or not. Because the DMV also is the place that typically issues state drivers licenses, you really don’t want to mess around with the possibility that someone could register as you, change your physical address on file, and obtain a new license in your name.

Last but certainly not least, you should create an account for your household at the U.S. Postal Service’s Web site. Having someone divert your mail or delay delivery of it for however long they like is not a fun experience.

Also, the USPS has this nifty service called Informed Delivery, which lets residents view scanned images of all incoming mail prior to delivery. In 2018, the U.S. Secret Service warned that identity thieves have been abusing Informed Delivery to let them know when residents are about to receive credit cards or notices of new lines of credit opened in their names. Do yourself a favor and create an Informed Delivery account as well. Note that multiple occupants of the same street address can each have their own accounts.


Online accounts coupled with the strongest multi-factor authentication available also are important for any services that provide you with telephone, television and Internet access.

Strange as it may sound, plenty of people who receive all of these services in a bundle from one ISP do not have accounts online to manage their service. This is dangerous because if thieves can establish an account on your behalf, they can then divert calls intended for you to their own phones.

My original Plant Your Flag piece in 2018 told the story of an older Florida man who had pricey jewelry bought in his name after fraudsters created an online account at his ISP and diverted calls to his home phone number so they could intercept calls from his bank seeking to verify the transactions.

If you own a home, chances are you also have an account at one or more local utility providers, such as power and water companies. If you don’t already have an account at these places, create one and secure access to it with a strong password and any other access controls available.

These frequently monopolistic companies traditionally have poor to non-existent fraud controls, even though they effectively operate as mini credit bureaus. Bear in mind that possession of one or more of your utility bills is often sufficient documentation to establish proof of identity. As a result, such records are highly sought-after by identity thieves.

Another common way that ID thieves establish new lines of credit is by opening a mobile phone account in a target’s name. A little-known entity that many mobile providers turn to for validating new mobile accounts is the National Consumer Telecommunications and Utilities Exchange, or NCTUE. Happily, the NCTUE allows consumers to place a freeze on their file by calling their 800 number, 1-866-349-5355. For more information on the NCTUE, see this page.

Have I missed any important items? Please sound off in the comments below.

CryptogramCryptanalysis of an Old Zip Encryption Algorithm

Mike Stay broke an old zipfile encryption algorithm to recover $300,000 in bitcoin.

DefCon talk here.

Worse Than FailureTeleconference Horror

Jcacweb cam

In the spring of 2020, with very little warning, every school in the United States shut down due to the ongoing global pandemic. Classrooms had to move to virtual meeting software like Zoom, which was never intended to be used as the primary means of educating grade schoolers. The teachers did wonderfully with such little notice, and most kids finished out the year with at least a little more knowledge than they started. This story takes place years before then, when online schooling was seen as an optional add-on and not a necessary backup plan in case of plague.

TelEdu provided their take on such a thing in the form of a free third-party add-on for Moodle, a popular e-learning platform. Moodle provides space for teachers to upload recordings and handouts; TelEdu takes it one step further by adding a "virtual classroom" complete with a virtual whiteboard. The catch? You have to pay a subscription fee to use the free module, otherwise it's nonfunctional.

Initech decided they were on a tight schedule to implement a virtual classroom feature for their corporate training, so they went ahead and bought the service without testing it. They then scheduled a demonstration to the client, still without testing it. The client's 10-man team all joined to test out the functionality, and it wasn't long before the phone started ringing off the hook with complaints: slowness, 504 errors, blank pages, the whole nine yards.

That's where Paul comes in to our story. Paul was tasked with finding what had gone wrong and completing the integration. The most common complaint was that Moodle was being slow, but upon testing it himself, Paul found that only the TelEdu module pages were slow, not the rest of the install. So far so good. The code was open-source, so he went digging through to find out what in view.php was taking so long:

$getplan = telEdu_get_plan();
$paymentinfo = telEdu_get_payment_info();
$getclassdetail = telEdu_get_class($telEduclass->class_id);
$pricelist = telEdu_get_price_list($telEduclass->class_id);

Four calls to get info about the class, three of them to do with payment. Not a great start, but not necessarily terrible, either. So, how was the info fetched?

function telEdu_get_plan() {
    $data['task'] = TELEDU_TASK_GET_PLAN;
    $result = telEdu_get_curl_info($data);
    return $result;
}
"They couldn't possibly ... could they?" Paul wondered aloud.

function telEdu_get_payment_info() {
    $data['task'] = TELEDU_TASK_GET_PAYMENT_INFO;
    $result = telEdu_get_curl_info($data);
    return $result;
}
Just to make sure, Paul next checked what telEdu_get_curl_info actually did:

function telEdu_get_curl_info($data) {
    global $CFG;
    require_once($CFG->libdir . '/filelib.php');

    $key = $CFG->mod_telEdu_apikey;
    $baseurl = $CFG->mod_telEdu_baseurl;

    $urlfirstpart = $baseurl . "/" . $data['task'] . "?apikey=" . $key;

    if (($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) || ($data['task'] == TELEDU_TASK_GET_PLAN)) {
        $location = $baseurl;
    } else {
        $location = telEdu_post_url($urlfirstpart, $data);
    }

    $postdata = '';
    if ($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) {
        $postdata = 'task=getPaymentInfo&apikey=' . $key;
    } else if ($data['task'] == TELEDU_TASK_GET_PLAN) {
        $postdata = 'task=getplan&apikey=' . $key;
    }

    $options = array();

    $curl = new curl();
    $result = $curl->post($location, $postdata, $options);

    $finalresult = json_decode($result, true);
    return $finalresult;
}
A remote call out to another API, via cURL, for every one of those lookups. Each call blocked waiting for the result, which was clocking in at anywhere between 1 and 30 seconds ... per call. The results weren't used anywhere, either. They seemed to be just a precaution in case somewhere down the line they wanted these things.

After another half a day of digging through the rest of the codebase, Paul gave up. Sales told the client that "Due to the high number of users, we need more time to make a small server calibration."

The calibration? Replacing TelEdu with BigBlueButton. Problem solved.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Krebs on SecurityMicrosoft Patch Tuesday, August 2020 Edition

Microsoft today released updates to plug at least 120 security holes in its Windows operating systems and supported software, including two newly discovered vulnerabilities that are actively being exploited. Yes, good people of the Windows world, it’s time once again to backup and patch up!

At least 17 of the bugs squashed in August’s patch batch address vulnerabilities Microsoft rates as “critical,” meaning they can be exploited by miscreants or malware to gain complete, remote control over an affected system with little or no help from users. This is the sixth month in a row Microsoft has shipped fixes for more than 100 flaws in its products.

The most concerning of these appears to be CVE-2020-1380, which is a weakness in Internet Explorer that could result in system compromise just by browsing with IE to a hacked or malicious website. Microsoft’s advisory says this flaw is currently being exploited in active attacks.

The other flaw enjoying active exploitation is CVE-2020-1464, which is a “spoofing” bug in virtually all supported versions of Windows that allows an attacker to bypass Windows security features and load improperly signed files. For more on this flaw, see Microsoft Put Off Fixing Zero Day for 2 Years.

Trend Micro’s Zero Day Initiative points to another fix — CVE-2020-1472 — which involves a critical issue in Windows Server versions that could let an unauthenticated attacker gain administrative access to a Windows domain controller and run an application of their choosing. A domain controller is a server that responds to security authentication requests in a Windows environment, and a compromised domain controller can give attackers the keys to the kingdom inside a corporate network.

“It’s rare to see a Critical-rated elevation of privilege bug, but this one deserves it,” said ZDI’s Dustin Childs. “What’s worse is that there is not a full fix available.”

Perhaps the most “elite” vulnerability addressed this month earned the distinction of being named CVE-2020-1337, and refers to a security hole in the Windows Print Spooler service that could allow an attacker or malware to escalate their privileges on a system if they were already logged on as a regular (non-administrator) user.

Satnam Narang at Tenable notes that CVE-2020-1337 is a patch bypass for CVE-2020-1048, another Windows Print Spooler vulnerability that was patched in May 2020. Narang said researchers found that the patch for CVE-2020-1048 was incomplete and presented their findings for CVE-2020-1337 at the Black Hat security conference earlier this month. More information on CVE-2020-1337, including a video demonstration of a proof-of-concept exploit, is available here.

Adobe has graciously given us another month’s respite from patching Flash Player flaws, but it did release critical security updates for its Acrobat and PDF Reader products. More information on those updates is available here.

Keep in mind that while staying up-to-date on Windows patches is a must, it’s important to make sure you’re updating only after you’ve backed up your important data and files. A reliable backup means you’re less likely to pull your hair out when the odd buggy patch causes problems booting the system.

So do yourself a favor and backup your files before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And as ever, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Cory DoctorowTerra Nullius

Terra Nullius is my March 2019 column in Locus magazine; it explores the commonalities between the people who claim ownership over the things they use to make new creative works and the settler colonialists who arrived in various “new worlds” and declared them to be empty, erasing the people who were already there as a prelude to genocide.

I was inspired by the story of Aloha Poke, in which a white dude from Chicago secured a trademark for his “Aloha Poke” midwestern restaurants, then threatened Hawai’ians who used “aloha” in the names of their restaurants (and later, by the Dutch grifter who claimed a patent on the preparation of teff, an Ethiopian staple grain that has been cultivated and refined for about 7,000 years).

MP3 Link

LongNowScientists Have a Powerful New Tool to Investigate Triassic Dark Ages

The time-honored debate between catastrophists and gradualists (those who believe major Earth changes were due to sudden violent events or happened over long periods of time) has everything to do with the coarse grain of the geological record. When paleontologists only have a series of thousand-year flood deposits to study, it’s almost impossible to say what was really going on at shorter timescales. So many of the great debates of natural history hinge on the resolution at which data can be collected, and boil down to something like, “Was it a meteorite impact that caused this extinction, or the inexorable climate changes caused by continental drift?”

One such gap in our understanding is in the Late Triassic — a geological shadow during which major regime changes in terrestrial fauna took place, setting the stage for The Age of Dinosaurs. But the curtains were closed during that scene change…until, perhaps, now:

By determining the age of the rock core, researchers were able to piece together a continuous, unbroken stretch of Earth’s history from 225 million to 209 million years ago. The timeline offers insight into what has been a geologic dark age and will help scientists investigate abrupt environmental changes from the peak of the Late Triassic and how they affected the plants and animals of the time.

Cool new detective work on geological “tree rings” from the Petrified Forest National Park (where I was lucky enough to do some revolutionary paleontological reconstruction work under Dr. Bill Parker back in 2005).

CryptogramCollecting and Selling Mobile Phone Location Data

The Wall Street Journal has an article about a company called Anomaly Six LLC that has an SDK that's used by "more than 500 mobile applications." Through that SDK, the company collects location data from users, which it then sells.

Anomaly Six is a federal contractor that provides global-location-data products to branches of the U.S. government and private-sector clients. The company told The Wall Street Journal it restricts the sale of U.S. mobile phone movement data only to nongovernmental, private-sector clients.


Anomaly Six was founded by defense-contracting veterans who worked closely with government agencies for most of their careers and built a company to cater in part to national-security agencies, according to court records and interviews.

Just one of the many Internet companies spying on our every move for profit. And I'm sure they sell to the US government; it's legal and why would they forgo those sales?

Kevin RuddCNN: South China Sea and the US-China Tech War

11 AUGUST 2020

Topics: Foreign Affairs article; US-China tech war

Zain Asher
In a sobering assessment in Foreign Affairs magazine, the former Australian Prime Minister Kevin Rudd warns that diplomatic relations are crumbling and raise the possibility of armed conflict. Mr Rudd, who is president of the Asia Society Policy Institute, joins us live now. So Mr Rudd, just walk us through this. You believe that armed conflict is possible and, is this relationship at this point, in your opinion, quite frankly, beyond repair?

Kevin Rudd
It’s not beyond repair, but we’ve got to be blunt about the fact that the level of deterioration has been virtually unprecedented at least in the last half-century. And things are moving at a great pace in terms of the scenarios, the two scenarios which trouble us most are the Taiwan straits and the South China Sea. In the Taiwan straits, we see consistent escalation of tensions between Washington and Beijing. And certainly, in the South China Sea, the pace and intensity of naval and air activity in and around that region increases the possibility, the real possibility, of collisions at sea and collisions in the air. And the question then becomes: do Beijing and Washington really have an intention to de-escalate or then to escalate, if such a crisis was to unfold?

Zain Asher
How do they de-escalate? Is the only way at this point, or how do they reverse the sort of tensions between them? Is the main way at this point that, you know, a new administration comes in in November and it can be reset? If Trump gets re-elected, can there be de-escalation? If so, how?

Kevin Rudd
Well the purpose of my writing the article in Foreign Affairs, which you referred to before, was to, in fact, talk about the real dangers we face in the next three months. That is, before the US presidential election. We all know that in the US right now, that tensions or, shall I say, political pressure on President Trump are acute. But what people are less familiar of within the West is the fact that in Chinese politics there is also pressure on Xi Jinping for a range of domestic and external reasons as well. So what I have simply said is: in this next three months, where we face genuine political pressure operating on both political leaders, if we do have an incident, that is an unplanned incident or collision in the air or at sea, we now have a tinderbox environment. Therefore, the plans which need to be put in place between the grown-ups in the US and Chinese militaries is to have a mechanism to rapidly de-escalate should a collision occur. I’m not sure that those plans currently exist.

Zain Asher
Let’s talk about tech because President Donald Trump, as you know, is forcing ByteDance, the company that owns TikTok, to sell its assets and no longer operate in the US. The premise is that there are national security fears and also this idea that TikTok is handing over user data from American citizens to the Chinese government. How real and concrete are those fears, or is this purely politically motivated? Are the fears justified, in other words?

Kevin Rudd
As far as TikTok is concerned, this is way beyond my paygrade in terms of analysing the technological capacities of a) the company and b) the ability of the Chinese security authorities to backdoor them. What I can say is this a deliberate decision on the part of the US administration to radically escalate the technology war. In the past, it was a war about Huawei and 5G. It then became an unfolding conflict over the question of the future access to semiconductors, computer chips. And now we have, as it were, the unfolding ban imposed by the administration on Chinese-sourced computer apps, including this one, for TikTok. So this is a throwing-down of the gauntlet by the US administration. What I believe we will see, however, is Chinese retaliation. I think they will find a corporate mechanism to retaliate, given the actions taken not just against ByteDance and TikTok, but of course against WeChat. And so the pattern of escalation that we were talking about earlier in technology, the economy, trade, investment, finance, and the hard stuff in national security continues to unfold, which is why we need sober heads to prevail in the months ahead.

The post CNN: South China Sea and the US-China Tech War appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: The Concatenator

In English, there's much debate over the "Oxford Comma": in a list of items, do you put a comma between the penultimate item and the "and" before the final one? For example: "The conference featured bad programmers, Remy and TheDailyWTF readers" versus "The conference featured bad programmers, Remy, and TheDailyWTF readers."

I'd like to introduce a subtly different one: "the concatenator's comma", or if we want to be generic "the concatenator's separator character", but that doesn't have the same ring to it. If you're planning to list items as a string, you might do something like this pseudocode:

for each item in items
    result.append(item + ", ")

This naive approach does pose a problem: we'll have an extra comma. So maybe you have to add logic to decide if you're on the first or last item, and insert (or fail to insert) commas as appropriate. Or, maybe it isn't a problem: if we're generating JSON, for example, we can just leave the trailing commas. This isn't universally true, of course, but many formats will ignore extra separators. Edit: I was apparently hallucinating when I wrote this; one of the most annoying things about JSON is that you can't do this.

Like, for example, URL query strings, which don't require a "sub-delim" like "&" to have anything following it.

But fortunately for us, no matter what language we're using, there's almost certainly an API that makes it so that we don't have to do string concatenation anyway, so why even bring it up?

Well, because Mike has a co-worker that has read the docs well enough to know that PHP has a substr method, but not well enough to know it has an http_build_query method. Or even an implode method, which handles string concats for you. Instead, they wrote this:

$query = '';
foreach ($postdata as $var => $val) {
    $query .= $var . '=' . $val . '&';
}
$query = substr($query, 0, -1);

This code exploits a little-observed feature of substr: a negative length reads back from the end. So this lops off that trailing "&", which is both unnecessary and one of the most annoying ways to do this.
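The co-worker's code is PHP, but the "let the library own the separator" point is language-neutral. As a sketch in Java (a deliberate stand-in, not the original code), a StringJoiner places the separator only between elements, so there is never a trailing "&" to lop off. Real query-string code would also URL-encode the keys and values, which is exactly why a purpose-built function like PHP's http_build_query beats hand-rolled concatenation.

```java
import java.util.Map;
import java.util.StringJoiner;

// Illustrative Java sketch: the joiner inserts "&" between entries only,
// so no trailing separator ever exists to be trimmed off.
public class QueryStrings {
    public static String build(Map<String, String> params) {
        StringJoiner query = new StringJoiner("&");
        for (Map.Entry<String, String> e : params.entrySet()) {
            query.add(e.getKey() + "=" + e.getValue());
        }
        return query.toString();
    }
}
```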

Maybe it's not enough to RTFM, as Mike puts it, maybe you need to "RTEFM": read the entire manual.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


Worse Than FailureI'm Blue

Designers are used to getting vague direction from clients. "It should have more pop!" or "Can you make the blue more blue?" But Kevin was a contractor who worked on embedded software, so he didn't really expect to have to deal with that, even if he did have to deal with colors a fair bit.

Kevin was taking over a contract from another developer to build software for a colorimeter, a device to measure color. When companies, like paint companies, care about color, they tend to really care about color, and need to be able to accurately observe a real-world color. Once you start diving deep into color theory, you start having to think about things like observers, illuminants, tristimulus models, and "perceptual color spaces".

The operating principle of the device was fairly simple. It had a bright light of a well-known color temperature, a brightness sensor, and a set of colored filter gels that would pass in front of the sensor. Place the colorimeter against an object, and the bright light would reflect off the surface and through each of the filters in turn, while the sensor recorded the brightness. With a little computation, you can determine, with a high degree of precision, what color something is.
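As a toy illustration of that principle only (the gains and the math below are invented for this sketch; they are nothing like the device's vetted, proprietary code): scale each filtered brightness reading by a documented per-filter calibration gain, then normalize the three values to get chromaticity-style coordinates.

```java
public class Colorimeter {
    // Hypothetical per-filter calibration gains. In a real instrument these
    // would be documented constants traced back to filter/sensor datasheets.
    static final double[] CALIBRATION = {1.02, 0.98, 1.05};

    // Turn three filtered brightness readings into normalized chromaticity
    // coordinates: scale by calibration, then divide by the total.
    static double[] chromaticity(double[] readings) {
        double x = readings[0] * CALIBRATION[0];
        double y = readings[1] * CALIBRATION[1];
        double z = readings[2] * CALIBRATION[2];
        double sum = x + y + z;
        return new double[] {x / sum, y / sum, z / sum};
    }

    public static void main(String[] args) {
        double[] c = chromaticity(new double[] {120.0, 80.0, 40.0});
        System.out.printf("x=%.3f y=%.3f z=%.3f%n", c[0], c[1], c[2]);
    }
}
```

The point of the story is precisely that every number in code like this must be justified: a calibration array like the one above is only acceptable with a paper trail, never as a bare "+= 20".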

Now, this is a scientific instrument, and that means that the code which runs it, even though it's proprietary, needs to be vetted by scientists. The device needs to be tested against known samples. Deviations need to be corrected for, and then carefully justified. There should be no "magic numbers" in the code that aren't well documented and explained. If, for example, the company gets its filter gels from a new vendor and they filter slightly different frequencies, the commit needs to link to the datasheets for those gels to explain the change. Similarly, if a sensor has a frequency response that means that the samples may be biased, you commit that with a link to the datasheet showing that to be the case.

Which is why Kevin was a little surprised by the commit by his predecessor. The message read: "Nathan wants the blue 'more blue'? Fine. the blue is more blue." Nathan was the product owner.

The corresponding change was a line which read:

blue += 20;

Well, Nathan got what he wanted. It's a good thing he didn't ask for it to "pop" though.



LongNowThe Deep Sea

As detailed in the exquisite documentary Proteus, the ocean floor was until very recently a repository for the dreams of humankind — the receptacle for our imagination. But when the H.M.S. Challenger expedition surveyed the world’s deep-sea life and brought it back for cataloging by now-legendary illustrator Ernst Haeckel (who coined the term “ecology”), the hidden benthic universe started coming into view. What we found, and what we continue to discover on the ocean floor, is far stranger than the monsters we’d projected.

This spectacular site by Neal Agarwal brings depth into focus. You’ve surfed the Web; now take a few minutes and dive all the way down to Challenger Deep, scrolling past the animals that live at every depth.

Just as The Long Now situates us in a humbling, Copernican experience of temporality, Deep Sea reminds us of just how thin of a layer surface life exists in. Just as with Stewart Brand’s pace layers, the further down you go, the slower everything unfolds: the cold and dark and pressure slow the evolutionary process, dampening the frequency of interactions between creatures, bestowing space and time for truly weird and wondrous and as-yet-uncategorized life.

Dig in the ground and you might pull up the fossils of some strange long-gone organisms. Dive to the bottom of the ocean and you might find them still alive down there, the unmolested records of an ancient world still drifting in slow motion, going about their days-without-days…

For evidence of time-space commutability, settle in for a sublime experience that (like benthic life itself) makes much of very little: just one page, one scroll bar, and one journey to a world beyond.

(Mobile device suggested: this scroll goes in, not just across…)

Learn More:

  • The “Big Here” doesn’t get much bigger than Neal Agarwal’s The Size of Space, a new interactive visualization that provides a dose of perspective on our place in the universe.


CryptogramFriday Squid Blogging: New SQUID

There's a new SQUID:

A new device that relies on flowing clouds of ultracold atoms promises potential tests of the intersection between the weirdness of the quantum world and the familiarity of the macroscopic world we experience every day. The atomtronic Superconducting QUantum Interference Device (SQUID) is also potentially useful for ultrasensitive rotation measurements and as a component in quantum computers.

"In a conventional SQUID, the quantum interference in electron currents can be used to make one of the most sensitive magnetic field detectors," said Changhyun Ryu, a physicist with the Material Physics and Applications Quantum group at Los Alamos National Laboratory. "We use neutral atoms rather than charged electrons. Instead of responding to magnetic fields, the atomtronic version of a SQUID is sensitive to mechanical rotation."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Kevin RuddThe Guardian: If the Liberal Party truly cared about racial injustice, they would pay their fair share to Close the Gap

Published in the Guardian on 7 August 2020

Throughout our country’s modern history, the treatment of our Aboriginal and Torres Strait Islander brothers and sisters has been appalling. It has also been inconsistent with the original instructions from the British Admiralty to treat the Indigenous peoples of this land with proper care and respect. From first encounter to the frontier wars, the stolen generations and ongoing institutionalised racism, First Nations people have been handed a raw deal. The gaps between Indigenous and non-Indigenous Australians’ outcomes in areas of education, employment, health, housing and justice are a product of historical, intergenerational maltreatment.

In 2008, I apologised to the stolen generations and Indigenous Australians for the racist laws and policies of successive Australian governments. The apology may have been 200 years late, but it was an important part of the reconciliation process.

But the apology meant nothing if it wasn’t backed by action. For this reason, my government acted on Aboriginal and Torres Strait Islander social justice commissioner Tom Calma’s call to Close the Gap. We worked hard to push this framework through the Council of Australian governments so that all states and territories were on board with the strategy. We also funded it, with $4.6bn committed to achieve each of the six targets we set. While the targets and funding were critical to any improvements in the lives of Indigenous Australians, we suspected the Coalition would scrap our programs once they returned to government. After all, only a few years earlier, John Howard’s Indigenous affairs minister was denying the very existence of the stolen generations. Howard himself had refused to deliver an apology for a decade. And then both he and Peter Dutton decided to boycott the official apology in 2008.

To ensure that the Closing the Gap strategy would not be abandoned, we made it mandatory for the prime minister to stand before the House of Representatives each year and account for the success and failures in reaching the targets that were set.

Had we not adopted the Closing the Gap framework, would we now be on target to have 95% of Indigenous four year-olds enrolled in early childhood education? I think not. Would we have halved the gap for young Indigenous adults to have completed year 12 by 2020? I think not. And would we see progress on closing the gap in child mortality, and literacy and numeracy skills? No, I think not.

Despite these achievements, the most recent Closing the Gap report nonetheless showed Australia was not on track to meet four of the deadlines we’d originally set. A major reason for this is that federal funding for the closing the gap strategy collapsed under Tony Abbott, the great wrecking-ball of Australian politics, whose government cut $534.4m from programs dedicated to improving the lives of Indigenous Australians. And it’s never been restored by Abbott’s successors. It’s all there in the budget papers.

Whatever targets are put in place, governments must commit to physical resourcing of Closing the Gap. They are not going to be delivered by magic.

On Thursday last week, the new national agreement on Closing the Gap was announced. I applaud Pat Turner and other Indigenous leaders who will now sit with the leaders of the commonwealth, states, territories and local government to devise plans to achieve the new targets they have negotiated.

Scott Morrison, however, sought to discredit our government’s targets, rather than coming clean about the half-billion-dollar funding cuts that had made it impossible to achieve these targets under any circumstances. His argument that the original targets were conjured out of thin air by my government is demonstrably untrue. The truth is, Jenny Macklin, the responsible minister, spoke widely with Indigenous leaders to prioritise the areas that needed to be urgently addressed in the original Closing the Gap targets. Furthermore, if Morrison is now truly awakened to the intrinsic value of listening to Indigenous Australians, I look forward to him enshrining an Indigenous voice to parliament in the Constitution, given this is the universal position of all Indigenous groups.

Yet amid the welter of news coverage of the new closing the gap agreement, the central question remains: who will be paying the bill? While shared responsibility to close the gap between all levels of government and Indigenous organisations might sound like good news, this will quickly unravel into a political blame game if the commonwealth continues to shirk its financial duty.

The announcement this week that the commonwealth would allocate $45m over four years is just a very bad joke. This is barely 10% of what the Liberals cut from our national Closing the Gap strategy. And barely 1% of our total $4.5bn national program to meet our targets agreed to with the states and territories in 2009.

The Liberals want you to believe they care about racial injustice. But they don’t believe there are any votes in it. This is well understood by Scotty From Marketing, a former state director of the Liberal party, who lives and breathes polling and focus groups. That’s why they are not even pretending to fund the realisation of the new more “realistic” targets they have so loudly proclaimed.

The post The Guardian: If the Liberal Party truly cared about racial injustice, they would pay their fair share to Close the Gap appeared first on Kevin Rudd.

Worse Than FailureError'd: All Natural Errors

"I'm glad the asdf is vegan. I'm really thinking of going for the asasdfsadf, though. With a name like that, you know it's got to be 2 1/2 times as good for you," writes VJ.


Phil G. wrote, "Get games twice as fast with Epic's new multidimensional downloads!"


" DOES!" Zed writes.


John M. wrote, "I appreciate the helpful suggestion, but I think I'll take a pass."


"java.lang.IllegalStateException...must be one of those edgy indie games! I just hope it's not actually illegal," writes Matthijs.


"For added flavor, I received this reminder two hours after I'd completed my checkout and purchased that very same item_name," Aaron K. writes.




Krebs on SecurityHacked Data Broker Accounts Fueled Phony COVID Loans, Unemployment Claims

A group of thieves thought to be responsible for collecting millions in fraudulent small business loans and unemployment insurance benefits from COVID-19 economic relief efforts gathered personal data on people and businesses they were impersonating by leveraging several compromised accounts at a little-known U.S. consumer data broker, KrebsOnSecurity has learned.

In June, KrebsOnSecurity was contacted by a cybersecurity researcher who discovered that a group of scammers was sharing highly detailed personal and financial records on Americans via a free web-based email service that allows anyone who knows an account’s username to view all email sent to that account — without the need of a password.

The source, who asked not to be identified in this story, said he’s been monitoring the group’s communications for several weeks and sharing the information with state and federal authorities in a bid to disrupt their fraudulent activity.

The source said the group appears to consist of several hundred individuals who collectively have stolen tens of millions of dollars from U.S. state and federal treasuries via phony loan applications with the U.S. Small Business Administration (SBA) and through fraudulent unemployment insurance claims made against several states.

KrebsOnSecurity reviewed dozens of emails the fraud group exchanged, and noticed that a great many consumer records they shared carried a notation indicating they were cut and pasted from the output of queries made at Interactive Data LLC, a Florida-based data analytics company.

Interactive Data, also known as IDI, markets access to a “massive data repository” on U.S. consumers to a range of clients, including law enforcement officials, debt recovery professionals, and anti-fraud and compliance personnel at a variety of organizations.

The consumer dossiers obtained from IDI and shared by the fraudsters include a staggering amount of sensitive data, including:

-full Social Security number and date of birth;
-current and all known previous physical addresses;
-all known current and past mobile and home phone numbers;
-the names of any relatives and known associates;
-all known associated email addresses;
-IP addresses and dates tied to the consumer’s online activities;
-vehicle registration and property ownership information;
-available lines of credit and amounts, and dates they were opened;
-bankruptcies, liens, judgments, foreclosures and business affiliations.

Reached via phone, IDI Holdings CEO Derek Dubner acknowledged that a review of the consumer records sampled from the fraud group’s shared communications indicates “a handful” of authorized IDI customer accounts had been compromised.

“We identified a handful of legitimate businesses who are customers that may have experienced a breach,” Dubner said.

Dubner said all customers are required to use multi-factor authentication, and that everyone applying for access to its services undergoes a rigorous vetting process.

“We absolutely credential businesses and have several ways to do that and exceed the gold standard, which is following some of the credit bureau guidelines,” he said. “We validate the identity of those applying [for access], check with the applicant’s state licensor and individual licenses.”

Citing an ongoing law enforcement investigation into the matter, Dubner declined to say if the company knew for how long the handful of customer accounts were compromised, or how many consumer records were looked up via those stolen accounts.

“We are communicating with law enforcement about it,” he said. “There isn’t much more I can share because we don’t want to impede the investigation.”

The source told KrebsOnSecurity he’s identified more than 2,000 people whose SSNs, DoBs and other data were used by the fraud gang to file for unemployment insurance benefits and SBA loans, and that a single payday can land the thieves $20,000 or more. In addition, he said, it seems clear that the fraudsters are recycling stolen identities to file phony unemployment insurance claims in multiple states.


Hacked or ill-gotten accounts at consumer data brokers have fueled ID theft and identity theft services of various sorts for years. In 2013, KrebsOnSecurity broke the news that the U.S. Secret Service had arrested a 24-year-old man named Hieu Minh Ngo for running an identity theft service out of his home in Vietnam.

Ngo’s service, variously named superget[.]info and findget[.]me, gave customers access to personal and financial data on more than 200 million Americans. He gained that access by posing as a private investigator to a data broker subsidiary acquired by Experian, one of the three major credit bureaus in the United States.

Ngo’s ID theft service

Experian was hauled before Congress to account for the lapse, and assured lawmakers there was no evidence that consumers had been harmed by Ngo’s access. But as follow-up reporting showed, Ngo’s service was frequented by ID thieves who specialized in filing fraudulent tax refund requests with the Internal Revenue Service, and was relied upon heavily by an identity theft ring operating in the New York-New Jersey region.

Also in 2013, KrebsOnSecurity broke the news that ssndob[.]ms, then a major identity theft service in the cybercrime underground, had infiltrated computers at some of America’s large consumer and business data aggregators, including LexisNexis Inc., Dun & Bradstreet, and Kroll Background America Inc.

The now defunct SSNDOB identity theft service.

In 2006, The Washington Post reported that a group of five men used stolen or illegally created accounts at LexisNexis subsidiaries to look up SSNs and other personal information on more than 310,000 individuals. And in 2004, it emerged that identity thieves masquerading as customers of data broker Choicepoint had stolen the personal and financial records of more than 145,000 Americans.

Those compromises were noteworthy because the consumer information warehoused by these data brokers can be used to find the answers to so-called knowledge-based authentication (KBA) questions used by companies seeking to validate the financial history of people applying for new lines of credit.

In that sense, thieves involved in ID theft may be better off targeting data brokers like IDI and their customers than the major credit bureaus, said Nicholas Weaver, a researcher at the International Computer Science Institute and lecturer at UC Berkeley.

“This means you have access not only to the consumer’s SSN and other static information, but everything you need for knowledge-based authentication because these are the types of companies that are providing KBA data.”

The fraud group communications reviewed by this author suggest they are cashing out primarily through financial instruments like prepaid cards and a small number of online-only banks that allow consumers to establish accounts and move money just by providing a name and associated date of birth and SSN.

While most of these instruments place daily or monthly limits on the amount of money users can deposit into and withdraw from the accounts, some of the more popular instruments for ID thieves appear to be those that allow spending, sending or withdrawal of between $5,000 and $7,000 per transaction, with high limits on the overall number or dollar value of transactions allowed in a given time period.

KrebsOnSecurity is investigating the extent to which a small number of these financial instruments may be massively over-represented in the incidence of unemployment insurance benefit fraud at the state level, and in SBA loan fraud at the federal level. Anyone in the financial sector or state agencies with information about these apparent trends may confidentially contact this author at krebsonsecurity @ gmail dot com, or via the encrypted message service Wickr at “krebswickr“.

The looting of state unemployment insurance programs by identity thieves has been well documented of late, but far less public attention has centered on fraud targeting Economic Injury Disaster Loan (EIDL) and advance grant programs run by the U.S. Small Business Administration in response to the COVID-19 crisis.

Late last month, the SBA Office of Inspector General (OIG) released a scathing report (PDF) saying it has been inundated with complaints from financial institutions reporting suspected fraudulent EIDL transactions, and that it has so far identified $250 million in loans given to “potentially ineligible recipients.” The OIG said many of the complaints were about credit inquiries for individuals who had never applied for an economic injury loan or grant.

The figures released by the SBA OIG suggest the financial impact of the fraud may be severely under-reported at the moment. For example, the OIG said nearly 3,800 of the 5,000 complaints it received came from just six financial institutions (out of several thousand across the United States). One credit union reportedly told the U.S. Justice Department that 59 out of 60 SBA deposits it received appeared to be fraudulent.

LongNowChildhood as a solution to explore–exploit tensions

Big questions abound regarding the protracted childhood of Homo sapiens, but there’s a growing argument that it’s an adaptation to the increased complexity of our social environment and the need to learn longer and harder in order to handle the ever-raising bar of adulthood. (Just look to the explosion of requisite schooling over the last century for a concrete example of how childhood grows along with social complexity.)

It’s a tradeoff between genetic inheritance and enculturation — see also Kevin Kelly’s remarks in The Inevitable that we have entered an age of lifelong learning and the 21st Century requires all of us to be permanent “n00bs”, due to the pace of change and the scale at which we have to grapple with evolutionarily relevant sociocultural information.

New research from Past Long Now Seminar Speaker Alison Gopnik:

“I argue that the evolution of our life history, with its distinctively long, protected human childhood, allows an early period of broad hypothesis search and exploration, before the demands of goal-directed exploitation set in. This cognitive profile is also found in other animals and is associated with early behaviours such as neophilia and play. I relate this developmental pattern to computational ideas about explore–exploit trade-offs, search and sampling, and to neuroscience findings. I also present several lines of empirical evidence suggesting that young human learners are highly exploratory, both in terms of their search for external information and their search through hypothesis spaces. In fact, they are sometimes more exploratory than older learners and adults.”

Alison Gopnik, “Childhood as a solution to explore-exploit tensions” in Philosophical Transactions of the Royal Society B.
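The explore-exploit trade-off Gopnik invokes has a standard computational toy: an epsilon-greedy bandit whose exploration probability decays with "age". The sketch below is only an analogy to her argument, not code from the paper; all names and numbers are illustrative.

```java
import java.util.Random;

public class ExploreExploit {
    // Epsilon-greedy choice: with probability epsilon, try a random option
    // ("childhood" broad search); otherwise pick the best-known option
    // ("adulthood" goal-directed exploitation).
    static int chooseArm(double[] estimates, double epsilon, Random rng) {
        if (rng.nextDouble() < epsilon) {
            return rng.nextInt(estimates.length); // explore
        }
        int best = 0;
        for (int i = 1; i < estimates.length; i++) {
            if (estimates[i] > estimates[best]) best = i;
        }
        return best; // exploit
    }

    // Exploration rate decays exponentially over the lifespan: lots of
    // hypothesis search early, narrowing toward exploitation later.
    static double decay(double epsilon0, int age, double rate) {
        return epsilon0 * Math.exp(-rate * age);
    }
}
```

In this framing, a long protected childhood is simply a long high-epsilon phase: expensive in the short term, but it samples the hypothesis space widely before the payoff-maximizing years begin.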

Worse Than FailureCodeSOD: A Slow Moving Stream

We’ve talked about Java’s streams in the past. It’s hardly a “new” feature at this point, but its blend of “being really useful” and “based on functional programming techniques” and “different than other APIs” means that we still have developers struggling to figure out how to use it.

Jeff H has a co-worker, Clarence, who is very “anti-stream”. “It creates too many copies of our objects, so it’s terrible for memory, and it’s so much slower. Don’t use streams unless you absolutely have to!” So in many a code review, Jeff submits some very simple, easy to read, and fast-performing bit of stream code, and Clarence objects. “It’s slow. It wastes memory.”

Sometimes, another team member goes to bat for Jeff’s code. Sometimes they don’t. But then, in a recent review, Clarence submitted his own bit of stream code.

schedules.stream().forEach(schedule -> visitors.stream().forEach(scheduleVisitor -> {
    scheduleVisitor.visitSchedule(schedule);

    if (schedule.getDays() != null && !schedule.getDays().isEmpty()) {
        schedule.getDays().stream().forEach(day -> visitors.stream().forEach(dayVisitor -> {
            dayVisitor.visitDay(schedule, day);

            if (day.getSlots() != null && !day.getSlots().isEmpty()) {
                day.getSlots().stream().forEach(slot -> visitors.stream().forEach(slotVisitor -> {
                    slotVisitor.visitSlot(schedule, day, slot);
                }));
            }
        }));
    }
}));
That is six nested “for each” operations, and they’re structured so that we iterate across the same list multiple times. For each schedule, we look at each visitor on that schedule, then we look at each day for that schedule, and then we look at every visitor again, then we look at each day’s slots, and then we look at each visitor again.
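For contrast, the traversal described above can be written as one obvious pass per collection. The types here are hypothetical stand-ins, since the article only shows a fragment of Clarence's code; the point is the shape, not the names.

```java
import java.util.List;

public class FlatWalk {
    // Stand-in data model: a schedule of days, each day holding slots.
    record Slot(String name) {}
    record Day(List<Slot> slots) {}
    record Schedule(List<Day> days) {}

    // Plain nested loops make the traversal order obvious and visit each
    // element exactly once per level, with no re-streaming of visitor lists.
    static int countSlots(List<Schedule> schedules) {
        int visited = 0;
        for (Schedule schedule : schedules) {
            for (Day day : schedule.days()) {
                for (Slot slot : day.slots()) {
                    visited++; // a visitSlot(schedule, day, slot) call would go here
                }
            }
        }
        return visited;
    }
}
```

Streams are not the villain here: a forEach per collection, or plain for loops as above, reads fine either way. The smell is interleaving an extra pass over a second list at every nesting level.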

Well, if nothing else, we understand why Clarence thinks the Java Streams API is slow. This code did not pass code review.



Chaotic IdealismTo a Newly Diagnosed Autistic Teenager

I was recently asked by a 14-year-old who had just been diagnosed autistic what advice I had to give. This is what I said.

The thing that helped me most was understanding myself and talking to other autistic people, so you’re already well on that road.

The more you learn about yourself, the more you learn about how you *learn*… meaning that you can become better at teaching yourself to communicate with neurotypicals.

Remember though: The goal is to communicate. Blending in is secondary, or even irrelevant, depending on your priorities. If you can get your ideas from your brain to theirs, and understand what they’re saying, and live in the world peacefully without hurting anyone and without putting yourself in danger, then it does not matter how different you are or how differently you do things.

Autistic is not better and not worse than neurotypical; it’s simply different. Having a disability is a normal part of human life; it’s nothing to be proud of and nothing to be ashamed of. Disability doesn’t stop you from being talented or from becoming unusually skilled, especially with practice. Being different means that you see things from a different perspective, which means that as you grow and gain experience you will be able to provide solutions to problems that other people simply don’t see, to contribute skills that most people don’t have.

Learn to advocate for yourself. If you have an IEP, go to the meetings and ask questions about what help is available and what problems you have. When you are mistreated, go to someone you trust and ask for help; and if you can’t get help, protect yourself as best you can. Learn to stand up for yourself, to keep other people from taking advantage of you. Also learn to help other people stay safe.

Your best social connections now will be anyone who treats you with kindness. You can tell whether someone is kind by observing how they treat those they have power over when nobody, or nobody with much influence, is watching. You want people who are honest, or who only lie when they are trying to protect others’ feelings. Talk to these people; explain that you are not very good with social things and that you sometimes embarrass yourself or accidentally insult people, and that you would like them to tell you when you are doing something clumsy, offensive, confusing, or cringeworthy. Explain to these people that you would prefer to know about mistakes you are making, because if you are not told you will never be able to correct those mistakes.

Learn to apologize, and learn that an apology simply means, “I recognize I have made a mistake and shall work to correct it in the future.” An apology is not a sign of failure or an admission of inferiority. Sometimes an apology can even mean, “I have made a mistake that I could not control; if I had been able to control it, I would not have made the mistake.” Therefore, it is okay to apologize if you have simply made an honest mistake. The best apology includes an explanation of how you will fix your mistake or what you will change to keep it from happening in the future.

Learn not to apologize when you have done nothing wrong. Do not apologize for being different, for standing up for yourself or for other people, or for having an opinion others disagree with. You do not need to justify your existence. You should never give in to the pressure to say, “I am autistic, but that’s okay because I have this skill and that talent.” The correct statement is, “I am autistic, and that is okay.” You don’t need to do anything to be valuable. You just need to be human.

If someone uses you to fulfill their own desires but doesn’t give things back in return; if someone doesn’t care about your needs when you tell them; if someone can tell you are hurt and doesn’t care; then that is a person you cannot trust.

In general, you can expect your teen years to be harder than your young-adult years. As you grow and gain experience, you’ll gain skills and you’ll gather a library of techniques to help you navigate the social and sensory world, to help you deal with your emotions and with your relationships. You will never be perfect–but then, nobody is. What you’re aiming for is useful, functional skills, in whatever form they take, whether they are the typical way of doing things or not. As the saying goes: If it looks stupid but it works, it isn’t stupid.

Keep trying. Take good care of yourself. When you are tired, rest. Learn to push yourself to your limits, but not beyond; and learn where those limits are. When you are tired from something that would not tire a neurotypical, be unashamed about your need for down time. Learn to say “no” when you don’t want something, and learn to say “yes” when you want something but you are a little bit intimidated by it because it is new or complicated or unpredictable. Learn to accept failure and learn from it. Help others. Make your world better. Make your own way. Grow. Live.

You’ll be okay.

Krebs on SecurityPorn Clip Disrupts Virtual Court Hearing for Alleged Twitter Hacker

Perhaps fittingly, a Web-streamed court hearing for the 17-year-old alleged mastermind of the July 15 mass hack against Twitter was cut short this morning after mischief makers injected a pornographic video clip into the proceeding.

17-year-old Graham Clark of Tampa, Fla. was among those charged in the July 15 Twitter hack. Image: Hillsborough County Sheriff’s Office.

The incident occurred at a bond hearing held via the videoconferencing service Zoom by the Hillsborough County, Fla. criminal court in the case of Graham Clark. The 17-year-old from Tampa was arrested earlier this month on suspicion of social engineering his way into Twitter’s internal computer systems and tweeting out a bitcoin scam through the accounts of high-profile Twitter users.

Notice of the hearing was available via public records filed with the Florida state attorney’s office. The notice specified the Zoom meeting time and ID number, essentially allowing anyone to participate in the proceeding.

Even before the hearing officially began it was clear that the event would likely be “zoom bombed.” That’s because while participants were muted by default, they were free to unmute their microphones and transmit their own video streams to the channel.

Sure enough, less than a minute had passed before one attendee not party to the case interrupted a discussion between Clark’s attorney and the judge by streaming a live video of himself adjusting his face mask. Just a few minutes later, someone began interjecting loud music.

It became clear that presiding Judge Christopher C. Nash was personally in charge of administering the video hearing when, after roughly 15 seconds worth of random chatter interrupted the prosecution’s response, Nash told participants he was removing the troublemakers as quickly as he could.

Judge Nash, visibly annoyed immediately after one of the many disruptions to today’s hearing.

What transpired a minute later was almost inevitable given the permissive settings of this particular Zoom conference call: Someone streamed a graphic video clip from Pornhub for approximately 15 seconds before Judge Nash abruptly terminated the broadcast.

With the ongoing pestilence that is the COVID-19 pandemic, the nation’s state and federal courts have largely been forced to conduct proceedings remotely via videoconferencing services. While Zoom and others do offer settings that can prevent participants from injecting their own audio and video into the stream unless invited to do so, those settings evidently were not enabled in today’s meeting.

At issue before the court today was a defense motion to modify the amount of the defendant’s bond, which has been set at $750,000. The prosecution had argued that Clark should be required to show that any funds used toward securing that bond were gained lawfully, and were not merely the proceeds from his alleged participation in the Twitter bitcoin scam or some other form of cybercrime.

Florida State Attorney Andrew Warren’s reaction as a Pornhub clip began streaming to everyone in today’s Zoom proceeding.

Mr. Clark’s attorneys disagreed, and spent most of the uninterrupted time in today’s hearing explaining why their client could safely be released under a much smaller bond and close supervision restrictions.

On Sunday, The New York Times published an in-depth look into Clark’s wayward path from a small-time cheater and hustler in online games like Minecraft to big-boy schemes involving SIM swapping, a form of fraud that involves social engineering employees at mobile phone companies to gain control over a target’s phone number and any financial, email and social media accounts associated with that number.

According to The Times, Clark was suspected of being involved in a 2019 SIM swapping incident which led to the theft of 164 bitcoins from Gregg Bennett, a tech investor in the Seattle area. That theft would have been worth around $856,000 at the time; these days 164 bitcoins is worth approximately $1.8 million.

The Times said that soon after the theft, Bennett received an extortion note signed by Scrim, one of the hacker handles alleged to have been used by Clark. From that story:

“We just want the remainder of the funds in the Bittrex,” Scrim wrote, referring to the Bitcoin exchange from which the coins had been taken. “We are always one step ahead and this is your easiest option.”

In April, the Secret Service seized 100 Bitcoins from Mr. Clark, according to government forfeiture documents. A few weeks later, Mr. Bennett received a letter from the Secret Service saying they had recovered 100 of his Bitcoins, citing the same code that was assigned to the coins seized from Mr. Clark.

Florida prosecutor Darrell Dirks was in the middle of explaining to the judge that investigators are still in the process of discovering the extent of Clark’s alleged illegal hacking activities since the Secret Service returned the 100 bitcoin when the porn clip was injected into the Zoom conference.

Ultimately, Judge Nash decided to keep the bond amount as is, but to remove the condition that Clark prove the source of the funds.

Clark has been charged with 30 felony counts and is being tried as an adult. Federal prosecutors also have charged two other young men suspected of playing roles in the Twitter hack, including a 22-year-old from Orlando, Fla. and a 19-year-old from the United Kingdom.

Kevin Rudd: ABC RN: South China Sea

5 AUGUST 2020

Fran Kelly
Prime Minister Scott Morrison today will warn of the unprecedented militarization of the Indo-Pacific which he says has become the epicentre of strategic competition between the US and China. In his virtual address to the Aspen Security Forum in the United States, Scott Morrison will also condemn the rising frequency of cyber attacks and the new threats democratic nations are facing from foreign interference. This speech coincides with a grim warning from former prime minister Kevin Rudd that the threat of armed conflict in the region is especially high in the run-up to the US presidential election in November. Kevin Rudd, welcome back to breakfast.

Kevin Rudd
Thanks for having me on the program, Fran.

Fran Kelly
Kevin Rudd, you’ve written in the Foreign Affairs journal that the US-China tensions could lead to, quote, a hot war not just a cold one. That conflict, you say, is no longer unthinkable. It’s a fairly alarming assessment. Just how likely do you rate a confrontation in the Indo-Pacific over the coming three or four months?

Kevin Rudd
Well, Fran, I think it’s important to narrow our geographical scope here. Prime Minister Morrison is talking about a much wider theatre. My comments in Foreign Affairs are about crisis scenarios emerging over what will happen or could happen in Hong Kong over the next three months leading up to the presidential election. And I think things in Hong Kong are more likely to get worse than better. What’s happening in relation to the Taiwan Straits where things have become much sharper than before in terms of actions on both sides, that’s the Chinese and the United States. But the thrust of my article is that the real problem area in terms of crisis management, crisis escalation, etc, lies in the South China Sea. And what I simply try to pull together is the fact that we now have a much greater concentration of military hardware, ships at sea, aircraft flying reconnaissance missions, together with changes in deployments by the Chinese fighters and bombers now into the actual Paracel Islands themselves in the north part of the South China Sea. Together with the changes in the declaratory postures of both sides. So what I do in this article is pull these threads together and say to both sides: be careful what you wish for; you’re playing with fire.

Fran Kelly
And when you talk about a heightened risk of armed conflict, are you talking about it being confined to a flare-up in one very specific location like the South China Sea?

Kevin Rudd
What I try to do is to go to where could a crisis actually emerge?

If you go across the whole spectrum of conflicts, at the moment between China and the United States on a whole range of policies, all roads tend to lead back to the South China Sea because it’s effectively a ruleless environment at the moment. We have contested views of both territorial and maritime sovereignty. And that’s where my concern, Fran, is that we have a crisis, which comes about through a collision at sea, a collision in the air, and given the nationalist politics now in Washington because of the presidential election, but also the nationalist politics in China, as its own leadership go to their annual August retreat, Beidaihe, that it’s a very bad admixture which could result in a crisis for allies like Australia, which have treaty obligations with the United States through the ANZUS treaty. This is a deeply concerning set of developments because if the crisis erupts, what then does the Australian government do?

Fran Kelly
Well, what does it do, in your view as a former Prime Minister? You know Australia tries to walk a very fine line between Washington and Beijing. That’s proved very difficult lately, but we are in the ANZUS alliance. Would we need to get involved militarily?

Kevin Rudd
Let me put it in these terms: Australia, like other countries dealing with China’s greater regional and international assertiveness, has had to adjust its strategy. We can have a separate debate, Fran, about what that strategy should be across the board in terms of the economy, technology, Huawei and the rest. But what I’ve sought to do in this article is go specifically to the possibility of a national security crisis. Now, if I was Prime Minister Morrison, what I’d be doing in the current circumstances is taking out the fire hose to those in Washington and to the extent that you can to those in Beijing, and simply make it as plain as possible through private diplomacy and public statements, the time has come for de-escalation because the obligations under the treaty, Fran, to go to your direct question, are relatively clear. What it says in one of the operational clauses of the ANZUS treaty of 1951 is that if the armed forces of either of the contracting parties, namely Australia or the United States, come under attack in the Pacific area, then the allies shall meet and consult to meet the common danger. That, therefore, puts us as an Australian ally directly into this frame. Hence my call for people to be very careful about the months which lie ahead.

Fran Kelly
In terms of ‘the time has come for de-escalation’, that message, do we see signs that that was the message very clearly being given by the Foreign Minister and the Defense Minister when they were in Washington last week? Marise Payne didn’t buy into Secretary of State Mike Pompeo’s very elevated rhetoric aimed at China, kept a distance there. And is it your view that this danger period will be over come the first Tuesday in November, the presidential election?

Kevin Rudd
I think when we’re looking at the danger of genuine armed conflict between China and the United States, that is now with us for a long period of time, whoever wins in November, including under the Democrats. What I’m more concerned about, however, is that, given President Trump is in desperate domestic political circumstances at present in Washington, there will be a temptation to continue to elevate. And also domestic politics are playing their role in China itself where Xi Jinping is under real pressure because of the state of the Chinese economy because of COVID and a range of other factors. On Australia, you asked directly about what Marise Payne was doing in Washington. I think finally the penny dropped with Prime Minister Morrison and Foreign Minister Payne that the US presidential election campaign strategy was beginning to directly influence the content of rational national security policy. I think wisely they decided to step back slightly from that.

Fran Kelly
Former Prime Minister Kevin Rudd is our guest. Kevin Rudd, this morning Scott Morrison, the Prime Minister, is addressing the US Aspen Security Forum. He’s also talking about rising tensions in the Indo-Pacific. He’s pledged that Australia won’t be a bystander, quote, who will leave it to others in the region. He wants other like-minded democracies of the region to step up and act in some kind of alliance. Is that the best way to counter Beijing’s rising aggression and assertiveness?

Kevin Rudd
Well, Prime Minister Morrison seems to like making speeches but I’ve yet to see evidence of a coherent Australian national China strategy in terms of what the government is operationally doing as opposed to what it continues to talk about. So my concern on his specific proposal is: what are you doing, Mr Morrison? The talk of an alliance I think is misplaced. The talk of, shall we say, a common policy approach to the challenges which China now represents, that is an entirely appropriate course of action and something which we sought to do during our own period in government, but it’s a piece of advice which Morrison didn’t bother listening to himself when he unilaterally went out and called for an independent global investigation into the origins of COVID-19. Far wiser, if Morrison had taken his own counsel and brought together a coalition of the policy willing first and said: do we have a group of 10 robust states standing behind this proposal? And the reason for that, Fran, is that makes it much harder then for Beijing to unilaterally pick off individual countries.

Fran Kelly
Kevin Rudd, thank you very much for joining us on Breakfast.

Kevin Rudd
Good to be with you, Fran.

Fran Kelly
Former Prime Minister Kevin Rudd. He’s president of the Asia Society Policy Institute in New York, and the article that he’s just penned is in the Foreign Affairs journal.

The post ABC RN: South China Sea appeared first on Kevin Rudd.

Worse Than Failure: CodeSOD: A Private Code Review

Jessica has worked with some cunning developers in the past. To help cope with some of that “cunning”, they’ve recently gone out searching for new developers.

Now, there were some problems with their job description and salary offer, specifically, they were asking for developers who do too much and get paid too little. Which is how Jessica started working with Blair. Jessica was hoping to staff up her team with some mid-level or junior developers with a background in web development. Instead, she got Blair, a 13+ year veteran who had just started doing web development in the past six months.

Now, veteran or not, there is a code review process, so everything Blair does goes through code review. And that catches some… annoying habits, but every once in a while, something might sneak through. For example, he thinks static is a code smell, and thus removes the keyword any time he sees it. He’ll rewrite most of the code to work around it, except in one case the method was called from a cshtml template file, so no one discovered that the rewrite didn’t work until someone reported the error.

Blair also laments that with all the JavaScript and loosely typed languages, kids these days don’t understand the importance of separation of concerns and putting a barrier between interface and implementation. To prove his point, he submitted his MessageBL class. BL, of course, is to remind you that this class is “business logic”, which is easy to forget because it’s in an assembly called theappname.businesslogic.

Within that class, he implemented a bunch of data access methods, and this pair of methods lays out the pattern he followed.

public async Task<LinkContentUpdateTrackingModel> GetLinkAndContentTrackingModelAndUpdate(int id, Msg msg)
{
    return await GetLinkAndContentTrackingAndUpdate(id, msg);
}

/// <summary>
/// LinkTrackingUpdateLinks
/// returns: HasAnalyticsConfig, LinkTracks, ContentTracks
/// </summary>
/// <param name="id"></param>
/// <param name="msg"></param>
private async Task<LinkContentUpdateTrackingModel> GetLinkAndContentTrackingAndUpdate(int id, Msg msg)

Here, we have one public method, and one private method. Their names, as you can see, are very similar. The public method does nothing but invoke the private method. This public method is, in fact, the only place the private method is invoked. The public method, in turn, is called only twice, from one controller.

This method also doesn’t ever need to be called, because the same block of code which constructs this object also fetches the relevant model objects. So instead of going back to the database with this thing, we could just use the already fetched objects.

But the real magic here is that Blair was veteran enough to know that he should put some “thorough” documentation using Visual Studio’s XML comment features. But he put the comments on the private method.

Jessica was not the one who reviewed this code, but adds:

I won’t blame the code reviewer for letting this through. There’s only so many times you can reject a peer review before you start questioning yourself. And sometimes, because Blair has been here so long, he checks code in without peer review as it’s a purely manual process.



Long Now: Traditional Ecological Knowledge

Archaeologist Stefani Crabtree writes about her work to reconstruct Indigenous food and use networks for the National Park Service:

Traditional Ecological Knowledge gets embedded in the choices that people make when they consume, and how TEK can provide stability of an ecosystem. Among Martu, the use of fire for hunting and the knowledge of the habits of animals are enshrined in the Dreamtime stories passed inter-generationally; these Dreamtime stories have material effects on the food web, which were detected in our simulations. The ecosystem thrived with Martu; it was only through their removal that extinctions began to cascade through the system.

Kevin Rudd: Foreign Affairs: Beware the Guns of August — in Asia

U.S. Navy photo by Mass Communication Specialist 2nd Class Taylor DiMartino

Published in Foreign Affairs on August 3, 2020.

In just a few short months, the U.S.-Chinese relationship seems to have returned to an earlier, more primal age. In China, Mao Zedong is once again celebrated for having boldly gone to war against the Americans in Korea, fighting them to a truce. In the United States, Richard Nixon is denounced for creating a global Frankenstein by introducing Communist China to the wider world. It is as if the previous half century of U.S.-Chinese relations never happened.

The saber rattling from both Beijing and Washington has become strident, uncompromising, and seemingly unending. The relationship lurches from crisis to crisis—from the closures of consulates to the most recent feats of Chinese “wolf warrior” diplomacy to calls by U.S. officials for the overthrow of the Chinese Communist Party (CCP). The speed and intensity of it all has desensitized even seasoned observers to the scale and significance of change in the high politics of the U.S.-Chinese relationship. Unmoored from the strategic assumptions of the previous 50 years but without the anchor of any mutually agreed framework to replace them, the world now finds itself at the most dangerous moment in the relationship since the Taiwan Strait crises of the 1950s.

The question now being asked, quietly but nervously, in capitals around the world is, where will this end? The once unthinkable outcome—actual armed conflict between the United States and China—now appears possible for the first time since the end of the Korean War. In other words, we are confronting the prospect of not just a new Cold War, but a hot one as well.

Click here to read the rest of the article at Foreign Affairs.

The post Foreign Affairs: Beware the Guns of August — in Asia appeared first on Kevin Rudd.

Worse Than Failure: A Massive Leak

"Memory leaks are impossible in a garbage collected language!" is one of my favorite lies. It feels true, but it isn't. Sure, it's much harder to make them, and they're usually much easier to track down, but you can still create a memory leak. Most times, it's when you create objects, dump them into a data structure, and never empty that data structure. Usually, it's just a matter of finding out what object references are still being held. Usually.

A few months ago, I discovered a new variation on that theme. I was working on a C# application that was leaking memory faster than bad waterway engineering in the Imperial Valley.

A large, glowing, computer-controlled chandelier

I don't exactly work in the "enterprise" space anymore, though I still interact with corporate IT departments and get to see some serious internal WTFs. This is a chandelier we built for the Allegheny Health Network's Cancer Institute which recently opened in Pittsburgh. It's 15 meters tall, weighs about 450kg, and is broken up into 30 segments, each with hundreds of addressable LEDs in a grid. The software we were writing was built to make them blink pretty.

Each of those 30 segments is home to a single-board computer with their GPIO pins wired up to addressable LEDs. Each computer runs a UDP listener, and we blast them with packets containing RGB data, which they dump to the LEDs using a heavily tweaked version of LEDScape.

This is our standard approach to most of our lighting installations. We drop a Beaglebone onto a custom circuit board and let it drive the LEDs, then we have a render-box someplace which generates frame data and chops it up into UDP packets. Depending on the environment, we can drive anything from 30-120 frames per second this way (and probably faster, but that's rarely useful).

Apologies to the networking folks, but this works very well. Yes, we're blasting many megabytes of raw bitmap data across the network, but we're usually on our own dedicated network segment. We use UDP because, well, we don't care about the data that much. A dropped packet or an out of order packet isn't going to make too large a difference in most cases. We don't care if our destination Beaglebone is up or down, we just blast the packets out onto the network, and they get there reliably enough that the system works.
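The fire-and-forget sender described above can be sketched in a few lines of Python. The addresses, port, and frame layout here are hypothetical; the point is the shape: a loop over sendto with errors ignored.

```python
import socket

# Sketch of a fire-and-forget UDP frame blaster, per the approach above.
# Host addresses, port, and frame contents are hypothetical.
SEGMENT_HOSTS = [f"10.0.0.{i}" for i in range(1, 31)]  # 30 single-board computers
PORT = 5005

def send_frame(sock, frames):
    """Blast one frame of RGB data to every segment; ignore all failures."""
    for host, payload in zip(SEGMENT_HOSTS, frames):
        try:
            sock.sendto(payload, (host, PORT))  # UDP: no handshake, no retry
        except OSError:
            pass  # a dropped frame is invisible at 30+ fps

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frames = [bytes([r, 0, 0] * 100) for r in range(30)]  # dummy per-segment RGB data
send_frame(sock, frames)
```

Because the send either succeeds instantly or fails instantly, the sender never blocks on a down host, which is exactly the property the C# version turned out not to have.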

Now, normally, we do this from Python programs on Linux. For this particular installation, though, we have an interactive kiosk which provides details about cancer treatments and patient success stories, and lets the users interact with the chandelier in real time. We wanted to show them a 3D model of the chandelier on the screen, and show them an animation on the UI that was mirrored in the physical object. After considering our options, we decided this was a good case for Unity and C#. After a quick test of doing multitouch interactions, we also decided that we shouldn't deploy to Linux (Unity didn't really have good Linux multitouch support), so we would deploy a Windows kiosk. This meant we were doing most of our development on MacOS, but our final build would be for Windows.

Months go by. We worked on the software while building the physical pieces, which meant the actual testbed hardware wasn't available for most of the development cycle. Custom electronics were being refined and physical designs were changing as we iterated to the best possible outcome. This is normal for us, but it meant that we didn't start getting real end-to-end testing until very late in the process.

Once we started test-hanging chandelier pieces, we started basic developer testing. You know how it is: you push the run button, you test a feature, you push the stop button. Tweak the code, rinse, repeat. Eventually, though, we had about 2/3rds of the chandelier pieces plugged in, and started deploying to the kiosk computer, running Windows.

We left it running, and the next time someone walked by and decided to give the screen a tap… nothing happened. It was hung. Well, that could be anything. We rebooted and checked again, and everything seems fine, until a few minutes later, when it's hung… again. We checked the task manager, and hey, everything is really slow; sure enough, RAM is full and the computer is so slow because it's constantly thrashing to disk.

We're only a few weeks before we actually have to ship this thing, and we've discovered a massive memory leak, and it's such a sudden discovery that it feels like the draining of Lake Agassiz. No problem, though, we go back to our dev machines, fire it up in the profiler, and start looking for the memory leak.

Which wasn't there. The memory leak only appeared in the Windows build, and never happened in the Mac or Linux builds. Clearly, there must be some different behavior, and it must be around object lifecycles. When you see a memory leak in a GCed language, you assume you're creating objects that the GC ends up thinking are in use. In the case of Unity, your assumption is that you're handing objects off to the game engine, and not telling it you're done with them. So that's what we checked, but we just couldn't find anything that fit the bill.

Well, we needed to create some relatively large arrays to use as framebuffers. Maybe that's where the problem lay? We keep digging through the traces, we added a bunch of profiling code, we spent days trying to dig into this memory leak…

… and then it just went away. Our memory leak just became a Heisenbug, our shipping deadline was even closer, and we officially knew less about what was going wrong than when we started. For bonus points, once this kiosk ships, it's not going to be connected to the Internet, so if we need to patch the software, someone is going to have to go onsite. And we aren't going to have a suitable test environment, because we're not exactly going to build two gigantic chandeliers.

The folks doing assembly had the whole chandelier built up, hanging in three sections (we don't have any 14m tall ceiling spaces), and all connected to the network for a smoke test. There wasn't any smoke, but they needed to do more work. Someone unplugged a third of the chandelier pieces from the network.

And the memory leak came back.

We use UDP because we don't care if our packet sends succeed or not. Frame-by-frame, we just want to dump the data on the network and hope for the best. On MacOS and Linux, our software usually uses a sender thread that just, at the end of the day, wraps around calls to the send system call. It's simple, it's dumb, and it works. We ignore errors.

In C#, though, we didn't do things exactly the same way. Instead, we used the .NET UdpClient object and its SendAsync method. We assumed that it would do roughly the same thing.

We were wrong.

await client.SendAsync(packet, packet.Length, hostip, port);

Async operations in C# use Tasks, which are like promises or futures in other environments. It lets .NET manage background threads without the developer worrying about the details. The await keyword is syntactic sugar which lets .NET know that it can hand off control to another thread while we wait. While we await here, we don't actually await the results of the await, because again: we don't care about the results of the operation. Just send the packet, hope for the best.

We don't care- but Windows does. After a load of investigation, what we discovered is that Windows would first try and resolve the IP address. Which, if a host was down, obviously it couldn't. But Windows was friendly, Windows was smart, and Windows wasn't going to let us down: it kept the Task open and kept trying to resolve the address. It held the task open for 3 seconds before finally deciding that it couldn't reach the host and errored out.

An error which, as I stated before, we were ignoring, because we didn't care.

Still, if you can count and have a vague sense of the linear passage of time, you can see where this is going. We had 30 hosts. We sent each of the 30 packets every second. When one or more of those hosts were down, Windows would keep each of those packets "alive" for 3 seconds. By the time that one expired, 90 more had queued up behind it.

That was the source of our memory leak, and our Heisenbug. If every Beaglebone was up, we didn't have a memory leak. If only one of them was down, the leak was pretty slow. If ten or twenty were out, the leak was a waterfall.
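Using the numbers from the story, the backlog arithmetic can be sketched directly. This is a simplified steady-state model, not the actual application code:

```python
# Back-of-the-envelope model of the Task backlog: one packet per host per
# second, with Windows holding each doomed send's Task open for 3 seconds
# before erroring out.
SEND_RATE_PER_HOST = 1  # packets per second to each host
HOLD_SECONDS = 3        # how long Windows keeps a failing Task alive

def tasks_in_flight(hosts_down):
    """Steady-state count of live Tasks queued for unreachable hosts."""
    return hosts_down * SEND_RATE_PER_HOST * HOLD_SECONDS

print(tasks_in_flight(1))   # 3: one host down, a slow trickle
print(tasks_in_flight(30))  # 90: every host down, a waterfall
```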

I spent a lot of time reading up on Windows networking after this. Despite digging through the socket APIs, I honestly couldn't figure out how to defeat this behavior. I tried various timeout settings. I tried tracking each task myself and explicitly timing them out if they took longer than a few frames to send. I was never able to tell Windows, "just toss the packet and hope for the best".

Well, my co-worker was building health monitoring on the Beaglebones anyway. While the kiosk wasn't going to be on the Internet via a "real" Internet connection, we did have a cellular modem attached, which we could use to send health info, so getting pings that say "hey, one of the Beaglebones failed" is useful. So my co-worker hooked that into our network sending layer: don't send frames to Beaglebones which are down. Recheck the down Beaglebones every five minutes or so. Continue to hope for the best.
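A rough sketch of that gating logic, with a hypothetical host list and the actual health checks left out: track which hosts are down, skip them when sending, and optimistically retry them after a timeout.

```python
import time

# Sketch of the workaround: skip hosts marked down, recheck them on a
# timer, keep firing-and-forgetting at the rest. Hosts are hypothetical.
RECHECK_SECONDS = 300  # revisit down hosts every five minutes

class HostGate:
    def __init__(self, hosts):
        self.up = set(hosts)
        self.down = {}  # host -> monotonic time it was marked down

    def mark_down(self, host):
        self.up.discard(host)
        self.down[host] = time.monotonic()

    def sendable(self):
        """Hosts to blast this frame; down hosts return after the timeout."""
        now = time.monotonic()
        for host, since in list(self.down.items()):
            if now - since >= RECHECK_SECONDS:
                del self.down[host]
                self.up.add(host)  # optimistically retry
        return self.up

gate = HostGate(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
gate.mark_down("10.0.0.2")
print(sorted(gate.sendable()))  # 10.0.0.2 is skipped until rechecked
```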

This solution worked. We shipped. The device looks stunning, and as patients and guests come to use it, I hope they find some useful information, a little joy, and maybe some hope while playing with it. And while there may or may not be some ugly little hacks still lurking in that code, this was the one thing which made me say: WTF.



Krebs on Security: Robocall Legal Advocate Leaks Customer Data

A California company that helps telemarketing firms avoid getting sued for violating a federal law that seeks to curb robocalls has leaked the phone numbers, email addresses and passwords of all its customers, as well as the mobile phone numbers and other data on people who have hired lawyers to go after telemarketers.

The Blacklist Alliance provides technologies and services to marketing firms concerned about lawsuits under the Telephone Consumer Protection Act (TCPA), a 1991 law that restricts the making of telemarketing calls through the use of automatic telephone dialing systems and artificial or prerecorded voice messages. The TCPA prohibits contact with consumers — even via text messages — unless the company has “prior express consent” to contact the consumer.

With statutory damages of $500 to $1,500 per call, the TCPA has prompted a flood of lawsuits over the years. From the telemarketer’s perspective, the TCPA can present something of a legal minefield in certain situations, such as when a phone number belonging to someone who’d previously given consent gets reassigned to another subscriber.

Enter The Blacklist Alliance, which promises to help marketers avoid TCPA legal snares set by “professional plaintiffs and class action attorneys seeking to cash in on the TCPA.” According to the Blacklist, one of the “dirty tricks” used by TCPA “frequent filers” includes “phone flipping,” or registering multiple prepaid cell phone numbers to receive calls intended for the person to whom a number was previously registered.

Lawyers representing TCPA claimants typically redact their clients’ personal information from legal filings to protect them from retaliation and to keep their contact information private. The Blacklist Alliance researches TCPA cases to uncover the phone numbers of plaintiffs and sells this data in the form of list-scrubbing services to telemarketers.

“TCPA predators operate like malware,” The Blacklist explains on its website. “Our Litigation Firewall isolates the infection and protects you from harm. Scrub against active plaintiffs, pre litigation complainers, active attorneys, attorney associates, and more. Use our robust API to seamlessly scrub these high-risk numbers from your outbound campaigns and inbound calls, or adjust your suppression settings to fit your individual requirements and appetite for risk.”

Unfortunately for The Blacklist’s paying customers and for people represented by attorneys filing TCPA lawsuits, The Blacklist’s own Web site until late last week leaked reams of data to anyone with a Web browser. Thousands of documents, emails, spreadsheets, images and the names tied to countless mobile phone numbers all could be viewed or downloaded without authentication from the domain

The directory also included all 388 Blacklist customer API keys, as well as each customer’s phone number, employer, username and password (scrambled with the relatively weak MD5 password hashing algorithm).
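MD5 is weak for this purpose because it is fast and, as used here, unsalted: identical passwords always produce identical digests, so leaked hashes for common passwords fall to a simple precomputed dictionary lookup. A minimal sketch, with a hypothetical stand-in for a real cracking wordlist:

```python
import hashlib

# Why unsalted MD5 is a poor password hash: the same input always yields
# the same digest, so a leaked hash can be reversed by table lookup.
# This tiny wordlist is a hypothetical stand-in for a cracking dictionary.
wordlist = ["password", "123456", "letmein", "qwerty"]
lookup = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

leaked_hash = hashlib.md5(b"letmein").hexdigest()  # as if pulled from the dump
print(lookup.get(leaked_hash))  # recovers "letmein" instantly
```

Salted, deliberately slow schemes (bcrypt, scrypt, Argon2) defeat exactly this attack by making each hash unique and expensive to compute.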

The leaked Blacklist customer database points to various companies you might expect to see using automated calling systems to generate business, including real estate and life insurance providers, credit repair companies and a long list of online advertising firms and individual digital marketing specialists.

The very first account in the leaked Blacklist user database corresponds to its CEO Seth Heyman, an attorney in southern California. Mr. Heyman did not respond to multiple requests for comment, although The Blacklist stopped leaking its database not long after that contact request.

Two other accounts marked as administrators were among the third and sixth registered users in the database; those correspond to two individuals at Riip Digital, a California-based email marketing concern that serves a diverse range of clients in the lead generation business, from debt relief and timeshare companies, to real estate firms and CBD vendors.

Riip Digital did not respond to requests for comment. But according to Spamhaus, an anti-spam group relied upon by many Internet service providers (ISPs) to block unsolicited junk email, the company has a storied history of so-called “snowshoe spamming,” which involves junk email purveyors who try to avoid spam filters and blacklists by spreading their spam-sending systems across a broad swath of domains and Internet addresses.

The irony of this data leak is that marketers who constantly scrape the Web for consumer contact data may not realize the source of the information, and end up feeding it into automated systems that peddle dubious wares and services via automated phone calls and text messages. To the extent this data is used to generate sales leads that are then sold to others, such a leak could end up causing more legal problems for The Blacklist’s customers.

The Blacklist and their clients talk a lot about technologies that they say separate automated telephonic communications from dime-a-dozen robocalls, such as software that delivers recorded statements that are manually selected by a live agent. But for your average person, this is likely a distinction without a difference.

Robocalls are permitted for political candidates, but beyond that, if the recording is a sales message and you haven’t given your written permission to get calls from the company on the other end, the call is illegal. According to the Federal Trade Commission (FTC), companies are using auto-dialers to send out thousands of phone calls every minute at an incredibly low cost.

In fiscal year 2019, the FTC received 3.78 million complaints about robocalls. Readers may be able to avoid some marketing calls by registering their mobile number with the Do Not Call registry, but the list appears to do little to deter automated calls overall — particularly scam calls that spoof the number they appear to come from. If and when you do receive robocalls, consider reporting them to the FTC.

Some wireless providers now offer additional services and features to help block automated calls. For example, AT&T offers wireless customers its free Call Protect app, which screens incoming calls and flags those that are likely spam calls. See the FCC’s robocall resource page for links to resources at your mobile provider. In addition, there are a number of third-party mobile apps designed to block spammy calls, such as Nomorobo and TrueCaller.

Obviously, not all telemarketing is spammy or scammy. I have friends and relatives who’ve worked at non-profits that rely a great deal on fundraising over the phone. Nevertheless, readers who are fed up with telemarketing calls may find some catharsis in the Jolly Roger Telephone Company, which offers subscribers a choice of automated bots that keep telemarketers engaged for several minutes. The service lets subscribers choose which callers should get the bot treatment, and then records the result.

For my part, the volume of automated calls hitting my mobile number got so bad that I recently enabled a setting on my smart phone to simply send to voicemail all calls from numbers that aren’t already in my contacts list. This may not be a solution for everyone, but since then I haven’t received a single spammy jingle.

CryptogramBlackBerry Phone Cracked

Australia is reporting that a BlackBerry device has been cracked after five years:

An encrypted BlackBerry device that was cracked five years after it was first seized by police is poised to be the key piece of evidence in one of the state's longest-running drug importation investigations.

In April, new technology "capabilities" allowed authorities to probe the encrypted device....

No details about those capabilities.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 12)

Here’s part twelve of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.


Worse Than FailureCodeSOD: A Unique Choice

There are many ways to mess up doing unique identifiers. It's a hard problem, and that's why we've sorta agreed on a few distinct ways to do it. First, we can just autonumber. Easy, but it doesn't always scale that well, especially in distributed systems. Second, we can use something like UUIDs: mix a few bits of real data in with a big pile of random data, and you can create a unique ID. Finally, there are some hashing-related options, where the data itself generates its ID.
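Those three strategies are each a one-liner in most languages; a rough Python sketch, not tied to any particular database or system:

```python
import hashlib
import itertools
import uuid

# 1. Autonumber: simplest, but a single shared counter is awkward
#    to coordinate across distributed writers.
counter = itertools.count(1)
auto_id = next(counter)

# 2. UUID: version 4 packs 122 random bits, so collisions are
#    negligible without any coordination at all.
random_id = uuid.uuid4()

# 3. Content hash: the data itself determines the ID, so the same
#    record always yields the same identifier.
record = b"module=Defects"
hash_id = hashlib.sha256(record).hexdigest()

print(auto_id, random_id, hash_id[:12])
```

The trade-off in this sketch mirrors the article's point: autonumbers need a central authority, UUIDs need none, and content hashes tie identity to the data itself.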

Tiffanie was digging into some weird crashes in a database application, and discovered that their MODULES table couldn't decide which was correct, and opted for two: MODULE_ID, an autonumbered field, and MODULE_UUID, which, one would assume, held a UUID. There were also the requisite MODULE_NAME and similar fields. A quick scan of the table looked like:

0 Defects 8461aa9b-ba38-4201-a717-cee257b73af0 Defects
1 Test Plan 06fd18eb-8214-4431-aa66-e11ae2a6c9b3 Test Plan

Now, using both UUIDs and autonumbers is a bit suspicious, but there might be a good reason for that (the UUIDs might be used for tracking versions of installed modules, while the ID is the local database reference for that, so the ID shouldn't change ever, but the UUID might). Still, given that MODULE_NAME and MODULE_DESC both contain exactly the same information in every case, I suspect that this table was designed by the Department of Redundancy Department.

Still, that's hardly the worst sin you could commit. What would be really bad would be using the wrong datatype for a column. This is a SQL Server database, and so we can safely expect that the MODULE_ID is numeric, the MODULE_NAME and MODULE_DESC must be text, and clearly the MODULE_UUID field should be the UNIQUEIDENTIFIER type, right?

Well, let's look at one more row from this table:

11 Releases Releases does not have a UUID Releases

Oh, well. I think I have a hunch what was causing the problems. Sure enough, the program was expecting the UUID field to contain UUIDs, and was failing when a field contained something that couldn't be converted into a UUID.
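Parsing the column defensively would have surfaced the bad row immediately instead of crashing downstream. A Python sketch using the sample rows above (the parsing logic is illustrative, not the application's actual code):

```python
import uuid

# Sample rows from the article's MODULES table.
rows = [
    ("0", "Defects", "8461aa9b-ba38-4201-a717-cee257b73af0"),
    ("11", "Releases", "Releases does not have a UUID"),
]

for module_id, name, raw in rows:
    try:
        parsed = uuid.UUID(raw)  # raises ValueError on non-UUID text
        print(module_id, name, parsed)
    except ValueError:
        print(module_id, name, "has no valid UUID:", repr(raw))
```

Of course, declaring the column as UNIQUEIDENTIFIER in the first place would have made the database reject the junk value at insert time, which is the real fix.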

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Krebs on SecurityThree Charged in July 15 Twitter Compromise

Three individuals have been charged for their alleged roles in the July 15 hack on Twitter, an incident that resulted in Twitter profiles for some of the world’s most recognizable celebrities, executives and public figures sending out tweets advertising a bitcoin scam.

Amazon CEO Jeff Bezos’s Twitter account on the afternoon of July 15.

Nima “Rolex” Fazeli, a 22-year-old from Orlando, Fla., was charged in a criminal complaint in Northern California with aiding and abetting intentional access to a protected computer.

Mason “Chaewon” Sheppard, a 19-year-old from Bognor Regis, U.K., also was charged in California with conspiracy to commit wire fraud, money laundering and unauthorized access to a computer.

A U.S. Justice Department statement on the matter does not name the third defendant charged in the case, saying juvenile proceedings in federal court are sealed to protect the identity of the youth. But an NBC News affiliate in Tampa reported today that authorities had arrested 17-year-old Graham Clark as the alleged mastermind of the hack.

17-year-old Graham Clark of Tampa, Fla. was among those charged in the July 15 Twitter hack. Image: Hillsborough County Sheriff’s Office.

Prosecutors in Florida said Clark was hit with 30 felony charges, including organized fraud, communications fraud, one count of fraudulent use of personal information with over $100,000 or 30 or more victims, 10 counts of fraudulent use of personal information and one count of access to a computer or electronic device without authority. Their statement says Clark will be charged as an adult. Clark’s arrest report is available here (PDF).

On Thursday, Twitter released more details about how the hack went down, saying the intruders “targeted a small number of employees through a phone spear phishing attack,” that “relies on a significant and concerted attempt to mislead certain employees and exploit human vulnerabilities to gain access to our internal systems.”

By targeting specific Twitter employees, the perpetrators were able to gain access to internal Twitter tools. From there, Twitter said, the attackers targeted 130 Twitter accounts, tweeting from 45 of them, accessing the direct messages of 36 accounts, and downloading the Twitter data of seven.

Among the accounts compromised were those of Democratic presidential candidate Joe Biden, Amazon CEO Jeff Bezos, President Barack Obama, Tesla CEO Elon Musk, former New York Mayor Michael Bloomberg and investment mogul Warren Buffett.

The hacked Twitter accounts were made to send tweets suggesting they were giving away bitcoin, and that anyone who sent bitcoin to a specified account would be sent back double the amount they gave. All told, the bitcoin accounts associated with the scam received more than 400 transfers totaling more than $100,000.

Sheppard’s alleged alias Chaewon was mentioned twice in stories here since the July 15 incident. On July 16, KrebsOnSecurity wrote that just before the Twitter hack took place, a member of the social media account hacking forum OGUsers named Chaewon advertised that they could change the email address tied to any Twitter account for $250, and provide direct access to accounts for between $2,000 and $3,000 apiece.

The OGUsers forum user “Chaewon” taking requests to modify the email address tied to any Twitter account.

On July 17, The New York Times ran a story that featured interviews with several people involved in the attack. The young men told The Times they weren’t responsible for the Twitter bitcoin scam and had only brokered the purchase of accounts from the Twitter hacker — who they referred to only as “Kirk.”

One of those interviewed by The Times used the alias “Ever So Anxious,” and said he was a 19-year-old from the U.K. In my follow-up story on July 22, it emerged that Ever So Anxious was in fact Chaewon.

The person who shared that information was the principal subject of my July 16 post, which followed clues from tweets sent by one of the accounts claimed during the Twitter compromise back to a 21-year-old from the U.K. who uses the nickname PlugWalkJoe.

That individual shared a series of screenshots showing he had been in communications with Chaewon/Ever So Anxious just prior to the Twitter hack, and had asked him to secure several desirable Twitter usernames from the Twitter hacker. He added that Chaewon/Ever So Anxious also was known as “Mason.”

The negotiations over highly-prized Twitter usernames took place just prior to the hijacked celebrity accounts tweeting out bitcoin scams. PlugWalkJoe is pictured here chatting with Ever So Anxious/Chaewon/Mason using his Discord username “Beyond Insane.”

On July 22, KrebsOnSecurity interviewed Mason/Chaewon/Ever So Anxious, who confirmed that PlugWalkJoe had indeed asked him to ask Kirk to change the profile picture and display name for a specific Twitter account on July 15. Mason/Chaewon/Ever So Anxious acknowledged that while he did act as a “middleman” between Kirk and others seeking to claim desirable Twitter usernames, he had nothing to do with the hijacking of the VIP Twitter accounts for the bitcoin scam that same day.

“Encountering Kirk was the worst mistake I’ve ever made due to the fact it has put me in issues I had nothing to do with,” he said. “If I knew Kirk was going to do what he did, or if even from the start if I knew he was a hacker posing as a rep I would not have wanted to be a middleman.”

Another individual who told The Times he worked with Ever So Anxious/Chaewon/Mason in communicating with Kirk said he went by the nickname “lol.” On July 22, KrebsOnSecurity identified lol as a young man who went to high school in Danville, Calif.

Federal investigators did not mention lol by his nickname or his real name, but the charging document against Sheppard says that on July 21 federal agents executed a search warrant at a residence in Northern California to question a juvenile who assisted Kirk and Chaewon in selling access to Twitter accounts. According to that document, the juvenile and Chaewon had discussed turning themselves in to authorities after the Twitter hack became publicly known.

CryptogramFriday Squid Blogging: Squid Proteins for a Better Face Mask

Researchers are synthesizing squid proteins to create a face mask that better survives cleaning. (And you thought there was no connection between squid and COVID-19.) The military thinks this might have applications for self-healing robots.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramData and Goliath Book Placement

Notice the copy of Data and Goliath just behind the head of Maine Senator Angus King.

Screenshot of MSNBC interview with Angus King

This demonstrates the importance of a vibrant color and a large font.

Kevin RuddABC: Closing The Gap, AUSMIN & Public Health


31 JULY 2020

Patricia Karvelas
My next guest this afternoon is the former prime minister Kevin Rudd. He’s the man that delivered the historic Apology to the stolen generations and launched the original Close the Gap targets. Of course, yesterday, there was a big revamp of the Close the Gap so we thought it was a good idea to talk to the man originally responsible. Kevin Rudd, welcome.

Kevin Rudd
Good to be with you. Patricia.

Patricia Karvelas
Prime Minister Scott Morrison said there had been a failure to partner with Indigenous people to develop and deliver the 2008 targets. Is that something you regret?

Kevin Rudd
Oh, Prime Minister Morrison is always out to differentiate himself from what previous Labor governments have done. We worked closely with Indigenous leaders at the time through minister Jenny Macklin in framing those Closing the Gap targets. The bottom line is: we delivered the National Apology; we established a Closing the Gap framework, which we thought should be measurable; and on top of that, Patricia, what we also did was, we negotiated the first-ever commonwealth-state agreement in 2008-9 over the following 10-year period, which had Closing the Gap targets as the basis for the funding commitments by the commonwealth and the states. Those things have been sustained into the future. If the Indigenous leadership of Australia have decided that it’s time to refresh the targets then I support Pat Turner’s leadership and I support what Indigenous leaders have done.

Patricia Karvelas
She’s got a seat at the table though. I remember, you know, I covered it extensively at the time. But she has got a point and they have a point that they now have a seat at the table in a different partnership model than was delivered originally.

Kevin Rudd
Well, as you know, the realities back in 2007 were radically different. Back then there was a huge partisan fight over whether we should have a National Apology. We had people like Peter Dutton and Tony Abbott threatening not to participate in the Apology. So it was a highly partisan environment back then. So these things evolve over time. The Apology remains in place. The national statement each year on the anniversary of the Apology remains in place on progress in achieving Closing the Gap, our successes and our failures. But yes, I welcome any advance that’s been made. But here’s the rub, Patricia: why have there been challenges in delivering on previous Closing the Gap targets? In large part it’s because in the 2014 budget, the first year after the current Coalition government took office, as you know, someone who’s covered the area extensively, they pulled out half a billion dollars worth of funding. Now you’re not going to achieve targets, if simultaneously you gut the funding capacity to act in these areas. That’s what essentially happened over the last five-to-six years.

Patricia Karvelas
That’s absolutely part of the story. But is it all of the story? I mean, if you look at failure to deliver on these targets, it’s been very disappointing for Aboriginal Australians. But I think for Australians who wanted to see the gap closed because it’s the right thing to do; it’s the kind of country they want to live in. There are other reasons aren’t there, that the gap hasn’t been closed? Isn’t one of the reasons that it’s lacked Aboriginal authority and ownership, that it’s been a top-down approach?

Kevin Rudd
Well, I welcome the statement by Pat Turner in bringing Indigenous leadership to the table with these new targets for the future. I’m fully supportive of that. You’re looking at someone who has stood for a lifetime in empowerment of Indigenous organisations. As I said, realities change over time, and I welcome what will happen in the future. But the bottom line is, Patricia, with or without Indigenous leadership from the ground up, nothing will happen in the absence of physical resources as well. And that is a critical part of the equation as I think you’ve just agreed with me. And we can have as many notional targets as we like, but if on day two you, as it were, disembowel the funding arrangements, which is what happened under the current government, guess what: nothing happens. And I note that when these new targets were announced yesterday that Ken Wyatt and the Prime Minister were silent on the question of future funding commitments by the commonwealth. So our Closing the Gap targets, yes, they weren’t all realised. We were on track to achieve two of the six targets that we set. We made some progress on another two. And we were kind of flatlining when it came to the remaining two. But I make no apology for measurement, Patricia, because unless you measure things, guess what? They never happen. And so I’m all for actually an annual report card on success and failure. That’s why I did it in the first place, and without apology.

Patricia Karvelas
I want to move on just to another story that was big this week. What did you make of this week’s AUSMIN talks and the Foreign Minister’s emphasis on Australia taking what is an independent position here, particularly with our relationship with China, was that significant?

Kevin Rudd
Well, whacko! The Australian Foreign Minister says we should have an independent foreign policy! Hold the front page! I mean, for God’s sake.

Patricia Karvelas
Well, it was in the AUSMIN framework. I mean, it wasn’t just a statement to the media, do you think?

Kevin Rudd
Yeah, yeah, but you know, the function of the national government of Australia is to run the foreign policy of Australia, an independent foreign policy. And if the conservatives have recently discovered this principle is a good one, well, I welcome them to the table. That’s been our view for about the last hundred years that the Australian Labor Party has been engaged in the foreign policy debates of this country. But why did she say that? That’s the more important question, I think, Patricia. I think the Australian Government, both Morrison and the Foreign Minister, looked at Secretary of State Pompeo’s speech at the Nixon Library a week or so ago when effectively he called for a Second Cold War against China and, within that, called for the overthrow of the Chinese Communist Party. Even for the current Australian conservative government, that looked like a bridge too far, and I think they basically took fright at what they were walking into. And my judgment is: it’s very important to separate out our national interests from those of the United States; secondly, understand what a combined allied strategy could and should be on China, as opposed to finding yourself wrapped up either in President Trump’s re-election strategy or Secretary of State Pompeo’s interest in securing the Republican nomination in 2024. These are quite separate political matters as opposed to national strategy.

Patricia Karvelas
Just on COVID, before I let you go, the Queensland Government has declared all of Greater Sydney as a COVID-19 hotspot and the state’s border will be closed to people travelling from that region from 1am on Saturday. Is that the right decision?

Kevin Rudd
Well, absolutely right. I mean, Premier Palaszczuk has faced like every premier, Daniel Andrews and Gladys Berejiklian, very hard public policy decisions. But what Premier Palaszczuk has done — and I’ve been here in Queensland for the last three and a half months now, observing this on a daily basis — is that she has taken the Chief Medical Officer’s advice day-in, day-out and acted accordingly. She’s come under enormous attack within Queensland, led initially by the Murdoch media, followed up by Frecklington, the leader of the LNP, saying ‘open the borders’. In fact, I think Frecklington called for the borders to be opened some 60 or 70 separate times, but to give Palaszczuk her due, she’s just stood her ground and said ‘my job is to give effect to the Chief Medical Officer’s advice, despite all the political clamour to the contrary’. So as she did then and as she does now, I think that’s right in terms of the public health and wellbeing of your average Queenslanders, including me.

Patricia Karvelas
Including you. And now you are very much a long-standing Queenslander being there for that long. Kevin Rudd, thank you so much for joining us this afternoon.

Kevin Rudd
Still from Queensland. Here to help. Bye.

Patricia Karvelas
Always. That’s the former prime minister Kevin Rudd, joining me to talk about yesterday’s Closing the Gap announcement, defending his government’s legacy there but also, of course, talking about the failure to deliver on those targets. Particularly pointed were his comments around the withdrawal of funding in relation to Indigenous affairs, which happened under the Abbott Government and which he says was responsible for the failure to deliver at the rate that was expected; it’s obviously been a disappointing journey, not quite as planned. Now, a whole bunch of new targets.

The post ABC: Closing The Gap, AUSMIN & Public Health appeared first on Kevin Rudd.

LongNowPredicting the Animals of the Future

Jim Cooke / Gizmodo

Gizmodo asks half a dozen natural historians to speculate on who is going to be doing what jobs on Earth after the people disappear. One of the streams that runs wide and deep through this series of fun thought experiments is how so many niches stay the same through catastrophic changes in the roster of Earth’s animals. Dinosaurs die out but giant predatory birds evolve to take their place; butterflies took over from (unrelated) dot-winged, nectar-sipping giant lacewing pollinator forebears; before orcas there were flippered ocean-going crocodiles, and there will probably be more one day.

In Annie Dillard’s Pulitzer Prize-winning Pilgrim at Tinker Creek, she writes about a vision in which she witnesses glaciers rolling back and forth “like blinds” over the Appalachian Mountains. In this Gizmodo piece, Alexis Mychajliw of the La Brea Tar Pits & Museum talks about how fluctuating sea levels connected island chains or made them, fusing and splitting populations in great oscillating cycles, shrinking some creatures and giantizing others. There’s something soothing in the view from orbit that paleontologists, like other deep-time mystics, possess, embody, and transmit: a sense for the clockwork of the cosmos and its orderliness, an appreciation for the powerful resilience of life even in the face of the ephemerality of life-forms.

While everybody interviewed here has obvious pet futures owing to their areas of interest, hold all of them superimposed together and you’ll get a clearer image of the secret teachings of biology…

(This article must have been inspired deeply by Dougal Dixon’s book After Man, but doesn’t mention him – perhaps a fair turn, given Dixon was accused of plagiarizing Wayne Barlowe for his follow-up, Man After Man.)

Worse Than FailureError'd: Please Reboot Faster, I Can't Wait Any Longer

"Saw this at a German gas station along the highway. The reboot screen at the pedestal just kept animating the hourglass," writes Robin G.


"Somewhere, I imagine there's a large number of children asking why their new bean bag is making them feel hot and numb," Will N. wrote.


Joel B. writes, "I came across these 'deals' on the Microsoft Canada store. Normally I'd question it, but based on my experiences with Windows, I bet, to them, the math checks out."


Kyle H. wrote, "Truly, nothing but the best quality strip_zeroes will be accepted."


"My Nan is going to be thrilled at the special discount on these masks!" Paul R. wrote.


Paul G. writes, "I know it seemed like the hours were passing more slowly, and thanks to Apple, I now know why."


[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.


MELinks July 2020

iMore has an insightful article about Apple’s transition to the ARM instruction set for new Mac desktops and laptops [1]. I’d still like to see them do something for the server side.

Umair Haque wrote an insightful article about How the American Idiot Made America Unlivable [2]. We are witnessing the destruction of a once great nation.

Chris Lamb wrote an interesting blog post about comedy shows with the laugh tracks edited out [3]. He then compares that to social media with the like count hidden, which is an interesting perspective. I’m not going to watch TV shows edited in that way (I’ve enjoyed BBT in spite of all the bad things about it) and I’m not going to try to hide like counts on social media. But it’s interesting to consider these things.

Cory Doctorow wrote an interesting Locus article suggesting that we could have full employment by a transition to renewable energy and methods for cleaning up the climate problems we are too late to prevent [4]. That seems plausible, but I think we should still get a Universal Basic Income.

The Thinking Shop has posters and decks of cards with logical fallacies and cognitive biases [5]. Every company should put some of these in meeting rooms. Also they have free PDFs to download and print your own posters. [6] is a site that lists powerful homophobic people who hurt GLBT people but then turned out to be gay. It’s presented in an amusing manner, people who hurt others deserve to be mocked.

Wired has an insightful article about the shutdown of Backpage [7]. The owners of Backpage weren’t nice people and they did some stupid things that look bad (like editing posts to remove terms like “lolita”). But they also worked well with police to find criminals. The opposition to what Backpage was doing conflates sex trafficking, child prostitution, and legal consenting adult sex work. Taking down Backpage seems to be a bad thing for the victims of sex trafficking, for consenting adult sex workers, and for society in general.

Cloudflare has an interesting blog post about short-lived certificates for SSH access [8]. Instead of having users’ SSH keys stored on servers, each user connects to an SSO server to obtain a temporary key before connecting, so revoking an account is easy.
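The appeal of short-lived credentials is that revocation becomes passive: you simply stop issuing new ones and the old ones age out. This toy Python sketch of an issue/verify cycle uses an HMAC-signed token as a stand-in for a real SSH certificate signed by a CA; it is not Cloudflare's implementation, and the key and field names are invented:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"ca-signing-key"  # stand-in for the CA's private key

def issue(user: str, ttl: int = 300) -> str:
    """Mint a credential that expires `ttl` seconds from now."""
    payload = json.dumps({"user": user, "exp": time.time() + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify(token: str):
    """Return the user if the signature checks out and the token has
    not expired; otherwise None. No revocation list is needed."""
    encoded, sig = token.split(".")
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(payload)
    return claims["user"] if claims["exp"] > time.time() else None

token = issue("alice", ttl=300)
print(verify(token))  # prints alice while the token is fresh
```

With real SSH certificates the same effect comes from the certificate's validity window: the server trusts the CA, and the CA only signs keys for a few minutes at a time.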

CryptogramFake Stories in Real News Sites

Fireeye is reporting that a hacking group called Ghostwriter broke into the content management systems of Eastern European news sites to plant fake stories.

From a Wired story:

The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and the US troops in Poland and the Baltics; they've posted fake content on everything from social media to pro-Russian news websites. In some cases, FireEye says, Ghostwriter has deployed a bolder tactic: hacking the content management systems of news websites to post their own stories. They then disseminate their literal fake news with spoofed emails, social media, and even op-eds the propagandists write on other sites that accept user-generated content.

That hacking campaign, targeting media sites from Poland to Lithuania, has spread false stories about US military aggression, NATO soldiers spreading coronavirus, NATO planning a full-on invasion of Belarus, and more.

Kevin RuddAIIA: The NT’s Global Opportunities and Challenges




Image: POIS Tom Gibson/ADF


The post AIIA: The NT’s Global Opportunities and Challenges appeared first on Kevin Rudd.

Krebs on SecurityIs Your Chip Card Secure? Much Depends on Where You Bank

Chip-based credit and debit cards are designed to make it infeasible for skimming devices or malware to clone your card when you pay for something by dipping the chip instead of swiping the stripe. But a recent series of malware attacks on U.S.-based merchants suggest thieves are exploiting weaknesses in how certain financial institutions have implemented the technology to sidestep key chip card security features and effectively create usable, counterfeit cards.

A chip-based credit card. Image: Wikipedia.

Traditional payment cards encode cardholder account data in plain text on a magnetic stripe, which can be read and recorded by skimming devices or malicious software surreptitiously installed in payment terminals. That data can then be encoded onto anything else with a magnetic stripe and used to place fraudulent transactions.

Newer, chip-based cards employ a technology known as EMV that encrypts the account data stored in the chip. The technology causes a unique encryption key — referred to as a token or “cryptogram” — to be generated each time the chip card interacts with a chip-capable payment terminal.

Virtually all chip-based cards still have much of the same data that’s stored in the chip encoded on a magnetic stripe on the back of the card. This is largely for reasons of backward compatibility since many merchants — particularly those in the United States — still have not fully implemented chip card readers. This dual functionality also allows cardholders to swipe the stripe if for some reason the card’s chip or a merchant’s EMV-enabled terminal has malfunctioned.

But there are important differences between the cardholder data stored on EMV chips versus magnetic stripes. One of those is a component in the chip known as an integrated circuit card verification value or “iCVV” for short — also known as a “dynamic CVV.”

The iCVV differs from the card verification value (CVV) stored on the physical magnetic stripe, and protects against the copying of magnetic-stripe data from the chip and the use of that data to create counterfeit magnetic stripe cards. Both the iCVV and CVV values are unrelated to the three-digit security code that is visibly printed on the back of a card, which is used mainly for e-commerce transactions or for card verification over the phone.

The appeal of the EMV approach is that even if a skimmer or malware manages to intercept the transaction information when a chip card is dipped, the data is only valid for that one transaction and should not allow thieves to conduct fraudulent payments with it going forward.

However, for EMV’s security protections to work, the back-end systems deployed by card-issuing financial institutions are supposed to check that when a chip card is dipped into a chip reader, only the iCVV is presented; and conversely, that only the CVV is presented when the card is swiped. If somehow these do not align for a given transaction type, the financial institution is supposed to decline the transaction.
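The issuer-side rule described above boils down to matching the presented verification value against the entry mode. A simplified Python sketch (invented field names; real issuer logic validates far more than this):

```python
def authorize(entry_mode: str, presented_value: str, card: dict) -> bool:
    """Approve only when the verification value matches the entry mode.

    entry_mode: "chip" for a dip, "swipe" for a magnetic-stripe read.
    card: the issuer's record of the expected iCVV and CVV.
    """
    if entry_mode == "chip":
        return presented_value == card["icvv"]  # only iCVV valid on a dip
    if entry_mode == "swipe":
        return presented_value == card["cvv"]   # only CVV valid on a swipe
    return False

card = {"icvv": "123", "cvv": "456"}
# A cloned stripe carrying chip data (the iCVV) gets swiped:
print(authorize("swipe", "123", card))  # prints False: decline it
```

Banks that skip this cross-check are exactly the ones whose chip data can be replayed onto counterfeit magnetic-stripe cards, as the rest of the article describes.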

The trouble is that not all financial institutions have properly set up their systems this way. Unsurprisingly, thieves have known about this weakness for years. In 2017, I wrote about the increasing prevalence of “shimmers,” high-tech card skimming devices made to intercept data from chip card transactions.

A close-up of a shimmer found on a Canadian ATM. Source: RCMP.

More recently, researchers at Cyber R&D Labs published a paper detailing how they tested 11 chip card implementations from 10 different banks in Europe and the U.S. The researchers found they could harvest data from four of them and create cloned magnetic stripe cards that were successfully used to place transactions.

There are now strong indications the same method detailed by Cyber R&D Labs is being used by point-of-sale (POS) malware to capture EMV transaction data that can then be resold and used to fabricate magnetic stripe copies of chip-based cards.

Earlier this month, the world’s largest payment card network Visa released a security alert regarding a recent merchant compromise in which known POS malware families were apparently modified to target EMV chip-enabled POS terminals.

“The implementation of secure acceptance technology, such as EMV® Chip, significantly reduced the usability of the payment account data by threat actors as the available data only included personal account number (PAN), integrated circuit card verification value (iCVV) and expiration date,” Visa wrote. “Thus, provided iCVV is validated properly, the risk of counterfeit fraud was minimal. Additionally, many of the merchant locations employed point-to-point encryption (P2PE) which encrypted the PAN data and further reduced the risk to the payment accounts processed as EMV® Chip.”

Visa did not name the merchant in question, but something similar seems to have happened at Key Food Stores Co-Operative Inc., a supermarket chain in the northeastern United States. Key Food initially disclosed a card breach in March 2020, but two weeks ago updated its advisory to clarify that EMV transaction data also was intercepted.

“The POS devices at the store locations involved were EMV enabled,” Key Food explained. “For EMV transactions at these locations, we believe only the card number and expiration date would have been found by the malware (but not the cardholder name or internal verification code).”

While Key Food’s statement may be technically accurate, it glosses over the reality that the stolen EMV data could still be used by fraudsters to create magnetic stripe versions of EMV cards presented at the compromised store registers in cases where the card-issuing bank hadn’t implemented EMV correctly.

Earlier today, fraud intelligence firm Gemini Advisory released a blog post with more information on recent merchant compromises — including Key Food — in which EMV transaction data was stolen and ended up for sale in underground shops that cater to card thieves.

“The payment cards stolen during this breach were offered for sale in the dark web,” Gemini explained. “Shortly after discovering this breach, several financial institutions confirmed that the cards compromised in this breach were all processed as EMV and did not rely on the magstripe as a fallback.”

Gemini says it has verified that another recent breach — at a liquor store in Georgia — also resulted in compromised EMV transaction data showing up for sale at dark web stores that sell stolen card data. As both Gemini and Visa have noted, in both cases proper iCVV verification from banks should render this intercepted EMV data useless to crooks.

Gemini determined that due to the sheer number of stores affected, it’s extremely unlikely the thieves involved in these breaches intercepted the EMV data using physically installed EMV card shimmers.

“Given the extreme impracticality of this tactic, they likely used a different technique to remotely breach POS systems to collect enough EMV data to perform EMV-Bypass Cloning,” the company wrote.

Stas Alforov, Gemini’s director of research and development, said financial institutions that aren’t performing these checks risk losing the ability to notice when those cards are used for fraud.

That’s because many banks that have issued chip-based cards may assume that as long as those cards are used for chip transactions, there is virtually no risk that the cards will be cloned and sold in the underground. Hence, when these institutions are looking for patterns in fraudulent transactions to determine which merchants might be compromised by POS malware, they may completely discount any chip-based payments and focus only on those merchants at which a customer has swiped their card.

“The card networks are catching on to the fact that there’s a lot more EMV-based breaches happening right now,” Alforov said. “The larger card issuers like Chase or Bank of America are indeed checking [for a mismatch between the iCVV and CVV], and will kick back transactions that don’t match. But that is clearly not the case with some smaller institutions.”

For better or worse, we don’t know which financial institutions have failed to properly implement the EMV standard. That’s why it always pays to keep a close eye on your monthly statements, and report any unauthorized transactions immediately. If your institution lets you receive transaction alerts via text message, this can be a near real-time way to keep an eye out for such activity.

Cryptogram: Images in Eye Reflections

In Japan, a cyberstalker located his victim by enhancing the reflections in her eye, and using that information to establish a location.

Reminds me of the image enhancement scene in Blade Runner. That was science fiction, but now image resolution is so good that we have to worry about it.

LongNow: The Digital Librarian as Essential Worker

Michelle Swanson, an Oregon-based educator and educational consultant, has written a blog post on the Internet Archive on the increased importance of digital librarians during the pandemic:

With public library buildings closed due to the global pandemic, teachers, students, and lovers of books everywhere have increasingly turned to online resources for access to information. But as anyone who has ever turned up 2.3 million (mostly unrelated) results from a Google search knows, skillfully navigating the Internet is not as easy as it seems. This is especially true when conducting serious research that requires finding and reviewing older books, journals and other sources that may be out of print or otherwise inaccessible.

Enter the digital librarian.

Michelle Swanson, “Digital Librarians – Now More Essential Than Ever” from the Internet Archive.

Kevin Kelly writes (in New Rules for the New Economy and in The Inevitable) about how an information economy flips the relative valuation of questions and answers — how search makes useless answers nearly free and useful questions even more precious than before, and knowing how to reliably produce useful questions even more precious still.

But much of our knowledge and outboard memory is still resistant to or incompatible with web search algorithms — databases spread across both analog and digital, with unindexed objects or idiosyncratic cataloging systems. Just as having map directions on your phone does not outdo a local guide, it helps to have people intimate with a library who can navigate the weird specifics. And just as scientific illustrators still exist to mostly leave out the irrelevant and make a paper clear as day (which cameras cannot do, as of 02020), a librarian is a sharp instrument that cuts straight through the extraneous info to what’s important.

Knowing what to enter in a search is one thing; knowing when it won’t come up in search and where to look amidst an analog collection is another skill entirely. Both are necessary at a time when libraries cannot receive (as many) scholars in the flesh, and what Penn State Prof Rich Doyle calls the “infoquake” online — the too-much-all-at-once-ness of it all — demands an ever-sharper reason just to stay afloat.

Learn More

  • Watch Internet Archive founder Brewster Kahle’s 02011 Long Now talk, “Universal Access to All Knowledge.”

Worse Than Failure: CodeSOD: A Variation on Nulls

Submitter “NotAThingThatHappens” stumbled across a “unique” way to check for nulls in C#.

Now, there are already a few perfectly good ways to check for nulls in C#. variable is null, for example, or use nullable types specifically. But “NotAThingThatHappens” found this approach:

if(((object)someObjThatMayBeNull) is var _)
{
    //object is null, react somehow
}

What I hate most about this is how cleverly it exploits the C# syntax to work.

Normally, the _ is a discard. It’s meant to be used for things like tuple unpacking, or in cases where you have an out parameter but don’t actually care about the output- foo(out _) just discards the output data.

But _ is also a perfectly valid identifier. So var _ creates a variable _, and the type of that variable is inferred from context- in this case, the type of someObjThatMayBeNull. This variable is scoped to the if block, so we don’t have to worry about it leaking into our namespace, but since it’s never initialized, it’s going to choose the appropriate default value for its type- and for reference types, that default is null. By casting explicitly to object, we guarantee that our type is a reference type, so this makes sure that we don’t get weird behavior on value types, like integers.

So really, this is just an awkward way of saying someObjThatMayBeNull is null.

NotAThingThatHappens adds:

The code never made it to production… but I was surprised that the compiler allowed this.
It’s stupid, but it WORKS!

It’s definitely stupid, it definitely works, I’m definitely glad it’s not in your codebase.
