Planet Russell


Planet Debian: Petter Reinholdtsen: Buster update of Norwegian Bokmål edition of Debian Administrator's Handbook almost done

Thanks to the good work of several volunteers, the updated edition of the Norwegian translation of "The Debian Administrator's Handbook" is now almost complete. After many months of proofreading, I consider it complete enough to move to the next step, and have asked for the print version to be prepared and sent off to the print-on-demand service lulu.com. It is still not too late to report any incorrect translations on the hosted Weblate service, but it will be soon. :) You can check out the Buster edition on the web until the print edition is ready.

The book will be for sale on lulu.com and various web book stores, with links available from the web site for the book linked to above. I hope a lot of readers find it useful.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Worse Than Failure: Error'd: This Could Break the Bank!

"Sure, free for the first six months is great, but what exactly does happen when I hit month seven?" Stuart L. wrote.


"In order to add an app on the App Store Connect dashboard, you need to 'Register a new bundle ID in Certificates, Identifiers & Profiles'," writes Quentin, "Open the link, you have a nice 'register undefined' and cannot type anything in the identifier input field!"


"I was taught to keep money amounts as pennies rather than fractional dollars, but I guess I'm an old-fashioned guy!" writes Paul F.
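Paul's rule of thumb is the classic one, and it's easy to demonstrate why. A minimal sketch (ours, not from the submission) of fractional-dollar drift versus exact integer cents:

```cpp
#include <cstdint>

// Summing $0.10 a million times as a double drifts, because 0.10
// has no exact binary floating-point representation.
double sum_dollars(int n) {
    double total = 0.0;
    for (int i = 0; i < n; ++i) total += 0.10;
    return total;
}

// The same sum kept in integer cents is exact.
std::int64_t sum_cents(int n) {
    std::int64_t total = 0;
    for (int i = 0; i < n; ++i) total += 10;  // ten cents per iteration
    return total;
}
```

Here sum_dollars(1000000) lands slightly off 100000.0, while sum_cents(1000000) is exactly 10000000 cents.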


Anthony C. wrote, "I was looking for headphones on Walmart.com and well, I guess they figured I'd like to look at something else for a change?"


"Build an office chair using only a spork, a napkin, and a coffee stirrer? Sounds like a job for MacGyver!"


"Translation from Swedish: 'We assume that most people who watch Just Chatting probably also like Just Chatting.' Yes, I bet it's true!" writes Bill W.


[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet Debian: Louis-Philippe Véronneau: Hire me!

I'm happy to announce I handed in my Master's Thesis last Monday. I'm not publishing the final copy just yet1, as it still needs to go through the approval committee. If everything goes well, I should have my Master of Economics diploma before Christmas!

It sure hasn't been easy, and although I regret nothing, I'm also happy to be done with university.

Looking for a job

What an odd time to be looking for a job, right? Turns out for the first time in 12 years, I don't have an employer. It's oddly freeing, but also a little scary. I'm certainly not bitter about it though and it's nice to have some time on my hands to work on various projects and read things other than academic papers. Look out for my next blog posts on using the NeTV2 as an OSHW HDMI capture card, on hacking at security tokens and much more!

I'm not looking for anything long term (I'm hoping to teach Economics again next Winter), but for the next few months, my calendar is wide open.

For the last 6 years, I worked as a Linux system administrator, mostly using a LAMP stack in conjunction with Puppet, Shell and Python. Although I'm most comfortable with Puppet, I also have decent experience with Ansible, thanks to my work in the DebConf Videoteam.

I'm not the most seasoned Debian Developer, but I have some experience packaging Python applications and libraries. Although I'm no expert at it, lately I've also been working on Clojure packages, as I'm trying to get Puppet 6 in Debian in time for the Bullseye freeze. At the rate it's going though, I doubt we're going to make it...

If your company depends on Puppet and cares about having a version in Debian 11 that is maintained (Puppet 5 is EOL in November 2020), I'm your guy!

Oh, and I guess I'm a soon-to-be Master of Economics specialising in Free and Open Source Software business models and incentives theory. Not sure I'll ever get paid putting that in application, but hey, who knows.

If any of that resonates with you, contact me and let's have a chat! I promise I don't bite :)


  1. The title of the thesis is What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model. Once the final copy is approved, I'll be sure to write a longer blog post about my findings here. 

Planet Debian: Reproducible Builds (diffoscope): diffoscope 160 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 160. This version includes the following changes:

* Check that pgpdump is actually installed before attempting to run it.
  Thanks to Gianfranco Costamagna (locutusofborg). (Closes: #969753)
* Add some documentation for the EXTERNAL_TOOLS dictionary.
* Ensure we check FALLBACK_FILE_EXTENSION_SUFFIX, otherwise we run pgpdump
  against all files that are recognised by file(1) as "data".

You can find out more by visiting the project homepage.


Planet Debian: Daniel Silverstone: Broccoli Sync Conversation


A number of days ago (I know, I'm an awful human who failed to post this for over a week), Lars, Mark, Vince and I discussed Dropbox's article about Broccoli Sync. It wasn't quite what we'd expected, but it was an interesting discussion of compression and streamed data.

Vince observed that it was interesting in that it was a way to move storage compression cost to the client edge. This makes sense because decompression (to verify the uploaded content) is cheaper than compression; and also since CPU and bandwidth are expensive, spending the client CPU to reduce bandwidth is worthwhile.

Lars talked about how even in situations where everyone has gigabit data connectivity with no limit on total transit, bandwidth/time is a concern, so it makes sense.

We liked how they determined the right compression level to use the available bandwidth (i.e. not be CPU throttled) while also gaining the most compression possible. Their diagram showing relative compressed sizes for level 1 vs. 3 vs. 5 suggests that the gain justifies putting the effort in for 5 rather than 1. It's interesting in that diagram that 'documents' don't compress well, but then again it is notable that such documents are likely DEFLATE'd zip files. Basically, if the data is already compressed then there's little hope Brotli will gain much.

I raised that it was interesting that they chose Brotli, in part, due to the availability of a pure Rust implementation of Brotli. Lars mentioned that Microsoft and others talk about how huge quantities of C code have unexpected memory safety issues, so perhaps that is related. Daniel mentioned that the document talked about Dropbox having a policy of not running unconstrained C code, which was interesting.

Vince noted that in their deployment challenges it seemed like a very poor general strategy to cope with crasher errors; but Daniel pointed out that it might be an over-simplified description, and Mark suggested that it might be sufficient until a fix can be pushed out. Vince agreed that it's plausible this is a tiered/sharded deployment process and thus a good way to smoke out problems.

Daniel found it interesting that their block storage sounds remarkably like every other content-addressable storage system, and that while they make it clear in the article that encryption, client identification etc. are elided, it looks like they might be able to deduplicate between theoretically hostile clients.

We think that the compressed-data plus type plus hash (which we assume also contains length) is an interesting and nice approach to durability and integrity validation in the protocol. And the compressed blocks can then be passed to the storage backend quickly and effectively which is nice for latency.
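Our reading of that record, purely speculative and with field names of our own invention (the article elides the real format), sketches as:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Speculative sketch of a content-addressed block record as the article
// implies: compressed payload, a type tag, and a digest over plaintext
// whose length is also recorded. Not Dropbox's actual wire format.
struct BlockRecord {
    std::string type;                   // e.g. "brotli"
    std::uint64_t plain_length;         // uncompressed length
    std::size_t digest;                 // std::hash stands in for a real cryptographic hash
    std::vector<std::uint8_t> payload;  // compressed bytes
};

BlockRecord MakeRecord(const std::string& plaintext,
                       std::vector<std::uint8_t> compressed) {
    std::size_t digest = std::hash<std::string>{}(plaintext);
    return BlockRecord{"brotli", plaintext.size(), digest, std::move(compressed)};
}

// Integrity check after decompression: length and digest must both match.
bool Verify(const BlockRecord& rec, const std::string& decompressed) {
    return rec.plain_length == decompressed.size()
        && rec.digest == std::hash<std::string>{}(decompressed);
}
```

The nice property is that the storage backend can shuttle the opaque payload around immediately, while any reader can re-derive and check the digest once it decompresses.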

Daniel raised that he thought it was fun that their rust-brotli library is still workable on Rust 1.12 which is really quite old.

We ended up on a number of tangential discussions, about Rust, about deployment strategies, and so on. While the article itself was a little thin, we certainly had a lot of good chatting around topics it raised.

We'll meet again in a month (on the 28th Sept) so perhaps we'll have a chunkier article next time. (Possibly this and/or related articles)

Worse Than Failure: CodeSOD: Put a Dent in Your Logfiles

Valencia made a few contributions to a large C++ project run by Harvey. Specifically, there were some pass-by-value uses of a large data structure, and changing those to pass-by-reference fixed a number of performance problems, especially on certain compilers.

“It’s a simple typo,” Valencia thought. “Anyone could have done that.” But they kept digging…

The original code-base was indented with spaces, but Harvey just used tabs. That was a mild annoyance, but Harvey used a lot of tabs, as his code style was “nest as many blocks as deeply as possible”. In addition to loads of magic numbers that should be enums, Harvey also had a stance that “never use an int type when you can store your number as a double”.

Then, for example, what if you have a char and you want to turn the char into a string? Do you just use the std::string() constructor that accepts a char parameter? Not if you’re Harvey!

std::string ToString(char c)
{
    std::stringstream ss;
    std::string out = "";
    ss << c;
    ss >> out;
    return out;
}
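For contrast, the standard library already covers this in one line; a minimal sketch (function name ours):

```cpp
#include <string>

// std::string's (count, char) constructor builds the string directly,
// with no stringstream round-trip.
std::string CharToString(char c) {
    return std::string(1, c);
}
```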

What if you wanted to cache some data in memory? A map would be a good place to start. How many times do you want to access a single key while updating a cache entry? How does “four times” work for you? It works for Harvey!

void WriteCache(std::string key, std::string value)
{
    Setting setting = mvCache["cache_"+key];
    if (!setting.initialized)
    {
        setting.initialized=true;
        setting.data = "";
        mvCache.insert(std::map<std::string,Cache>::value_type("cache_"+key,setting));
        mvCache["cache_"+key]=setting;
    }
    setting.data = value;
    mvCache["cache_"+key]=setting;
}
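For comparison, a sketch of the same write with a single map access; the Setting struct here is a minimal stand-in for whatever Harvey's type actually was:

```cpp
#include <map>
#include <string>

// Minimal stand-in for the Setting type implied by the snippet.
struct Setting {
    bool initialized = false;
    std::string data;
};

std::map<std::string, Setting> mvCache;

// operator[] default-constructs a missing entry, so one lookup both
// creates and updates it; no insert-then-assign dance required.
void WriteCache(const std::string& key, const std::string& value) {
    Setting& setting = mvCache["cache_" + key];
    setting.initialized = true;
    setting.data = value;
}
```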

And I don’t know exactly what they are trying to communicate with the mv prefix, but people have invented all sorts of horrible ways to abuse Hungarian notation. Fortunately, Valencia clarifies: “Harvey used an incorrect Hungarian notation prefix while they were at it.”

That’s the easy stuff. Ugly, bad code, sure, but nothing that leaves you staring, stunned into speechlessness.

Let’s say you added a lot of logging messages, and you wanted to control how many logging messages appeared. You’ve heard of “logging levels”, and that gives you an inspiration for how to solve this problem:

bool LogLess(int iMaxLevel)
{
     int verboseLevel = rand() % 1000;
     if (verboseLevel < iMaxLevel) return true;
     return false;
}

//how it's used:
if(LogLess(500))
   log.debug("I appear half of the time");

Normally, I’d point out something about how they don’t need to return true or return false when they could just return the boolean expression, but what’d be the point? They’ve created probabilistic log levels. It’s certainly one way to solve the “too many log messages” problem: just randomly throw some of them away.
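If the goal really was tunable verbosity, the conventional deterministic approach compares each message's severity against a configured threshold; a minimal sketch (names ours, not Harvey's):

```cpp
// Conventional log levels: a message is emitted if and only if its
// severity is at or below the configured verbosity. No dice involved.
enum LogLevel { LOG_ERROR = 0, LOG_WARN = 1, LOG_INFO = 2, LOG_DEBUG = 3 };

int g_verbosity = LOG_INFO;  // hypothetical global setting

bool ShouldLog(int level) {
    return level <= g_verbosity;
}
```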

Valencia gives us a happy ending:

Needless to say, this has since been rewritten… the end result builds faster, uses less memory and is several orders of magnitude faster.



Rondam Ramblings: This is what the apocalypse looks like

This is a photo of our house taken at noon today. This is not the raw image. I took this with an iPhone, whose auto-exposure made the image look much brighter than it actually is. I've adjusted the brightness and color balance to match the actual appearance as much as I can. Even so, this image doesn't do justice to the reality. For one thing, the sky is much too blue. The

Planet Debian: Reproducible Builds: Reproducible Builds in August 2020

Welcome to the August 2020 report from the Reproducible Builds project.

In our monthly reports, we summarise the things that we have been up to over the past month. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced from the original free software source code to the pre-compiled binaries we install on our systems. If you’re interested in contributing to the project, please visit our main website.




This month, Jennifer Helsby launched a new reproduciblewheels.com website to address the lack of reproducibility of Python wheels.

To quote Jennifer’s accompanying explanatory blog post:

One hiccup we’ve encountered in SecureDrop development is that not all Python wheels can be built reproducibly. We ship multiple (Python) projects in Debian packages, with Python dependencies included in those packages as wheels. In order for our Debian packages to be reproducible, we need that wheel build process to also be reproducible.

Parallel to this, transparencylog.com was also launched, a service that verifies the contents of URLs against a publicly recorded cryptographic log. It keeps an append-only log of the cryptographic digests of all URLs it has seen. (GitHub repo)

On 18th September, Bernhard M. Wiedemann will give a presentation in German, titled Wie reproducible builds Software sicherer machen (“How reproducible builds make software more secure”) at the Internet Security Digital Days 2020 conference.


Reproducible builds at DebConf20

There were a number of talks at the recent online-only DebConf20 conference on the topic of reproducible builds.

Holger gave a talk titled “Reproducing Bullseye in practice”, focusing on independently verifying that the binaries distributed from ftp.debian.org are made from their claimed sources. It also served as a general update on the status of reproducible builds within Debian. The video (145 MB) and slides are available.

There were also a number of other talks that involved Reproducible Builds too. For example, the Malayalam language mini-conference had a talk titled എനിയ്ക്കും ഡെബിയനില്‍ വരണം, ഞാന്‍ എന്തു് ചെയ്യണം? (“I want to join Debian, what should I do?”) presented by Praveen Arimbrathodiyil, the Clojure Packaging Team BoF session led by Elana Hashman, as well as Where is Salsa CI right now? that was on the topic of Salsa, the collaborative development server that Debian uses to provide the necessary tools for package maintainers, packaging teams and so on.

Jonathan Bustillos (Jathan) also gave a talk in Spanish titled Un camino verificable desde el origen hasta el binario (“A verifiable path from source to binary”). (Video, 88MB)


Development work

After many years of development work, the compiler for the Rust programming language now generates reproducible binary code. This generated some general discussion on Reddit on the topic of reproducibility in general.

Paul Spooren posted a ‘request for comments’ to OpenWrt’s openwrt-devel mailing list asking for clarification on when to raise the PKG_RELEASE identifier of a package. This is needed in order to successfully perform rebuilds in a reproducible builds context.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

Chris Lamb provided some comments and pointers on an upstream issue regarding the reproducibility of a Snap / SquashFS archive file. []

Debian

Holger Levsen identified that a large number of Debian .buildinfo build certificates have been “tainted” on the official Debian build servers, as these environments have files underneath the /usr/local/sbin directory []. He also filed a bug against debrebuild after spotting that it can fail to download packages from snapshot.debian.org [].

This month, several issues were uncovered (or assisted) due to the efforts of reproducible builds.

For instance, Debian bug #968710 was filed by Simon McVittie, which describes a problem with detached debug symbol files (required to generate a traceback) that is unlikely to have been discovered without reproducible builds. In addition, Jelmer Vernooij called attention to the fact that the new Debian Janitor tool is using the property of reproducibility (as well as diffoscope) when applying archive-wide changes to Debian:

New merge proposals also include a link to the diffoscope diff between a vanilla build and the build with changes. Unfortunately these can be a bit noisy for packages that are not reproducible yet, due to the difference in build environment between the two builds. []

56 reviews of Debian packages were added, 38 were updated and 24 were removed this month adding to our knowledge about identified issues. Specifically, Chris Lamb added and categorised the nondeterministic_version_generated_by_python_param and the lessc_nondeterministic_keys toolchain issues. [][]

Holger Levsen sponsored Lukas Puehringer’s upload of the python-securesystemslib package, which is a dependency of in-toto, a framework to secure the integrity of software supply chains. []

Lastly, Chris Lamb further refined his merge request against the debian-installer component to allow all arguments from sources.list files (such as [check-valid-until=no]) in order that we can test the reproducibility of the installer images on the Reproducible Builds project's own testing infrastructure, and sent a ping to the team that maintains that code.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of these patches, including:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can not only locate and diagnose reproducibility issues, it provides human-readable diffs of all kinds. In August, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 155, 156, 157 and 158 to Debian:

  • New features:

    • Support extracting data of PGP signed data. (#214)
    • Try files named .pgp against pgpdump(1) to determine whether they are Pretty Good Privacy (PGP) files. (#211)
    • Support multiple options for all file extension matching. []
  • Bug fixes:

    • Don’t raise an exception when we encounter XML files with <!ENTITY> declarations inside the Document Type Definition (DTD), or when a DTD or entity references an external resource. (#212)
    • pgpdump(1) can successfully parse some binary files, so check that the parsed output contains something sensible before accepting it. []
    • Temporarily drop gnumeric from the Debian build-dependencies as it has been removed from the testing distribution. (#968742)
    • Correctly use fallback_recognises to prevent matching .xsb binary XML files.
    • Correctly identify signed PGP files as file(1) returns “data”. (#211)
  • Logging improvements:

    • Emit a message when ppudump version does not match our file header. []
    • Don’t use Python’s repr(object) output in “Calling external command” messages. []
    • Include the filename in the “… not identified by any comparator” message. []
  • Codebase improvements:

    • Bump Python requirement from 3.6 to 3.7. Most distributions are either shipping with Python 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but also cumbersome to test locally. []
    • Drop some unused imports [], an unnecessary dictionary comprehension [] and some unnecessary control flow [].
    • Correct typo of “output” in a comment. []
  • Release process:

    • Move generation of debian/tests/control to an external script. []
    • Add some URLs for the site that will appear on PyPI.org. []
    • Update “author” and “author email” in setup.py for PyPI.org and similar. []
  • Testsuite improvements:

    • Update PPU tests for compatibility with Free Pascal versions 3.2.0 or greater. (#968124)
    • Mark that our identification test for .ppu files requires ppudump version 3.2.0 or higher. []
    • Add an assert_diff helper that loads and compares a fixture output. [][][][]
  • Misc:

In addition, Mattia Rizzolo documented in setup.py that diffoscope works with Python version 3.8 [] and Frazer Clews applied some Pylint suggestions [] and removed some deprecated methods [].

Website

This month, Chris Lamb updated the main Reproducible Builds website and documentation to:

  • Clarify & fix a few entries on the “who” page [][] and ensure that images do not get too large on some viewports [].
  • Clarify use of a pronoun re. Conservancy. []
  • Use “View all our monthly reports” over “View all monthly reports”. []
  • Move a “is a” suffix out of the link target on the SOURCE_DATE_EPOCH page. []

In addition, Javier Jardón added the freedesktop-sdk project [] and Kushal Das added the SecureDrop project [] to our projects page. Lastly, Michael Pöhn added internationalisation and translation support with help from Hans-Christoph Steiner [].

Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework to power tests.reproducible-builds.org. This month, Holger Levsen made the following changes:

  • System health checks:

    • Improve the explanation of how the status and scores are calculated. [][]
    • Update and condense view of detected issues. [][]
    • Query the canonical configuration file to determine whether a job is disabled instead of duplicating/hardcoding this. []
    • Detect several problems when updating the status of reporting-oriented ‘metapackage’ sets. []
    • Detect when diffoscope is not installable [] and failures in DNS resolution [].
  • Debian:

    • Update the URL to the Debian security team bug tracker’s Git repository. []
    • Reschedule the unstable and bullseye distributions often for the arm64 architecture. []
    • Schedule buster less often for armhf. [][][]
    • Force the build of certain packages in the work-in-progress package rebuilder. [][]
    • Only update the stretch and buster base build images when necessary. []
  • Other distributions:

    • For F-Droid, trigger jobs by commits, not by a timer. []
    • Disable the Archlinux HTML page generation job as it has never worked. []
    • Disable the alternative OpenWrt rebuilder jobs. []
  • Misc:

Many other changes were made too, including:

Finally, build node maintenance was performed by Holger Levsen [], Mattia Rizzolo [][] and Vagrant Cascadian [][][][].


Mailing list

On our mailing list this month, Leo Wandersleb sent a message to the list wondering how to expand his WalletScrutiny.com project (which aims to improve the security of Bitcoin wallets) from Android wallets to also monitor Linux wallets:

If you think you know how to spread the word about reproducibility in the context of Bitcoin wallets through WalletScrutiny, your contributions are highly welcome on this PR []

Julien Lepiller posted to the list linking to a blog post by Tavis Ormandy titled You don’t need reproducible builds. Morten Linderud (foxboron) responded with a clear rebuttal that Tavis was only considering the narrow use-case of proprietary vendors and closed-source software. He additionally noted that the criticism that reproducible builds cannot protect against backdoors deliberately introduced into the upstream source (“bugdoors”) is decidedly (and deliberately) outside the scope of reproducible builds to begin with.

Chris Lamb included the Reproducible Builds mailing list in a wider discussion regarding a tentative proposal to include .buildinfo files in .deb packages, adding his remarks regarding requiring a custom tool in order to determine whether generated build artifacts are ‘identical’ in a reproducible context. []

Jonathan Bustillos (Jathan) posted a quick email to the list requesting whether there was a list of To do tasks in Reproducible Builds.

Lastly, Chris Lamb responded at length to a query regarding the status of reproducible builds for Debian ISO or installation images. He noted that most of the technical work has been performed but “there are at least four issues until they can be generally advertised as such”. He pointed out that the privacy-oriented Tails operating system, which is based directly on Debian, has had reproducible builds for a number of years now. []



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

Cryptogram: Ranking National Cyber Power

Harvard Kennedy School’s Belfer Center published the “National Cyber Power Index 2020: Methodology and Analytical Considerations.”

The rankings:

  1. US
  2. China
  3. UK
  4. Russia
  5. Netherlands
  6. France
  7. Germany
  8. Canada
  9. Japan
  10. Australia

We could — and should — argue about the criteria and the methodology, but it’s good that someone is starting this conversation.

Executive Summary: The Belfer National Cyber Power Index (NCPI) measures 30 countries’ cyber capabilities in the context of seven national objectives, using 32 intent indicators and 27 capability indicators with evidence collected from publicly available data.

In contrast to existing cyber related indices, we believe there is no single measure of cyber power. Cyber Power is made up of multiple components and should be considered in the context of a country’s national objectives. We take an all-of-country approach to measuring cyber power. By considering “all-of-country” we include all aspects under the control of a government where possible. Within the NCPI we measure government strategies, capabilities for defense and offense, resource allocation, the private sector, workforce, and innovation. Our assessment is both a measurement of proven power and potential, where the final score assumes that the government of that country can wield these capabilities effectively.

The NCPI has identified seven national objectives that countries pursue using cyber means. The seven objectives are:

  1. Surveilling and Monitoring Domestic Groups;
  2. Strengthening and Enhancing National Cyber Defenses;
  3. Controlling and Manipulating the Information Environment;
  4. Foreign Intelligence Collection for National Security;
  5. Commercial Gain or Enhancing Domestic Industry Growth;
  6. Destroying or Disabling an Adversary’s Infrastructure and Capabilities; and,
  7. Defining International Cyber Norms and Technical Standards.

In contrast to the broadly held view that cyber power means destroying or disabling an adversary’s infrastructure (commonly referred to as offensive cyber operations), offense is only one of these seven objectives countries pursue using cyber means.

Cryptogram: The Third Edition of Ross Anderson’s Security Engineering

Ross Anderson’s fantastic textbook, Security Engineering, will have a third edition. The book won’t be published until December, but Ross has been making drafts of the chapters available online as he finishes them. Now that the book is completed, I expect the publisher to make him take the drafts off the Internet.

I personally find both the electronic and paper versions to be incredibly useful. Grab an electronic copy now while you still can.

Planet Debian: Gunnar Wolf: RPi 4 + 8GB, Finally, USB-functional!

So… Finally, kernel 5.8 entered the Debian Unstable repositories. This means that I got my Raspberry image from their usual location and was able to type the following, using only my old trusty USB keyboard:

So finally, the greatest and meanest Raspberry is fully supported with a pure Debian image! (Only tarnished by the nonfree raspi-firmware package.)

Oh, in case someone was still wondering: the images generated follow the stable release. Only the kernel and firmware are installed from unstable. If/when kernel 5.8 enters Backports, I will reduce the noise of adding a different suite to the sources.list.

Worse Than Failure: Web Server Installation


Once upon a time, there lived a man named Eric. Eric was a programmer working for the online development team of a company called The Company. The Company produced Media; their headquarters were located on The Continent where Eric happily resided. Life was simple. Straightforward. Uncomplicated. Until one fateful day, The Company decided to outsource their infrastructure to The Service Provider on Another Continent for a series of complicated reasons that ultimately benefited The Budget.

Part of Eric's job was to set up web servers for clients so that they could migrate their websites to The Platform. Previously, Eric would have provisioned the hardware himself. Under the new rules, however, he had to request that The Service Provider do the heavy lifting instead.

On Day 0 of our story, Eric received a server request from Isaac, a representative of The Client. On Day 1, Eric asked for the specifications for said server, which were delivered on Day 2. Day 2 being just before a long weekend, it was Day 6 before the specs were delivered to The Service Provider. The contact at The Service Provider, Thomas, asked if there was a deadline for this migration. Eric replied with the hard cutover date almost two months hence.

This, of course, would prove to be a fatal mistake. The following story is true; only the names have been changed to protect the guilty. (You might want some required listening for this ... )

Day 6

  • Thomas delivers the specifications to a coworker, Ayush, without requesting a GUI.
  • Ayush declares that the servers will be ready in a week.

Day 7

  • Eric informs The Client that the servers will be delivered by Day 16, so installations could get started by Day 21 at the latest.
  • Ayush asks if The Company wants a GUI.

Day 8

  • Eric replies no.

Day 9

  • Another representative of The Service Provider, Vijay, informs Eric that the file systems were not configured according to Eric's request.
  • Eric replies with a request to configure the file systems according to the specification.
  • Vijay replies with a request for a virtual meeting.
  • Ayush tells Vijay to configure the system according to the specification.

Day 16

  • The initial delivery date comes and goes without further word. Eric's emails are met with tumbleweeds. He informs The Client that they should be ready to install by Day 26.

Day 19

  • Ayush asks if any ports other than 22 are needed.
  • Eric asks if the servers are ready to be delivered.
  • Ayush replies that if port 22 needs to be opened, that will require approval from Eric's boss, Jack.

Day 20

  • Ayush delivers the server names to Eric as an FYI.

Day 22

  • Thomas asks Eric if there's been any progress, then asks Ayush to schedule a meeting to discuss between the three of them.

Day 23

  • Eric asks for the login credentials to the aforementioned server, as they were never provided.
  • Vijay replies with the root credentials in a plaintext email.
  • Eric logs in and asks for some network configuration changes to allow admin access from The Client's network.
  • Mehul, yet another person at The Service Provider, asks for the configuration change request to be delivered via Excel spreadsheet.
  • Eric tells The Client that Day 26 is unlikely, but they should probably be ready by end of Day 28, still well before the hard deadline of Day 60.

Day 28

  • The Client reminds Eric that they're decommissioning the old datacenter on Day 60 and would very much like to have their website moved by then.
  • Eric tells Mehul that the Excel spreadsheet requires information he doesn't have. Could he make the changes?
  • Thomas asks Mehul and Ayush if things are progressing. Mehul replies that he doesn't have the source IP (which was already sent). Thomas asks whom they're waiting for. Mehul replies and claims that Eric requested access from the public Internet.
  • Mehul escalates to Jack.
  • Thomas reminds Ayush and Mehul that if their work is pending some data, they should work toward getting that obstacle solved.

Day 29

  • Eric, reading the exchange from the evening before, begins to question his sanity as he forwards the original email back over, along with all the data they requested.

Day 30

  • Mehul replies that access has been granted.

Day 33

  • Eric discovers he can't access the machine from inside The Client's network, and requests opening access again.
  • Mehul suggests trying from the Internet, claiming that the connection is blocked by The Client's firewall.
  • Eric replies that The Client's datacenter cannot access the Internet, and that the firewall is configured properly.
  • Jack adds more explicit instructions for Mehul as to exactly how to investigate the network problem.

Day 35

  • Mehul asks Eric to try again.

Day 36

  • It still doesn't work.
  • Mehul replies with instructions to use specific private IPs. Eric responds that he is doing just that.
  • Ayush asks if the problem is fixed.
  • Eric reminds Thomas that time is running out.
  • Thomas replies that the firewall setting changes must have been stepped on by changes on The Service Provider's side, and he is escalating the issue.

Day 37

  • Mehul instructs Eric to try again.

Day 40

  • It still doesn't work.

Day 41

  • Mehul asks Eric to try again, as he has personally verified that it works from the Internet.
  • Eric reminds Mehul that it needs to work from The Client's datacenter—specifically, for the guy doing the migration at The Client.

Day 42

  • Eric confirms that the connection does indeed work from the Internet, and that The Client can now proceed with their work.
  • Mehul asks if Eric needs access through The Company network.
  • Eric replies that the connection from The Company network works fine now.

Day 47

  • Ayush requests a meeting with Eric about support handover to operations.

Day 48

  • Eric asks what support this is referring to.
  • James (The Company, person #3) replies that it's about general infrastructure support.

Day 51

  • Eric notifies Ayush and Mehul that server network configurations were incorrect, and that after fixing the configuration and rebooting the server, The Client can no longer log in to the server because the password no longer works.
  • Ayush instructs Vijay to "setup the repository ASAP." Nobody knows what repository he's talking about.
  • Vijay responds that "licenses are not updated for The Company servers." Nobody knows what licenses he is talking about.
  • Vijay sends original root credentials in a plaintext email again.

Day 54

  • Thomas reminds Ayush and Mehul that the servers need to be moved by day 60.
  • Eric reminds Thomas that the deadline was extended to the end of the month (day 75) the previous week.
  • Eric replies to Vijay that the original credentials sent no longer work.
  • Vijay asks Eric to try again.
  • Mehul asks for the details of the unreachable servers, which were mentioned in the previous email.
  • Eric sends a summary of current status (can't access from The Company's network again, server passwords not working) to Thomas, Ayush, Mehul and others.
  • Vijay replies, "Can we discuss on this."
  • Eric replies that he's always reachable by Skype or email.
  • Mehul says that access to private IPs is not under his control. "Looping John and Jared," but no such people were added to the recipient list. Mehul repeats that from The Company's network, private IPs should be used.
  • Thomas tells Eric that the issue has been escalated again on The Service Provider's side.
  • Thomas complains to Roger (The Service Provider, person #5), Theodore (The Service Provider, person #6) and Matthew (The Service Provider, person #7) that the process isn't working.

Day 55

  • Theodore asks Peter (The Service Provider, person #8), Mehul, and Vinod (The Service Provider, person #9) what is going on.
  • Peter responds that websites should be implemented using Netscaler, and asks no one in particular if they could fill an Excel template.
  • Theodore asks who should be filling out the template.
  • Eric asks Thomas if he still thinks the sites can be in production by the latest deadline, Day 75, and if he should install the server on AWS instead.
  • Thomas asks Theodore if configuring the network really takes two weeks, and tells the team to try harder.

Day 56

  • Theodore replies that configuring the network doesn't take two weeks, but getting the required information for that often does. Also that there are resourcing issues related to such configurations.
  • Thomas suggests a meeting to fill the template.
  • Thomas asks if there's any progress.

Day 57

  • Ayush replies that if The Company provides the web service name, The Service Provider can fill out the rest.
  • Eric delivers a list of site domains and required ports.
  • Thomas forwards the list to Peter.
  • Tyler (The Company, person #4) informs Eric that any AWS servers should be installed by Another Service Provider.
  • Eric explains that the idea was that he would install the server on The Company's own AWS account.
  • Paul (The Company, person #5) informs Eric that all AWS server installations are to be done by Another Service Provider, and that they'll have time to do it ... two months down the road.
  • Kane (The Company, person #6) asks for a faster solution, as they've been waiting for nearly two months already.
  • Eric sets up the server on The Company's AWS account before lunch and delivers it to The Client.

Day 58

  • Peter replies that he needs a list of fully qualified domain names instead of just the site names.
  • Eric delivers a list of current blockers to Thomas, Theodore, Ayush and Jagan (The Service Provider, person #10).
  • Ayush instructs Vijay and the security team to check network configuration.
  • Thomas reminds Theodore, Ayush and Jagan to solve the issues, and reminds them that the original deadline for this was a month ago.
  • Theodore informs everyone that the servers' network configuration wasn't compatible with the firewall's network configuration, and that Vijay and Ayush are working on it.

Day 61

  • Peter asks Thomas and Ayush if they can get the configuration completed tomorrow.
  • Thomas asks Theodore, Ayush, and Jagan if the issues are solved.

Day 62

  • Ayush tells Eric that they've made configuration changes, and asks if he can now connect.

Day 63

  • Eric replies to Ayush that he still has trouble connecting to some of the servers from The Company's network.
  • Eric delivers network configuration details to Peter.
  • Ayush tells Vijay and Jai (The Service Provider, person #11) to reset passwords on servers so Eric can log in, and asks for support from Theodore with network configurations.
  • Matthew replies that Theodore is on his way to The Company.
  • Vijay resets the password and sends it to Ayush and Jai.
  • Ayush sends the password to Eric via plaintext email.
  • Theodore asks Eric and Ayush if the problems are resolved.
  • Ayush replies that connection from The Company's network does not work, but that the root password was emailed.

Day 64

  • Tyler sends an email to everyone and cancels the migration.


Cryptogram US Space Cybersecurity Directive

The Trump Administration just published “Space Policy Directive – 5”: “Cybersecurity Principles for Space Systems.” It’s pretty general:

Principles. (a) Space systems and their supporting infrastructure, including software, should be developed and operated using risk-based, cybersecurity-informed engineering. Space systems should be developed to continuously monitor, anticipate, and adapt to mitigate evolving malicious cyber activities that could manipulate, deny, degrade, disrupt, destroy, surveil, or eavesdrop on space system operations. Space system configurations should be resourced and actively managed to achieve and maintain an effective and resilient cyber survivability posture throughout the space system lifecycle.

(b) Space system owners and operators should develop and implement cybersecurity plans for their space systems that incorporate capabilities to ensure operators or automated control center systems can retain or recover positive control of space vehicles. These plans should also ensure the ability to verify the integrity, confidentiality, and availability of critical functions and the missions, services, and data they enable and provide.

These unclassified directives are typically so general that it’s hard to tell whether they actually matter.

News article.

Krebs on SecurityMicrosoft Patch Tuesday, Sept. 2020 Edition

Microsoft today released updates to remedy nearly 130 security vulnerabilities in its Windows operating system and supported software. None of the flaws are known to be currently under active exploitation, but 23 of them could be exploited by malware or malcontents to seize complete control of Windows computers with little or no help from users.

The majority of the most dangerous or “critical” bugs deal with issues in Microsoft’s various Windows operating systems and its web browsers, Internet Explorer and Edge. September marks the seventh month in a row Microsoft has shipped fixes for more than 100 flaws in its products, and the fourth month in a row that it fixed more than 120.

Among the chief concerns for enterprises this month is CVE-2020-16875, which involves a critical flaw in the email software Microsoft Exchange Server 2016 and 2019. An attacker could leverage the Exchange bug to run code of his choosing just by sending a booby-trapped email to a vulnerable Exchange server.

“That doesn’t quite make it wormable, but it’s about the worst-case scenario for Exchange servers,” said Dustin Childs, of Trend Micro’s Zero Day Initiative. “We have seen the previously patched Exchange bug CVE-2020-0688 used in the wild, and that requires authentication. We’ll likely see this one in the wild soon. This should be your top priority.”

Also not great for companies to have around is CVE-2020-1210, which is a remote code execution flaw in supported versions of Microsoft Sharepoint document management software that bad guys could attack by uploading a file to a vulnerable Sharepoint site. Security firm Tenable notes that this bug is reminiscent of CVE-2019-0604, another Sharepoint problem that’s been exploited for cybercriminal gains since April 2019.

Microsoft fixed at least five other serious bugs in Sharepoint versions 2010 through 2019 that also could be used to compromise systems running this software. And because ransomware purveyors have a history of seizing upon Sharepoint flaws to wreak havoc inside enterprises, companies should definitely prioritize deployment of these fixes, says Alan Liska, senior security architect at Recorded Future.

Todd Schell at Ivanti reminds us that Patch Tuesday isn’t just about Windows updates: Google has shipped a critical update for its Chrome browser that resolves at least five security flaws that are rated high severity. If you use Chrome and notice an icon featuring a small upward-facing arrow inside of a circle to the right of the address bar, it’s time to update. Completely closing out Chrome and restarting it should apply the pending updates.

Once again, there are no security updates available today for Adobe’s Flash Player, although the company did ship a non-security software update for the browser plugin. The last time Flash got a security update was June 2020, which may suggest researchers and/or attackers have stopped looking for flaws in it. Adobe says it will retire the plugin at the end of this year, and Microsoft has said it plans to completely remove the program from all Microsoft browsers via Windows Update by then.

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for Windows updates to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files.

So do yourself a favor and back up before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Cryptogram More on NIST’s Post-Quantum Cryptography

Back in July, NIST selected third-round algorithms for its post-quantum cryptography standard.

Recently, Daniel Apon of NIST gave a talk detailing the selection criteria. Interesting stuff.


Cory DoctorowMy first-ever Kickstarter: the audiobook for Attack Surface, the third Little Brother book

I have a favor to ask of you. I don’t often ask readers for stuff, but this is maybe the most important ask of my career. It’s a Kickstarter – I know, ‘another crowdfunder?’ – but it’s:

a) Really cool;

b) Potentially transformative for publishing; and

c) Anti-monopolistic.

Here’s the tldr: Attack Surface – AKA Little Brother 3 – is coming out in 5 weeks. I retained audio rights and produced an amazing edition that Audible refuses to carry. You can pre-order the audiobook, ebook (and previous volumes), DRM- and EULA-free.

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

That’s the summary, but the details matter. First: the book itself. ATTACK SURFACE is a standalone Little Brother book about Masha, the young woman from the start and end of the other two books; unlike Marcus, who fights surveillance tech, Masha builds it.

Attack Surface is the story of how Masha has a long-overdue moral reckoning with the way that her work has hurt people, something she finally grapples with when she comes home to San Francisco.

Masha learns her childhood best friend is leading a BLM-style uprising – and she’s being targeted by the same cyberweapons that Masha built to hunt Iraqi insurgents and post-Soviet democracy movements.

I wrote Little Brother in 2006, it came out in 2008, and people tell me it’s “prescient” because the digital human rights issues it grapples with – high-tech authoritarianism and high-tech resistance – are so present in our current world.

But it’s not so much prescient as observant. I wrote Little Brother during the Bush administration’s vicious, relentless, tech-driven war on human rights. Little Brother was a bet that these would not get better on their own.

And it was a bet that tales of seizing the means of computation would inspire people to take up digital arms of their own. It worked. Hundreds of cryptographers, security experts, cyberlawyers, etc have told me that Little Brother started them on their paths.

ATTACK SURFACE – a technothriller about racial injustice, police brutality, high-tech turnkey totalitarianism, mass protests and mass surveillance – was written between May 2016 and Nov 2018, before the current uprisings and the tech worker walkouts.

https://twitter.com/search?q=%20(%23dailywords)%20(from%3Adoctorow)%20%22crypto%20wars%22&src=typed_query&f=live

But just as with Little Brother, the seeds of the current situation were all around us in 2016, and if Little Brother inspired a cohort of digital activists, I hope Attack Surface will give a much-needed push to a group of techies (currently) on the wrong side of history.

As I learned from Little Brother, there is something powerful about technologically rigorous thrillers about struggles for justice – stories that marry excitement, praxis and ethics. Of all my career achievements, the people I’ve reached this way matter the most.

Speaking of careers and ethics. As you probably know, I hate DRM with the heat of 10000 suns: it is a security/privacy nightmare, a monopolist’s best friend, and a gross insult to human rights. As you may also know, Audible will not carry any audiobooks unless they have DRM.

Audible is Amazon’s audiobook division, a monopolist with a total stranglehold on the audiobook market. Audiobooks currently account for almost as much revenue as hardcovers, and if you don’t sell on Audible, you sacrifice about 95% of that income.

That’s a decision I’ve made, and it means that publishers are no longer willing to pay for my audiobook rights (who can blame them?). According to my agent, living my principles this way has cost me enough to have paid off my mortgage and maybe funded my retirement.

I’ve tried a lot of tactics to get around Audible; selling through the indies (libro.fm, downpour.com, etc), through Google Play, and through my own shop (craphound.com/shop).

I appreciate the support there but it’s a tiny fraction of what I’m giving up – both in terms of dollars and reach – by refusing to lock my books (and my readers) (that’s you) to Amazon’s platform for all eternity with Audible DRM.

Which brings me to this audiobook.

Look, this is a great audiobook. I hired Amber Benson (a brilliant writer and actor who played Tara on Buffy), Skyboat Media and director Cassandra de Cuir, and Wryneck Studios, and we produced a 15h long, unabridged masterpiece.

It’s done. It’s wild. I can’t stop listening to it. It drops on Oct 13, with the print/ebook edition.

It’ll be on sale in all audiobook stores (except Audible) on the 13th, for $24.95.

But! You can get it for a mere $20 via my first Kickstarter.

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

What’s more, you can pre-order the ebook – and also buy the previous ebooks and audiobooks (read by Wil Wheaton and Kirby Heyborne) – all DRM free, all free of license “agreements.”

The deal is: “You bought it, you own it, don’t violate copyright law and we’re good.”

And here’s the groundbreaking part. For this Kickstarter, I’m the retailer. If you pre-order the ebook from my KS, I get the 30% that would otherwise go to Jeff Bezos – and I get the 25% that is the standard ebook royalty.

This is a first-of-its-kind experiment in letting authors, agents, readers and a major publisher deal directly with one another in a transaction that completely sidesteps the monopolists who have profited so handsomely during this crisis.

Which is where you come in: if you help me pre-sell a ton of ebooks and audiobooks through this crowdfunder, it will show publishing that readers are willing to buy their ebooks and audiobooks without enriching a monopolist, even if it means an extra click or two.

So, to recap:

Attack Surface is the third Little Brother book

It aims to radicalize a generation of tech workers while entertaining its audience as a cracking, technologically rigorous thriller

The audiobook is amazing, read by the fantastic Amber Benson

If you pre-order through the Kickstarter:

You get a cheaper price than you’ll get anywhere else

You get a DRM- and EULA-free purchase

You’ll fight monopolies and support authorship

If you’ve ever enjoyed my work and wondered how you could pay me back: this is it. This is the thing. Do this, and you will help me artistically, professionally, politically, and (ofc) financially.

Thank you!

https://www.kickstarter.com/projects/doctorow/attack-surface-audiobook-for-the-third-little-brother-book

PS: Tell your friends!

Cory DoctorowAttack Surface Kickstarter Promo Excerpt!

This week’s podcast is a generous excerpt – 3 hours! – of the audiobook for Attack Surface, the third Little Brother book, which is available for pre-order today on my very first Kickstarter.

This Kickstarter is one of the most important moments in my professional career, an experiment to see if I can viably publish audiobooks without caving in to Amazon’s monopolistic requirement that all Audible books be sold with DRM that locks them to Amazon’s corporate platform…forever. If you’ve ever wanted to thank me for this podcast or my other work, there has never been a better way than to order the audiobook (or ebook) (or both!).

Attack Surface is a standalone novel, meaning you can enjoy it without reading Little Brother or its sequel, Homeland. Please give this extended preview a listen and, if you enjoy it, back the Kickstarter and (this is very important): TELL YOUR FRIENDS.

Thank you, sincerely.

MP3

Planet DebianGunnar Wolf: Welcome to the family

Need I say more? OK, I will…

Still wanting some more details? Well…

I have had many cats through my life. When I was about eight years old, my parents tried to have a dog… but the experiment didn’t work, and besides those few months, I never had one.

But as my aging cats spent the final months of their very long lives, it was clear to us that, after them, we would be adopting a dog.

Last Saturday was the big day. We had seen some photos of the mother and the nine (!) pups. My children decided on her name almost right away; they were all brownish, so the name would be corteza (tree bark). They didn’t know, of course, that dogs also have a bark! 😉

Anyway, welcome little one!

Kevin RuddSMH: Scott Morrison is Yearning for a Donald Trump victory

Published in The Sydney Morning Herald on 08 September 2020

The PM will be praying for a Republican win in the US to back up his inaction on climate and the Paris Agreement.

A year out from Barack Obama’s election in 2008, John Howard made a stunning admission that he thought Americans should be praying for a Republican victory. Ideologically this was unremarkable. But Howard said so publicly because he knew just how uncomfortable an Obama victory would be for him, given his refusal to withdraw our troops from Iraq.

Fast forward more than a decade, and Scott Morrison – even in the era of Donald Trump – will also be yearning desperately for a Republican victory come November. But this time it is the conservative recalcitrance on a very different issue that risks Australia being isolated on the world stage: climate change.

And as the next summer approaches, Australians will be reminded afresh of how climate change, and its impact on our country and economy, has not gone away.

Former vice-president Joe Biden has put at the centre of his campaign a historic plan to fight climate change both at home and abroad. On his first day in office, he has promised to return the US to the Paris Agreement. And he recently unveiled an unprecedented $2 trillion green investment plan, including the complete decarbonisation of the domestic electricity system by 2035.

By contrast, Morrison remains hell-bent on Australia doing its best to disrupt global momentum to tackle the climate crisis and burying our head in the sand when it comes to embracing the new economic opportunities that come with effective climate change action.

As a result, if Biden is elected this November, we will be on track for a collision course with our American ally in a number of areas.

First, Morrison remains recklessly determined to carry over so-called “credits” from the overachievement of our 2020 Kyoto target to help meet Australia’s already lacklustre 2030 target under the new Paris regime.

No other government in the world is digging their heels in like this. None. It is nothing more than an accounting trick to allow Australia to do less. Perhaps the greatest irony is that this “overachievement” was also in large part because of the mitigation actions of our government.

That aside, these carbon credits also do nothing for the atmosphere. At worst, using them beyond 2020 could be considered illegal and only opens the back door for other countries to also do less by following Morrison’s lead.

This will come to a head at the next UN climate talks in Glasgow next year. While Australia has thus far been able to dig in against objections by most of the rest of the world, a Biden victory would only strengthen the hand of the UK hosts to simply ride over the top of any further Australian intransigence. Morrison would be foolhardy to believe that Boris Johnson’s government will burn its political capital at home and abroad to defend the indefensible Australian position.

Second, unlike 114 countries around the world, Morrison remains hell-bent on ignoring the central promise of Paris: that all governments increase their 2030 targets by the time they get to Glasgow. That’s because even if all those commitments were fully implemented, it would only give the planet one-third of what is necessary to keep average temperature increases within 1.5 degrees by 2100, as the Paris Agreement requires. This is why governments agreed to increase their ambition every five years as technologies improved, costs lowered and political momentum built.

In 2014, the Liberal government explained our existing Paris target on the basis that it was the same as what the Americans were doing. In reality, the Obama administration planned to achieve the same cut of 26 to 28 per cent on 2005 emissions by 2025 – not 2030 as we pledged and sought to disguise.

So based on the logic that what America does is this government’s benchmark for its global climate change commitments, if the US is prepared to increase its Paris target (as it will under Biden), so too should we. Biden himself has not just committed the US to the goal of net zero emissions by 2050, but has undertaken to embed it in legislation as countries such as Britain and New Zealand have done, and rally others to do the same.

Unsurprisingly, despite the decisions of 121 countries around the world, Morrison also refuses to even identify a timeline for achieving the Paris Agreement’s long-term goal to reach net zero emissions. As the science tells us, this needs to be by 2050 to have any shot of protecting the world’s most vulnerable populations – including in the Pacific – and saving Australia from a rolling apocalypse of weather-related disasters that will wreak havoc on our economy.

For our part, the government insists that it won’t “set a target without a plan”. But governments exist to do the hard work. And politically, it flies in the face of broad domestic support for a net zero by 2050 goal, including from the peak business, industry and union groups, the top bodies for energy and agriculture (two sectors that together account for almost half of our emissions), as well as our national airline, two largest mining companies, every state and territory government, and even a majority of conservative voters.

As Tuvalu’s former prime minister Enele Sopoaga reminded us recently, the fact that Morrison himself looked Pacific island leaders in the eye last year and promised to develop such a long-term plan – a promise he reiterated at the G20 – also shows we risk being a country that does not do what we say. For those in the Pacific, this just rubs salt into the wound of Morrison’s decision to blindly follow Trump’s lead in halting payments to the Green Climate Fund (something Biden would also reverse), requiring them to navigate a bureaucratic maze of individual aid programs as a result.

Finally, Biden has undertaken to also align trade and climate policy by imposing carbon tariffs against those countries that fail to do their fair share in global greenhouse gas reductions. The EU is in the process of embracing the same approach. So if Morrison doesn’t act, he’s going to put our entire export sector at risk of punitive tariffs because the Liberals have so consistently failed to take climate change seriously.

Under Trump, Morrison has been able to get one giant leave pass for doing nothing on climate. But under Biden, he’ll be seen as nothing more than the climate change free-loader that he is. As he will by the rest of the world. And our economy will be punished as a result.

The post SMH: Scott Morrison is Yearning for a Donald Trump victory appeared first on Kevin Rudd.

Planet DebianSven Hoexter: Cloudflare Bot Management, MITM Boxes and TLS 1.3

This is just a "warn your brothers" post for those who use Cloudflare Bot Management, and have customers which use MITM boxes to break up TLS 1.3 connections.

Be aware that right now some heuristic rules in the Cloudflare Bot Management score TLS 1.3 requests made by some MITM boxes with 1 - which equals "we're 99.99% sure that this is non-human browser traffic". While technically correct - the TLS connection hitting the Cloudflare Edge node is not established by a browser - that does not help your customer if you block those requests. If you do something like blocking requests with a BM score of 1 at the Cloudflare Edge, you might want to reconsider that at the moment and send a captcha challenge instead. While that is not a lot nicer, and still pisses people off, you might find a balance there between protecting yourself and still keeping some customers.
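For illustration, the change amounts to swapping the action on your lowest-score rule. A rough sketch in the style of Cloudflare's firewall rules expression language (the `cf.bot_management.score` field exists with Bot Management enabled; the exact action names depend on your plan and dashboard version):

```
# Too strict: drops all traffic coming through TLS-1.3-breaking proxies
cf.bot_management.score eq 1    => action: block

# Friendlier: humans behind the MITM box can still solve the captcha
cf.bot_management.score eq 1    => action: challenge
```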

I have confirmation of this happening with Cisco WSA, but it's likely also the case with other vendors. Breaking up TLS 1.2 seems to be stealthy enough in those appliances that it's not detected, so this issue creeps in as more enterprises roll out modern browsers.

Feel free to insert your own rant here about how bad the client-server internet of 2020 is, how bad it is that some of us rely on Cloudflare, and how they have accumulated way too big a market share. But the world is as it is. :(

LongNowTime-Binding and The Music History Survey

Musicologist Phil Ford, co-host of the Weird Studies podcast, makes an eloquent argument for the preservation of the “Chants to Minimalism” Western Music History survey—the standard academic curriculum for musicology students, akin to the “fish, frogs, lizards, birds” evolutionary spiral taught in bio classes—in an age of exponential change and an increased emphasis on “relevance” over the remembrance of canonical works:

Perhaps paradoxically, the rate of cultural change increases in proportional measure to the increase in cultural memory. Writing and its successor media of prosthetic memory enact a contradiction: the easy preservation of cultural memory enables us to break with the past, to unbind time. At its furthest extremes, this is manifested in the familiar and dismal spectacle of fascist and communist regimes, impelled by intellectual notions permitted by the intensified time-binding of literacy, imagining utopias that will ‘wipe the slate clean’ and trying to force people to live in a world entirely divorced from the bound time of social/cultural tradition.

See, for instance, Mao Zedong’s crusade to destroy Tibetan Buddhism by the erasure of context that held the Dalai Lama’s social role in place for fourteen generations. How is the culture to find a fifteenth Dalai Lama if no child can identify the relics of a prior Dalai Lama? Ironically this speaks to larger questions of the agency of landscapes and materials, and how it isn’t just our records as we understand them that help scaffold our identity; but that we are in some sense colonial organisms inextricable from, made by, our environments — whether built or wild. As recounted in countless change-of-station stories, monarchs and other leaders, like whirlpools or sea anemones, dissolve when pulled out of the currents that support them.

That said, the current isn’t just stability but change. Both novelty and structure are required to bind time. By pushing to extremes, modernity self-undermines, imperils its own basis for existence; and those cultures that slam on the brakes and dig into conservative tradition risk self-suffocation, or being ripped asunder by the friction of collision with the moving edge of history:

Modernity is (among other things) the condition in which time-binding is threatened by its own exponential expansion, and yet where it’s not clear exactly how we are to slow its growth.  Very modern people are reflexively opposed to anything that would slow down the acceleration: for them, the essence of the human is change. Reactionaries are reflexively opposed to anything that will speed up acceleration: for them, the essence of the human is continuity. Both are right!  Each side, given the opportunity to realize its imagined utopia of change or continuity, would make a world no sensible person would be caught dead in.

Ultimately, therefore, a conservative-yet-innovative balance must be found in the embrace of both new information technologies and their use for preservation of historic repertoires. When on a rocket into space, a look back at the whole Earth is essential to remember where we come from:

The best argument for keeping Sederunt in the classroom is that it is one of the nearly-infinite forms of music that the human mind has contrived, and the memory of those forms — time-binding — is crucial not only to the craft of musicians but to our continued sense of what it is to be a human being.

This isn’t just future-shocked reactionary work but a necessary integrative practice that enables us to reach beyond:

To tell the story of who we are is to engage in the scholar’s highest mission. It is the gift that shamans give their tribe.

Worse Than FailureCodeSOD: Sleep on It

If you're fetching data from a remote source, "retry until a timeout is hit" is a pretty standard pattern. And with that in mind, this C++ code from Auburus doesn't look like much of a WTF.

bool receiveData(uint8_t** data, std::chrono::milliseconds timeToWait) {
    start = now();
    while ((now() - start) < timeToWait) {
        if (/* successfully receive data */) {
            return true;
        }
        std::this_thread::sleep_for(100ms);
    }
    return false;
}

Track the start time. While the difference between the current time and the start is less than our timeout, try and get the data. If you don't, sleep for 100ms, then retry.

This all seems pretty reasonable, at first glance. We could come up with better ways, certainly, but that code isn't quite a WTF.

This code is:

// The ONLY call to that function
receiveData(&dataPtr, 100ms);

By calling this with a 100ms timeout, and because we hard-coded in a 100ms sleep, we've guaranteed that we will never retry. That may or may not be intentional, and that's what really bugs me about this code. Maybe they meant to do that (because they originally retried, and found it caused other bugs?). Maybe they didn't. But they didn't document it, either in the code or as a commit comment, so we'll never know.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

LongNowStudy Group for Progress Launches with Discount for Long Now Members

Long Now Member Jason Crawford, founder of The Roots of Progress, is starting up a weekly learning group on progress with a steep discount for Long Now Members:

The Study Group for Progress is a weekly discussion + Q&A on the history, economics and philosophy of progress. Long Now members can get 50% off registration using the link below.


Each week will feature a special guest for Q&A. Confirmed speakers so far include top economists and historians such as Robert J. Gordon (Northwestern, The Rise & Fall of American Growth), Margaret Jacob (UCLA), Richard Nelson (Columbia), Patrick Collison, and Anton Howes. Readings from each author will be given out ahead of time. Participants will also receive a set of readings originally created for the online learning program Progress Studies for Young Scholars: a summary of the history of technology, including advances in materials and manufacturing, agriculture, energy, transportation, communication, and disease.


The group will meet weekly on Sundays at 4:00–6:30pm Pacific, from September 13 through December 13 (recordings available privately afterwards). See the full announcement here and register for 50% off with this link.

Planet DebianArturo Borrero González: Debconf 2020 online, summary

Debconf2020 logo

Debconf2020 took place when I was on personal vacations time. But anyway I’m lucky enough that my company, the Wikimedia Foundation, paid the conference registration fee for me and allowed me to take the time (after my vacations) to watch recordings from the conference.

This is my first time attending (or watching) a full-online conference, and I was curious to see first hand how it would develop. I was greatly surprised to see it worked pretty nicely, so kudos to the organization, video team, volunteers, etc!

What follows is my summary of the conference, from the different sessions and talks I watched (again, none of them live but recordings).

The first thing I saw was the Welcome to Debconf 2020 opening session. It is obvious the video was made with lots of love, I found it entertaining and useful. I love it :-)

Then I watched the BoF Can Free Software improve social equality. It was introduced and moderated by Hong Phuc Dang. Several participants, about 10 people, shared their visions on the interaction between open source projects and communities. I'm pretty much aware of the interesting social advancement that FLOSS can enable in communities, but sometimes it is not so easy; it may also present challenges and barriers. The BoF was joined by many people from the Asia Pacific region, and for me, it has been very interesting to take a step back from the usual western vision of this topic. Anyway, about the session itself, I have the feeling the participants may have spent too much time on presentations, sharing their local stories (which are interesting, don't get me wrong), perhaps leaving little room for actual proposal discussions or the like.

Next I watched the Bits from the DPL talk. In the session, Jonathan Carter goes over several topics affecting the project, both internally and externally. It was interesting to know more about the status of the project from a high level perspective, as an organization, including subjects such as money, common project problems, future issues we are anticipating, the social aspect of the project, etc.

The Lightning Talks session grabbed my attention. It is usually very funny to watch and not as dense as other talks. I’m glad I watched this as it includes some interesting talks, ranging from HAM radios (I love them!), to personal projects to help in certain tasks, and even some general reflections about life.

Just when I’m writing this very sentence, the video for the Come and meet your Debian Publicity team! talk has been uploaded. This team does an incredible work in keeping project information flowing, and social networks up-to-date and alive. Mind that the work of this team is mostly non-engineering, but still, is a vital part of the project. The folks in session explain what the team does, and they also discuss how new people can contribute, the different challenges related to language barriers, etc.

I have to admit I also started watching a couple other sessions that turned out not to be interesting to me (and therefore I didn't finish the video). Also, I tried to watch a couple more sessions that haven't published their video recordings just yet, for example the When We Virtualize the Whole Internet talk by Sam Hartman. I will check again in a couple of days.

It is a real pleasure the video recordings from the conference are made available online. One can join the conference anytime (like I’m doing!) and watch the sessions at any pace at any time. The video archive is big, I won’t be able to go over all of it. I won’t lie, I still have some pending videos to watch from last year Debconf2019 :-)

Worse Than FailureCodeSOD: Classic WTF: Covering All Cases… And Then Some

It's Labor Day in the US, where we celebrate the labor movement and people who, y'know, do actual work. So let's flip back to an old story, which does a lot of extra work. Original -- Remy

Ben Murphy found a developer who liked to cover all of his bases ... then cover the dug-out ... then the bench. If you think this method to convert input (from 33 to 0.33) is a bit superfluous, you should see data validation.

Static Function ConvertPercent(v_value As Double)
  If v_value > 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value = 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value < 1 Then
    ConvertPercent = v_value / 100
  ElseIf v_value = -1 Then
    ConvertPercent = v_value / 100
  Else 
    ConvertPercent = v_value
  End If 
End Function


The original article- from 2004!- featured Alex asking for a logo. Instead, let me remind you to submit your WTF. Our stories come from our readers. If nothing else, it's a great chance to anonymously vent about work.
[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Planet DebianEnrico Zini: Learning resources links

Cognitive bias cheat sheet has another elegant infographic summarising cognitive biases. On this subject, you might want to also check out 15 Insane Things That Correlate With Each Other.

Get started | Learning Music (Beta) has a nice interactive introduction to music making.

If you live in a block of flats and decide to learn music making, please use headphones when experimenting. Our neighbour, sadly, didn't.

You can also learn photography with Photography for Beginners (The Ultimate Guide in 2020) and somewhat related, Understanding Aspect Ratios: A Comprehensive Guide

Planet DebianJonathan Carter: DebConf 20 Online

This week, last week, last month, I attended DebConf 20 Online. It was the first DebConf to be held entirely online, but it's the 7th DebConf I've attended from home.

My first one was DebConf7. Initially I mostly started watching the videos because I wanted to learn more about packaging. I had just figured out how to create binary packages by hand, and have read through the new maintainers guide, but a lot of it was still a mystery. By the end of DebConf7 my grasp of source packages was still a bit thin, but other than that, I ended up learning a lot more about Debian during DebConf7 than I had hoped for, and over the years, the quality of online participation for each DebConf has varied a lot.

I think having a completely online DebConf, where everyone was remote, helped raise awareness about how important it is to make the remote experience work well, and I hope that it will make people who run sessions at physical events in the future consider those who are following remotely a bit more.

During some BoF sessions, it was clear that some teams haven't talked to each other face to face in a while, and I heard at least 3 teams say "This was nice, we should do more regular video calls!". Our usual communication methods of e-mail lists and IRC serve us quite well, for the most part, but sometimes having an actual conversation with the whole team present at the same time can do wonders for dealing with many kinds of issues that are just always hard to deal with in text-based mediums.

There were three main languages used in this DebConf. We’ve had more than one language at a DebConf before, but as far as I know it’s the first time that we had multiple talks over 3 languages (English, Malayalam and Spanish).

It was also impressive how the DebConf team managed to send out DebConf t-shirts all around the world and in time before the conference! To my knowledge only 2 people didn’t get theirs in time due to customs.

I already posted about the new loop that we worked on for this DebConf. It was an unintended effect that we ended up having lots of shout-outs, which gave this online DebConf a much warmer, more personal feel than it would otherwise have had. I'm definitely planning to keep on improving on that for the future, for online and in-person events. There was also some other new stuff from the video team during this DebConf; we'll try to co-ordinate a blog post about that once the dust settles.

Thanks to everyone for making this DebConf special, even though it was virtual!

Planet DebianThorsten Alteholz: My Debian Activities in August 2020

FTP master

This month I accepted 159 packages and rejected 16. The overall number of packages that got accepted was 172.

Debian LTS

This was my seventy-fourth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 21.75h. During that time I did LTS uploads of:

  • [DLA 2336-1] firejail security update for two CVEs
  • [DLA 2337-1] python2.7 security update for nine CVEs
  • [DLA 2353-1] bacula security update for one CVE
  • [DLA 2354-1] ndpi security update for one CVE
  • [DLA 2355-1] bind9 security update for two CVEs
  • [DLA 2359-1] xorg-server security update for five CVEs

I also started to work on curl but did not upload a fixed version yet. As usual, testing the package takes up some time.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-sixth ELTS month.

During my allocated time I uploaded:

  • ELA-265-1 for python2.7
  • ELA-270-1 for bind9
  • ELA-272-1 for xorg-server

Like in LTS, I also started to work on curl and encountered the same problems as in LTS above.

Last but not least I did some days of frontdesk duties.

Other stuff

This month I found again some time for other Debian work and uploaded packages to fix bugs, mainly around gcc10:

I also uploaded new upstream versions of:

All packages called *osmo* are developed by the Osmocom project, which is about Open Source MObile COMmunication. They are really doing a great job, and I apologize that my uploads of new versions mostly lag far behind their development.

Some of the uploads are related to new packages:

Planet DebianDirk Eddelbuettel: inline 0.3.16: Now with system2()

A new minor release of the inline package just arrived on CRAN. inline facilitates writing code in-line in simple string expressions or short files. The package is mature and stable, and can be considered to be in maintenance mode: Rcpp used it extensively in the very early days before Rcpp Attributes provided an even better alternative. Several other packages still rely on inline.

One of these packages is rstan, and Ben Goodrich updated our use of system() to system2(), allowing for better error diagnostics. We also did a bit of standard maintenance to Travis CI and the README.md file.

See below for a detailed list of changes extracted from the NEWS file.

Changes in inline version 0.3.16 (2020-09-06)

  • Maintenance updates to README.md standardizing badges (Dirk).

  • Maintenance update to Travis CI setup (Dirk).

  • Switch to using system2() for better error diagnostics (Ben Goodrich in #12).

Courtesy of CRANberries, there is a comparison to the previous release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMolly de Blanc: NYU VPN

I needed to setup a VPN in order to access my readings for class. The instructions for Linux are located: https://nyu.service-now.com/sp?id=kb_article_view&sysparm_article=KB0014932

After you download the VPN client of your choice (they recommend Cisco AnyConnect), connect to: vpn.nyu.edu.

It will ask for two passwords: your NYU username and password and a multi-factor authentication (MFA) code from Duo. Use the Duo. See below for stuff on Duo.

Hit connect and voilà, you can connect to the VPN.

Duo Authentication Setup

Go to: https://start.nyu.edu and follow the instructions for MFA. They’ll tell you that a smart phone is the most secure method of setting up. I am skeptical.

Install the Duo Authentication App on your phone, enter your phone number into the NYU web page (off of ) and it will send a thing to your phone to connect it.

Commentary

Okay, I have to complain at least a little bit about this. I had to guess what the VPN address was because the instructions are for NYU Shanghai. I also had to install the VPN client using the terminal. These sorts of things make it harder for people to use Linux. Boo.

Planet DebianBen Armstrong: Dronefly relicensed under copyleft licenses

To ensure Dronefly always remains free, the Dronefly project has been relicensed under two copyleft licenses. Read the license change and learn more about copyleft at these links.

I was prompted to make this change after a recent incident in the Red DiscordBot development community that made me reconsider my prior position that the liberal MIT license was best for our project. While on the face of it, making your license as liberal as possible might seem like the most generous and hassle-free way to license any project, I was shocked into the realization that its liberality was also its fatal flaw: all is well and good so long as everyone is being cooperative, but it does not afford any protection to developers or users should things suddenly go sideways in how a project is run. A copyleft license is the best way to avoid such issues.

In this incident – a sad story of conflict between developers I respect on both sides of the rift, and owe a debt to for what they’ve taught me – three cogs we had come to depend on suddenly stopped being viable for us to use due to changes to the license & the code. Effectively, those cogs became unsupported and unsupportable. To avoid any such future disaster with the Dronefly project, I started shopping for a new license that would protect developers and users alike from similarly losing support, or losing control of their contributions. I owe thanks to Dale Floer, a team member who early on advised me the AGPL might be a better fit, and later was helpful in selecting the doc license and encouraging me to follow through. We ran the new licenses by each contributor and arrived at this consensus: the AGPL is best suited for our server-based code, and CC-BY-SA is best suited for our documentation. The relicensing was made official this morning.

On Discord platform alternatives

You might well question what I, a Debian developer steeped in free software culture and otherwise in agreement with its principles, am doing encouraging a community to grow on the proprietary Discord platform! I have no satisfying answer to that. I explained when I introduced my project here some of the backstory, but that’s more of an account of its beginnings than justification for it to continue on this platform. Honestly, all I can offer is a rather dissatisfying it seemed like the right thing to do at the time.

Time will tell whether we could successfully move off of it to a freedom-respecting and privacy-respecting alternative chat platform that is both socially and technically viable to migrate to. That platform would ideally:

  • not be under the control of a single, central commercial entity running proprietary code, so their privacy is safeguarded, and they are protected from disaster, should it become unattractive to remain on the platform;
  • have a vibrant and supportive open source third party extension development community;
  • support our image-rich content and effortless sharing of URLs with previews automatically provided from the page’s content (e.g. via OpenGraph tags);
  • be effortless to join regardless of what platform/device each user uses;
  • keep a history of messages so that future members joining the community can benefit from past conversations, and existing members can catch up on conversations they missed;
  • but above all else: be acceptable and compelling to the existing community to move over onto it.

I’m intrigued by Matrix and wonder if it provides some or all of the above in its current form. Are you a developer writing bots for this platform? If so, I especially want to hear from you in the comments about your experience. Or in any case, if you’ve been there before – if you’ve faced the same issue with your community and have a success story to share, I would love to hear from you.

Planet DebianJunichi Uekawa: Trying to boot Debian on Chromebook Kevin.

Trying to boot Debian on Chromebook Kevin. But I haven't yet found how to get the USB serial console device to work, and display is black. So I can't see anything.

Planet DebianRuss Allbery: September haul

So many good books, so little reading time.

Jairus Banaji — A Brief History of Commercial Capitalism (nonfiction)
Steven Brust — The Baron of Magister Valley (sff)
Micaiah Johnson — The Space Between Worlds (sff)
Ian McDonald — Luna: New Moon (sff)
Elizabeth Moon — Trading in Danger (sff)
Tamsyn Muir — Harrow the Ninth (sff)
Suzanne Palmer — Finder (sff)
Kit Rocha — Beyond Shame (sff)
Kit Rocha — Beyond Control (sff)
Kit Rocha — Beyond Pain (sff)
Arundhati Roy — Azadi (nonfiction)
Jeff VanderMeer — Authority (sff)
Jeff VanderMeer — Acceptance (sff)
K.B. Wagers — Behind the Throne (sff)
Jarrett Walker — Human Transit (nonfiction)

I took advantage of a few sales to get books I know I'm going to want to read eventually for a buck or two.

,

Planet DebianMike Gabriel: My Work on Debian LTS (August 2020)

In August 2020, I have worked on the Debian LTS project for 16 hours (of 8 hours planned, plus another 8 hours that I carried over from July).

For ELTS, I have worked for another 8 hours (of 8 hours planned).

LTS Work

  • LTS frontdesk: triage wireshark, yubico-piv-tool, trousers, software-properties, qt4-x11, qtbase-opensource-src, openexr, netty and netty-3.9
  • upload to stretch-security: libvncserver 0.9.11+dfsg-1.3~deb9u5 (fixing 9 CVEs, DLA-2347-1 [1])
  • upload to stretch-security: php-horde-core 2.27.6+debian1-2+deb9u1 (1 CVE, DLA-2348 [2])
  • upload to stretch-security: php-horde 5.2.13+debian0-1+deb9u3 (fixing 1 CVE, DLA-2349-1 [3])
  • upload to stretch-security: php-horde-kronolith 4.2.19-1+deb9u1 (fixing 1 CVE, DLA-2350-1 [4])
  • upload to stretch-security: php-horde-kronolith 4.2.19-1+deb9u2 (fixing 1 more CVE, DLA-2351-1 [5])
  • upload to stretch-security: php-horde-gollem 3.0.10-1+deb9u2 (fixing 1 CVE, DLA-2352-1 [6])
  • upload to stretch-security: freerdp 1.1.0~git20140921.1.440916e+dfsg1-13+deb9u4 (fixing 14 CVEs, DLA-2356-1 [7])
  • prepare salsa MRs for gnome-shell (for gnome-shell in stretch [8] and buster [9])

ELTS Work

  • Look into open CVEs for Samba in Debian jessie ELTS. Revisit issues affecting the Samba AD code that have previously been considered as issues.

Other security related work for Debian

  • upload to buster (SRU): libvncserver 0.9.11+dfsg-1.3+deb10u4 (fixing 9 CVEs) [10]

References

Cryptogram Schneier.com is Moving

I’m switching my website software from Movable Type to WordPress, and moving to a new host.

The migration is expected to last from approximately 3 AM EST Monday until 4 PM EST Tuesday. The site will still be visible during that time, but comments will be disabled. (This is to prevent any new comments from disappearing in the move.)

This is not a site redesign, so you shouldn’t notice many differences. Even the commenting system is pretty much the same, though you’ll be able to use Markdown instead of HTML if you want to.

The conversion to WordPress was done by Automattic, who did an amazing job of getting all of the site’s customizations and complexities — this website is 17 years old — to work on a new platform. Automattic is also providing the new hosting on their Pressable service. I’m not sure I could have done it without them.

Hopefully everything will work smoothly.

Planet DebianElana Hashman: Three talks at DebConf 2020

This year has been a really unusual one for in-person events like conferences. I had already planned to take this year off from travel for the most part, attending just a handful of domestic conferences. But the pandemic has thrown those plans into chaos; I do not plan to attend large-scale in-person events until July 2021 at the earliest, per my employer's guidance.

I've been really sad to have turned down multiple speaking invitations this year. To try to set expectations, I added a note to my Talks page that indicates I will not be writing any new talks for 2020, but am happy to join panels or reprise old talks.

And somehow, with all that background, I still ended up giving three talks at DebConf 2020 this year. In part, I think it's because this is the first DebConf I've been able to attend since 2017, and I was so happy to have the opportunity! I took time off work to give myself enough space to focus on the conference. International travel is very difficult for me, so DebConf is generally challenging if not impossible for me to attend.

A panel a day keeps the FTP Team away?

On Thursday, August 27th, I spoke on the Leadership in Debian panel, where I discussed some of the challenges leadership in the project must face, including an appropriate response to the BLM movement and sustainability for volunteer positions that require unsustainable hours (such as DPL).

On Friday, August 28th, I hosted the Debian Clojure BoF, attended by members of the Clojure and Puppet teams. The Puppet team is working to package the latest versions of Puppet Server/DB, which involve significant Clojure components, and I am doing my best to help.

On Saturday, August 29th, I spoke on the Meet the Technical Committee panel. The Committee presented a number of proposals for improving how we work within the project. I was responsible for presenting our first proposal on allowing folks to engage the committee privately.

,

Planet DebianDirk Eddelbuettel: nanotime 0.3.2: Tweaks

Another (minor) nanotime release, now at version 0.3.2. This release brings an endianness correction which was kindly contributed in a PR, switches to using the API header exported by RcppCCTZ, and tweaks test coverage a little with respect to r-devel.

nanotime relies on the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from work by co-author Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

The NEWS snippet adds full details.

Changes in version 0.3.2 (2020-09-03)

  • Correct for big endian (Elliott Sales de Andrade in #81).

  • Use the RcppCCTZ_API.h header (Dirk in #82).

  • Conditionally reduce test coverage (Dirk in #83).

Thanks to CRANberries there is also a diff to the previous version. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram Friday Squid Blogging: Morning Squid

Asa ika means “morning squid” in Japanese.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: We Must Explore the Null!

"Beyond the realm of Null, past the Black Stump, lies the mythical FILE_NOT_FOUND," writes Chris A.

 

"I know, Zwift, I should have started paying you 50 years ago," Jeff wrote, "But hey, thanks for still giving leeches like me a free ride!"

 

Drake C. writes, "Must...not...click...forbidden...newsletter!"

 

"I'm having a hard time picking between these 'Exclusives'. It's a shame they're both scheduled for the same time," wrote Rutger.

 

"Wait, so is this beer zero raised to hex FF? At that price I'll take 0x02!" wrote Tony B.

 

Kevin F. writes, "Some weed killers say they're powerful. This one backs up that claim!"

 

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Cryptogram Hacking AI-Graded Tests

The company Edgenuity sells AI systems for grading tests. Turns out that they just search for keywords without doing any actual semantic analysis.

Planet DebianReproducible Builds (diffoscope): diffoscope 159 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 159. This version includes the following changes:

[ Chris Lamb ]
* Show "ordering differences only" in strings(1) output.
  (Closes: reproducible-builds/diffoscope#216)
* Don't alias output from "os.path.splitext" to variables that we do not end
  up using.
* Don't raise exceptions when cleaning up after a guestfs cleanup failure.

[ Jean-Romain Garnier ]
* Make "Command" subclass a new generic Operation class.

You find out more by visiting the project homepage.

,

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.900.3.0

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to that of Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 769 other packages on CRAN.

A few days ago, Conrad released a new minor version 9.900.3 of Armadillo which we packaged and tested as usual. Given the incremental release character, we only tested the release itself and not a release candidate. No regressions were found, and, as usual, logs from reverse-depends runs are in the rcpp-logs repo.

All changes in the new release are noted below.

Changes in RcppArmadillo version 0.9.900.3.0 (2020-09-02)

  • Upgraded to Armadillo release 9.900.3 (Nocturnal Misbehaviour)

    • More efficient code for initialising matrices with fill::zeros

    • Fixes for various error messages

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBen Hutchings: Debian LTS work, August 2020

I was assigned 16 hours of work by Freexian's Debian LTS initiative, but only worked 6.25 hours this month and have carried over the rest to September.

I finished my work to add Linux 4.19 to the stretch-security suite, providing an upgrade path for those previously installing it from stretch-backports (DLA-2323-1, DLA-2324-1). I also updated the firmware-nonfree package (DLA-2321-1) so that firmware needed by drivers in Linux 4.19 is also available in the non-free section of the stretch-security suite.

I also reviewed the report of the Debian LTS survey and commented on the presentation of results. This report was presented in the Debian LTS BoF at DebConf.

Planet DebianMolly de Blanc: “All Animals Are Equal,” Peter Singer

I recently read "Disability Visibility," which opens with a piece by Harriet McBryde Johnson about debating Peter Singer. When I got my first reading for my first class and saw it was Peter Singer, I was dismayed because of his (heinous) stances on disability. I assumed "All Animals Are Equal" was one of Singer's pieces about animal rights. While I agree with many of the principles Singer discusses around animal rights, I feel as though his work on this front is significantly diminished by his work around disability. To put it simply, I can't take Peter Singer seriously.

Because of this I had a lot of trouble reading “All Animals Are Equal” and taking it in good faith. I judged everything from his arguments to his writing harshly. While I don’t disagree with his basic point (all animals have rights) I disagree with how he made the point and the argument supporting it.

One of the things I was told to ask when reading any philosophy paper is “What is the argument?” or “What are they trying to convince you of?” In this case, you could frame the answer as: Animals have (some of) the same rights people do. I think it would be more accurate, though, to frame it as “All animals (including humans) have (some of) the same rights” or even “Humans are as equally worthy of consideration as animals are.”

I think when we usually talk about animal rights, we do it from a perspective of wanting to elevate animals to human status. From one perspective, I don’t like this approach because I feel as though it reframes rights as something you deserve or earn, privileges you get for being “good enough.” The point about rights is that they are inherent — you get them because they are.

The valuable thing I got out of “All Animals Are Equal” is that “rights” are not universal. When we talk about things like abortion, for example, we talk about the right to have an abortion. Singer asks whether people who cannot get pregnant have the right to an abortion. What he doesn’t dig into is that the “right to an abortion” is really just an extension of bodily autonomy — turning one facet of bodily autonomy into the legal right to have a medical procedure. I think this is worth thinking about more — turning high level human rights into the mundane rights, and acknowledging that not everyone can or needs them.

Worse Than FailureCodeSOD: Learning the Hard Way

If you want millions in VC funding, mumble the words “machine learning” and “disruption” and they’ll blunder out of the woods to just throw money at your startup.

At its core, ML is really about brute-forcing a statistical model. And today’s code from Norine could have possibly been avoided by applying a little more brute force to the programmer responsible.

This particular ML environment, like many, uses Python to wrap around lower-level objects. The ease of Python coupled with the speed of native/GPU-accelerated code. It has a collection of Model datatypes, and at runtime, it needs to decide which concrete Model type it should instantiate. If you come from an OO background in pretty much any other language, you’re thinking about factory patterns and abstract classes, but that’s not terribly Pythonic. Not that this developer’s solution is Pythonic either.

def choose_model(data, env):
  ModelBase = getattr(import_module(env.modelpath), env.modelname)
  
  class Model(ModelBase):
    def __init__(self, data, env):
      if env.data_save is None:
        if env.counter == 0:
          self.data = data
        else:
          raise ValueError("data unavailable with counter > 0")
      
      else:
        with open(env.data_save, "r") as df:
          self.data = json.load(df)
      ModelBase.__init__(self, **self.data)
  
  return Model(data, env)

This is an example of metaprogramming. We use import_module to dynamically load a module at runtime- potentially smart, because modules may take some time to load, so we shouldn’t load a module we don’t know that we’re going to use. Then, with getattr, we extract the definition of a class with whatever name is stored in env.modelname.

This is the model class we want to instantiate. But instead of actually instantiating it, we instead create a new derived class, and slap a bunch of logic and file loading into it.

Then we instantiate and return an instance of this dynamically defined derived class.

There are so many things that make me cringe. First, I hate putting file access in the constructor. That’s maybe more personal preference, but I hate constructors which can possibly throw exceptions. See also the raise ValueError, where we explicitly throw exceptions. That’s just me being picky, though, and it’s not like this constructor will ever get called from anywhere else.

More concretely bad, these kinds of dynamically defined classes can have some… unusual effects in Python. For example, in Python2 (which this is), each call to choose_model will tag the returned instance with the same type, regardless of which base class it used. Since this method might potentially be using a different base class depending on the env passed in, that’s asking for confusion. You can route around these problems, but they’re not doing that here.
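To make that concrete, here is a minimal sketch of mine (not from the submission) showing how every call mints a fresh class that nonetheless reports the same name:

```python
class BaseA:
    pass

class BaseB:
    pass

def choose(base):
    # Same pattern as above: a brand-new class is defined on every call
    class Model(base):
        pass
    return Model()

a, b = choose(BaseA), choose(BaseB)

print(type(a).__name__, type(b).__name__)  # Model Model
print(type(a) is type(b))                  # False: two unrelated classes
print(isinstance(b, BaseA))                # False
```

Debugging output that only shows the class name therefore cannot tell you which base class a given instance was built from.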

But far, far more annoying is that the super-class constructor, ModelBase.__init__, isn’t called until the end.

You’ll note that our child class manipulates self.data, and while it’s not pictured here, our base model classes? They also use a property called data, but for a different purpose. So our child class inits a child class property, specifically to build a dictionary of key/value pairs, which it then passes as kwargs, or keyword arguments (the ** operator) to the base class constructor… which then overwrites the self.data our child class was using.
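A contrived sketch (class and attribute names are mine, not the library's) of how that late ModelBase.__init__ call clobbers the child's dictionary:

```python
class ModelBase:
    def __init__(self, **kwargs):
        # The base class also uses "data", but for its own purposes
        self.data = sorted(kwargs)

class Model(ModelBase):
    def __init__(self):
        self.data = {"alpha": 1, "beta": 2}    # child's working dictionary
        ModelBase.__init__(self, **self.data)  # ...which the base overwrites

m = Model()
print(m.data)  # ['alpha', 'beta']: the child's dict is silently gone
```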

So why do any of that?

Norine changed the code to this simpler, more reliable version, which doesn’t need any metaprogramming or dynamically defined classes:

def choose_model(data, env):
  Model = getattr(import_module(env.modelpath), env.modelname)
  
  if env.data_save is not None:
    with open(env.data_save, "r") as df:
      data = json.load(df)
  elif env.counter != 0:
    raise ValueError('if env.counter > 0 then must use data_save parameter')

  return Model(**data)

Norine adds:

I’m thinking of holding on to the original, and showing it to interviewees like a Rorschach test. What do you see in this? The fragility of a plugin system? The perils of metaprogramming? The hollowness of an overwritten argument? Do you see someone with more cleverness than sense? Or someone intelligent but bored? Or perhaps you see, in the way the superclass init is called, TRWTF: a Python 2 library written within the last 3 years.


Cryptogram 2017 Tesla Hack

Interesting story of a class break against the entire Tesla fleet.

Planet DebianNorbert Preining: KDE/Plasma Status Update 2020-09-03

Yesterday I updated my builds of Plasma for Debian to Plasma 5.19.5, which are now available from the usual sources; nothing else has changed.

[Update 2020-09-03: KDE Apps 20.08.1 are also now available]

On a different front, there are good news concerning updates in Debian proper: Together with Scarlett Moore and Patrick Franz we are in the process of updating the official Debian packages. The first bunch of packages has been uploaded to experimental, and after NEW processing the next group will go there, too. This is still 5.19.4, but a great step forward. I expect that all of Plasma 5.19.4 will be available in experimental in the next weeks, and soon after also in Debian/unstable.

Again, thanks to Scarlett and Patrick for the good collaboration, this is very much appreciated!

Krebs on SecurityThe Joys of Owning an ‘OG’ Email Account

When you own a short email address at a popular email provider, you are bound to get gobs of spam, and more than a few alerts about random people trying to seize control over the account. If your account name is short and desirable enough, this kind of activity can make the account less reliable for day-to-day communications because it tends to bury emails you do want to receive. But there is also a puzzling side to all this noise: Random people tend to use your account as if it were theirs, and often for some fairly sensitive services online.

About 16 years ago — back when you actually had to be invited by an existing Google Mail user in order to open a new Gmail account — I was able to get hold of a very short email address on the service that hadn’t yet been reserved. Naming the address here would only invite more spam and account hijack attempts, but let’s just say the account name has something to do with computer hacking.

Because it’s a relatively short username, it is what’s known as an “OG” or “original gangster” account. These account names tend to be highly prized among certain communities, who busy themselves with trying to hack them for personal use or resale. Hence, the constant account takeover requests.

What is endlessly fascinating is how many people think it’s a good idea to sign up for important accounts online using my email address. Naturally, my account has been signed up involuntarily for nearly every dating and porn website there is. That is to be expected, I suppose.

But what still blows me away is the number of financial and other sensitive accounts I could access if I were of a devious mind. This particular email address has accounts that I never asked for at H&R Block, Turbotax, TaxAct, iTunes, LastPass, Dashlane, MyPCBackup, and Credit Karma, to name just a few. I’ve lost count of the number of active bank, ISP and web hosting accounts I can tap into.

I’m perpetually amazed by how many other Gmail users and people on similarly-sized webmail providers have opted to pick my account as a backup address if they should ever lose access to their inbox. Almost certainly, these users just lazily picked my account name at random when asked for a backup email — apparently without fully realizing the potential ramifications of doing so. At last check, my account is listed as the backup for more than three dozen Yahoo, Microsoft and other Gmail accounts and their associated file-sharing services.

If for some reason I ever needed to order pet food or medications online, my phantom accounts at Chewy, Coupaw and Petco have me covered. If any of my Weber grill parts ever fail, I’m set for life on that front. The Weber emails I periodically receive remind me of a piece I wrote many years ago for The Washington Post, about companies sending email from [companynamehere]@donotreply.com, without considering that someone might own that domain. Someone did, and the results were often hilarious.

It’s probably a good thing I’m not massively into computer games, because the online gaming (and gambling) profiles tied to my old Gmail account are innumerable.

For several years until recently, I was receiving the monthly statements intended for an older gentleman in India who had the bright idea of using my Gmail account to manage his substantial retirement holdings. Thankfully, after reaching out to him he finally removed my address from his profile, although he never responded to questions about how this might have happened.

On balance, I’ve learned it’s better just not to ask. On multiple occasions, I’d spend a few minutes trying to figure out if the email addresses using my Gmail as a backup were created by real people or just spam bots of some sort. And then I’d send a polite note to those that fell into the former camp, explaining why this was a bad idea and ask what motivated them to do so.

Perhaps because my Gmail account name includes a hacking term, the few responses I’ve received have been less than cheerful. Despite my including detailed instructions on how to undo what she’d done, one woman in Florida screamed in an ALL CAPS reply that I was trying to phish her and that her husband was a police officer who would soon hunt me down. Alas, I still get notifications anytime she logs into her Yahoo account.

Probably for the same reason the Florida lady assumed I was a malicious hacker, my account constantly gets requests from random people who wish to hire me to hack into someone else’s account. I never respond to those either, although I’ll admit that sometimes when I’m procrastinating over something the temptation arises.

Losing access to your inbox can open you up to a cascading nightmare of other problems. Having a backup email address tied to your inbox is a good idea, but obviously only if you also control that backup address.

More importantly, make sure you’re availing yourself of the most secure form of multi-factor authentication offered by the provider. These may range from authentication options like one-time codes sent via email, phone calls, SMS or mobile app, to more robust, true “2-factor authentication” or 2FA options (something you have and something you know), such as security keys or push-based 2FA such as Duo Security (an advertiser on this site and a service I have used for years).

Email, SMS and app-based one-time codes are considered less robust from a security perspective because they can be undermined by a variety of well-established attack scenarios, from SIM-swapping to mobile-based malware. So it makes sense to secure your accounts with the strongest form of MFA available. But please bear in mind that if the only added authentication options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password to secure your account.

Maybe you’ve put off enabling multi-factor authentication for your important accounts, and if that describes you, please take a moment to visit twofactorauth.org and see whether you can harden your various accounts.

As I noted in June’s story, Turn on MFA Before Crooks Do It For You, people who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Are you in possession of an OG email account? Feel free to sound off in the comments below about some of the more gonzo stuff that winds up in your inbox.

,

Planet DebianKees Cook: security things in Linux v5.6

Previously: v5.5.

Linux v5.6 was released back in March. Here’s my quick summary of various features that caught my attention:

WireGuard
The widely used WireGuard VPN has been out-of-tree for a very long time. After 3 1/2 years since its initial upstream RFC, Ard Biesheuvel and Jason Donenfeld finished the work getting all the crypto prerequisites sorted out for the v5.5 kernel. For this release, Jason has gotten WireGuard itself landed. It was a twisty road, and I’m grateful to everyone involved for sticking it out and navigating the compromises and alternative solutions.

openat2() syscall and RESOLVE_* flags
Aleksa Sarai has added a number of important path resolution “scoping” options to the kernel’s open() handling, covering things like not walking above a specific point in a path hierarchy (RESOLVE_BENEATH), disabling the resolution of various “magic links” (RESOLVE_NO_MAGICLINKS) in procfs (e.g. /proc/$pid/exe) and other pseudo-filesystems, and treating a given lookup as happening relative to a different root directory (as if it were in a chroot, RESOLVE_IN_ROOT). As part of this, it became clear that there wasn’t a way to correctly extend the existing openat() syscall, so he added openat2() (which is a good example of the efforts being made to codify “Extensible Syscall” arguments). The RESOLVE_* set of flags also cover prior behaviors like RESOLVE_NO_XDEV and RESOLVE_NO_SYMLINKS.

pidfd_getfd() syscall
In the continuing growth of the much-needed pidfd APIs, Sargun Dhillon has added the pidfd_getfd() syscall which is a way to gain access to file descriptors of a process in a race-less way (or when /proc is not mounted). Before, it wasn’t always possible to make sure that opening file descriptors via /proc/$pid/fd/$N was actually going to be associated with the correct PID. Much more detail about this has been written up at LWN.

openat() via io_uring
With my “attack surface reduction” hat on, I remain personally suspicious of the io_uring() family of APIs, but I can’t deny their utility for certain kinds of workloads. Being able to pipeline reads and writes without the overhead of actually making syscalls is pretty great for performance. Jens Axboe has added the IORING_OP_OPENAT command so that existing io_urings can open files to be added on the fly to the mapping of available read/write targets of a given io_uring. While LSMs are still happily able to intercept these actions, I remain wary of the growing “syscall multiplexer” that io_uring is becoming. I am, of course, glad to see that it has a comprehensive (if “out of tree”) test suite as part of liburing.

removal of blocking random pool
After making algorithmic changes to obviate separate entropy pools for random numbers, Andy Lutomirski removed the blocking random pool. This simplifies the kernel pRNG code significantly without compromising the userspace interfaces designed to fetch “cryptographically secure” random numbers. To quote Andy, “This series should not break any existing programs. /dev/urandom is unchanged. /dev/random will still block just after booting, but it will block less than it used to.” See LWN for more details on the history and discussion of the series.

arm64 support for on-chip RNG
Mark Brown added support for the future ARMv8.5’s RNG (SYS_RNDR_EL0), which is, from the kernel’s perspective, similar to x86’s RDRAND instruction. This will provide a bootloader-independent way to add entropy to the kernel’s pRNG for early boot randomness (e.g. stack canary values, memory ASLR offsets, etc). Until folks are running on ARMv8.5 systems, they can continue to depend on the bootloader for randomness (via the UEFI RNG interface) on arm64.

arm64 E0PD
Mark Brown added support for the future ARMv8.5’s E0PD feature (TCR_E0PD1), which causes all memory accesses from userspace into kernel space to fault in constant time. This is an attempt to remove any possible timing side-channel signals when probing kernel memory layout from userspace, as an alternative way to protect against Meltdown-style attacks. The expectation is that E0PD would be used instead of the more expensive Kernel Page Table Isolation (KPTI) features on arm64.

powerpc32 VMAP_STACK
Christophe Leroy added VMAP_STACK support to powerpc32, joining x86, arm64, and s390. This helps protect against the various classes of attacks that depend on exhausting the kernel stack in order to collide with neighboring kernel stacks. (Another common target, the sensitive thread_info, had already been moved away from the bottom of the stack by Christophe Leroy in Linux v5.1.)

generic Page Table dumping
Related to RISCV’s work to add page table dumping (via /sys/fs/debug/kernel_page_tables), Steven Price extracted the existing implementations from multiple architectures and created a common page table dumping framework (and then refactored all the other architectures to use it). I’m delighted to have this because I still remember when not having a working page table dumper for ARM delayed me for a while when trying to implement upstream kernel memory protections there. Anything that makes it easier for architectures to get their kernel memory protection working correctly makes me happy.

That’s it for now; let me know if there’s anything you think I missed. Next up: Linux v5.7.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.

Planet DebianVincent Bernat: Syncing SSH keys on Cisco IOS-XR with a custom Ansible module

The cisco.iosxr collection from Ansible Galaxy provides an iosxr_user module to manage local users, along with their SSH keys. However, the module is quite slow, does not display a diff for changed SSH keys, never signals a change when a key is modified, and does not delete obsolete keys. Let’s write a custom Ansible module managing only the SSH keys while fixing these issues.

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction.

How to add an SSH key to a user

Adding SSH keys to users in Cisco IOS-XR is quite undocumented. First, you need to encode the key with the “ssh-rsa” key ASN.1 format, like an OpenSSH public key, but without the base64-encoding:

$ awk '{print $2}' id_rsa.pub \
    | base64 -d \
    > publickey_vincent.raw
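If you would rather stay in Python, the same decoding is a one-liner (my sketch, equivalent to the pipeline above):

```python
import base64

def decode_openssh_pubkey(line):
    """Base64-decode the key blob, i.e. the second field of an
    OpenSSH public key line such as "ssh-rsa AAAA... comment"."""
    return base64.b64decode(line.split()[1])
```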

Then, you upload the key with SCP to harddisk:/publickey_vincent.raw and import it for the current user with the following IOS command:

crypto key import authentication rsa harddisk:/publickey_vincent.raw

However, if you want to import a key for another user, you need to be part of the root-system group:

username vincent
 group root-lr
 group root-system

With the following admin command, you can attach a key to another user:

admin crypto key import authentication rsa username cedric harddisk:/publickey_cedric.raw

Code

The module has the following signature. It installs the specified key for each user and removes keys from retired users—the ones we do not specify.

iosxr_users:
  keys:
    vincent: ssh-rsa AAAAB3NzaC1yc2EAA[…]ymh+YrVWLZMJR
    cedric:  ssh-rsa AAAAB3NzaC1yc2EAA[…]RShPA8w/8eC0n

Prerequisites

Unlike the iosxr_user module, our custom module only handles SSH keys, one per user. Therefore, the user definitions have to already exist in the running configuration.1 Moreover, the user defined in ansible_user needs to be in the root-system group. The cisco.iosxr collection must also be installed as the module relies on its code.

When running the module, ansible_connection needs to be set to network_cli and ansible_network_os to iosxr. These variables are usually defined in the inventory.

Module definition

Starting from the skeleton described in the previous article, we define the module:

module_args = dict(
    keys=dict(type='dict', elements='str', required=True),
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

result = dict(
    changed=False
)

Getting the installed keys

The next step is to retrieve the keys currently installed. This can be done with the following command:

# show crypto key authentication rsa all
Key label: vincent
Type     : RSA public key authentication
Size     : 2048
Imported : 16:17:08 UTC Tue Aug 11 2020
Data     :
 30820122 300D0609 2A864886 F70D0101 01050003 82010F00 3082010A 02820101
 00D81E5B A73D82F3 77B1E4B5 949FB245 60FB9167 7CD03AB7 ADDE7AFE A0B83174
 A33EC0E6 1C887E02 2338367A 8A1DB0CE 0C3FBC51 15723AEB 07F301A4 B1A9961A
 2D00DBBD 2ABFC831 B0B25932 05B3BC30 B9514EA1 3DC22CBD DDCA6F02 026DBBB6
 EE3CFADA AFA86F52 CAE7620D 17C3582B 4422D24F D68698A5 52ED1E9E 8E41F062
 7DE81015 F33AD486 C14D0BB1 68C65259 F9FD8A37 8DE52ED0 7B36E005 8C58516B
 7EA6C29A EEE0833B 42714618 50B3FFAC 15DBE3EF 8DA5D337 68DAECB9 904DE520
 2D627CEA 67E6434F E974CF6D 952AB2AB F074FBA3 3FB9B9CC A0CD0ADC 6E0CDB2A
 6A1CFEBA E97AF5A9 1FE41F6C 92E1F522 673E1A5F 69C68E11 4A13C0F3 0FFC782D
 27020301 0001

[…]

ansible_collections.cisco.iosxr.plugins.module_utils.network.iosxr.iosxr contains a run_commands() function we can use:

command = "show crypto key authentication rsa all"
out = run_commands(module, command)
out = out[0].replace(' \n', '\n')

A common library for parsing command output is textfsm: a Python module using a template-based state machine for parsing semi-formatted text.

template = r"""
Value Required Label (\w+)
Value Required,List Data ([A-F0-9 ]+)

Start
 ^Key label: ${Label}
 ^Data\s+: -> GetData

GetData
 ^ ${Data}
 ^$$ -> Record Start
""".lstrip()

re_table = textfsm.TextFSM(io.StringIO(template))
got = {data[0]: "".join(data[1]).replace(' ', '')
       for data in re_table.ParseText(out)}

got is a dictionary associating key labels, considered as usernames, with a hexadecimal representation of the public key currently installed. It looks like this:

>>> pprint(got)
{'alfred': '30820122300D0609[…]6F0203010001',
 'cedric': '30820122300D0609[…]710203010001',
 'vincent': '30820122300D0609[…]270203010001'}

Comparing with the wanted keys

Let’s now build the wanted dictionary using the same structure. In module.params['keys'], we have a dictionary associating usernames to public SSH keys in the OpenSSH format:

>>> pprint(module.params['keys'])
{'cedric': 'ssh-rsa AAAAB3NzaC1yc2[…]',
 'vincent': 'ssh-rsa AAAAB3NzaC1yc2[…]'}

We need to convert these keys in the same hexadecimal representation used by Cisco above. The ssh-keygen command and some glue can do the conversion:2

$ ssh-keygen -f id_rsa.pub -e -mPKCS8 \
   | grep -v '^---' \
   | base64 -d \
   | hexdump -e '4/1 "%0.2X"'
30820122300D06092[…]782D270203010001
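Such a ssh2cisco() function can also be written in pure Python, by parsing the OpenSSH wire format and hand-encoding the ASN.1 structure. This is my sketch under that assumption, not the code from the post:

```python
import base64
import struct

def _der_len(n):
    # DER length octets: short form below 0x80, long form otherwise
    if n < 0x80:
        return bytes([n])
    b = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(b)]) + b

def _der(tag, content):
    return bytes([tag]) + _der_len(len(content)) + content

def ssh2cisco(sshkey):
    """Convert an OpenSSH "ssh-rsa AAAA..." line to the hexadecimal
    SubjectPublicKeyInfo representation displayed by IOS-XR."""
    blob = base64.b64decode(sshkey.split()[1])
    fields = []
    while blob:
        (length,) = struct.unpack(">I", blob[:4])
        fields.append(blob[4:4 + length])
        blob = blob[4 + length:]
    assert fields[0] == b"ssh-rsa"
    e, n = fields[1], fields[2]  # SSH mpints are already DER-ready integers
    rsa_key = _der(0x30, _der(0x02, n) + _der(0x02, e))
    algorithm = _der(0x30,
                     bytes.fromhex("06092A864886F70D010101")  # OID rsaEncryption
                     + b"\x05\x00")                           # NULL parameters
    return _der(0x30, algorithm
                + _der(0x03, b"\x00" + rsa_key)).hex().upper()
```

For a 2048-bit key this produces the familiar 30820122300D0609… prefix seen in the show crypto key output above.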

Assuming we have a ssh2cisco() function doing that, we can build the wanted dictionary:

wanted = {k: ssh2cisco(v)
          for k, v in module.params['keys'].items()}

Applying changes

Back to the skeleton described in the previous article, the last step is to apply the changes if there is a difference between got and wanted when not running with check mode. The part comparing got and wanted is taken verbatim from the skeleton module:

if got != wanted:
    result['changed'] = True
    result['diff'] = dict(
        before=yaml.safe_dump(got),
        after=yaml.safe_dump(wanted)
    )

if module.check_mode or not result['changed']:
    module.exit_json(**result)

Let’s copy the new or changed keys and attach them to their respective users. For this purpose, we reuse the get_connection() and copy_file() functions from ansible_collections.cisco.iosxr.plugins.module_utils.network.iosxr.iosxr.

conn = get_connection(module)
for user in wanted:
    if user not in got or wanted[user] != got[user]:
        dst = f"/harddisk:/publickey_{user}.raw"
        with tempfile.NamedTemporaryFile() as src:
            decoded = base64.b64decode(
                module.params['keys'][user].split()[1])
            src.write(decoded)
            src.flush()
            copy_file(module, src.name, dst)
        command = ("admin crypto key import authentication rsa "
                   f"username {user} {dst}")
        conn.send_command(command, prompt="yes/no", answer="yes")

Then, we remove obsolete keys:

for user in got:
    if user not in wanted:
        command = ("admin crypto key zeroize authentication rsa "
                   f"username {user}")
        conn.send_command(command, prompt="yes/no", answer="yes")

The complete code is available on GitHub. Compared to the iosxr_user module, this one displays a diff when running with --diff, correctly signals a change, is faster,3 and deletes unwanted SSH keys. However, it is unable to create users and cannot configure passwords or multiple SSH keys.


  1. In our environment, the Ansible playbook pushes a full configuration, including the user definitions. Then, it synchronizes the SSH keys. ↩︎

  2. Despite the argument provided to ssh-keygen, the format used by Cisco is not PKCS#8. This is the ASN.1 representation of a Subject Public Key Info structure, as defined in RFC 2459. Moreover, PKCS#8 is a format for a private key, not a public one. ↩︎

  3. The main factors for being faster are:

    • not creating users, and
    • not reuploading existing SSH keys.

    ↩︎

Planet DebianVincent Bernat: Syncing MySQL tables with a custom Ansible module

The community.mysql collection from Ansible Galaxy provides a mysql_query module to run arbitrary MySQL queries. Unfortunately, it supports neither check mode nor the --diff flag. It is also unable to tell if there was a change. Let’s write a specific Ansible module to workaround these issues.

Notice

I recommend that you read “Writing a custom Ansible module” as an introduction.

Code

The module has the following signature and it executes the provided SQL statements in a single transaction. It needs a list of the affected tables to be able to detect and show the changes.

mysql_sync:
  sql: |
    DELETE FROM rules WHERE name LIKE 'CMDB:%';
    INSERT INTO rules (name, rule) VALUES
      ('CMDB: check for cats', ':is(object, "CAT")'),
      ('CMDB: check for dogs', ':is(object, "DOG")');
    REPLACE INTO webhooks (name, url) VALUES
      ('OpsGenie', 'https://opsgenie/something/token'),
      ('Slack', 'https://slack/something/token');
  user: monitoring
  password: Yooghah5
  database: monitoring
  tables:
    - rules
    - webhooks

Prerequisites

The module does not enforce idempotency, but you are expected to provide appropriate SQL queries. In the above example, idempotency is achieved because the content of the rules table is deleted and recreated from scratch while the rows in the webhooks table are replaced if they already exist.
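The delete-and-recreate pattern can be sketched with the standard library's sqlite3 module (standing in for MySQL here); running the statements twice leaves the table unchanged:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rules (name TEXT PRIMARY KEY, rule TEXT)")

def sync():
    # Delete-and-recreate: the end state is the same however often it runs
    conn.execute("DELETE FROM rules WHERE name LIKE 'CMDB:%'")
    conn.executemany(
        "INSERT INTO rules (name, rule) VALUES (?, ?)",
        [("CMDB: check for cats", ':is(object, "CAT")'),
         ("CMDB: check for dogs", ':is(object, "DOG")')])

sync()
first = conn.execute("SELECT * FROM rules ORDER BY name").fetchall()
sync()
second = conn.execute("SELECT * FROM rules ORDER BY name").fetchall()
print(first == second)  # True
```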

You need the PyMySQL package.

Module definition

Starting from the skeleton described in the previous article, here is the module definition:

module_args = dict(
    sql=dict(type='str', required=True),
    user=dict(type='str', required=True),
    password=dict(type='str', required=True, no_log=True),
    database=dict(type='str', required=True),
    tables=dict(type='list', required=True, elements='str'),
)

result = dict(
    changed=False
)

module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)

The password is marked with no_log to ensure it won’t be displayed or stored, notably when ansible-playbook runs in verbose mode. There is no host option as the module is executed on the MySQL host. Strong authentication using certificates is not implemented either. This matches our goal with custom modules: only implement what you strictly need.

Getting the current rows

The next step is to retrieve the records currently in the database. The got dictionary is a mapping from table names to the list of rows they contain:

got = {}
tables = module.params['tables']

connection = pymysql.connect(
    user=module.params['user'],
    password=module.params['password'],
    db=module.params['database'],
    charset='utf8mb4',
    cursorclass=pymysql.cursors.DictCursor
)

with connection.cursor() as cursor:
    for table in tables:
        cursor.execute("SELECT * FROM {}".format(table))
        got[table] = cursor.fetchall()

Computing the changes

Let’s now build the wanted dictionary. The trick is to execute the SQL statements in a transaction without issuing a final commit. The changes will be invisible1 to other readers and we can compare the final rows with the rows collected in got:

wanted = {}
sql = module.params['sql']
statements = [statement.strip()
              for statement in sql.split(";\n")
              if statement.strip()]

with connection.cursor() as cursor:
    for statement in statements:
        try:
            cursor.execute(statement)
        except pymysql.OperationalError as err:
            code, message = err.args
            result['msg'] = "MySQL error for {}: {}".format(
                statement,
                message)
            module.fail_json(**result)
    for table in tables:
        cursor.execute("SELECT * FROM {}".format(table))
        wanted[table] = cursor.fetchall()

The first for loop executes each statement. On error, we return a helpful message containing the faulty one. The second for loop records the final rows of each table in wanted.

Applying changes

Back to the skeleton described in the previous article, the last step is to apply the changes if there is a difference between got and wanted when not running with check mode. The diff object is a bit more elaborate as it is built table by table. This enables Ansible to display the name of each table before the diff representation:

if got != wanted:
    result['changed'] = True
    result['diff'] = [dict(
        before_header=table,
        after_header=table,
        before=yaml.safe_dump(got[table]),
        after=yaml.safe_dump(wanted[table]))
                      for table in tables
                      if got[table] != wanted[table]]

if module.check_mode or not result['changed']:
    module.exit_json(**result)

Applying the changes is quite trivial: just commit them! Otherwise, they are lost when the module exits.

connection.commit()

The complete code is available on GitHub. Compared to the mysql_query module, this one supports the check mode, signals correctly if there is a change and displays the differences. However, it should not be used with huge tables, as it would try to load them in memory.


  1. The tables need to use the InnoDB storage engine. Moreover, MySQL does not know how to use transactions with DDL statements: do not modify table definitions! ↩︎

Planet Debian: Vincent Bernat: Writing a custom Ansible module

Ansible ships a lot of modules you can combine for your configuration management needs. However, the quality of these modules may vary widely. Sometimes, it may be quicker and more robust to write your own module instead of shopping and assembling existing ones.1

In my opinion, a robust module exhibits the following characteristics:

  • idempotency,
  • diff support,
  • check mode compatibility,
  • correct change signaling, and
  • lifecycle management.

In a nutshell, it means the module can run with --diff --check and shows the changes it would apply. When run twice in a row, the second run won’t apply or signal changes. The last bullet point suggests the module should be able to delete outdated objects configured during previous runs.2

The module code should be minimal and tailored to your needs. Making the module generic for use by other users is a non-goal. Less code usually means fewer bugs and code that is easier to understand.

I do not cover testing here. It is undeniably a good practice, but it requires a significant effort. In my opinion, it is preferable to have a well-written module matching the above characteristics rather than a module that is well tested but without them, or a module requiring further (untested) assembly to meet your needs.

Module skeleton

Ansible documentation contains instructions to build a module, along with some best practices. As distributing it is a non-goal of ours, we choose to take some shortcuts and skip some of the boilerplate. Let’s assume we build a module with the following signature:

custom_module:
  user: someone
  password: something
  data: "some random string"

There are various locations you can put a module in Ansible. A common possibility is to include it into a role. In a library/ subdirectory, create an empty __init__.py file and a custom_module.py file with the following code:3

#!/usr/bin/python

import yaml
from ansible.module_utils.basic import AnsibleModule


def main():
    # Define options accepted by the module. ❶
    module_args = dict(
        user=dict(type='str', required=True),
        password=dict(type='str', required=True, no_log=True),
        data=dict(type='str', required=True),
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )

    result = dict(
        changed=False
    )

    got = {}
    wanted = {}

    # Populate both `got` and `wanted`. ❷
    # [...]

    if got != wanted:
        result['changed'] = True
        result['diff'] = dict(
            before=yaml.safe_dump(got),
            after=yaml.safe_dump(wanted)
        )

    if module.check_mode or not result['changed']:
        module.exit_json(**result)

    # Apply changes. ❸
    # [...]

    module.exit_json(**result)


if __name__ == '__main__':
    main()

The first part, in ❶, defines the module, with the accepted options. Refer to the documentation on argument_spec for more details.

The second part, in ❷, builds the got and wanted variables. got is the current state while wanted is the target state. For example, if you need to modify records in a database server, got would be the current rows while wanted would be the modified rows. Then, we compare got and wanted. If there is a difference, changed is switched to True and we prepare the diff object. Ansible uses it to display the differences between the states. If we are running in check mode or if no change is detected, we stop here.

The last part, in ❸, applies the changes. Usually, it means iterating over the two structures to detect the differences and create the missing items, delete the unwanted ones and update the existing ones.
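As a sketch of this last step (the create/update/delete callbacks are hypothetical helpers standing in for whatever API the module drives; got and wanted are assumed to map item names to their attributes):

```python
# Minimal reconciliation sketch: create missing items, delete unwanted
# ones, update the drifted ones. The three callbacks are hypothetical
# placeholders for real API calls.
def apply_changes(got, wanted, create, update, delete):
    for name in wanted.keys() - got.keys():   # missing: create
        create(name, wanted[name])
    for name in got.keys() - wanted.keys():   # outdated: delete
        delete(name)
    for name in wanted.keys() & got.keys():   # existing: update if drifted
        if got[name] != wanted[name]:
            update(name, wanted[name])
```

Because the function only acts on differences, running it a second time against the resulting state performs no calls at all, which is exactly the idempotency property we want.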

Documentation

Ansible provides a fairly complete page on how to document a module. I advise you to take a more minimal approach by only documenting each option sparingly,4 skipping the examples and only documenting return values if needed. I usually limit myself to something like this:

DOCUMENTATION = """
---
module: custom_module
short_description: Pass provided data to remote service
description:
  - Mention anything useful for your workmate.
  - Also mention anything you want to remember in 6 months.
options:
  user:
    description:
      - user to identify to remote service
  password:
    description:
      - password for authentication to remote service
  data:
    description:
      - data to send to remote service
"""

Error handling

If you run into an error, you can stop the execution with module.fail_json():

module.fail_json(
    msg=f"remote service answered with {code}: {message}",
    **result
)

There is no requirement to intercept all errors. Sometimes, not swallowing an exception provides better information than replacing it with a generic message.

Returning additional values

A module may return additional information that can be captured to be used in another task through the register directive. For this purpose, you can add arbitrary fields to the result dictionary. Have a look at the documentation for common return values. You should try to add these fields before exiting the module when in check mode. The returned values can be documented.
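As a small illustration (the field names below are made up for the example), any extra keys placed in result before calling exit_json() become attributes of the registered variable:

```python
# Illustrative only: extra fields added to `result` before exit_json()
# are what `register` captures in the playbook, including with --check.
result = dict(changed=True)
result['tables'] = ['users', 'groups']   # hypothetical return value
result['rows'] = 42                      # hypothetical return value
# module.exit_json(**result) would expose changed, tables and rows.
print(result)
```

In a playbook, after `register: out`, these would be available as `out.tables` and `out.rows` in later tasks.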

Examples

Here are several examples of custom modules following the previous skeleton. Each example highlights why a custom module was written instead of assembling existing modules. ⚙️


  1. Also, when using modules from Ansible Galaxy, you introduce a dependency on a third party. This is not something that should be decided lightly: it may break later, it may only meet 80% of the needs, it may add bugs. ↩︎

  2. Some declarative systems, like Terraform, exhibit all these behaviors. ↩︎

  3. Do not worry about the shebang. It is hardcoded to /usr/bin/python. Ansible will modify it to match the chosen interpreter on the remote host. You can write Python 3 code if ansible_python_interpreter evaluates to a Python 3 interpreter. ↩︎

  4. The main issue I have with this non-programmatic approach to documentation is that it partly repeats the information contained in argument_spec. I think an auto-documenting structure would avoid this. ↩︎

Planet Debian: Elana Hashman: My term at the Open Source Initiative thus far

When I ran for the OSI board in early 2019, I set three goals for myself:

  • Grow the OSI's membership, and build a more representative organization.
  • Defend the Open Source Definition and FOSS commons.
  • Define the future of open source, as part of the larger community.

Now that the OSI has announced hiring an interim General Manager, I thought it would be a good time to publicly reflect on what I've accomplished and what I'd like to see next.

As I promised in my campaign pitch, I aim to be publicly accountable :)

Growing the OSI's membership

I have served as our Membership Committee Chair since the May 2019 board meeting, tasked with devising and supervising strategy to increase membership and deliver value to members.

As part of my election campaign last year, I signed up over 50 new individual members. Since May 2019, we've seen strong 33% growth of individual members, to reach a new all-time high over 600 (638 when I last checked).

I see the OSI as a relatively neutral organization that occupies a unique position to build bridges among organizations within the FOSS ecosystem. In order to facilitate this, we need a representative membership, and we need to engage those members and provide forums for cross-pollination. As Membership Committee Chair, I have been running quarterly video calls on Jitsi for our affiliate members, where we can share updates between many global organizations and discuss challenges we all face.

But it's not enough just to hold the discussion; we also need to bring fresh new voices into the conversation. Since I've joined the board, I'm thrilled to say that 16 new affiliate members joined (in chronological order) for a total of 81:

I was also excited to run a survey of the OSI's individual and affiliate membership to help inform the future of the organization that received 58 long-form responses. The survey has been accepted by the board at our August meeting and should be released publicly soon!

Defending the Open Source Definition

When I joined the board, the first committee I joined was the License Committee, which is responsible for running the licence review process, making recommendations on new licenses, and maintaining our existing licenses.

Over the past year, under Pamela Chestek's leadership as Chair, the full board has approved the following licenses (with SPDX identifiers in brackets) on the recommendation of the License Committee:

We withheld approval of the following licenses:

I've also worked to define the scope of work for hiring someone to improve our license review process, which we have an open RFP for!

Chopping wood and carrying water

I joined the OSI with the goal of improving an organization I didn't think was performing up to its potential. Its membership and board were not representative of the wider open source community, its messaging felt outdated, and it seemed to be failing to rise to today's challenges for FOSS.

But before one can rise to meet these challenges, you need a strong foundation. The OSI needed the organizational structure, health, and governance in order to address such questions. Completing that work is essential, but not exactly glamorous—and it's a place where I thrive. Honestly, I don't (yet?) want to be the public face of the organization, and I apologize to those who've missed me at events like FOSDEM.

I want to talk a little about some of my behind-the-scenes activities that I've completed as part of my board service:

All of this work is intended to improve the organization's health and provide it with an excellent foundation for its mission.

Defining the future of open source

Soon after I was elected to the board, I gave a talk at Brooklyn.js entitled "The Future of Open Source." In this presentation, I pondered about the history and future of the free and open source software movement, and the ethical questions we must face.

In my election campaign, I wrote "Software licenses are a means, not an end, to open source software. Focusing on licensing is necessary but not sufficient to ensure a vibrant, thriving open source community. Focus on licensing to the exclusion of other serious community concerns is to our collective detriment."

My primary goal for my first term on the board was to ensure the OSI would be positioned to answer wider questions about the open source community and its future beyond licenses. Over the past two months, I supported Megan Byrd-Sanicki's suggestion to hold (and then participated in, with the rest of the board) organizational strategy sessions to facilitate our long-term planning. My contribution to help inform these sessions was providing the member survey on behalf of the Membership Committee.

Now, I think we are much better equipped to face the hard questions we'll have to tackle. In my opinion, the Open Source Initiative is better positioned than ever to answer them, and I can't wait to see what the future brings.

Hope to see you at our first State of the Source conference next week!

Cryptogram: Insider Attack on the Carnegie Library

Greg Priore, the person in charge of the rare book room at the Carnegie Library, stole from it for almost two decades before getting caught.

It’s a perennial problem: trusted insiders have to be trusted.

Worse Than Failure: Bidirectional

Merge-short arrows

Trung worked for a Microsoft and .NET framework shop that used AutoMapper to simplify object mapping between tiers. Their application's mapping configuration was performed at startup, as in the following C# snippet:

public void Configure(ConfigurationContext context)
{
AutoMapper.Mapper.CreateMap<X, Y>().AfterMap(Map);
...
}

where the AfterMap() method's Map delegate was to map discrepancies that AutoMapper couldn't.

One day, a senior dev named Van approached Trung for help. He was repeatedly getting AutoMapper's "Missing type map configuration or unsupported mapping. Mapping types Y -> X ..." error.

Trung frowned a little, wondering what was mysterious about this problem. "You're ... probably missing mapping configuration for Y to X," he said.

"No, I'm not!" Van pointed to his monitor, at the same code snippet above.

Trung shook his head. "That mapping is one-way, from X to Y only. You can create the reverse mapping by using the Bidirectional() extension method. Here ..." He leaned over to type in the addition:

AutoMapper.Mapper.CreateMap<X, Y>()
.AfterMap(Map)
.Bidirectional();

This resolved Van's error. Both men returned to their usual business.

A few weeks later, Van approached Trung again, this time needing help with refactoring due to a base library change. While they huddled over Van's computer and dug through compilation errors, Trung kept seeing strange code within multiple AfterMap() delegates:

void Map(X src, Y desc)
{
desc.QueueId = src.Queue.Id;
src.Queue = Queue.GetById(desc.QueueId);
...
}

"Wait a minute!" Trung reached for the mouse to highlight two such lines and asked, "Why is this here?"

"The mapping is supposed to be bidirectional! Remember?" Van replied. "I’m copying from X to Y, then from Y to X."

Trung resisted the urge to clap a hand to his forehead or mutter something about CS101 and variable-swapping—not that this "swap" was necessary. "You realize you'd have nothing but X after doing that?"

The quizzical look on the senior developer's face assured Trung that Van hadn't realized any such thing.

Trung could only sigh and help Van trudge through the delegates he'd "fixed," working out a better mapping procedure for each.


Planet Debian: Junichi Uekawa: Updated page to record how-to video.

Updated page to record how-to video. I don't need big camera image, I just need a tiny image with probably my face on it. As a start, I tried extracting a rectangle.

Planet Debian: Norbert Preining: Multiple GPUs for graphics and deep learning

For a long time I had been using a good old nvidia GeForce GTX 1050 for my display and deep learning needs. I reported a few times how to get Tensorflow running on Debian/Sid, see here and here. Later on I switched to an AMD GPU in the hope that an open source approach to both the GPU driver and deep learning (ROCm) would improve the general experience. Unfortunately it turned out that AMD GPUs are generally not ready for deep learning usage.

The problems with AMD and ROCm are far and wide. First of all, it seems that for anything more complicated than simple stuff, AMD’s flagship RX 5700(XT) and all GFX10 (Navi) based cards are not(!!!) supported in ROCm. Yes, you read that correctly … AMD does not support 5700(XT) cards in the ROCm stack. Some simple stuff works, but nothing for real computations.

Then, even IF they would support, ROCm as distributed is currently a huge pain in the butt. The source code is a huge mess, and building usable packages from it is probably possible, but quite painful (I am member of the ROCm packaging team in Debian, and have tried many hours). And the packages provided by AMD are not installable on Debian/sid due to library incompatibilities.

So that left me with a bit of a problem: for work I need to train quite a few neural networks, do model selection, etc. Doing this on a CPU is a bit of a burden. So in the end I decided to put the nVidia card back into the computer (well, after moving it to a bigger case – but that is a different story to tell). Here are the steps I took to get both cards working for their respective targets: the AMD GPU for driving the console and X (and games!), and the nVidia card doing the deep learning stuff (tensorflow using the GPU).

Starting point

Starting point was a working AMD GPU installation. The AMD GPU is also the first GPU card (top slot) and thus the one that is used by the BIOS and the Linux console. If you want the video output on the second card you need to resort to tricks, and you probably won’t have console output, etc. So not a solution for me.

Installing libcuda1 and the nvidia kernel drivers

Next step was installing the libcuda1 package:

apt install libcuda1

This installs a lot of stuff, including the nvidia drivers, GLX libraries, alternatives setup, and update-glx tool and package.

The kernel module should be built and installed automatically for your kernel.

Installing CUDA

Follow more or less the instructions here and do

wget -O- https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | sudo tee /etc/apt/trusted.gpg.d/nvidia-cuda.asc
echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" | sudo tee /etc/apt/sources.list.d/nvidia-cuda.list
sudo apt-get update
sudo apt-get install cuda-libraries-10-1

Warning! At the moment Tensorflow packages require CUDA 10.1, so don’t install the 10.0 version. This might change in the future!

This will install lots of libs into /usr/local/cuda-10.1 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-10-1.conf.

Install CUDA CuDNN

One hard-to-satisfy dependency is the CuDNN libraries. In our case we need the version 7 library for CUDA 10.1. To download these files one needs to have an NVIDIA developer account, which is quick and painless to create. After that, go to the CuDNN page where one needs to select Archived releases and then Download cuDNN v7.N.N (xxxx NN, YYYY), for CUDA 10.1 and then cuDNN Runtime Library for Ubuntu18.04 (Deb).

At the moment (as of today) this will download a file libcudnn7_7.6.5.32-1+cuda10.1_amd64.deb which needs to be installed with dpkg -i libcudnn7_7.6.5.32-1+cuda10.1_amd64.deb.

Updating the GLX setting

Here now comes the very interesting part – one needs to set up the GLX libraries. Reading the output of update-glx --help and then the output of update-glx --list glx:

$ update-glx --help
update-glx is a wrapper around update-alternatives supporting only configuration
of the 'glx' and 'nvidia' alternatives. After updating the alternatives, it
takes care to trigger any follow-up actions that may be required to complete
the switch.
 
It can be used to switch between the main NVIDIA driver version and the legacy
drivers (eg: the 304 series, the 340 series, etc).
 
For users with Optimus-type laptops it can be used to enable running the discrete
GPU via bumblebee.
 
Usage: update-glx <command>
 
Commands:
  --auto <name>            switch the master link <name> to automatic mode.
  --display <name>         display information about the <name> group.
  --query <name>           machine parseable version of --display <name>.
  --list <name>            display all targets of the <name> group.
  --config <name>          show alternatives for the <name> group and ask the
                           user to select which one to use.
  --set <name> <path>      set <path> as alternative for <name>.
 
<name> is the master name for this link group.
  Only 'nvidia' and 'glx' are supported.
<path> is the location of one of the alternative target files.
  (e.g. /usr/lib/nvidia)
 
$ update-glx --list glx
/usr/lib/mesa-diverted
/usr/lib/nvidia

I was tempted into using

update-glx --config glx /usr/lib/mesa-diverted

because in the end the Mesa GLX libraries should be used to drive the display via the AMD GPU.

Unfortunately, with this the nvidia kernel module was not loaded, nvidia-persistenced couldn’t run because the library libnvidia-cfg1 wasn’t found (not sure it was needed at all…), and with that there was also no way to run tensorflow on the GPU.

So what I did I tried

update-glx --auto glx

(which is the same as update-glx --config glx /usr/lib/nvidia), and rebooted, and decided to check afterwards what is broken.

To my big surprise, the AMD GPU still worked out of the box, including direct rendering, and the games I tried (Overload, Supraland via Wine) all worked without a hitch.

Not that I really understand why the GLX libraries that are seemingly now in use are from nvidia but work the same (if anyone has an explanation, that would be great!), but since I haven’t had any problems till now, I am content.

Checking GPU usage in tensorflow

Make sure that you remove tensorflow-rocm and reinstall tensorflow with GPU support:

pip3 uninstall tensorflow-rocm
pip3 install --upgrade tensorflow-gpu

After that a simple

$ python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
....(lots of output)
2020-09-02 11:57:04.673096: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3581 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
tf.Tensor(1093.4915, shape=(), dtype=float32)
$

should indicate that the GPU is used by tensorflow!

The R Keras package should also work out of the box and pick up the system-wide tensorflow which in turn picks the GPU, see this post for example code to run for tests.

Conclusion

All in all it was easier than expected, despite the dances one has to do for nvidia to get the correct libraries. What still puzzles me is the selection option in update-glx, which might need better support for secondary nvidia GPU cards.

,

Cory Doctorow: Get Radicalized for a mere $2.99

The ebook of my 2019 book RADICALIZED — finalist for the Canada Reads award, LA Library book of the year, etc — is on sale today for $2.99 on all major platforms!

Books

There are a lot of ways to get radicalized in 2020, but this is arguably the cheapest.

Planet Debian: Utkarsh Gupta: FOSS Activities in August 2020

Here’s my (eleventh) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 20th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

Well, this month we had DebConf! \o/
(more about this later this week!)

Anyway, here are the following things I did in Debian this month:

Uploads and bug fixes:

Other $things:

  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored php-dasprid-enum and php-bacon-baconqrcode for William and ruby-unparser, ruby-morpher, and ruby-path-exapander for Cocoa.

Goodbye GSoC! \o/

In May, I got selected as a Google Summer of Code student for Debian again! \o/
I am working on the Upstream-Downstream Cooperation in Ruby project.

The other 5 blogs can be found here:

Also, I log daily updates at gsocwithutkarsh2102.tk.

Since this is a wrap and whilst the daily updates are already available at the above site^, I’ll quickly mention the important points and links here.


Whilst working on Rubocop::Packaging, I contributed to more Ruby projects, refactoring their libraries a little bit and mostly fixing RuboCop issues and fixing issues that the Packaging extension reports as “offensive”.
Following are the PRs that I raised:


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my eleventh month as a Debian LTS and my second as a Debian ELTS paid contributor.
I was assigned 21.75 hours for LTS and 14.25 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

  • Issued ELA 255-1, fixing CVE-2020-14344, for libx11.
    For Debian 8 Jessie, these problems have been fixed in version 2:1.6.2-3+deb8u3.
  • Issued ELA 259-1, fixing CVE-2020-10177, for pillow.
    For Debian 8 Jessie, these problems have been fixed in version 2.6.1-2+deb8u5.
  • Issued ELA 269-1, fixing CVE-2020-11985, for apache2.
    For Debian 8 Jessie, these problems have been fixed in version 2.4.10-10+deb8u17.
  • Started working on the clamAV update; it’s a major bump from v0.101.5 to v0.102.4. There were lots of moving parts. Contacted upstream maintainers to help reduce the risk of regression. Came up with a patch to loosen the libcurl version requirement. Hopefully, the update can be rolled out soon!

Other (E)LTS Work:

  • I spent an additional 11.15 hours working on compiling the responses of the LTS survey and preparing a gist of it for its presentation during the Debian LTS BoF at DebConf20.
  • Triaged qemu, pillow, gupnp, clamav, apache2, and uwsgi.
  • Marked CVE-2020-11538/pillow as not-affected for Stretch.
  • Marked CVE-2020-11984/apache2 as not-affected for Stretch.
  • Marked CVE-2020-10378/pillow as not-affected for Jessie.
  • Marked CVE-2020-11538/pillow as not-affected for Jessie.
  • Marked CVE-2020-3481/clamav as not-affected for Jessie.
  • Marked CVE-2020-11984/apache2 as not-affected for Jessie.
  • Marked CVE-2020-{9490,11993}/apache2 as not-affected for Jessie.
  • Hosted Debian LTS BoF at DebConf20. Recording here.
  • General discussion on LTS private and public mailing list.

Until next time.
:wq for today.

Planet Debian: Russell Coker: BBB vs Jitsi

I previously wrote about how I installed the Jitsi video-conferencing system on Debian [1]. We used that for a few unofficial meetings of LUV to test it out. Then we installed Big Blue Button (BBB) [2]. The main benefit of Jitsi over BBB is that it supports live streaming to YouTube. The benefits of BBB are a better text chat system and a “whiteboard” that allows conference participants to draw shared diagrams. So if you have the ability to run both systems then it’s best to use Jitsi when you have so many viewers that a YouTube live stream is needed and to use BBB in all other situations.

One problem is with the ability to run both systems. Jitsi isn’t too hard to install if you are installing it on a VM that is not used for anything else. BBB is a major pain no matter what you do. The latest version of BBB is 2.2 which was released in March 2020 and requires Ubuntu 16.04 (which was released in 2016 and has “standard support” until April next year) and doesn’t support Ubuntu 18.04 (released in 2018 and has “standard support” until 2023). The install script doesn’t check for correct apt repositories and breaks badly with no explanation if you don’t have Ubuntu Multiverse enabled.

I expect that they rushed a release because of the significant increase in demand for video conferencing this year. But that’s no reason for demanding the 2016 version of Ubuntu, why couldn’t they have developed on version 18.04 for the last 2 years? Since that release they have had 6 months in which they could have released a 2.2.1 version supporting Ubuntu 18.04 or even 20.04.

The dependency list for BBB is significant, among other things it uses LibreOffice for the whiteboard. This adds to the pain of installing and maintaining it. It wouldn’t surprise me if some of the interactions between all the different components have security issues.

Conclusion

If you want something that’s not really painful to install and run then use Jitsi.

If you need YouTube live streaming use Jitsi.

If you need whiteboards and a good text chat system or if you generally need to run things like a classroom then BBB is a good option. But only if you can manage it, know someone who can manage it for you, or are happy to pay for a managed service provider to do it for you.

Planet Debian: Sylvain Beucler: Debian LTS and ELTS - August 2020

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In August, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 21.75h for LTS (out of my 30 max; all done) and 14.25h for ELTS (out of my 20 max; all done).

We had a Birds of a Feather videoconf session at DebConf20, sadly with varying quality for participants (from very good to unusable), where we shared the first results of the LTS survey.

There were also discussions about evaluating our security reactivity, which proved surprisingly hard to estimate (neither CVE release date and criticality metrics are accurate nor easily available), and about when it is appropriate to use public naming in procedures.

Interestingly ELTS gained new supported packages, thanks to a new sponsor -- so far I'd seen the opposite, because we were close to the EOL.

As always, there were opportunities to de-dup work through mutual cooperation with the Debian Security team, and LTS/ELTS similar updates.

ELTS - Jessie

  • Fresh build VMs
  • rails/redmine: investigate issue, initially no-action as it can't be reproduced on Stretch and isn't supported in Jessie; follow-up when it's supported again
  • ghostscript: global triage: identify upstream fixed version, distinguish CVEs fixed within a single patch, bisect non-reproducible CVEs, reference missing commit (including at MITRE)
  • ghostscript: fix 25 CVEs, security upload ELA-262-1
  • ghostscript: cross-check against the later DSA-4748-1 (almost identical)
  • software-properties: jessie triage: mark back for update, at least for consistency with Debian Stretch and Ubuntu (all suites)
  • software-properties: security upload ELA-266-1
  • qemu: global triage: update status and patch/regression/reproducer links for 6 pending CVEs
  • qemu: jessie triage: fix 4 'unknown' lines for qemu following changes in package attribution for XSA-297, work continue in September

LTS - Stretch

  • sane-backends: global triage: sort and link patches for 7 CVEs
  • sane-backends: fix dep-8 test and notify the maintainer,
  • sane-backends: security upload DLA-2332-1
  • ghostscript: security upload DLA 2335-1 (cf. common ELTS work)
  • ghostscript: rebuild ("give back") on armhf, blame armhf, get told it was a concurrency / build system issue -_-'
  • software-properties: security upload DLA 2339-1 (cf. common ELTS work)
  • wordpress: global triage: reference regression for CVE-2020-4050
  • wordpress: stretch triage: update past CVE status, work continues in September with probably an upstream upgrade 4.7.5 -> 4.7.18
  • nginx: cross-check my July update against the later DSA-4750-1 (same fix)
  • DebConf BoF + IRC follow-up

Documentation/Scripts

Kevin Rudd: 2GB: Morrison’s Retirement Rip-Off

E&OE TRANSCRIPT
RADIO INTERVIEW
BEN FORDHAM LIVE
2GB, SYDNEY

Topics: Superannuation; Cheng Lei consular matter

Ben Fordham
Now, superannuation to increase or not? Right now 9.5% of your wage goes towards super, and it sits there until you retire. From July 1 next year, the compulsory rate is going up. It will climb by half a per cent every year until it hits 12% in 2025. So it’s slowly going from 9.5% to 12%. Now that was legislated long before Coronavirus. Now we are in recession, and the government is hinting strongly that it’s ready to dump or delay the policy to increase super contributions. Now I reckon this is a genuine barbecue stopper. It’s not a question of Labor versus Liberal or left versus right. Some want their money now to help them out of hardship. Others say no, we have super for a reason and that is to save for the future. Former Prime Minister Kevin Rudd has got a strong view on this and he joins us on the line. Kevin Rudd, good morning to you.

Kevin Rudd
Morning, Ben. Thanks for having me on the program.

Ben Fordham
No problem. You want Scott Morrison and Josh Frydenberg to leave super alone.

Kevin Rudd
That’s right. And Mr Morrison did promise to maintain this policy which we brought in when he went to the people at the last election. And remember, Ben, back in 2014, they already deferred this for five years. Otherwise, this thing would be done and dusted and it’d be all the way up to 12 by now. I’m just worried we’re going to find one excuse after another to kick this into the Never Never Land. And the result is that working families, those people listening to your program this morning, are not going to have a decent nest egg for their retirement.

Ben Fordham
All right, most of those hard-working Aussies are telling me that they would like the option of having the money right now.

Kevin Rudd
Well, the problem with super is that if you open the floodgates and allow people what Morrison calls ‘early access’, then what happens is they hollow out their savings: if you take out $10,000 now as a 35-year-old, by the time you retire you’re going to be $65,000 to $130,000 worse off. That’s how it builds up. So I’m really worried about that. And also, you know Ben, we’re living longer. Once upon a time, we used to retire at 65 and we’d all be dead by 70. Guess what, that’s not the case anymore. People are living to 80, 90 and the young people listening to your program, or a large number of them, are going to be around until they’re 100. So what we have for retirement income is really important, otherwise you’re back on the age pension which, despite changes I made in office, is not hugely generous.

Ben Fordham
I’m sure you respect the view of the Reserve Bank governor Philip Lowe. Philip Lowe says lifting the super guarantee would reduce wages, cut consumer spending and cost jobs. So he’s got a very different view to you.

Kevin Rudd
Well, I’ve actually had a look at what Governor Lowe had to say. I’ve been reading his submission in the last 24 hours or so. On the question of the impact on wages, yes, he says it would be a potential deferral of wages, but he doesn’t express a view one way or the other to whether that is good or bad. But on employment and the argument used by the government that this is somehow some negative effect on employment, it just doesn’t stack up. By the way, Ben, remember, if this logic held that somehow if we don’t have the superannuation guarantee levy going up, that wages would increase; well, after the government deferred this for five years, starting from 2014, guess what, working people got no increase in their super, but also their wages have flatlined as well. I’m just worried about how this all lands at the end for working people wanting to have a decent retirement.

Ben Fordham
Okay, but don’t we need to be aware of the times that we’re living in? You said earlier, you’re concerned that the government’s looking for excuses to put this thing off or kill this thing off. Well, we do have a global health pandemic at the moment. Isn’t that the ultimate reason why we should be adjusting our position?

Kevin Rudd
There’s always a crisis. I took the country through the global financial crisis, which threw every economy in the world, every major one, into recession. We managed to avoid it here in Australia through a combination of good policy and some other factors as well. It didn’t cross our mind to kill super during that period of time, or superannuation increases. It was simply not in our view the right approach, because we were concerned about keeping the economy going in the here and now, but also making proper preparations for the future. But then here’s the rub. If 9% is good enough for everybody, or 9.5% where it is at the moment, then why are the politicians and their staffers currently on 15.4%? Very generous for them. Not so generous for working families. That’s what worries me.

Ben Fordham
We know that you keep a keen eye on China. We wake up this morning to the news that Chinese authorities have detained an Australian journalist, Cheng Lei, without charge. Is the timing of this at all suspicious?

Kevin Rudd
You know, Ben, I don’t know enough of the individual circumstances surrounding this case. I don’t want to say anything which jeopardizes the individual concerned. All I’d say is, the Australian Government has a responsibility to look after any Australian — Chinese Australian, Anglo Saxon Australian, whoever Australian — if they get into strife abroad. And I’m sure, knowing the professionalism of the Australian Foreign Service, that they’re doing everything physically possible at present to try and look after this person.

Ben Fordham
Yeah, we know that Marise Payne’s doing that this morning. We appreciate you jumping on the phone and talking to us.

Kevin Rudd
Thanks, Ben. Appreciate it.

Ben Fordham
Former Prime Minister Kevin Rudd, I reckon this is one of these issues where you can’t just put a line down the middle of the page and say, ‘okay, Labor supporters are going to think this way and Liberal supporters are going to think that way’. I think there are two schools of thought and it depends on your age, it depends on your circumstance, it depends on your attitude. Some say ‘give me the money now, it’s my money, not yours’. Others say ‘no, we have super for a reason, it’s there for our retirement’. Where do you stand? It’s 7.52 am.

The post 2GB: Morrison’s Retirement Rip-Off appeared first on Kevin Rudd.

Kevin RuddSunrise: Protecting Australian Retirees

E&OE TRANSCRIPT
TELEVISION INTERVIEW
SUNRISE, SEVEN NETWORK
1 SEPTEMBER 2020

Journalist
Now two former Labor prime ministers have taken aim at the government, demanding it go ahead with next year’s planned increase to compulsory super. Paul Keating introduced the scheme back in 1992 and says workers should not miss out.

Paul Keating
[Recording] They want to gyp ordinary people by two and a half per cent of their income for the rest of their life. I mean, the gall of it. I mean, the heartlessness of it.

Journalist
Kevin Rudd, who moved to increase super contributions as well, says the rise to 12% in the years ahead should not be stalled.

Kevin Rudd
[Recording] This is a cruel assault by Morrison on the retirement income of working Australians and using the cover of COVID to try and get away with it.

Journalist
The government is yet to make an official decision. Joining me now is the former prime minister Kevin Rudd. Kevin Rudd, good morning to you. Rather than being a cruel assault by the federal government, is it an acknowledgment that we’re going into the worst recession since the Depression?

Kevin Rudd
Well, you know, Kochie, there’s always been an excuse not to do super and not to continue with super. And what we’ve seen in the past is the Liberal Party at various stages just trying to kill this scheme which Paul Keating got going for the benefit of working Australians all those years ago. They had no real excuse for deferring this move from nine to 12% when they did it back in 2014, and this would be a further deferral. Look, what’s really at stake here, Kochie, is just working families watching your program this morning having a decent retirement. That’s why Paul brought it in.

Journalist
Sure.

Kevin Rudd
That’s why we both decided to come and speak out.

Journalist
I absolutely agree with it, but it’s a matter of timing. What do you say to all the small business owners out there who are just trying to keep afloat? To say, hey gang, you’re gonna have to pay an extra half a per cent in super, that you’re going to have to pay on a quarterly basis, to add to your bills again, to try and survive this.

Kevin Rudd
Well, what Mr Morrison is saying to those small business folks is that the reason we don’t want to do this super increase is because it’s going to get in the road of a wage increase. And you can’t have this both ways, mate. Either you’ve got an employer adding 0.5% by way of a wage increase, or by way of super. That’s the bottom line here.

Journalist
OK.

Kevin Rudd
You can’t simply argue that this is all going to disappear into some magic pudding. The bottom line is: the reason we did it this way, and Paul before me, was a small increment each year.

Journalist
Right.

Kevin Rudd
But it builds up as you know, you’re a finance guy Kochie, into a ginormous nest egg for people.

Journalist
Absolutely.

Kevin Rudd
And for the country.

Journalist
I do not disagree with the overall theory of it. It’s just in the timing. So what you’re saying to Australian bosses around the country is to go to your staff and say, ‘no, you’re not going to get a pay increase, because I’m going to put more into your super, and you’ve got to like it or lump it’.

Kevin Rudd
Well, Kochie, if that was the case, why is it that we’ve had no super increase in the guarantee levy over the last five or six years, and wages growth has been absolutely doodly-squat over that period of time? In other words, the argument for the last five years was that we couldn’t do an SGL increase from nine to 12 because it would impact on wages. Guess what, we got no increase in super and no increase in real wages. And it just doesn’t hold, mate.

Journalist
The Reserve Bank is saying don’t do it. Social services group are saying don’t do it.

Kevin Rudd
Well, mate, if you look carefully at what the governor of the RBA says, he says on the impact on wages, yes, it is, in his language, a wage deferral, on which he does not express an opinion. And as for the employment, the jobs impact, he says he does not have a view. I think we need to be very careful in reading the detail of what governor Lowe has had to say. Our argument is just: what’s decent for working families? And why are the pollies and their staffers getting 15.4% while working families, whom Paul tried to look after with this massive reform 30 years ago, are stuck at nine? I don’t think that’s fair. It’s a double standard.

Journalist
Yep. I absolutely agree with you on that as well.

The post Sunrise: Protecting Australian Retirees appeared first on Kevin Rudd.

Planet DebianJunichi Uekawa: September.

September. Recently I've been writing and reading more golang code and I feel more comfortable with it. However, every day I feel frustrated by the lack of features.

Worse Than FailureCodeSOD: Unknown Purpose

Networks are complex beasts, and as they grow, they get more complicated. Diagnosing and understanding problems on networks rapidly gets hard. “Fortunately” for the world, IniTech ships one of those tools.

Leonore works on IniTech’s protocol analyzer. As you might imagine, a protocol analyzer gathers a lot of data. In the case of IniTech’s product, the lowest level of data acquisition is frequently sampled voltage measurements over time. And it’s a lot of samples- depending on the protocol in question, it might need samples on the order of nanoseconds.

In Leonore’s case, those raw voltage samples are the “primary data”. Now, there are all sorts of cool things that you can do with that primary data, but those computations become expensive. If your goal is to be able to provide realtime updates to the UI, you can’t do most of those computations; you do those outside of the UI update loop.

But you can do some of them. Things like level crossings and timing information can be built quickly enough for the UI. These values are “secondary data”.

As data is collected, there are a number of other sections of the application which need to be notified: the UI and the various high-level analysis components. Architecturally, Leonore’s team took an event-driven approach to this. As data is collected, a DataUpdatedEvent fires. The DataUpdatedEvent fires twice: once for the “primary data” and once for the “secondary data”. These two events always happen in lockstep, and they happen so closely together that, for all other modules in the application, they can safely be considered simultaneous. No component in the application ever cares about only one of them; they always want to see both the primary and the secondary data.

So, to review: the data collection module outputs a pair of data updated events, one containing primary data, one containing secondary data, and can never do anything else, and these two events could basically be viewed as the same event by everything else in the application.

Which raises a question about this C++/COM enum, used to tag the different events:

  enum DataUpdatedEventType 
  {
    [helpstring("Unknown data type.")] UnknownDataType = 0, 
    [helpstring("Primary data.")] PrimaryData = 1,
    [helpstring("Secondary data.")] SecondaryData = 2,
  };

As stated, the distinction between primary/secondary events is unnecessary. In fact, sending two events makes all the consuming code more complicated, because in many cases, they can’t start working until they’ve received the secondary data, and thus have to cache the primary data until the next event arrives.

But that’s minor. The UnknownDataType is never used. It can never be used. There is no case in which the data collection module will ever output that. There’s no reason why it would ever need to output that. None of the consumers are prepared to handle that; sending an UnknownDataType would almost certainly cause a crash in most configurations.

So why is it there? I’ll let Leonore explain:

The only answer I can give is this: When this was written, half of us didn’t know what we were doing most of the time, and most of us didn’t know what we were doing half of the time. So now there’s an enum in the code base that has never been used and, I would submit, CAN never be used. Or maybe I ought to say SHOULD never be used. I would just delete it, but I’ve never quite been able to bring myself to do so.


Cryptogram North Korea ATM Hack

The US Cybersecurity and Infrastructure Security Agency (CISA) published a long and technical alert describing a North Korea hacking scheme against ATMs in a bunch of countries worldwide:

This joint advisory is the result of analytic efforts among the Cybersecurity and Infrastructure Security Agency (CISA), the Department of the Treasury (Treasury), the Federal Bureau of Investigation (FBI) and U.S. Cyber Command (USCYBERCOM). Working with U.S. government partners, CISA, Treasury, FBI, and USCYBERCOM identified malware and indicators of compromise (IOCs) used by the North Korean government in an automated teller machine (ATM) cash-out scheme­ — referred to by the U.S. Government as “FASTCash 2.0: North Korea’s BeagleBoyz Robbing Banks.”

The level of detail is impressive, as seems to be common in CISA’s alerts and analysis reports.

Planet DebianRuss Allbery: Review: Riot Baby

Review: Riot Baby, by Tochi Onyebuchi

Publisher: Tor.com
Copyright: January 2020
ISBN: 1-250-21476-9
Format: Kindle
Pages: 176

From Ella's childhood, she sees visions of the future. They come at first with nose bleeds and other physical symptoms, but their worst aspect is that they're sad and dark. Ella is black, as are those around her, and their futures are full of shootings and gangs, death and trouble. As she grows older, she develops her Thing: powers that let her bend, move, and destroy things with her mind, and later to become invisible, teleport, and reshape the world. Ella has superpowers.

Ella is not the viewpoint character of most of Riot Baby, however. That is Kev, her younger brother, the riot baby of the title, born in South Central on the day of the Rodney King riots. Kev grows up in Harlem where they move after the destruction from the riots: keeping Ella's secret, making friends, navigating gang politics, watching people be harassed by the cops. Growing up black in the United States. Then Ella sees something awful in the future and disappears, and some time afterwards Kev ends up in Rikers Island.

One of the problems with writing reviews of every book I read is that sometimes I read books that I am utterly unqualified to review. This is one of those books. This novella is about black exhaustion and rage, about the experience of oppression, about how it feels to be inside the prison system. It's also a story in dialogue with an argument that isn't mine, between the patience and suffering of endurance and not making things worse versus the rage of using all the power that one has to force a change. Some parts of it sat uncomfortably and the ending didn't work for me on the first reading, but it's not possible for me to separate my reactions to the novella from being a white man and having a far different experience of the world.

I'm writing a review anyway because that's what I do when I read books, but even more than normal, take this as my personal reaction expressed in my quiet corner of the Internet. I'm not the person whose opinion of this story should matter.

In many versions of this novella, Ella would be the main character, since she's the one with superpowers. She does get some viewpoint scenes, but most of the focus is on Kev even when the narrative is following Ella. Kev trying to navigate the world, trying to survive prison, seeing his friends murdered by the police, and living as the target of oppression that Ella can escape. This was an excellent choice. Ella wouldn't have been as interesting of a character if the story were more focused on her developing powers instead of on the problems that she cannot solve.

The writing is visceral, immediate, and very evocative. Onyebuchi builds the narrative with a series of short and vividly-described moments showing the narrowing of Kev's life and Ella's exploration of her growing anger and search for a way to support and protect him.

This is not a story about nonviolent resistance or about the arc of the universe bending towards justice. Ella confronts this directly in a memorable scene in a church towards the end of the novella that for me was the emotional heart of the story. The previous generations, starting with Kev and Ella's mother, preach the gospel of endurance and survival and looking on the good side. The prison system eventually provides Kev a path to quiet and a form of peace. Riot Baby is a story about rejecting that approach to the continuing cycle of violence. Ella is fed up, tired, angry, and increasingly unconvinced that waiting for change is working.

I wasn't that positive on this story when I finished it, but it's stuck with me since I read it and my appreciation for it has grown while writing this review. It uses the power fantasy both to make a hard point about the problems power cannot solve and to recast the argument about pacifism and nonviolence in a challenging way. I'm still not certain what I think of it, but I'm still thinking about it, which says a lot. It deserves the positive attention that it's gotten.

Rating: 7 out of 10

Planet DebianPaul Wise: FLOSS Activities August 2020

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian: restarted RAM eating service
  • Debian wiki: unblock IP addresses, approve accounts

Sponsors

The cython-blis/preshed/thinc/theano bugs and smart-open/python-importlib-metadata/python-pyfakefs/python-zipp/python-threadpoolctl backports were sponsored by my employer. All other work was done on a volunteer basis.

,

Planet DebianChris Lamb: Free software activities in August 2020

Here is another monthly update covering what I have been doing in the free software world during August 2020 (previous month):

  • Filed a pull request against django-enumfield, a library that provides an enumeration-like model field for the Django web development framework. The classproperty helper has been moved to django.utils.functional in newer versions of Django. [...]

  • Transferred the maintainership of my Strava Enhancement Suite Chrome extension to improve the user experience on the Strava athletic tracker to Pavel Dolecek.

  • As part of my role of being the assistant Secretary of the Open Source Initiative and a board director of Software in the Public Interest, I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions, etc.

  • Filed a pull request for JSON-C, a reference counting library to allow you to easily manipulate JSON objects from C in order to make the documentation build reproducibly. [...]

  • Reviewed and merged some changes to my django-auto-one-to-one library for Django from Dan Palmer (which automatically creates and destroys associated model instances) to not configure signals for models that aren't installed and to honour INSTALLED_APPS during model setup. [...]

  • Merged a pull request from Michael K. to cleanup the codebase after dropping support for Python 2 and Django 1.x [...] in my django-slack library which provides a convenient wrapper between projects using the Django and the Slack chat platform.

  • Updated my django-staticfiles-dotd utility that concatenates Debian .d-style directories containing Javascript and CSS to drop unquote usage from the six compatibility library. [...]

I uploaded Lintian versions 2.86.0, 2.87.0, 2.88.0, 2.89.0, 2.90.0, 2.91.0 and 2.92.0, as well as made the following changes:

  • New features:

    • Check for StandardOutput= and StandardError= fields that use the deprecated syslog or syslog-console systemd facilities. (#966617)
    • Add support for clzip as an alternative for lzip. (#967083)
    • Check for User=nobody and Group=nogroup in systemd .service files. (#966623)
  • Bug fixes:

  • Reporting/interface:

  • Misc:

    • Add justification for the use of the lzip dependency in a previous debian/changelog entry. (#966817)
    • Update the generate-tag-summary release script to reflect the change of the tag definition filename extension from .desc to .tag. [...]
    • Revert a change to the spelling-error-in-rules-requires-root tag's severity; this is not a "spelling" check in the sense that it does not use our dictionary. [...]
    • Drop an unused $skip_tag argument in the extract_service_file_values routine. [...]

§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • Filed a pull request for JSON-C, a reference counting library to allow you to easily construct JSON objects from C in order to make the documentation build reproducibly. [...]

  • In Debian, I:

  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.

  • Filed a build-failure bug against the muroar package that was discovered while doing reproducibility testing. (#968189)

  • We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, I updated the self-serve package rescheduler to use HTML <pre> tags when dumping any debugging data. [...]


§


diffoscope

I made the following changes to diffoscope, including preparing and uploading versions 155, 156, 157 and 158 to Debian:

  • New features:

    • Support extracting the data contained within PGP signed data. (#214)
    • Try files named .pgp against pgpdump(1) to determine whether they are Pretty Good Privacy (PGP) files. (#211)
    • Support multiple options for all file extension matching. [...]
  • Bug fixes:

    • Don't raise an exception when we encounter XML files with <!ENTITY> declarations inside the Document Type Definition (DTD), or when a DTD or entity references an external resource. (#212)
    • pgpdump(1) can successfully parse some binary files, so check that the parsed output contains something sensible before accepting it. [...]
    • Temporarily drop gnumeric from the Debian build-dependencies as it has been removed from the testing distribution. (#968742)
    • Correctly use fallback_recognises to prevent matching .xsb binary XML files.
    • Correctly identify signed PGP files as file(1) returns "data". (#211)
  • Logging improvements:

    • Emit a message when ppudump version does not match our file header. [...]
    • Don't use Python's repr(object) output in "Calling external command" messages. [...]
    • Include the filename in the "... not identified by any comparator" message. [...]
  • Codebase improvements:

    • Bump Python requirement from 3.6 to 3.7. Most distributions are either shipping with Python 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but also cumbersome to test locally. [...]
    • Drop some unused imports [...], an unnecessary dictionary comprehension [...] and some unnecessary control flow [...].
    • Correct typo of "output" in a comment. [...]
  • Release process:

    • Move generation of debian/tests/control to an external script. [...]
    • Add some URLs for the site that will appear on PyPI.org. [...]
    • Update "author" and "author email" in setup.py for PyPI.org and similar. [...]
  • Testsuite improvements:

    • Update PPU tests for compatibility with Free Pascal versions 3.2.0 or greater. (#968124) [...][...][...]
    • Mark that our identification test for .ppu files requires ppudump version 3.2.0 or higher. [...]
    • Add an assert_diff helper that loads and compares a fixture output. [...][...][...][...]
  • Misc:


§


Debian

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Investigated and triaged chrony [...], golang-1.8 [...], golang-go.crypto [...], golang-golang-x-net-dev [...], icingaweb2 [...], lua5.3 [...], mongodb [...], net-snmp, php7.0 [...], qt4-x11, qtbase-opensource-src, ruby-actionpack-page-caching [...], ruby-doorkeeper [...], ruby-json-jwt [...], ruby-kaminari [...], ruby-kaminari [...], ruby-rack-cors [...], shiro [...] & squirrelmail.

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, participating in mailing list discussions, attending the Debian LTS BoF at DebConf20 etc.

  • Issued DLA 2313-1 and ELA-257-1 to fix a privilege escalation vulnerability in Net-SNMP.

  • Issued ELA-263-1 for qtbase-opensource-src and ELA-261-1 for qt4-x11, two components of the cross-platform C++ application framework Qt. A specially-crafted XBM image file could have caused a buffer overread.

  • Issued ELA-268-1 to address unsafe serialisation vulnerabilities that were discovered in the PHP-based squirrelmail webmail client.

  • Issued DLA 2311-1 for zabbix, the PHP-based monitoring system to fix a potential cross-site scripting vulnerability via <iframe> HTML elements.

  • Issued DLA 2334-1 to fix a denial of service vulnerability in ruby-websocket-extensions, a library for managing long-lived HTTP 'WebSocket' connections.

  • Issued DLA 2345-1 for PHP 7.0 as it was discovered that there was a use-after-free vulnerability when parsing PHAR files, a method of putting entire PHP applications into a single file.

  • I also updated the Extended LTS website to pluralise the "Related CVEs" text in announcement emails [...] and dropped some trailing whitespace [...].

You can find out more about the project via the following video:


§


Uploads to Debian

  • memcached:

    • 1.6.6-2 — Enable TLS capabilities by default. (#968603)
    • 1.6.6-3 — Add libio-socket-ssl-perl to test TLS support and perform a general package refresh.
  • python-django:

    • 2.2.15-1 (unstable) — New upstream bugfix release
    • 3.1-1 (experimental) — New upstream release.
    • 2.2.15-2 (unstable) & 3.1-2 (experimental) — Set the same PYTHONPATH when executing the runtime tests as we do in the package build. (#968577)
  • docbook-to-man:

    • 2.0.0-43 — Refresh packaging, and upload some changes from the Debian Janitor.
    • 2.0.0-44 — Fix compatibility with GCC 10, restoring the missing /usr/bin/instant binary. (#968900)
  • hiredis (1.0.0-1) — New upstream release to experimental.

LongNowMichael McElligott, A Staple of San Francisco Art and Culture, Dies at 50

It is with great sadness that we share the news that Michael McElligott, an event producer, thespian, writer, long-time Long Now staff member, and relentless promoter of the San Francisco avant-garde, has died. He was 50 years old.

Michael battled an aggressive form of brain cancer over the past year. He kept his legendary sense of humor throughout his challenging treatment. He died surrounded by family, friends, and his long-time partner, Danielle Engelman, Long Now’s Director of Programs.

Most of the Long Now community knew Michael as the face of the Conversations at The Interval speaking series, which began in 02014 with the opening of Long Now’s Interval bar/cafe. But he did much more than host the talks. For the first five years of the series, each of the talks was painstakingly produced by Michael. This included finding speakers, developing the talk with the speakers, helping curate all the media associated with each talk, and oftentimes hosting the talks. Many of the production ideas explored in this series by Michael became adopted across other Long Now programs, and we are so thankful we got to work with him.

An event producer since his college days, Michael was active in San Francisco’s art and theater scene as a performer and instigator of unusual events for more than 20 years. From 01999 to 02003 Michael hosted and co-produced The Tentacle Sessions, a monthly series spotlighting accomplished individuals in the San Francisco Bay Area scene—including writers, artists, and scientists. He has produced and performed in numerous alternative theater venues over the years including Popcorn Anti-Theater, The EXIT, 21 Grand, Stage Werx, and the late, great Dark Room Theater. He also produced events for Speechless LIVE and The Battery SF.

Michael was a long-time blogger (usually under his nom de kunst mikl-em) for publications including celebrated arts magazine Hi Fructose and award-winning internet culture site Laughing Squid. His writing can be found in print in the Hi Fructose Collected Edition books and Tales of The San Francisco Cacophony Society in which he recounted some of his adventures with that noted countercultural group.

Beginning in the late 01990s as an employee at Hot Wired, Michael worked in technology in various marketing, technical and product roles. He worked at both startups and tech giants; helped launch products for both consumers and enterprise; and worked with some of the best designers and programmers in the industry.

Originally from Richmond, Virginia, he co-founded a college radio station and an underground art space before moving to San Francisco. In the Bay Area, he was involved with myriad artistic projects and creative ventures, including helping start the online radio station Radio Valencia.

Michael had been a volunteer and associate of Long Now since 02006; he helped at events and Seminars, wrote for the blog and newsletter, and was a technical advisor. In 02013 he officially joined the staff to help raise funds, run social media, and design and produce the Conversations at The Interval lecture series.

To honor Michael’s role in helping to fundraise for and build The Interval, we have finally completed the design on the donor wall which will be dedicated to him. It should be finished in the next few weeks and will bear a plaque remembering Michael. You can watch a playlist of all of Michael’s Interval talks here. Below, you can find a compilation of moments from the more than a hundred talks that Michael hosted over the years.

“This moment is really both an end and a beginning. And like the name The Interval, which is a measure of time, and a place out of time, this is that interval.”

Michael McElligott

Kevin RuddPress Conference: Morrison’s Assault on Superannuation

E&OE TRANSCRIPT
PRESS CONFERENCE
31 AUGUST 2020
BRISBANE

Kevin Rudd
The reason I’m speaking to you this afternoon, here in Brisbane, is that Paul Keating, former Prime Minister of Australia, and myself, have a deep passion for the future of superannuation, retirement income adequacy for working families for the future, the future of our national savings and the national economy. So former prime minister Paul Keating is speaking to the media now in Sydney, and I’m speaking to national media now in Brisbane. And I don’t think Paul and I have ever done a joint press conference before, albeit socially distanced between Brisbane and Sydney. But the reason we’re doing it today is because this is a major matter of public importance for the country.

Let me tell you why. Keating is the architect of our national superannuation policy. This was some 30 years ago. And as a result of his efforts, we now have the real possibility of decent retirement income policy for working families for the first time in this country’s history. And on top of that, we’ve accumulated something like $3 trillion worth of national savings. If you ask the question today, why is it that Australia still has a triple-A credit rating around the world, it’s because we have a bucketload of national savings. And so Paul Keating should be thanked for that, not just for the macroeconomy though, but also for delivering this enormous dividend to working families and giving them retirement dignity. Of course, what we did in government was announce that we would move the superannuation guarantee level from 9% to 12%. And we legislated to that effect. And prior to the last election, Mr Morrison said that that was also Liberal and National Party policy as well. What Mr Keating and I are deeply concerned about is whether, in fact, this core undertaking to Australian working families is now in the process of being junked.

There are two arguments, which I think we need to bear in mind. The first is that already we’ve had the Morrison Government rip out $40 billion-plus from people’s existing superannuation accounts. And the reason why they’ve done that is because they haven’t had an economic policy alternative other than to say to working families, if you’re doing it tough as a result of the COVID crisis, then you can go and raid your super. Well, that’s all very fine and dandy, but when those working people then go to retire in the decades ahead, they will have gutted their retirement income. And that’s because this government has allowed them to do that, and in fact forced them to do that, in the absence of an economic policy alternative. Therefore, we’ve had this slug taken to the existing national superannuation pile. But furthermore, the second big slug is this indication increasingly from both Mr Morrison and Mr Frydenberg that they’re now going to betray the Australian people, betray working families, by repudiating their last pre-election commitment by abandoning the increase from 9.5% where it is now to 12%. This is a cruel assault by Morrison on the retirement income of working Australians, using the cover of COVID to try and get away with it.

The argument which the Australian Government seems to be advancing to justify this most recent assault on retirement income policy is that they say that if we go ahead with increasing the superannuation guarantee level from 9.5% to 10, to 10.5, to 11, to 11.5, to 12 in the years that are to come, that that will somehow depress natural wages growth in the Australian economy. Pigs might fly. That is the biggest bullshit argument I have ever heard against going ahead with decent provision for people’s superannuation savings for the future. There is no statistical foundation for it. There is no logical foundation for it. There is no data-based argument to sustain it. This is an increment of half-a-percent a year out for the next several years until we get to 12%. What is magic about 12%? It’s fundamental in terms of the calculations that have been done to provide people with decent superannuation adequacy, retirement income adequacy, when they stop working. That’s why we’re doing it. But the argument that somehow by not proceeding with the increase from 9.5 to 12%, we’re going to deny people a proper increase in wages in the period ahead is an absolute nonsense. There is no basis to that argument whatsoever.

And what does it mean for an average working family? If you’re currently on $70,000 a year and superannuation is frozen at 9.5%, and not increased to 12, by the time you retire, you’re going to be at least $70,000 worse off than would otherwise be the case. Why have we, in successive Labor governments, been so passionate about superannuation policy? Because we believe that every Australian, every working family, should have the opportunity for some decency, dignity and independence in their retirement. And guess what: as we live longer, we’re going to spend longer in retirement and this is going to mean more and more for the generations to come. Of course, what’s the alternative if we don’t have superannuation adequacy, and if this raid on super continues under cover of COVID again? Well, it means that Mr Morrison and Mr Frydenberg in the future are going to be forcing more and more people onto the age pension, and my challenge to Australians is simply this: do you really trust your future and your retirement to Mr Morrison’s generosity in years to come on the age pension? It’s a bit like saying that you trust Mr Morrison in terms of his custodianship of the aged care system in this country. Successive conservative governments have never supported effective increases to the age pension, and they’ve never properly supported the aged care sector either.

But the bottom line is, if you deny people dignity and independence through the superannuation system, and the measures which the current conservative government is undertaking and foreshadowing take us further in that direction, then there’s only one course left for people when they retire, and that’s to go onto the age pension. One of the things I’m proudest of in our period in government was that we brought about the biggest single adjustment to the age pension in its history. It was huge, something like $65. And we made that as a one-off adjustment which was indexed to the future. But let me tell you, that would never happen under a conservative government. And therefore to entrust people’s future retirement to the future generosity of whichever conservative government might be around at the time is frankly folly. The whole logic of us having a superannuation system is that every working Australian can have their own independent dignity in their own retirement. That’s what it’s about.

So my appeal to Mr Morrison and Mr Frydenberg today is: Scotty, Joshy, think about it again. This is a really bad idea. My appeal to them as human beings as they look to the retirement of people who are near and dear to them in the future is: don’t take a further meataxe to the retirement income of working families for the future. It’s just un-Australian. Thank you.

Journalist
Well, what do you think of the argument that delaying the superannuation guarantee increase would actually give people more money in their take home pay? I know you’ve used fairly strong language.

Kevin Rudd
Well, it is a fraudulent argument. There’s nothing in the data to suggest that that would happen. Let me give you one small example. In the last seven or eight years, we’ve had significant productivity growth in the Australian economy, in part because of some of the reforms we brought about in the economy during our own period in government. These things flow through. But if you look at productivity growth on the one hand, and look at the negligible growth in real wage levels over that same period of time, there is no historical argument to suggest that somehow by sacrificing superannuation increases that you’re going to generate an increase in average wages and average income. There’s simply nothing in the argument whatsoever.

So therefore, I can only conclude that this is a made-up argument by Mr Morrison using COVID cover, when in fact, what is their motivation? The Liberal Party have never liked the compulsory superannuation scheme, ever. They’ve opposed it all the way through. And I can only think that the reason for that is because Mr Keating came up with the idea in the first place. And on top of it, that because we now have such large industry superannuation funds around Australia, and $3 trillion therefore worth of muscle in the superannuation industry, that somehow represents a threat to their side of politics. But the argument that this somehow is going to effect wage increases for future average Australians is simply without logical foundation.

Journalist
Sure, but you’re comparing historical data with not exactly like-for-like given we’re now in a recession and the immediate future will be deeply in recession. So, in terms of the argument that delaying [inaudible] will end up increasing take-home pay packets. Do you admit that, you know, by looking at historical data and looking at the current trajectory it’s not like for like?

Kevin Rudd
The bottom line is we’ve had relatively flat growth in the economy in the last several years, and I have seen so many times in recent decades conservative parties [inaudible] that somehow, by increasing superannuation, we’re going to depress average income levels. Remember, the conservatives have already delayed the implementation of this increase of 2.5% since they came to power in 2013-14, whatever excuses they managed to marshal at the time in so doing. But the bottom line is, as this data indicates, that hasn’t resulted in some significant increase in wages. In fact, the data suggests the reverse.

So what I’m suggesting to you is: for them to argue that a 0.5% a year increase in the superannuation guarantee level is going to send a torpedo amidships into the prospects of wage increases for working Australians makes no sense. What does make sense is the accumulation of those savings over a lifetime. If Paul Keating hadn’t done what he did back then, there’d be no $3 trillion worth of Australian national savings. Paul had the vision to do it. Good on him. We tried to complete that vision by going from 9 to 12. And this mob have tried to stop it. But the real people who miss out are your parents, your parents, and I’m sorry to tell you both, you’ll both get older, and you too in terms of the adequacy of your retirement income when the day comes.

Journalist
So if it’s so important then, why did you only increase it by 0.5% during your six years in government, sharing that period of course with Julia Gillard?

Kevin Rudd
Well, the bottom line is: we decided to increase it gradually, so that we would not present any one-off assault to the ability of employers and employees to enjoy reasonable wage increases. It was a small increase every year and, guess what: it continues to be a very small increase every year until we get to 12. The other thing I’d say, which I haven’t raised so far in our discussion today, is that for most of the last five years, I’ve been in the United States. I run an American think tank. When I’ve traveled around the world and people know of my background in Australian politics, I am always asked this question: how did you guys come up with such a brilliant national savings policy? Very few, if any, other countries in the world have this. But what we have done is a marvelous piece of long-term planning for generations of Australians. And with great macroeconomic benefit for the Australian economy in terms of this pool of national savings. We’re the envy of the world.

And yet what are we doing? Turning around and trashing it. So the reason we were gradual about it was to be responsible, not give people a sudden 3% hit, to tailor it over time, and we did so, just like Paul did with the original move from zero, first to 3, then 6 to 9. It happened gradually. But the cumulative effect of this over time for people retiring in 10, 20, 30, 40 years’ time is enormous. And that’s why these changes are so important to the future. As you know, I rarely call a press conference. Paul doesn’t call many press conferences either, but he and I are angry as hell that this mob have decided, it seems, to take a meataxe to this important part of our national economic future and our social wellbeing. That’s what it’s about.

Journalist
So we know that [inaudible] super accounts have been wiped completely. What damage do you think that would do if it’s extended? So that people can continue to access their super?

Kevin Rudd
The damage it does for individual working Australians, as I said before, is that it throws them back onto the age pension. And the age pension is simply the absolute basic backbone, the absolute basic provision, for people’s retirement for the future, if no other options exist. And as I said, in office, we undertook a fundamental reform to take it from below poverty level to above poverty level. But if you want for the future, for folks who are retiring to look at that as their option, well, if you continue to destroy this nation’s superannuation nest egg, that’s exactly where you’re going to end up. I can’t understand the logic of this. I thought conservatives were supposed to favour thrift. I thought conservatives were supposed to favour saving. Their accusation against those of us who come from the Labor side of politics apparently is that we love to spend; actually, we like to save, and we do it through a national savings policy. Good for working families and good for the national economy.

And I think it’s just wrong that people have as their only option there for the future to be thrown back on the age pension and on that point, apart from the wellbeing of individual families, think about the impact in the future on the national budget. Most countries say to me that they envy our national savings policy because it takes pressure off the national budget in the future. Why do you think so many of the ratings agencies are marking economies down around the world? Because they haven’t made adequate future provision for retirement. They haven’t made adequate provision for the future superannuation entitlements of government employees as well. So what we have with the Future Fund, which I concede readily was an initiative of the conservative government, but supported by us on a bipartisan basis, is dealing with that liability in terms of the retirement income needs of federal public servants. But in terms of the rest of the nation, that’s what our national superannuation policy was about. Two arms to it. So I can’t understand why a conservative government would want to take the meataxe to [inaudible].

Journalist
Following on from your comments in 2018 when you said national Labor should look at distancing themselves from the CFMEU, do you think that’s something Queensland Labor should do given the events of last week?

Kevin Rudd
Who are you from by the way?

Journalist
The Courier-Mail.

Kevin Rudd
Well, when the Murdoch media ask me a question, I’m always skeptical in terms of why it’s been asked. So I don’t know the context of this particular question. I simply stand by my historical comments.

Journalist
In light of what happened last week, Michael Ravbar came out quite strongly against Queensland Labor, saying that they have no economic plan and that the left faction was a bit out of touch with what everyone was normally thinking. So I just wanted to know whether that’s something you think should happen at the state level?

Kevin Rudd
What I know about the Murdoch media is that you have no interest in the future of the Labor government and no interest in the future of the Labor Party. What you’re interested in is a headline in tomorrow’s Courier-Mail which attacks the Palaszczuk government. I don’t intend to provide that for you. I kind of know what the agenda is here. I’ve been around for a long time and I know what instructions you’re going to get.

But let me say this about the Palaszczuk government: the Palaszczuk government has a strong economic record. The Palaszczuk government has handled the COVID crisis well. The Palaszczuk government is up against an LNP opposition led by Frecklington which has repeatedly called for Queensland’s borders to be opened. For those reasons, the state opposition has no credibility. And for those reasons, Annastacia Palaszczuk has bucketloads of credibility. As for the internal debates, I will leave them to you and all the journalists who will follow them from the Curious Mail.

Journalist
Do you think Labor will do well at the election, Mr Rudd?

Kevin Rudd
That’s a matter for the Queensland people but Annastacia Palaszczuk, given all the challenges that state premiers are facing right now, is doing a first-class job in very difficult circumstances. I used to work for state government. I was Wayne Goss’s chief of staff. I used to be director-general of the Cabinet Office. And I do know something about how state governments operate. And I think she should be commended given the difficult choices which are available to her at this time for running a steady ship. [inaudible] Thanks very much.


Cory DoctorowHow to Destroy Surveillance Capitalism

For this week’s podcast, I read an excerpt from “How to Destroy Surveillance Capitalism,” a free short book (or long pamphlet, or “nonfiction novella”) I published with Medium’s OneZero last week. HTDSC is a long critical response to Shoshana Zuboff’s book and paper on the subject, which re-centers the critique on monopolism and the abusive behavior it abets, while expressing skepticism that surveillance capitalists are really as good at manipulating our behavior as they claim to be. It is a gorgeous online package, and there’s a print/ebook edition following.

MP3

Planet DebianJonathan Carter: Free Software Activities for 2020-08

Debian packaging

2020-08-07: Sponsor package python-sabyenc (4.0.2-1) for Debian unstable (Python team request).

2020-08-07: Sponsor package gpxpy (1.4.2-1) for Debian unstable (Python team request).

2020-08-07: Sponsor package python-jellyfish (0.8.2-1) for Debian unstable (Python team request).

2020-08-08: Sponsor package django-ipwire (3.0.0-1) for Debian unstable (Python team request).

2020-08-08: Sponsor package python-mongoengine (0.20.0-1) for Debian unstable (Python team request).

2020-08-08: Review package pdfminer (20191020+dfsg-3) (Needs some more work) (Python team request).

2020-08-08: Upload package bundlewrap (4.1.0-1) to Debian unstable.

2020-08-09: Sponsor package pdfminer (20200726-1) for Debian unstable (Python team request).

2020-08-09: Sponsor package spyne (2.13.15-1) for Debian unstable (Python team request).

2020-08-09: Review package mod-wsgi (4.6.8-2) (Needs some more work) (Python team request).

2020-08-10: Sponsor package nfoview (1.28-1) for Debian unstable (Python team request).

2020-08-11: Sponsor package pymupdf (1.17.4+ds1-1) for Debian unstable (Python team request).

2020-08-11: Upload package calamares (3.2.28-1) to Debian unstable.

2020-08-11: Upload package xabacus (8.2.9-1) to Debian unstable.

2020-08-11: Upload package bashtop (0.9.25-1~bpo10+1) to Debian buster-backports.

2020-08-11: Upload package live-tasks (11.0.3) to Debian unstable (Closes: #942834, #965999, #956525, #961728).

2020-08-12: Upload package calamares-settings-debian (10.0.20-1+deb10u4) to Debian buster (Closes: #968267, #968296).

2020-08-13: Upload package btfs (2.22-1) to Debian unstable.

2020-08-14: Upload package calamares (3.2.28.2-1) to Debian unstable.

2020-08-14: Upload package bundlewrap (4.1.1-1) to Debian unstable.

2020-08-19: Upload package gnome-shell-extension-dash-to-panel (38-2) to Debian unstable (Closes: #968613).

2020-08-19: Sponsor package mod-wsgi (4.7.1-1) for Debian unstable (Python team request).

2020-08-19: Review package tqdm (4.48.2-1) (Needs some more work) (Python team request).

2020-08-19: Sponsor package tqdm (4.48.2-1) to unstable (Python team request).

2020-08-19: Upload package calamares (3.2.28.3-2) to Debian unstable.

Worse Than FailureThoroughly Tested

Zak S worked for a retailer which, as so often happens, got swallowed up by Initech's retail division. Zak's employer had a big, ugly ERP system. Initech had a bigger, uglier ERP, and once the acquisition happened, they all needed to play nicely together.

These kinds of marriages are always problematic, but this particular one was made more challenging: Zak's company ran their entire ERP system from a cluster of Solaris servers- running on SPARC CPUs. Since upgrading that ERP system to run in any other environment was too expensive to seriously consider, the existing services were kept on life-support (with hardware replacements scrounged from the Vintage Computing section of eBay), while Zak's team was tasked with rebuilding everything- point-of-sale, reporting, finance, inventory and supply chain- atop Initech's ERP system.

The project was launched with the code name "Cold Stone", with Glenn as new CTO. At the project launch, Glenn stressed that, "This is a high impact project, with high visibility throughout the organization, so it's on us to ensure that the deliverables are completed on time, on budget, to provide maximum value to the business and to that end, I'll be starting a series of meetings to plan the meetings and checkpoints we'll use to ensure that we have an action-plan that streamlines our…"

"Cold Stone" launched with a well defined project scope, but about 15 seconds after launch, that scope exploded. New "business critical" systems were discovered under every rock, and every department in the company had a moment of, "Why weren't we consulted on this plan? Our vital business process isn't included in your plan!" Or, "You shouldn't have included us in this plan, because our team isn't interested in a software upgrade, we're going to continue using the existing system until the end of time, thank you very much."

The expanding scope required expanding resources. Anyone with any programming experience more complex than "wrote a cool formula in Excel" was press-ganged into the project. You know how to script sending marketing emails? Get on board. You wrote a shell script to purge old user accounts? Great, you're going to write a plugin to track inventory at retail stores.

The project burned through half a dozen business analysts and three project managers, and that's before the COVID-19 outbreak forced the company to quickly downsize, and squish together several project management roles into one person.

"Fortunately" for Initech, that one person was Edyth, who was one of those employees who has given their entire life over to the company, and refuses to stop working until the work is done. She was the sort of manager who would schedule meetings at 12:30PM, because she knew no one else would be scheduling meetings during the lunch hour. Or, schedule a half hour meeting at 4:30PM, when the workday ends at 5PM, then let it run long, "Since we're all here anyway, let's keep going." She especially liked to abuse video conferencing for this.

As the big ball of mud grew, the project slowly, slowly eased its way towards completion. And as that deadline approached, Edyth started holding meetings which focused on testing. Which is where Edyth started to raise some concerns.

"Lucy," Edyth said, "I noticed that you've marked the test for integration between the e-commerce site and the IniRewards™ site as not-applicable?"

"Well, yeah," Lucy said. "It says to test IniRewards™ signups on the e-commerce site, but our site doesn't do that. Signups entirely happen on the IniRewards™ site. There isn't really any integration."

"Oh," Edyth said. "So that sounds like it's a Zak thing?"

Zak stared at his screen for a moment. He was responsible for the IniRewards™ site, a port of their pre-acquisition customer rewards system to work with Initech's rewards system. He hadn't written it, but somewhere along the way, he became the owner of it, for reasons which remain murky. "Uh… it's a static link."

Edyth nodded, as if she understood what that meant. "So how long will that take to test? A day? Do you need any special setup for this test?"

"It's… a link. I'll click it."

"Great, yes," Edyth said. "Why don't you write up the test plan document for this user story, and then we'll schedule the test for… next week? Can you do it any earlier?"

"I can do it right now," Zak said.

"No, no," Edyth said. "We need to schedule these tests in advance so you're not interacting with anyone else using the test environment. I'll set up a followup meeting to review your test plan."

Test plans, of course, had a template which needed to be filled out. It was a long document, loaded with boilerplate, for the test to be, "Click the 'Rewards Signup' link in the e-commerce site footer. Expected behavior: the browser navigates to the IniRewards™ home page."

Zak added the document to the project document site, labelled "IniRewards Hyper-Link Test", and waited for the next meeting with Edyth to discuss the schedule. This time, Glenn, the CTO, was in the meeting.

"This 'Hyper-Link' test sounds very important," Glenn said. He enunciated "hyper-link" like it was a word in a foreign language. "Can we move that up in the schedule? I'd like that done tomorrow."

"I… can do it right now," Zak said. "It won't interact with other tests-"

"No, we shouldn't rush things." Glenn's eyes shifted towards another window as he reviewed the testing schedule. "It looks like there's nothing scheduled for testing between 10AM and 2PM tomorrow. Do you think four hours is enough time? Yes? Great, I'll block that off for you."

Suffice to say, the test passed, and was verified quite thoroughly.


Cryptogram Seny Kamara on "Crypto for the People"

Seny Kamara gave an excellent keynote talk this year at the (online) CRYPTO Conference. He talked about solving real-world crypto problems for marginalized communities around the world, instead of crypto problems for governments and corporations. Well worth watching and listening to.

Planet DebianRuss Allbery: Review: Men at Arms

Review: Men at Arms, by Terry Pratchett

Series: Discworld #15
Publisher: Harper
Copyright: 1993
Printing: November 2013
ISBN: 0-06-223740-3
Format: Mass market
Pages: 420

Men at Arms is the fifteenth Discworld novel and a direct plot sequel to Guards! Guards!. You could start here without missing too much, but starting with Guards! Guards! would make more sense. And of course there are cameos (and one major appearance) by other characters who are established in previous books.

Carrot, the adopted dwarf who joined the watch in Guards! Guards!, has been promoted to corporal. He is now in charge of training new recruits, a role that is more important because of the Night Watch's new Patrician-ordered diversity initiative. The Watch must reflect the ethnic makeup of the city. That means admitting a troll, a dwarf... and a woman?

Trolls and dwarfs hate each other because dwarfs mine precious things out of rock and trolls are composed of precious things embedded in rocks, so relations between the new recruits are tense. Captain Vimes is leaving the Watch, and no one is sure who would or could replace him. (The reason for this is a minor spoiler for Guards! Guards!) A magical weapon is stolen from the Assassins' Guild. And a string of murders begins, murders that Vimes is forbidden by Lord Vetinari from investigating and therefore clearly is going to investigate.

This is an odd moment at which to read this book.

The Night Watch are not precisely a police force, although they are moving in that direction. Their role in Ankh-Morpork is made much stranger by the guild system, in which the Thieves' Guild is responsible for theft and for dealing with people who steal outside of the quota of the guild. But Men at Arms is in part a story about ethics, about what it means to be a police officer, and about what it looks like when someone is very good at that job.

Since I live in the United States, that makes it hard to avoid reading Men at Arms in the context of the current upheavals about police racism, use of force, and lack of accountability. Men at Arms can indeed be read that way; community relations, diversity in the police force, the merits of making two groups who hate each other work together, and the allure of violence are all themes Pratchett is working with in this novel. But they're from the perspective of a UK author writing in 1993 about a tiny city guard without any of the machinery of modern police, so I kept seeing a point of clear similarity and then being slightly wrong-footed by the details. It also felt odd to read a book where the cops are the heroes, much in the style of a detective show. This is in no way a problem with the book, and in a way it was helpful perspective, but it was a strange reading experience.

Cuddy had only been a guard for a few days but already he had absorbed one important and basic fact: it is almost impossible for anyone to be in a street without breaking the law.

Vimes and Carrot are both excellent police officers, but in entirely different ways. Vimes treats being a cop as a working-class job and is inclined towards glumness and depression, but is doggedly persistent and unable to leave a problem alone. His ethics are covered by a thick layer of world-weary cynicism. Carrot is his polar opposite in personality: bright, endlessly cheerful, effortlessly charismatic, and determined to get along with everyone. On first appearance, this contrast makes Vimes seem wise and Carrot seem a bit dim. That is exactly what Pratchett is playing with and undermining in Men at Arms.

Beneath Vimes's cynicism, he's nearly as idealistic as Carrot, even though he arrives at his ideals through grim contrariness. Carrot, meanwhile, is nowhere near as dim as he appears to be. He's certain about how he wants to interact with others and is willing to stick with that approach no matter how bad of an idea it may appear to be, but he's more self-aware than he appears. He and Vimes are identical in the strength of their internal self-definition. Vimes shows it through the persistent, grumpy stubbornness of a man devoted to doing an often-unpleasant job, whereas Carrot verbally steamrolls people by refusing to believe they won't do the right thing.

Colon thought Carrot was simple. Carrot often struck people as simple. And he was. Where people went wrong was thinking that simple meant the same thing as stupid.

There's a lot going on in this book apart from the profiles of two very different models of cop. Alongside the mystery (which doubles as pointed commentary on the corrupting influence of violence and personal weaponry), there's a lot about dwarf/troll relations, a deeper look at the Ankh-Morpork guilds (including a horribly creepy clown guild), another look at how good Lord Vetinari is at running the city by anticipating how other people will react, a sarcastic dog named Gaspode (originally seen in Moving Pictures), and Pratchett's usual collection of memorable lines. It is also the origin of the now-rightfully-famous Vimes boots theory:

The reason that the rich were so rich, Vimes reasoned, was because they managed to spend less money.

Take boots, for example. He earned thirty-eight dollars a month plus allowances. A really good pair of leather boots cost fifty dollars. But an affordable pair of boots, which were sort of OK for a season or two and then leaked like hell when the cardboard gave out, cost about ten dollars. Those were the kind of boots Vimes always bought, and wore until the soles were so thin that he could tell where he was in Ankh-Morpork on a foggy night by the feel of the cobbles.

But the thing was that good boots lasted for years and years. A man who could afford fifty dollars had a pair of boots that'd still be keeping his feet dry in ten years' time, while the poor man who could only afford cheap boots would have spent a hundred dollars on boots in the same time and would still have wet feet.

This was the Captain Samuel Vimes 'Boots' theory of socioeconomic unfairness.

Men at Arms regularly makes lists of the best Discworld novels, and I can see why. At this point in the series, Pratchett has hit his stride. The plots have gotten deeper and more complex without losing the funny moments, movie and book references, and glorious turns of phrase. There is also a lot of life philosophy and deep characterization when one pays close attention to the characters.

He was one of those people who would recoil from an assault on strength, but attack weakness without mercy.

My one complaint is that I found it a bit overstuffed with both characters and subplots, and as a result had a hard time following the details of the plot. I found myself wanting a timeline of the murders or a better recap from one of the characters. As always with Pratchett, the digressions are wonderful, but they do occasionally come at the cost of plot clarity.

I'm not sure I recommend the present moment in the United States as the best time to read this book, although perhaps there is no better time for Carrot and Vimes to remind us what good cops look like. But regardless of when one reads it, it's an excellent book, one of the best in the Discworld series to this point.

Followed, in publication order, by Soul Music. The next Watch book is Feet of Clay.

Rating: 8 out of 10

Planet Debian: Dirk Eddelbuettel: RcppCCTZ 0.2.9: API Header Added

A new minor release 0.2.9 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—using copies in their packages, which remains less than ideal.

This version adds a header file for the recently-exported three functions.

Changes in version 0.2.9 (2020-08-30)

  • Provide a header RcppCCTZ_API.h for client packages.
  • Show a simple example of parsing a YYYYMMDD HHMMSS.FFFFFF date.

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Jacob Adams: Command Line 101

How to Work in a Text-Only Environment.

What is this thing?

When you first open a command-line (note that I use the terms command-line and shell interchangeably here; they’re basically the same, but command-line is the more general term, and shell is the name for the program that executes commands for you) you’ll see something like this:

jaadams@bg7:/tmp/thisfolder$

This line is called a “command prompt” and it tells you three pieces of information:

  1. jaadams: The username of the user that is currently running this shell.
  2. bg7: The name of the computer that this shell is running on, important for when you start accessing shells on remote machines.
  3. /tmp/thisfolder: The folder or directory that your shell is currently running in. Like a file explorer (such as Windows Explorer or the Mac Finder), a shell always has a “working directory,” from which all relative paths (see sidenote below) are resolved.

When you first opened a shell, however, you might notice that it looks more like this:

jaadams@bg7:~$

This is a shorthand notation that the shell uses to make this output shorter when possible. ~ stands for your home directory, usually /home/<username>. Like C:\Users\<username>\ on Windows or /Users/<username> on Mac, this directory is where all your files should go by default.

Thus a command prompt like this:

jaadams@bg7:~/Downloads$

actually tells you that you are currently in the /home/jaadams/Downloads directory.
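
You can watch this expansion happen yourself. A small sketch (echo just prints its arguments after the shell has expanded them, so it shows what ~ becomes):

```shell
# ~ is expanded by the shell to the value of $HOME before the command runs
echo ~              # prints your home directory, e.g. /home/jaadams
echo ~/Downloads    # prints e.g. /home/jaadams/Downloads
cd ~                # changes to your home directory (plain "cd" does the same)
```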

Sidenote: The Unix Filesystem and Relative Paths

“folders” on Linux and other Unix-derived systems like MacOS are usually called “directories.”

These directories are represented by paths, strings that indicate where the directory is on the filesystem.

The one unusual part is the so-called “root directory”. All files are stored in this directory or in directories under it. Its path is just / and there are no directories above it.

For example, the directory called home typically contains all user directories. This is stored in the root directory, and each user’s specific data is stored in a directory named after that user under home. Thus, the home directory of the user jacob is typically /home/jacob: the directory jacob under the home directory, which is itself stored in the root directory /.

If you’re interested in more details about what goes in what directory, man hier has the basics and the Filesystem Hierarchy Standard governs the layout of the filesystem on most Linux distributions.

You don’t always have to use the full path, however. If the path does not begin with a /, it is assumed that the path actually begins with the path of the current directory. So if you use a path like my/folders/here, and you’re in the /home/jacob directory, the path will be treated like /home/jacob/my/folders/here.

Each directory also contains the special entries .. and . (a single dot), which behave like symbolic links. Symbolic links are a very powerful kind of file that is actually a reference to another file. .. always represents the parent directory of the current directory, so /home/jacob/.. links to /home. . always links to the current directory, so /home/jacob/. links to /home/jacob.
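
These pieces fit together in a short session like this (a sketch using /tmp as scratch space; pwd simply prints the shell’s current directory):

```shell
cd /tmp                # absolute path: starts with /
mkdir -p demo/inner    # relative path: resolved against /tmp
cd demo/inner          # now in /tmp/demo/inner
pwd                    # prints /tmp/demo/inner
cd ..                  # .. takes you to the parent directory
pwd                    # prints /tmp/demo
cd ./inner             # . is the current directory, so this equals "cd inner"
```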

Running commands

To run a command from the command prompt, you type its name and then usually some arguments to tell it what to do.

For example, the echo command displays the text passed as arguments.

jacob@lovelace/home/jacob$ echo hello world
hello world

Arguments to commands are space-separated, so in the previous example hello is the first argument and world is the second. If you need an argument to contain spaces, you’ll want to put quotes around it, echo "like so".

Certain arguments are called “flags” or “options” (options if they take a value of their own, flags otherwise). They are usually prefixed with a hyphen, and they change the way a program operates.

For example, the ls command outputs the contents of a directory passed as an argument, but if you add -l before the directory, it will give you more details on the files in that directory.

jacob@lovelace/tmp/test$ ls /tmp/test
1  2  3  4  5  6
jacob@lovelace/tmp/test$ ls -l /tmp/test
total 0
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 1
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 2
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 3
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 4
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 5
-rw-r--r-- 1 jacob jacob 0 Aug 26 22:06 6
jacob@lovelace/tmp/test$

Most commands take different flags to change their behavior in various ways.

File Management

  • cd <path>: Change the current directory of the running shell to <path>.
  • ls <path>: Output the contents of <path>. If no path is passed, it prints the contents of the current directory.
  • touch <filename>: Create a new empty file called <filename>. Used on an existing file, it updates the file’s last-accessed and last-modified times. Most text editors can also create a new file for you, which is probably more useful.
  • mkdir <directory>: Create a new folder/directory at path <directory>.
  • mv <src> <dest>: Move a file or directory at path <src> to <dest>.
  • cp <src> <dest>: Copy a file or directory at path <src> to <dest>.
  • rm <file>: Remove a file at path <file>.
  • zip -r <zipfile> <contents...>: Create a zip file <zipfile> with contents <contents>. <contents> can be multiple arguments, and you’ll usually want to use the -r argument when including directories in your zipfile, as otherwise only the directory will be included and not the files and directories within it.
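
Putting a few of these together (a sketch run under /tmp; the file names are just examples):

```shell
cd /tmp          # somewhere we are allowed to write
rm -rf scratch   # -r removes directories recursively, -f ignores missing files
mkdir scratch                            # a new directory to play in
touch scratch/notes.txt                  # an empty file
cp scratch/notes.txt scratch/backup.txt  # copy it
mv scratch/backup.txt scratch/old.txt    # rename (move) the copy
ls scratch                               # lists: notes.txt  old.txt
rm scratch/old.txt                       # and remove the copy again
```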

Searching

  • grep <thing> <file>: Look for the string <thing> in <file>. If no <file> is passed it searches standard input.
  • find <path> -name <name>: Find a file or directory called <name> somewhere under <path>. This command is actually very powerful, but also very complex. For example, you can delete all files under the current directory older than 30 days with:
    find . -type f -mtime +30 -exec rm {} \;
    
  • locate <name>: A much easier to use command to find a file with a given name, but it is not usually installed by default.
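
For example (a small sketch; the files and strings here are invented for the demonstration):

```shell
cd /tmp
mkdir -p searchdemo
echo "hello world"   > searchdemo/a.txt
echo "goodbye world" > searchdemo/b.txt
grep hello searchdemo/a.txt      # prints the matching line: hello world
grep -r world searchdemo         # -r searches every file under the directory
find searchdemo -name "*.txt"    # prints the path of both files
```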

Outputting Files

  • cat <files...>: Output (concatenate) all the files passed as arguments.
  • head <file>: Output the beginning of <file>.
  • tail <file>: Output the end of <file>.
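
By default head and tail print ten lines; the -n flag changes that. A quick sketch using seq (a standard utility that prints a sequence of numbers):

```shell
cd /tmp
seq 1 100 > numbers.txt   # numbers.txt now holds 1-100, one per line
head -n 3 numbers.txt     # prints 1, 2, 3
tail -n 2 numbers.txt     # prints 99, 100
cat numbers.txt numbers.txt | wc -l   # the two copies concatenated: 200 lines
rm numbers.txt
```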

How to Find the Right Command

All commands (at least on sane Linux distributions like Debian or Ubuntu) are documented with a manual page, in man section 1 (for more information on manual sections, run man intro). This can be accessed using man <command>. You can search for the right command using the -k flag, as in man -k <search>.

You can also view manual pages in your browser, on sites like https://manpages.debian.org or https://linux.die.net/man.

This is not always helpful, however, because some commands’ descriptions are not particularly useful, and there are a lot of manual pages, which can make searching for a specific one difficult. For example, finding the right command to search inside text files is quite difficult via man (it’s grep). When you can’t find what you need with man, I recommend falling back to searching the Internet. There are lots of bad Linux tutorials out there, but here are some reputable sources I recommend:

  • https://www.cyberciti.biz: nixCraft has excellent tutorials on all things Linux
  • Hosting providers like Digital Ocean or Linode: Good intro documentation, but can sometimes be outdated
  • https://tldp.org: The Linux Documentation project is great, but it can also be a little outdated sometimes.
  • https://stackoverflow.com: Oftentimes has great answers, but quality varies wildly since anyone can answer.

These are certainly not the only options but they’re the sources I would recommend when available.

How to Read a Manual Page

Manual pages consist of a series of sections, each with a specific purpose. Instead of attempting to write my own description here, I’m going to borrow the excellent one from The Linux Documentation Project

The NAME section

…is the only required section. Man pages without a name section are as useful as refrigerators at the north pole. This section also has a standardized format consisting of a comma-separated list of program or function names, followed by a dash, followed by a short (usually one line) description of the functionality the program (or function, or file) is supposed to provide. By means of makewhatis(8), the name sections make it into the whatis database files. Makewhatis is the reason the name section must exist, and why it must adhere to the format I described. (Formatting explanation cut for brevity)

The SYNOPSIS section

…is intended to give a short overview on available program options. For functions this sections lists corresponding include files and the prototype so the programmer knows the type and number of arguments as well as the return type.

The DESCRIPTION section

…eloquently explains why your sequence of 0s and 1s is worth anything at all. Here’s where you write down all your knowledge. This is the Hall Of Fame. Win other programmers’ and users’ admiration by making this section the source of reliable and detailed information. Explain what the arguments are for, the file format, what algorithms do the dirty jobs.

The OPTIONS section

…gives a description of how each option affects program behaviour. You knew that, didn’t you?

The FILES section

…lists files the program or function uses. For example, it lists configuration files, startup files, and files the program directly operates on. (Cut details about installing files)

The ENVIRONMENT section

…lists all environment variables that affect your program or function and tells how, of course. Most commonly the variables will hold pathnames, filenames or default options.

The DIAGNOSTICS section

…should give an overview of the most common error messages from your program and how to cope with them. There’s no need to explain system error messages (from perror(3)) or fatal signals (from psignal(3)) as they can appear during execution of any program.

The BUGS section

…should ideally be non-existent. If you’re brave, you can describe here the limitations, known inconveniences and features that others may regard as misfeatures. If you’re not so brave, rename it the TO DO section ;-)

The AUTHOR section

…is nice to have in case there are gross errors in the documentation or program behaviour (Bzzt!) and you want to mail a bug report.

The SEE ALSO section

…is a list of related man pages in alphabetical order. Conventionally, it is the last section.

Remote Access

One of the more powerful uses of the shell is through ssh, the secure shell. This allows you to remotely connect to another computer and run a shell on that machine:

user@host:~$ ssh other@example.com
other@example:~$

The prompt changes to reflect the change in user and host, as you can see in the example above. This allows you to work in a shell on that machine as if it was right in front of you.

Moving Files Between Machines

There are several ways you can move files between machines over ssh. The first and easiest is scp, which works much like the cp command except that a path can be prefixed with user@host: to refer to a location on another computer. For example, if you wanted to move a file test.txt to your home directory on another machine, the command would look like:

scp test.txt other@example.com:

(The home directory is the default path)

You can fetch files instead by reversing the order of the arguments, and you can put a path after the colon to refer to a specific file or directory on the remote host. For example, if you wanted to fetch the file /etc/issue.net from example.com:

scp other@example.com:/etc/issue.net .

Another option is the sftp command, which gives you a very simple shell-like interface in which you can cd and ls before either putting files onto the remote machine or getting files off of it.

The final and most powerful option is rsync which syncs the contents of one directory to another, and doesn’t copy files that haven’t changed. It’s powerful and complex, however, so I recommend reading the USAGE section of its man page.

Long-Running Commands

The one problem with ssh is that it will stop any command running in your shell when you disconnect. If you want to leave something running and come back later, this can be a problem.

This is where terminal multiplexers come in. tmux and screen both allow you to run a shell in a safe environment where it will continue even if you disconnect from it. You do this by running the command without any arguments, i.e. just tmux or just screen. In tmux you can disconnect from the current session by pressing Ctrl+b then d, and reattach with the tmux attach command. screen works similarly, but with Ctrl+a instead of Ctrl+b, and screen -r to reattach.

Command Inputs and Outputs

Arguments are not the only way to pass input to a command. They can also take input from what’s called “standard input”, which the shell usually connects to your keyboard.

Output can go to two places, standard output and standard error, both of which are directed to the screen by default.

Redirecting I/O

Notice that I said above that standard input/output/error are only “usually” connected to the keyboard and the terminal. This is because you can redirect them to other places with the shell operators <, > and the very powerful |.

File redirects

The operators < and > redirect the input and output of a command to a file. For example, if you wanted a file called list.txt that contained a list of all the files in a directory /this/one/here you could use:

ls /this/one/here > list.txt
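
A few more redirect variants worth knowing (a sketch; the file names are invented):

```shell
cd /tmp
echo first  > log.txt    # > creates the file (or overwrites it)
echo second >> log.txt   # >> appends instead of overwriting
cat log.txt              # prints "first" then "second"
wc -l < log.txt          # < feeds the file to wc's standard input: 2 lines
rm log.txt
```

The >> form is handy for accumulating output, such as building up a log file across several commands.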

Pipelines

The pipe character, |, allows you to direct the output of one command into the input of another. This can be very powerful. For example, the following pipeline lists the contents of the current directory, searches for the string “test”, then counts the number of results (wc -l counts the number of lines in its input):

ls | grep test | wc -l

For a better, but even more contrived example, say you have a file myfile with a bunch of potentially duplicated and unsorted lines of data:

test
test
1234
4567
1234

You can sort it and output only the unique lines with sort and uniq (note that uniq only collapses adjacent duplicate lines, which is why the sort has to come first):

$ sort < myfile | uniq
1234
4567
test

Save Yourself Some Typing: Globs and Tab-Completion

Sometimes you don’t want to type out the whole filename when writing out a command. The shell can help you here by autocompleting when you press the tab key.

If you have a whole bunch of files with the same suffix, you can refer to them when writing arguments as *.suffix. This also works with prefixes, prefix*, and in fact you can put a * anywhere, *middle*. The shell will “expand” that * into all the files in that directory that match your criteria (ending with a specific suffix, starting with a specific prefix, and so on) and pass each file as a separate argument to the command.

For example, if I have a series of files called 1.txt, 2.txt, and so on up to 9, each containing just the number for which it’s named, I could use cat to output all of them like so:

jacob@lovelace/tmp/numbers$ ls
1.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt	9.txt
jacob@lovelace/tmp/numbers$ cat *.txt
1
2
3
4
5
6
7
8
9

The ~ shorthand mentioned above, which refers to your home directory, can also be used when passing a path as an argument to a command.

Ifs and For loops

The files in the above example were generated with the following shell commands:

for i in 1 2 3 4 5 6 7 8 9
do
echo $i > $i.txt
done

But I’ll have to save variables, conditionals and loops for another day, because this is already too long. Needless to say, the shell is a full programming language, although a very ugly and dangerous one.


Planet Debian: Mike Hommey: [Linux] Disabling CPU turbo, cores and threads without rebooting

[Disclaimer: this has been sitting as a draft for close to three months; I forgot to publish it, and this is now finally done.]

In my previous blog post, I built Firefox in a number of different configurations where I’d disable the CPU turbo, some of its cores, or some of its threads. That is something that was traditionally done via the BIOS, but rebooting between each attempt is not really a great experience.

Fortunately, the Linux kernel provides a large number of knobs that allow this at runtime.

Turbo

This is the most straightforward:

$ echo 0 > /sys/devices/system/cpu/cpufreq/boost

Re-enable with

$ echo 1 > /sys/devices/system/cpu/cpufreq/boost

CPU frequency throttling

Even though I haven’t mentioned it, I might as well add this briefly. There are many knobs to tweak frequency throttling, but assuming your goal is to disable throttling and set the CPU frequency to its fastest non-Turbo frequency, this is how you do it:

$ echo performance > /sys/devices/system/cpu/cpu$n/cpufreq/scaling_governor

where $n is the id of the core you want to do that for, so if you want to do that for all the cores, you need to do that for cpu0, cpu1, etc.

Re-enable with:

$ echo ondemand > /sys/devices/system/cpu/cpu$n/cpufreq/scaling_governor

(assuming this was the value before you changed it; ondemand is usually the default)
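
Doing that for every core by hand gets tedious, so you can loop over the sysfs entries instead. A sketch (set_governor is a made-up helper name, not a standard tool; it assumes the Linux sysfs layout shown above and needs root to act on the real tree, and the optional second argument lets you point it at a copy of the tree for a dry run):

```shell
# Write the given governor to every core's scaling_governor file.
set_governor() {
    gov="$1"
    root="${2:-/sys}"    # default to the real sysfs
    for f in "$root"/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
        # -w also guards against an unexpanded glob when no core matches
        [ -w "$f" ] && echo "$gov" > "$f"
    done
    return 0
}
# set_governor performance   # pin all cores to the fastest non-Turbo frequency
# set_governor ondemand      # back to the usual default
```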

Cores and Threads

This one requires some attention, because you cannot assume anything about the CPU numbers. The first thing you want to do is to check those CPU numbers. You can do so by looking at the physical id and core id fields in /proc/cpuinfo, but the output from lscpu --extended is more convenient, and looks like the following:

CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
0   0    0      0    0:0:0:0       yes    3700.0000 2200.0000
1   0    0      1    1:1:1:0       yes    3700.0000 2200.0000
2   0    0      2    2:2:2:0       yes    3700.0000 2200.0000
3   0    0      3    3:3:3:0       yes    3700.0000 2200.0000
4   0    0      4    4:4:4:1       yes    3700.0000 2200.0000
5   0    0      5    5:5:5:1       yes    3700.0000 2200.0000
6   0    0      6    6:6:6:1       yes    3700.0000 2200.0000
7   0    0      7    7:7:7:1       yes    3700.0000 2200.0000
(...)
32  0    0      0    0:0:0:0       yes    3700.0000 2200.0000
33  0    0      1    1:1:1:0       yes    3700.0000 2200.0000
34  0    0      2    2:2:2:0       yes    3700.0000 2200.0000
35  0    0      3    3:3:3:0       yes    3700.0000 2200.0000
36  0    0      4    4:4:4:1       yes    3700.0000 2200.0000
37  0    0      5    5:5:5:1       yes    3700.0000 2200.0000
38  0    0      6    6:6:6:1       yes    3700.0000 2200.0000
39  0    0      7    7:7:7:1       yes    3700.0000 2200.0000
(...)

Now, this output is actually the ideal case, where pairs of CPUs (virtual cores) on the same physical core are always n, n+32, but I’ve had them be pseudo-randomly spread in the past, so be careful.

To turn off a core, you want to turn off all the CPUs with the same CORE identifier. To turn off a thread (virtual core), you want to turn off one CPU. On machines with multiple sockets, you can also look at the SOCKET column.

Turning off one CPU is done with:

$ echo 0 > /sys/devices/system/cpu/cpu$n/online

Re-enable with:

$ echo 1 > /sys/devices/system/cpu/cpu$n/online
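
If you want to switch off simultaneous multi-threading entirely without assuming the n, n+32 pairing, you can walk the kernel's topology files instead. A sketch (disable_smt_siblings is a made-up helper name; it assumes Linux's /sys layout, needs root to act on the real tree, and the optional argument lets you point it at a copy of the tree for testing):

```shell
# Take each CPU offline unless it is the first thread listed for its core.
disable_smt_siblings() {
    root="${1:-/sys}"
    for sib in "$root"/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list; do
        [ -f "$sib" ] || continue
        cpudir="${sib%/topology/thread_siblings_list}"
        this="${cpudir##*/cpu}"
        # the file looks like "0,32" or "0-1"; keep the first number
        first=$(sed 's/[^0-9].*//' "$sib")
        if [ "$this" != "$first" ]; then
            echo 0 > "$cpudir/online"
        fi
    done
    return 0
}
# disable_smt_siblings   # act on the real /sys (needs root)
```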

Extra: CPU sets

CPU sets are a feature of Linux’s cgroups. They allow you to restrict groups of processes to a set of cores. The first step is to create a group like so:

$ mkdir /sys/fs/cgroup/cpuset/mygroup

Please note you may already have existing groups, and you may want to create subgroups. You can do so by creating subdirectories.

Then you can configure on which CPUs/cores/threads you want processes in this group to run on:

$ echo 0-7,16-23 > /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus

The value you write in this file is a comma-separated list of CPU/core/thread numbers or ranges. 0-3 is the range for CPU/core/thread 0 to 3 and is thus equivalent to 0,1,2,3. The numbers correspond to /proc/cpuinfo or the output from lscpu as mentioned above.

There are also memory aspects to CPU sets, which I won’t detail here (because I don’t have a machine with multiple memory nodes), but you can start with:

$ cat /sys/fs/cgroup/cpuset/cpuset.mems > /sys/fs/cgroup/cpuset/mygroup/cpuset.mems

Now you’re ready to assign processes to this group:

$ echo $pid >> /sys/fs/cgroup/cpuset/mygroup/tasks

There are a number of tweaks you can do to this setup, I invite you to check out the cpuset(7) manual page.

Disabling a group is a little involved. First you need to move the processes to a different group:

$ while read pid; do echo $pid > /sys/fs/cgroup/cpuset/tasks; done < /sys/fs/cgroup/cpuset/mygroup/tasks

Then deassociate CPU and memory nodes:

$ > /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus
$ > /sys/fs/cgroup/cpuset/mygroup/cpuset.mems

And finally remove the group:

$ rmdir /sys/fs/cgroup/cpuset/mygroup

Planet Debian: Enrico Zini: Miscellaneous news

A fascinating apparent paradox that kind of makes sense: Czech nudists reprimanded by police for not wearing face-masks.

Besides being careful about masks when naked at the lake, be careful about your laptop being confused for a pizza: German nudist chases wild boar that stole laptop.

Talking about pigs: Pig starts farm fire by excreting pedometer.

Now that traveling is complicated, you might enjoy A Brief History of Children Sent Through the Mail, or learning about Narco-submarines.

Meanwhile, in a time of intense biotechnological research, Scientists rename human genes to stop Microsoft Excel from misreading them as dates.

Finally, for a good, cheaper, and more readily available alternative to a trip to the pharmacy, learn about Hypoalgesic effect of swearing.

Planet Debian: Dirk Eddelbuettel: RcppSMC 0.2.2: Small updates

A new release 0.2.2 of the RcppSMC package arrived on CRAN earlier today (and once again as a very quick pretest-publish within minutes of submission).

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts.

This release contains two fixes from a while back that had not yet been released, a CRAN-requested update, plus a few more minor polishes to make it pass R CMD check --as-cran as nicely as usual.

Changes in RcppSMC version 0.2.2 (2020-08-30)

  • Package helper files .editorconfig added (Adam in #43).

  • Change const correctness and add return (Leah in #44).

  • Updates to continuous integration and R versions used (Dirk)

  • Accommodate CRAN request, other updates to CRAN Policy (Dirk in #49 fixing #48).

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Jonathan Carter: The metamorphosis of Loopy Loop

Dealing with the void during MiniDebConf Online #1

Between 28 and 31 May this year, we set out to create our first ever online MiniDebConf for Debian. Many people have been meaning to do something similar for a long time, but it just hadn’t worked out yet. With many of us being in lock down due to COVID-19, and with the strong possibility looming that DebConf20 might have had to become an online event, we rushed towards organising the first ever Online MiniDebConf and put together some form of usable video stack for it.

I could go into all kinds of details on the above, but this post is about a bug that led to a pretty nifty feature for DebConf20. The tool that we use to capture Jitsi calls is called Jibri (Jitsi Broadcasting Infrastructure). It had a bug (well, a bug for us, but it’s an upstream feature) where Jibri would hang up after 30s of complete silence, because it would assume that the call had ended and that the worker could be freed up again. This would result in the stream being ended at the end of every talk, so before the next talk, someone would have to remember to press play again in their media player or on the video player on the stream page. Hrmph.

Easy solution on the morning that the conference starts? I was testing a Debian Live image the night before in a KVM and thought that I might as well just start a Jitsi call from there and keep a steady stream of silence so that Jibri doesn’t hang up.

It worked! But the black screen and silence on stream was a bit eery. Because this event was so experimental in nature, and because we were on such an incredibly tight timeline, we opted not to seek sponsors for this event, so there was no sponsors loop that we’d usually stream during a DebConf event. Then I thought “Ah! I could just show the schedule!”.

The stream looked bright and colourful (and was even useful!) and Jitsi/Jibri didn’t die. I thought my work was done. As usual, little did I know how untrue that was.

The silence was slightly disturbing after the talks, and people asked for some music. Playing music on my VM and capturing the desktop audio into Jitsi was just a few pulseaudio settings away, so I spent two minutes finding some freely licensed tracks that sounded ok enough to just start playing on the stream. I came across mini-albums by Captive Portal and Cinema Noir. During the course of the MiniDebConf Online I even started enjoying those. Someone also pointed out that it would be really nice to have a UTC clock on the stream. I couldn’t find a nice clock in a hurry, so I just added a tmux clock in the meantime while we dealt with the real-time torrent of issues that usually happens when organising events like this.

Speaking of issues, during our very first talk of the last day, our speaker had a power cut during the talk and abruptly dropped off. Oops! So, since I had a screenshare open from the VM to the stream, I thought I’d just pop in a quick message in a text editor to let people know that we’re aware of it and trying to figure out what’s going on.

In the end, MiniDebConf Online worked out all right. Besides the power cut for our one speaker, and another who had a laptop that was way too under-powered to deal with video, everything worked out very well. Even the issues we had weren’t show-stoppers and we managed to work around them.

DebConf20 Moves Online

For DebConf, we usually show a sponsors loop in between sessions. It’s great that we give our sponsors visibility here, but in reality people see the sponsors loop and think “Talk over!” and then they look away. It’s also completely silent and doesn’t provide any additional useful information. I was wondering how I could take our lessons from MDCO#1 and integrate our new tricks with the sponsors loop. That is, add the schedule, time, some space to type announcements on the screen and also add some loopable music to it.

I used OBS before in making my videos, and like the flexibility it provides when working with scenes and sources. A scene is what you would think of as a screen or a document with its own collection of sources or elements. For example, a scene might contain sources such as a logo, clock, video, image, etc. A scene can also contain another scene. This is useful if you want to contain a banner or play some background music that is shared between scenes.

The above screenshots illustrate some basics of scenes and sources. First with just the DC20 banner, and then that used embedded in another scene.

For MDCO#1, I copied and pasted the schedule into a LibreOffice Impress slide that was displayed on the stream. Having to do this for all 7 days of DebConf, plus dealing with scheduling changes would be daunting. So, I started to look in to generating some schedule slides programmatically. Stefano then pointed me to the Happening Now page on the DebConf website, where the current schedule block is displayed. So all I would need to do in OBS was to display a web page. Nice!

Unfortunately the OBS in Debian doesn’t have the ability to display web pages out of the box (we need to figure out CEF in Debian), but fortunately someone provides a pre-compiled version of the plugin called Linux Browser that works just fine. This allowed me to easily add the schedule page in its own scene.

Being able to display a web page solved another problem. I wasn’t fond of having to type / manage the announcements in OBS. It would be a bit prone to user error, and if you wanted to edit the text while the loop was running, you’d have to disrupt the loop, go to the foreground scene, and edit the text before resuming the loop. That’s a bit icky. Then I thought that we could probably just get that from a web page instead. We could host some nice HTML snippet in a repository on salsa, and then anyone could easily submit an MR to update the announcement.

But then I went a step further: use an etherpad! Then anyone in the orga team can quickly update the announcement and it is instantly changed on the stream. Nice! So that small section of announcement text on the screen is actually a whole web browser with an added OBS filter to crop away all the pieces we don’t want. Overkill? Sure, but it gave us a decent enough solution that worked in time for the start of DebConf. Also, being able to type directly onto the loop screen works out great, especially in an emergency. Oh, and uhm… the clock is also a website rendered in its own web browser :-P

So, I had the ability to make scenes and add all the minimal elements I wanted. Great! But now I had to figure out how to switch scenes automatically. It’s probably worth mentioning that I only found some time to really dig into this right before DebConf started, so I was scrambling to find things that would work without too many bugs while also still being practical.

Now I needed the ability to switch between the scenes automatically / programmatically. I had never done this in OBS before, but I knew it had some kind of API, because there are Android apps that you can use to control OBS from your phone. I discovered that it has an automatic scene switcher, but it’s very basic: it can only switch based on the active window, which can be useful in some cases, but since we wouldn’t have any windows open other than OBS, this tool was basically useless for us.

After some quick searches, I found a plugin called Advanced Scene Switcher. This plugin can do a lot more, but it has some weird UI choices, and it’s really meant for gamers and other professional streamers to help them automate their workflow; it doesn’t seem at all meant for running a continuous loop. But it worked, and I could make it do something that would work for us during DebConf.

I had a chicken-and-egg problem, because I had to figure out a programming flow but didn’t really have any content to work with, or an idea of all the content that we would eventually have. I had been toying with ideas in my mind: we could add fun facts, postcards (an image with some text), the time now in different timezones, Debian news (maybe procured by the press team), cards containing the longer announcements that were sent to debconf-announce, perhaps a shout-out or two, and some photos from previous DebConfs like the group photos. I knew that I wouldn’t be able to build anything substantial by the time DebConf started, but adding content to OBS in between talks is relatively easy, so we could keep building on it during DebConf.

Nattie provided the first shout-out, and I made two video loops with the DC18/DC19 pictures and two “Did you know” cards. So the flow I ended up with was: Sponsors -> Happening Now -> Random video (which would be any of those clips) -> back to Sponsors. This ended up working pretty well for quite a while. With the first batch of videos, the sponsor loop would come up on average about every 2 minutes, but as much shorter clips like shout-outs started to come in faster and faster, it made sense to play 2-3 shout-outs before going back to sponsors.
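
The flow above can be sketched as a tiny simulation. The clip names here are made up for illustration; the real loop was driven by Advanced Scene Switcher, not code:

```python
import random

# Toy simulation of the loop flow: Sponsors -> Happening Now ->
# a random clip -> back to Sponsors. Clip names are invented.

clips = ["shoutout-nattie", "dc18-photos", "dc19-photos", "did-you-know-1"]

def loop_sequence(cycles, rng):
    """Return the ordered list of scenes shown over a number of cycles."""
    seq = []
    for _ in range(cycles):
        seq.append("Sponsors")
        seq.append("Happening Now")
        seq.append(rng.choice(clips))  # the "Random tab" step
    return seq

rng = random.Random(42)  # seeded so the run is reproducible
print(loop_sequence(2, rng))
```

Playing a few shout-outs in a row before returning to sponsors would just mean appending more `rng.choice(clips)` steps per cycle.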

So here is a very brief guide on how I set up the sequencing in Advanced Scene Switcher.

If no condition was met, a video would play from the Random tab.

Then in the Random tab, I added the scenes that were part of the random mix. Annoyingly, you have to specify how long each one should play for. If you don’t, the ‘no condition’ rule is triggered and another video is selected. The time is also the length of the video minus one second, because…

You can’t just say that a random video should return to a certain scene; you have to specify that in the Sequence tab for each video. Why minus one second? Because, at least in my early tests (and I didn’t circle back to this), it seems like 0s can randomly mean either instantly or never. Yes, this ended up being a bit confusing and tedious, and considering the late hours I worked on this, I’m surprised that I didn’t manage to screw it up completely at any point.

I also suspected that threads would eventually happen, that is, people creating video replies to other videos. We had three threads in total: a backups thread, a beverage thread and an impersonation thread. The arrow in the screenshot above points to the backups thread. I know it doesn’t look that complicated, but it was initially somewhat confusing to set up and make sense of.

For the next event, the Advanced Scene Switcher might just get some more taming, or even be replaced entirely. There are ways to drive OBS by API, and even the Advanced Scene Switcher tool can be driven externally to some degree, but I think we definitely want to replace it by the next full DebConf. We had the problem that when a talk ended, we would return to the loop in the middle of a clip, which felt very unnatural and sometimes even confusing. So Stefano helped me with a helper script that could read the socket from Vocto, which I used to write either “Loop” or “Standby” to a file; the scene switcher would watch that file and keep the sponsors loop ready to start while the talks played. Why not just switch to sponsors when the talk ends? Well, the little bit of delay in switching would mean that you would see a tiny bit of loop every time before switching to sponsors. This is also why we didn’t have any loop for the ad-hoc track (that would probably have needed another OBS instance; we’ll look into solutions for this in the future).
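
The file-based handshake can be sketched roughly like this. The file path, event names and scene names below are assumptions for illustration, not the actual DebConf scripts:

```python
import os
import tempfile

# Sketch of the state-file handshake: a helper translates Voctomix
# events into "Loop"/"Standby" in a file, and the scene switcher's
# file condition reads it back. All names here are hypothetical.

STATE_FILE = os.path.join(tempfile.gettempdir(), "loop-state")

def on_vocto_event(event):
    """Write the state the scene switcher watches for."""
    state = "Standby" if event == "talk-started" else "Loop"
    with open(STATE_FILE, "w") as f:
        f.write(state)

def scene_switcher_poll():
    """Roughly what the scene switcher's file condition decides."""
    with open(STATE_FILE) as f:
        return "sponsors-loop" if f.read().strip() == "Loop" else "hold"

on_vocto_event("talk-started")
print(scene_switcher_poll())  # hold
on_vocto_event("talk-ended")
print(scene_switcher_poll())  # sponsors-loop
```

The indirection through a file is what lets two unrelated tools (the Vocto-reading script and the OBS plugin) cooperate without either knowing about the other.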

Then there were all the clips, over 50 of them, all edited by hand in kdenlive. I removed any hard clicks, tried to improve audibility, trimmed some sections at the beginning and the end that seemed superfluous, and added some music that reduces in volume when someone speaks. In the beginning, I had lots of fun choosing music for the clips. Towards the end, I had to rush them through and just chose the same tune whether it made sense or not. For a comparison of what a difference the music can make, compare the original and adapted version of Valhalla’s clip above, or this original and adapted video from urbec. This part was a lot more fun than dealing with the video sequencer, but I also want to automate it a bit. When I can fully drive OBS from Python, I’ll likely want to show those cards and control the music volume from there as well (what could possibly go wrong…).
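
The “music reduces in volume when someone speaks” effect is a form of ducking. In kdenlive it was done by hand, but as a toy model of what an automated version might compute (the gain values are arbitrary assumptions):

```python
# Toy sidechain "ducking": drop the music gain while speech is
# present, a rough model of what was done by hand in kdenlive.

def duck_music(speech_active, full=1.0, ducked=0.25):
    """Per-frame music gain: low while speech is active, full otherwise."""
    return [ducked if active else full for active in speech_active]

frames = [False, False, True, True, False]  # speech detected per frame
print(duck_music(frames))  # [1.0, 1.0, 0.25, 0.25, 1.0]
```

A real implementation would also ramp the gain up and down smoothly instead of switching it per frame.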

The loopy name happened when I requested an @debconf.org alias for this. I was initially just thinking about loop@debconf.org but since I wanted to make it clear that the purpose of this loop is also to have some fun, I opted for “loopy” instead:

I was really surprised by how people took to loopy. I hoped it would be good and get somewhat positive feedback, but the positive feedback was just immense. The idea was that people would typically see it in between talks, but a few people told me they kept it playing after the last talk of the day to watch it in the background. Some asked for the music because they wanted to keep listening to it while working (and even for jogging!?). Some people also asked for recordings of the loop because they wanted to keep it for after DebConf. The shout-outs idea proved to be very popular. Overall, I’m very glad that people enjoyed it, and I think it’s safe to say that loopy will be back for the next event.

Also, throughout this experiment Loopy Loop turned into yet another DebConf mascot. We gain one about every DebConf, some by accident and some on purpose. This one was not quite on purpose. I meant to make an image for it for salsa, and started with an infinite loop symbol. That’s a loop, but just adding two more solid circles to it makes it look like googly eyes; now it’s a proper loopy loop!

I like the progress we’ve made on this, but there’s still a long way to go, and the ideas keep piling up. The next event is quite soon (MDCO#2 at the end of November, and it seems that three other MiniDebConf events may also be planned), but over the next few events there will likely be significantly better graphics/artwork, better sequencing, better flow and more layout options. I hope to gain some additional members in the team to deal with incoming requests during DebConf; it was quite hectic this time! The new OBS also has a scripting host that supports Python, so I should be able to do some nice things within OBS without having to drive it externally (like displaying a clock without starting a web browser).

The Loopy Loop Music

The two mini albums that mostly played during the first few days were just a copy and paste from the MDCO#1 music, which was:

For the shout-out tracks, which were later also used in the loop (because it became a bit monotonous), most of the tracks came from freepd.com:

I have many more things to say about DebConf20, but I’ll keep those for another post, and hopefully we can get all the other video stuff into a post from the video team, because I think there’s been some really good work done for this DebConf. Also, thanks to Infomaniak, which was not only a platinum sponsor for this DebConf, but also provided us with plenty of computing power to run all the video stuff on. Thanks again!

Planet DebianBits from Debian: DebConf20 online closes

DebConf20 group photo

On Saturday 29 August 2020, the annual Debian Developers and Contributors Conference came to a close.

DebConf20 was held online for the first time, due to the coronavirus (COVID-19) pandemic.

All of the sessions were streamed, with a variety of ways of participating: via IRC messaging, online collaborative text documents, and video conferencing meeting rooms.

With more than 850 attendees from 80 different countries and a total of over 100 event talks, discussion sessions, Birds of a Feather (BoF) gatherings and other activities, DebConf20 was a great success.

When it became clear that DebConf20 was going to be an online-only event, the DebConf video team spent much of the following months adapting, improving, and in some cases writing from scratch the technology that would be required to make an online DebConf possible. After lessons learned from the MiniDebConfOnline in late May, some adjustments were made, and we eventually arrived at a setup involving Jitsi, OBS, Voctomix, SReview, nginx, Etherpad, and a newly written web-based frontend for Voctomix as the various elements of the stack.

All components of the video infrastructure are free software, and the whole setup is configured through their public ansible repository.

The DebConf20 schedule included two tracks in languages other than English: the Spanish-language MiniConf, with eight talks over two days, and the Malayalam-language MiniConf, with nine talks over three days. Ad-hoc activities, introduced by attendees over the course of the conference, were also possible, streamed and recorded. There were also several team gatherings to sprint on certain areas of Debian development.

Between talks, the video stream showed the usual sponsors on the loop, but also some additional clips, including photos from previous DebConfs, fun facts about Debian, and short shout-out videos sent by attendees to communicate with their Debian friends.

For those who were not able to participate, most of the talks and sessions are already available through the Debian meetings archive website, and the remaining ones will appear in the following days.

The DebConf20 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf21 is planned to be held in Haifa, Israel, in August or September.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, the Welcome team and the Community team) were available to help participants have the best possible experience and to find solutions to any issues that arose. See the page about the Code of Conduct on the DebConf20 website for more details.

Debian thanks the commitment of numerous sponsors to support DebConf20, particularly our Platinum Sponsors: Lenovo, Infomaniak, Google and Amazon Web Services (AWS).

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/.

About Lenovo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

About Infomaniak

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

About Amazon Web Services (AWS)

Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies.

Contact Information

For further information, please visit the DebConf20 web page at https://debconf20.debconf.org/ or send mail to press@debian.org.

,

Planet DebianAndrew Cater: Just coming to the end of Debconf 20 2020 - and a preview.

 One more talk from Jon "maddog" Hall and then the closing wrap up. This has been a blast: I've enjoyed it a lot and it's made me more enthusiastic than I have been for a long time.

 So once more with thanks to the video team
It's almost the end of this year's DebConf dream
As we all break up, there's a favour to ask
Stay safe - wear a smile - and where needed, a mask

We'll not take you to task ; it's a favour we ask
Stay safe - wear a smile - and where needed, a mask

Haifa, Pristina or Kochi - we'll see how that lands
There's not much left to tidy up - wash your own hands
So now if you'll join us in virtual beer
We'll bring this to a close - and we'll meet up next year

So now if you'll join us - let us all raise a cheer
To Debconf21 - and we'll see you next year


Planet DebianRuss Allbery: PGP::Sign 1.02

This is another test-only release of my module for manipulating PGP signatures in Perl. I'm trying to get the CPAN testing failures down to a dull roar. This iteration fixes some testing issues with systems that have only GnuPG v1 and tries to handle systems whose gpg is GnuPG v2 but is older than 2.1.12 and therefore doesn't have the --pinentry-mode flag that GnuPG uses to suppress password prompting.

I handled the latter by skipping the tests if the gpg on the user's PATH was too old. I'm not certain this is the best approach, although it makes the CPAN automated testing more useful for me, since the module will not work without special configuration on those systems. On the other hand, if someone is installing it to point to some other GnuPG binary on the system at runtime, failing the installation because their system gpg is too old seems wrong, and the test failure doesn't indicate a bug in the module.
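
PGP::Sign itself is written in Perl, but the version gate described above boils down to parsing `gpg --version` output and comparing against 2.1.12. An illustrative sketch of that comparison logic (in Python, purely to show the idea):

```python
import re

# Illustrative version gate: skip tests when the gpg on PATH does not
# support --pinentry-mode (GnuPG 1 never had it; GnuPG 2 gained it
# in 2.1.12).

def supports_pinentry_mode(version_output):
    m = re.search(r"gpg \(GnuPG\) (\d+)\.(\d+)\.(\d+)", version_output)
    if not m:
        return False
    version = tuple(int(x) for x in m.groups())
    return version >= (2, 1, 12)

print(supports_pinentry_mode("gpg (GnuPG) 2.2.27"))  # True
print(supports_pinentry_mode("gpg (GnuPG) 2.0.22"))  # False
print(supports_pinentry_mode("gpg (GnuPG) 1.4.23"))  # False
```

Comparing version tuples rather than strings avoids the classic trap where "2.10" sorts before "2.2" lexically.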

Essentially, I'm missing richer test metadata in the Perl ecosystem. I want to be able to declare a dependency on a non-Perl system binary, but of course Perl has no mechanism to do that.

I thought about trying to deal with the Windows failures due to missing IPC::Run features (redirecting high-numbered file descriptors) on the Windows platform in a similar way, but decided in that case I do want the tests to fail because PGP::Sign will never work on that platform regardless of the runtime configuration. Here too I spent some time searching for some way to indicate with Module::Build that the module doesn't work on Windows, and came up empty. This seems to be a gap in Perl's module distribution ecosystem.

In any case, hopefully this release will clean up the remaining test failures on Linux and BSD systems, and I can move on to work on the Big Eight signing key, which was the motivating application for these releases.

You can get the latest release from CPAN or from the PGP::Sign distribution page.

,

Sam VargheseManaging a relationship is hard work

For many years, Australia has been trading with China, apparently in the belief that one can do business with a country for yonks without expecting the development of some sense of obligation. The attitude has been that China needs Australian resources and the relationship needs to go no further than the transfer of sand dug out of Australia and sent to China.

Those in Beijing, obviously, haven’t seen the exchange this way. There has been an expectation that there would be some obligation for the relationship to go further than just the impersonal exchange of goods for money. Australia, in true colonial fashion, has expected China to know its place and keep its distance.

This is similar to the attitude the Americans took when they pushed for China’s admission to the World Trade Organisation: all they wanted was a means of getting rid of their manufacturing so their industries could grow richer and an understanding that China would agree to go along with the American diktat to change as needed to keep the US on top of the trading world.

But then you cannot invite a man into your house for a dinner party and insist that he eat only bread. Once inside, he is free to choose what he wants to consume. It appears that the Americans do not understand this simple rule.

Both Australia and the US have forgotten they are dealing with the oldest civilisation in the world. A culture that plays the long waiting game. The Americans read the situation completely wrong for the last 70 years, assuming initially that the Kuomintang would come out on top and that the Communists would be vanquished. In the interim, the Americans obtained most of the money used for the early development of their country by selling opium to the Chinese.

China has not forgotten that humiliation.

There was never a thought given to the very likely event that China would one day want to assert itself and ask to be treated as an equal. Which is what is happening now. Both Australia and the US are feigning surprise and acting as though they are completely innocent in this exercise.

Fast forward to 2020 when the Americans and the Australians are both on the warpath, asserting that China is acting aggressively and trying to intimidate Australia while refusing to bow to American demands that it behave as it is expected to. There are complaints about Chinese demands for technology transfers, completely ignoring the fact that a developing country can ask for such transfers under WTO rules.

There are allegations of IP theft by the Americans, completely forgetting that they stole IP from Britain in the early days of the colonies; the name Samuel Slater should ring a bell in this context. Many educated Americans have themselves written about Slater.

Racism is one trait that defines the Australian approach to China. The Asian nation has been expected to confine itself to trade and never ask for more. And Australia, in condescending fashion, has lauded its approach, never understanding that it is seen as an American lapdog and no more. China has been waiting for the day when it can level scores.

It is difficult to comprehend why Australia genuflects before the US. There has been an attitude of veneration going back to the time of Harold Holt who is well known for his “All the way with LBJ” line, referring to the fact that Australian soldiers would be sent to Vietnam to serve as cannon fodder for the Americans and would, in short, do anything as long as the US decided so. Exactly what fight Australia had with Vietnam is not clear.

At that stage, there was no seminal action by the US that had put the fear of God into Australia; this came later, in 1975, when the CIA manipulated Australian politics and influenced the sacking of prime minister Gough Whitlam by the governor-general, Sir John Kerr. There is still resistance from Australian officialdom and its toadies to this version of events, but the evidence is incontrovertible; Australian journalist Guy Rundle has written two wonderful accounts of how the toppling took place.

Whitlam’s sins? Well, he had cracked down on the Australian Security Intelligence Organisation, an agency that spied on Australians and conveyed information to the CIA, when he discovered that it was keeping tabs on politicians. His attorney-general, Lionel Murphy, even ordered the Australian Federal Police to raid the ASIO, a major affront to the Americans who did not like their client being treated this way.

Whitlam also hinted that he would not renew a treaty for the Americans to continue using a base at Pine Gap as a surveillance centre. This centre was offered to the US, with the rent being one peppercorn for 99 years.

Of course, this was pure insolence coming from a country which the Americans — as they have with many other nations — treated as a vassal state and one only existing to do their bidding. So Whitlam was thrown out.

On China, too, Australia has served the role of American lapdog. In recent days, the Australian Prime Minister Scott Morrison has made statements attacking China soon after he has been in touch with the American leadership. In other words, the Americans are using Australia to provoke China. It’s shameful to be used in this manner, but then once a bootlicker, always a bootlicker.

Australia’s subservience to the US is so great that it even co-opted an American official, former US Secretary of Homeland Security Kirstjen Nielsen, to play a role in developing a cyber security strategy. There are a large number of better qualified people in the country who could do a much better job than Nielsen, who is a politician and not a technically qualified individual. But the slave mentality has always been there and will remain.

Cryptogram Friday Squid Blogging: How Squid Survive Freezing, Oxygen-Deprived Waters

Lots of interesting genetic details.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Krebs on SecuritySendgrid Under Siege from Hacked Accounts

Email service provider Sendgrid is grappling with an unusually large number of customer accounts whose passwords have been cracked, sold to spammers, and abused for sending phishing and email malware attacks. Sendgrid’s parent company Twilio says it is working on a plan to require multi-factor authentication for all of its customers, but that solution may not come fast enough for organizations having trouble dealing with the fallout in the meantime.

Image: Wikipedia

Many companies use Sendgrid to communicate with their customers via email, or else pay marketing firms to do that on their behalf using Sendgrid’s systems. Sendgrid takes steps to validate that new customers are legitimate businesses, and that emails sent through its platform carry the proper digital signatures that other companies can use to validate that the messages have been authorized by its customers.

But this also means when a Sendgrid customer account gets hacked and used to send malware or phishing scams, the threat is particularly acute because a large number of organizations allow email from Sendgrid’s systems to sail through their spam-filtering systems.

To make matters worse, links included in emails sent through Sendgrid are obfuscated (mainly for tracking deliverability and other metrics), so it is not immediately clear to recipients where on the Internet they will be taken when they click.
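
Click-tracking obfuscation generally works by rewriting each link to point at the provider’s own domain, which logs the click and then redirects. A hypothetical sketch of such a rewrite (this is not Sendgrid’s actual scheme; the domain and token format are invented):

```python
import hashlib
from urllib.parse import quote

# Hypothetical click-tracking rewrite: the recipient sees only the
# provider's domain, not the real destination, until they click.

TRACKING_HOST = "https://links.example-esp.com"  # invented ESP domain

def wrap_link(url, message_id):
    """Replace a destination URL with a tracking redirect URL."""
    token = hashlib.sha256(f"{message_id}:{url}".encode()).hexdigest()[:16]
    return f"{TRACKING_HOST}/c/{token}?u={quote(url, safe='')}"

wrapped = wrap_link("https://evil.example/phish", "msg-001")
print(wrapped)
```

The security consequence described above follows directly: whether the destination is benign or a phishing page, the visible hostname is the email provider’s, so recipients and naive filters cannot tell the difference.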

Dealing with compromised customer accounts is a constant challenge for any organization doing business online today, and certainly Sendgrid is not the only email marketing platform dealing with this problem. But according to multiple emails from readers, recent threads on several anti-spam discussion lists, and interviews with people in the anti-spam community, over the past few months there has been a marked increase in malicious, phishous and outright spammy email being blasted out via Sendgrid’s servers.

Rob McEwen is CEO of Invaluement.com, an anti-spam firm whose data on junk email trends are used to improve the spam-blocking technologies deployed by several Fortune 100 companies. McEwen said no other email service provider has come close to generating the volume of spam that’s been emanating from Sendgrid accounts lately.

“As far as the nasty criminal phishes and viruses, I think there’s not even a close second in terms of how bad it’s been with Sendgrid over the past few months,” he said.

Trying to filter out bad emails coming from a major email provider that so many legitimate companies rely upon to reach their customers can be a dicey business. If you filter the emails too aggressively you end up with an unacceptable number of “false positives,” i.e., benign or even desirable emails that get flagged as spam and sent to the junk folder or blocked altogether.

But McEwen said the incidence of malicious spam coming from Sendgrid has gotten so bad that he recently launched a new anti-spam block list specifically to filter out email from Sendgrid accounts that have been known to be blasting large volumes of junk or malicious email.

“Before I implemented this in my own filtering system a week ago, I was getting three to four phone calls or stern emails a week from angry customers wondering why these malicious emails were getting through to their inboxes,” McEwen said. “And I just am not seeing anything this egregious in terms of viruses and spams from the other email service providers.”

In an interview with KrebsOnSecurity, Sendgrid parent firm Twilio acknowledged the company had recently seen an increase in compromised customer accounts being abused for spam. While Sendgrid does allow customers to use multi-factor authentication (also known as two-factor authentication or 2FA), this protection is not mandatory.

But Twilio Chief Security Officer Steve Pugh said the company is working on changes that would require customers to use some form of 2FA in addition to usernames and passwords.

“Twilio believes that requiring 2FA for customer accounts is the right thing to do, and we’re working towards that end,” Pugh said. “2FA has proven to be a powerful tool in securing communications channels. This is part of the reason we acquired Authy and created a line of account security products and services. Twilio, like other platforms, is forming a plan on how to better secure our customers’ accounts through native technologies such as Authy and additional account level controls to mitigate known attack vectors.”

Requiring customers to use some form of 2FA would go a long way toward neutralizing the underground market for compromised Sendgrid accounts, which are sold by a variety of cybercriminals who specialize in gaining access to accounts by targeting users who re-use the same passwords across multiple websites.
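
The one-time codes behind authenticator apps such as Authy are standardized: TOTP (RFC 6238) is just HOTP (RFC 4226) with a time-derived counter. A minimal stdlib-only HOTP sketch, checked against the RFC 4226 test vectors:

```python
import hashlib
import hmac
import struct

# Minimal HOTP (RFC 4226): HMAC-SHA1 over a moving counter,
# dynamically truncated to a short decimal code.

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors:
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

Because the code depends on a secret the attacker does not hold, a cracked password alone no longer grants access, which is exactly why mandatory 2FA would undercut the market for compromised accounts.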

One such individual, who goes by the handle “Kromatix” on several forums, is currently selling access to more than 400 compromised Sendgrid user accounts. The pricing attached to each account is based on volume of email it can send in a given month. Accounts that can send up to 40,000 emails a month go for $15, whereas those capable of blasting 10 million missives a month sell for $400.

“I have a large supply of cracked Sendgrid accounts that can be used to generate an API key which you can then plug into your mailer of choice and send massive amounts of emails with ensured delivery,” Kromatix wrote in an Aug. 23 sales thread. “Sendgrid servers maintain a very good reputation with [email service providers] so your content becomes much more likely to get into the inbox so long as your setup is correct.”

Neil Schwartzman, executive director of the anti-spam group CAUCE, said Sendgrid’s 2FA plans are long overdue, noting that the company bought Authy back in 2015.

“Single-factor authentication for a company like this in 2020 is just ludicrous given the potential damage and malicious content we’re seeing,” Schwartzman said.

“I understand that it’s a task to invoke 2FA, and given the volume of customers Sendgrid has that’s something to consider because there’s going to be a lot of customer overhead involved,” he continued. “But it’s not like your bank, social media account, email and plenty of other places online don’t already insist on it.”

Schwartzman said if Twilio doesn’t act quickly enough to fix the problem on its end, the major email providers of the world (think Google, Microsoft and Apple) — and their various machine-learning anti-spam algorithms — may do it for them.

“There is a tipping point after which receiving firms start to lose patience and start to more aggressively filter this stuff,” he said. “If seeing a Sendgrid email according to machine learning becomes a sign of abuse, trust me the machines will make the decisions even if the people don’t.”

Worse Than FailureError'd: Don't Leave This Page

"My Kindle showed me this for the entire time I read this book. Luckily, page 31 is really exciting!" writes Hans H.

 

Tim wrote, "Thanks JustPark, I'd love to verify my account! Now...how about that button?"

 

"I almost managed to uninstall Viber, or did I?" writes Simon T.

 

Marco wrote, "All I wanted to do was to post a one-time payment on a reputable cloud provider. Now I'm just confused."

 

Brinio H. wrote, "Somehow I expected my muscles to feel more sore after walking over 382 light-years on one day."

 

"Here we have PowerBI failing to dispel the perception that 'Business Intelligence' is an oxymoron," writes Craig.

 


,

Krebs on SecurityConfessions of an ID Theft Kingpin, Part II

Yesterday’s piece told the tale of Hieu Minh Ngo, a hacker the U.S. Secret Service described as someone who caused more material financial harm to more Americans than any other convicted cybercriminal. Ngo was recently deported back to his home country after serving more than seven years in prison for running multiple identity theft services. He now says he wants to use his experience to convince other cybercriminals to use their skills for good. Here’s a look at what happened after he got busted.

Hieu Minh Ngo, 29, in a recent photo.

Part I of this series ended with Ngo in handcuffs after disembarking a flight from his native Vietnam to Guam, where he believed he was going to meet another cybercriminal who’d promised to hook him up with the mother of all consumer data caches.

Ngo had been making more than $125,000 a month reselling ill-gotten access to some of the biggest data brokers on the planet. But the Secret Service discovered his various accounts at these data brokers and had them shut down one by one. Ngo became obsessed with restarting his business and maintaining his previous income. By this time, his ID theft services had earned roughly USD $3 million.

As this was going on, Secret Service agents used an intermediary to trick Ngo into thinking he’d trodden on the turf of another cybercriminal. From Part I:

The Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” the Secret Service’s Matt O’Neill recalled.

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

In an interview with KrebsOnSecurity, Ngo said he spent about two months in a Guam jail awaiting transfer to the United States. A month passed before he was allowed a 10-minute phone call to his family to explain what he’d gotten himself into.

“This was a very tough time,” Ngo said. “They were so sad and they were crying a lot.”

First stop on his prosecution tour was New Jersey, where he ultimately pleaded guilty to hacking into MicroBilt, the first of several data brokers whose consumer databases would power different iterations of his identity theft service over the years.

Next came New Hampshire, where another guilty plea forced him to testify in three different trials against identity thieves who had used his services for years. Among them was Lance Ealy, a serial ID thief from Dayton, Ohio who used Ngo’s service to purchase more than 350 “fullz” — a term used to describe a package of everything one would need to steal someone’s identity, including their Social Security number, mother’s maiden name, birth date, address, phone number, email address, bank account information and passwords.

Ealy used Ngo’s service primarily to conduct tax refund fraud with the U.S. Internal Revenue Service (IRS), claiming huge refunds in the names of ID theft victims who first learned of the fraud when they went to file their taxes and found someone else had beat them to it.

Ngo’s cooperation with the government ultimately led to 20 arrests, with a dozen of those defendants lured into the open by O’Neill and other Secret Service agents posing as Ngo.

The Secret Service had difficulty pinning down the exact amount of financial damage inflicted by Ngo’s various ID theft services over the years, primarily because those services only kept records of what customers searched for — not which records they purchased.

But based on the records they did have, the government estimated that Ngo’s service enabled approximately $1.1 billion in new account fraud at banks and retailers throughout the United States, and roughly $64 million in tax refund fraud with the states and the IRS.

“We interviewed a number of Ngo’s customers, who were pretty open about why they were using his services,” O’Neill said. “Many of them told us the same thing: Buying identities was so much better for them than stolen payment card data, because card data could be used once or twice before it was no good to them anymore. But identities could be used over and over again for years.”

O’Neill said he still marvels at the fact that Ngo’s name is practically unknown when compared to the world’s most infamous credit card thieves, some of whom were responsible for stealing hundreds of millions of cards from big box retail merchants.

“I don’t know of anyone who has come close to causing more material harm than Ngo did to the average American,” O’Neill said. “But most people have probably never heard of him.”

Ngo said he wasn’t surprised that his services were responsible for so much financial damage. But he was utterly unprepared to hear about the human toll. Throughout the court proceedings, Ngo sat through story after dreadful story of how his work had ruined the financial lives of people harmed by his services.

“When I was running the service, I didn’t really care because I didn’t know my customers and I didn’t know much about what they were doing with it,” Ngo said. “But during my case, the federal court received like 13,000 letters from victims who complained they lost their houses, jobs, or could no longer afford to buy a home or maintain their financial life because of me. That made me feel really bad, and I realized I’d been a terrible person.”

Even as he bounced from one federal detention facility to the next, Ngo always seemed to encounter ID theft victims wherever he went, including prison guards, healthcare workers and counselors.

“When I was in jail at Beaumont, Texas I talked to one of the correctional officers there who shared with me a story about her friend who lost her identity and then lost everything after that,” Ngo recalled. “Her whole life fell apart. I don’t know if that lady was one of my victims, but that story made me feel sick. I know now that what I was doing was just evil.”

Ngo’s former ID theft service usearching[.]info.

The Vietnamese hacker was released from prison a few months ago, and is now finishing up a mandatory three-week COVID-19 quarantine in a government-run facility near Ho Chi Minh city. In the final months of his detention, Ngo started reading everything he could get his hands on about computer and Internet security, and even authored a lengthy guide written for the average Internet user with advice about how to avoid getting hacked or becoming the victim of identity theft.

Ngo said while he would like to one day get a job working in some cybersecurity role, he’s in no hurry to do so. He’s already had at least one job offer in Vietnam, but he turned it down. He says he’s not ready to work yet, but is looking forward to spending time with his family — and specifically with his dad, who was recently diagnosed with Stage 4 cancer.

Longer term, Ngo says, he wants to mentor young people and help guide them on the right path, and away from cybercrime. He’s been brutally honest about his crimes and the destruction he’s caused. His LinkedIn profile states up front that he’s a convicted cybercriminal.

“I hope my work can help to change the minds of somebody, and if at least one person can change and turn to do good, I’m happy,” Ngo said. “It’s time for me to do something right, to give back to the world, because I know I can do something like this.”

Still, the recidivism rate among cybercriminals tends to be extremely high, and it would be easy for him to slip back into his old ways. After all, few people know as well as he does how best to exploit access to identity data.

O’Neill said he believes Ngo probably will keep his nose clean. But he added that Ngo’s service if it existed today probably would be even more successful and lucrative given the sheer number of scammers involved in using stolen identity data to defraud states and the federal government out of pandemic assistance loans and unemployment insurance benefits.

“It doesn’t appear he’s looking to get back into that life of crime,” O’Neill said. “But I firmly believe the people doing fraudulent small business loans and unemployment claims cut their teeth on his website. He was definitely the new coin of the realm.”

Ngo maintains he has zero interest in doing anything that might send him back to prison.

“Prison is a difficult place, but it gave me time to think about my life and my choices,” he said. “I am committing myself to do good and be better every day. I now know that money is just a part of life. It’s not everything and it can’t bring you true happiness. I hope those cybercriminals out there can learn from my experience. I hope they stop what they are doing and instead use their skills to help make the world better.”

Worse Than FailureCodeSOD: Win By Being Last

I’m going to open with just one line, just one line from Megan D, before we dig into the story:

public static boolean comparePasswords(char[] password1, char[] password2)

A long time ago, someone wrote a Java 1.4 application. It’s all about getting data out of data files, like CSVs and Excel and XML, and getting it into a database, where it can then be turned into plots and reports. Currently, it has two customers, but boy, there’s a lot of technology invested in it, so the pointy-hairs decided that it needed to be updated so they could sell it to new customers.

The developers played a game of “Not It!” and Megan lost. It wasn’t hard to see why no one wanted to touch this code. The UI section was implemented in code generated by an Eclipse plugin that no longer exists. There was other UI code that wasn’t generated that way, but no code path ever actually displayed it. The project didn’t have one “do everything” utility class; it had many of them.

The real magic was in Database.java. All the data got converted into strings before going into the database, and data got pulled back out as lists of strings- one string per row, prepended with the number of columns in that row. The string would get split up and converted back into the actual real datatypes.

Getting back to our sample line above, Megan adds:

No restrictions on any data in the database, or even input cleaning - little Bobby Tables would have a field day. There are so many issues that the fact that passwords are plaintext barely even registers as a problem.

A common convention used in the database layer is “loop and compare”. Want to check if a username exists in the database? SELECT username FROM users WHERE username = 'someuser', loop across the results, and if the username in the result set matches 'someuser', set a flag to true (set it to false otherwise). Return the flag. And if you're wondering why they need to look at each row instead of just seeing a non-zero number of matches, so am I.
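Assuming the schema implied by the article’s own example (a `users` table with a `username` column; the names are the article’s obfuscated placeholders), the whole “loop and compare” round trip collapses to a single query, sketched here:

```sql
-- Let the database do the counting: a non-zero result means the
-- username exists, with no client-side loop required.
SELECT COUNT(*) AS matches
FROM users
WHERE username = 'someuser';
```
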

Usernames are not unique, but the username/group combination should be.

Similarly, if you’re logging in, it uses a “loop and compare”. Find all the rows for users with that username. Then, find all the groups for that username. Loop across all the groups and check if any of them match the user trying to log in. Then loop across all the stored plaintext passwords and see if they match.

But that raises the question: how do you tell if two strings match? Just use an equality comparison? Or a .equals? Of course not.

We use “loop and compare” on sequences of rows, so we should also use “loop and compare” on sequences of characters. What could be wrong with that?

/**
   * Compares two given char arrays for equality.
   * 
   * @param password1
   *          The first password to compare.
   * @param password2
   *          The second password to compare.
   * @return True if the passwords are equal false otherwise.
   */
  public static boolean comparePasswords(char[] password1, char[] password2)
  {
    // assume false until prove otherwise
    boolean aSameFlag = false;
    if (password1 != null && password2 != null)
    {
      if (password1.length == password2.length)
      {
        for (int aIndex = 0; aIndex < password1.length; aIndex++)
        {
          aSameFlag = password1[aIndex] == password2[aIndex];
        }
      }
    }
    return aSameFlag;
  }

If the passwords are both non-null, if they’re both the same length, compare them one character at a time. For each character, set the aSameFlag to true if they match, false if they don’t.

Return the aSameFlag.

The end result of this is that only the last letter matters, so from the perspective of this code, there’s no difference between the word “ship” and a more accurate way to describe this code.
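For completeness, the minimal fix is to bail out on the first mismatched character so that every position matters, not just the last one. (In practice `java.util.Arrays.equals` already does this, and for password checks a constant-time comparison such as `java.security.MessageDigest.isEqual` is preferable; this is just a sketch keeping the original signature.)

```java
public class PasswordCompare {
    /**
     * Compares two char arrays for equality, returning false as soon as
     * any character differs. Unlike the original, a mismatch in any
     * position (not just the last) fails the comparison.
     */
    public static boolean comparePasswords(char[] password1, char[] password2) {
        if (password1 == null || password2 == null) {
            return false;
        }
        if (password1.length != password2.length) {
            return false;
        }
        for (int i = 0; i < password1.length; i++) {
            if (password1[i] != password2[i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // "ship" vs "chip" differ only in the first character; the
        // original code would have accepted this pair.
        System.out.println(comparePasswords("ship".toCharArray(), "chip".toCharArray())); // false
        System.out.println(comparePasswords("ship".toCharArray(), "ship".toCharArray())); // true
    }
}
```
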

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Krebs on SecurityConfessions of an ID Theft Kingpin, Part I

At the height of his cybercriminal career, the hacker known as “Hieupc” was earning $125,000 a month running a bustling identity theft service that siphoned consumer dossiers from some of the world’s top data brokers. That is, until his greed and ambition played straight into an elaborate snare set by the U.S. Secret Service. Now, after more than seven years in prison, Hieupc is back in his home country and hoping to convince other would-be cybercrooks to use their computer skills for good.

Hieu Minh Ngo, in his teens.

For several years beginning around 2010, a lone teenager in Vietnam named Hieu Minh Ngo ran one of the Internet’s most profitable and popular services for selling “fullz,” stolen identity records that included a consumer’s name, date of birth, Social Security number and email and physical address.

Ngo got his treasure trove of consumer data by hacking and social engineering his way into a string of major data brokers. By the time the Secret Service caught up with him in 2013, he’d made over $3 million selling fullz data to identity thieves and organized crime rings operating throughout the United States.

Matt O’Neill is the Secret Service agent who in February 2013 successfully executed a scheme to lure Ngo out of Vietnam and into Guam, where the young hacker was arrested and sent to the mainland U.S. to face prosecution. O’Neill now heads the agency’s Global Investigative Operations Center, which supports investigations into transnational organized criminal groups.

O’Neill said he opened the investigation into Ngo’s identity theft business after reading about it in a 2011 KrebsOnSecurity story, “How Much is Your Identity Worth?” According to O’Neill, what’s remarkable about Ngo is that to this day his name is virtually unknown among the pantheon of infamous convicted cybercriminals, the majority of whom were busted for trafficking in huge quantities of stolen credit cards.

Ngo’s businesses enabled an entire generation of cybercriminals to commit an estimated $1 billion worth of new account fraud, and to sully the credit histories of countless Americans in the process.

“I don’t know of any other cybercriminal who has caused more material financial harm to more Americans than Ngo,” O’Neill told KrebsOnSecurity. “He was selling the personal information on more than 200 million Americans and allowing anyone to buy it for pennies apiece.”

Freshly released from the U.S. prison system and deported back to Vietnam, Ngo is currently finishing up a mandatory three-week COVID-19 quarantine at a government-run facility. He contacted KrebsOnSecurity from inside this facility with the stated aim of telling his little-known story, and to warn others away from following in his footsteps.

BEGINNINGS

Ten years ago, then 19-year-old hacker Ngo was a regular on the Vietnamese-language computer hacking forums. Ngo says he came from a middle-class family that owned an electronics store, and that his parents bought him a computer when he was around 12 years old. From then on out, he was hooked.

In his late teens, he traveled to New Zealand to study English at a university there. By that time, he was already an administrator of several dark web hacker forums, and between his studies he discovered a vulnerability in the school’s network that exposed payment card data.

“I did contact the IT technician there to fix it, but nobody cared so I hacked the whole system,” Ngo recalled. “Then I used the same vulnerability to hack other websites. I was stealing lots of credit cards.”

Ngo said he decided to use the card data to buy concert and event tickets from Ticketmaster, and then sell the tickets at a New Zealand auction site called TradeMe. The university later learned of the intrusion and Ngo’s role in it, and the Auckland police got involved. Ngo’s travel visa was not renewed after his first semester ended, and in retribution he attacked the university’s site, shutting it down for at least two days.

Ngo said he started taking classes again back in Vietnam, but soon found he was spending most of his time on cybercrime forums.

“I went from hacking for fun to hacking for profits when I saw how easy it was to make money stealing customer databases,” Ngo said. “I was hanging out with some of my friends from the underground forums and we talked about planning a new criminal activity.”

“My friends said doing credit cards and bank information is very dangerous, so I started thinking about selling identities,” Ngo continued. “At first I thought well, it’s just information, maybe it’s not that bad because it’s not related to bank accounts directly. But I was wrong, and the money I started making very fast just blinded me to a lot of things.”

MICROBILT

His first big target was a consumer credit reporting company in New Jersey called MicroBilt.

“I was hacking into their platform and stealing their customer database so I could use their customer logins to access their [consumer] databases,” Ngo said. “I was in their systems for almost a year without them knowing.”

Very soon after gaining access to MicroBilt, Ngo says, he stood up Superget[.]info, a website that advertised the sale of individual consumer records. Ngo said initially his service was quite manual, requiring customers to request specific states or consumers they wanted information on, and he would conduct the lookups by hand.

Ngo’s former identity theft service, superget[.]info

“I was trying to get more records at once, but the speed of our Internet in Vietnam then was very slow,” Ngo recalled. “I couldn’t download it because the database was so huge. So I just manually search for whoever need identities.”

But Ngo would soon work out how to use more powerful servers in the United States to automate the collection of larger amounts of consumer data from MicroBilt’s systems, and from other data brokers. As I wrote of Ngo’s service back in November 2011:

“Superget lets users search for specific individuals by name, city, and state. Each “credit” costs USD$1, and a successful hit on a Social Security number or date of birth costs 3 credits each. The more credits you buy, the cheaper the searches are per credit: Six credits cost $4.99; 35 credits cost $20.99, and $100.99 buys you 230 credits. Customers with special needs can avail themselves of the “reseller plan,” which promises 1,500 credits for $500.99, and 3,500 credits for $1000.99.

“Our Databases are updated EVERY DAY,” the site’s owner enthuses. “About 99% nearly 100% US people could be found, more than any sites on the internet now.”

Ngo’s intrusion into MicroBilt eventually was detected, and the company kicked him out of their systems. But he says he got back in using another vulnerability.

“I was hacking them and it was back and forth for months,” Ngo said. “They would discover [my accounts] and fix it, and I would discover a new vulnerability and hack them again.”

COURT (AD)VENTURES, AND EXPERIAN

This game of cat and mouse continued until Ngo found a much more reliable and stable source of consumer data: A U.S. based company called Court Ventures, which aggregated public records from court documents. Ngo wasn’t interested in the data collected by Court Ventures, but rather in its data sharing agreement with a third-party data broker called U.S. Info Search, which had access to far more sensitive consumer records.

Using forged documents and more than a few lies, Ngo was able to convince Court Ventures that he was a private investigator based in the United States.

“At first [when] I sign up they asked for some documents to verify,” Ngo said. “So I just used some skill about social engineering and went through the security check.”

Then, in March 2012, something even more remarkable happened: Court Ventures was purchased by Experian, one of the big three major consumer credit bureaus in the United States. And for nine months after the acquisition, Ngo was able to maintain his access.

“After that, the database was under control by Experian,” he said. “I was paying Experian good money, thousands of dollars a month.”

Whether anyone at Experian ever performed due diligence on the accounts grandfathered in from Court Ventures is unclear. But it wouldn’t have taken a rocket surgeon to figure out that this particular customer was up to something fishy.

For one thing, Ngo paid the monthly invoices for his customers’ data requests using wire transfers from a multitude of banks around the world, but mostly from new accounts at financial institutions in China, Malaysia and Singapore.

O’Neill said Ngo’s identity theft website generated tens of thousands of queries each month. For example, the first invoice Court Ventures sent Ngo in December 2010 was for 60,000 queries. By the time Experian acquired the company, Ngo’s service had attracted more than 1,400 regular customers, and was averaging 160,000 monthly queries.

More importantly, Ngo’s profit margins were enormous.

“His service was quite the racket,” he said. “Court Ventures charged him 14 cents per lookup, but he charged his customers about $1 for each query.”

By this time, O’Neill and his fellow Secret Service agents had served dozens of subpoenas tied to Ngo’s identity theft service, including one that granted them access to the email account he used to communicate with customers and administer his site. The agents discovered several emails from Ngo instructing an accomplice to pay Experian using wire transfers from different Asian banks.

TLO

Working with the Secret Service, Experian quickly zeroed in on Ngo’s accounts and shut them down. Aware of an opportunity here, the Secret Service contacted Ngo through an intermediary in the United Kingdom — a known, convicted cybercriminal who agreed to play along. The U.K.-based collaborator told Ngo he had personally shut down Ngo’s access to Experian because he had been there first and Ngo was interfering with his business.

“The U.K. guy told Ngo, ‘Hey, you’re treading on my turf, and I decided to lock you out. But as long as you’re paying a vig through me, your access won’t go away’,” O’Neill recalled.

The U.K. cybercriminal, acting at the behest of the Secret Service and U.K. authorities, told Ngo that if he wanted to maintain his access, he could agree to meet up in person. But Ngo didn’t immediately bite on the offer.

Instead, he weaseled his way into another huge data store. In much the same way he’d gained access to Court Ventures, Ngo got an account at a company called TLO, another data broker that sells access to extremely detailed and sensitive information on most Americans.

TLO’s service is accessible to law enforcement agencies and to a limited number of vetted professionals who can demonstrate they have a lawful reason to access such information. In 2014, TLO was acquired by Trans Union, one of the other three big U.S. consumer credit reporting bureaus.

And for a short time, Ngo used his access to TLO to power a new iteration of his business — an identity theft service rebranded as usearching[.]info. This site also pulled consumer data from a payday loan company that Ngo hacked into, as documented in my Sept. 2012 story, ID Theft Service Tied to Payday Loan Sites. Ngo said the hacked payday loans site gave him instant access to roughly 1,000 new fullz records each day.

Ngo’s former ID theft service usearching[.]info.

BLINDED BY GREED

By this time, Ngo was a multi-millionaire: His various sites and reselling agreements with three Russian-language cybercriminal stores online had earned him more than USD $3 million. He told his parents his money came from helping companies develop websites, and even used some of his ill-gotten gains to pay off the family’s debts (its electronics business had gone belly up, and a family member had borrowed but never paid back a significant sum of money).

But mostly, Ngo said, he spent his money on frivolous things, although he says he’s never touched drugs or alcohol.

“I spent it on vacations and cars and a lot of other stupid stuff,” he said.

When TLO locked Ngo out of his account there, the Secret Service used it as another opportunity for their cybercriminal mouthpiece in the U.K. to turn the screws on Ngo yet again.

“He told Ngo he’d locked him out again, and that he could do this all day long,” O’Neill said. “And if he truly wanted lasting access to all of these places he used to have access to, he would agree to meet and form a more secure partnership.”

After several months of conversing with his apparent U.K.-based tormentor, Ngo agreed to meet him in Guam to finalize the deal. Ngo says he understood at the time that Guam is an unincorporated territory of the United States, but that he discounted the chances that this was all some kind of elaborate law enforcement sting operation.

“I was so desperate to have a stable database, and I got blinded by greed and started acting crazy without thinking,” Ngo said. “Lots of people told me ‘Don’t go!,’ but I told them I have to try and see what’s going on.”

But immediately after stepping off of the plane in Guam, he was apprehended by Secret Service agents.

“One of the names of his identity theft services was findget[.]me,” O’Neill said. “We took that seriously, and we did like he asked.”

This is Part I of a multi-part series. Part II in this series is available at this link.

Worse Than FailureCodeSOD: Where to Insert This

If you run a business of any size, you need some sort of resource-management/planning software. Really small businesses use Excel. Medium businesses use Excel. Enterprises use Excel. But in addition to that, the large businesses also pay through the nose for a gigantic ERP system, like Oracle or SAP, that they can wire up to Excel.

Small and medium businesses can’t afford an ERP, but they might want to purchase a management package in the niche realm of “SMB software”- small and medium business software. Much like their larger cousins, these SMB tools have… a different idea of code quality.

Cassandra’s company had deployed such a product, and with it came a slew of tickets. The performance was bad. There were bugs everywhere. While the company provided support, Cassandra’s IT team was expected to also do some diagnosing.

While digging around in one nasty performance problem, Cassandra found that one button in the application would generate and execute this block of SQL code using a SQLCommand object in C#.

DECLARE @tmp TABLE (Id uniqueidentifier)

--{ Dynamic single insert statements, may be in the hundreds. }

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)
BEGIN
    INSERT INTO SomeTable (PK, SomeDate) SELECT Id, getdate() as SomeDate FROM @tmp 
END
ELSE 
BEGIN
    UPDATE st
        SET SomeDate = getdate()
        FROM @tmp t
        LEFT JOIN SomeTable AS st ON t.Id = st.PK AND SomeDate = NULL
END

At its core, the purpose of this is to take a temp-table full of rows and perform an “upsert” for all of them: insert if a record with that key doesn’t exist, update if a record with that key does. Now, this code is clearly SQL Server code, so a MERGE handles that.
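For reference, a per-row upsert with MERGE might look something like the sketch below, reusing the obfuscated table and column names from the sample above (and preserving its apparent intent of only stamping `SomeDate` where it is still empty):

```sql
-- Each row in @tmp is matched individually: existing rows with a
-- NULL SomeDate get updated, missing rows get inserted.
MERGE SomeTable AS st
USING @tmp AS t
    ON st.PK = t.Id
WHEN MATCHED AND st.SomeDate IS NULL THEN
    UPDATE SET SomeDate = getdate()
WHEN NOT MATCHED THEN
    INSERT (PK, SomeDate) VALUES (t.Id, getdate());
```
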

But okay, maybe they’re trying to be as database agnostic as possible, and don’t want to use something that, while widely supported, has some dialect differences across databases. Fine, but there’s another problem here.

Whoever built this understood that in SQL Server land, cursors are frowned upon, so they didn’t want to iterate across every row. But here’s their problem: some of the records may exist, some of them may not, so they need to check that.

As you saw, this was their approach:

IF NOT EXISTS (SELECT TOP 1 1 FROM SomeTable AS st INNER JOIN @tmp t ON t.Id = st.PK)

This is wrong. This will be true only if none of the rows in the dynamically generated INSERT statements exist in the base table. If some of the rows exist and some don’t, you aren’t going to get the results you were expecting, because this code only goes down one branch: it either inserts or updates.

There are other things wrong with this code. For example, SomeDate = NULL is going to have different behavior based on whether the ANSI_NULLS database flag is OFF (in which case it works), or ON (in which case it doesn’t). There’s a whole lot of caveats about whether you set it at the database level, on the connection string, during your session, but in Cassandra’s example, ANSI_NULLS was ON at the time this ran, so that also didn’t work.

There are other weird choices and performance problems with this code, but the important thing is that this code doesn’t work. This is in a shipped product, installed by over 4,000 businesses (the vendor is quite happy to cite that number in their marketing materials). And it ships with code that can’t work.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Kevin RuddAFR: How Mitch Hooke Axed the Mining Tax and Climate Action

Published by The Australian Financial Review on 25 August 2020

The Australian political arena is full of reinventions.

Tony Abbott has gone from pushing emissions cuts under the Paris climate agreement to demanding Australia withdraw from the treaty altogether. And Scott Morrison, who accused Labor of presiding over “crippling” debt, now binges on wasteful debt-fuelled spending that makes our government’s stimulus look like a rounding error.

However, neither of these metamorphoses comes close to the transformation of Mitch Hooke, the former Minerals Council chief and conservative political operative, who now pretends he is a lifelong evangelist of carbon pricing.

Writing in The Australian Financial Review, (Ken Henry got it wrong on climate wars, mining tax on August 11) Hooke said he supported emissions trading throughout the mid-2000s until my government came to power in 2007.

I then supposedly “trashed that consensus” by using the proceeds of a carbon price to compensate motorists, low-income households and trade-exposed industries.

How dreadful to help those most impacted by a carbon price! The very point of an emissions trading scheme is that it can change consumers’ behaviour without making people on low to middle incomes worse off. That’s why you increase the price of emissions-intensive goods and services (relative to less polluting alternatives) then give that money back to people through the tax or benefits system so they’re no worse off. But they are then able to choose a more climate-friendly product.

The alternative is the government just pockets the cash – thereby defeating the entire purpose of a market-based scheme. Obviously this is pure rocket science for Mitch.

Hooke also seems to have forgotten that such compensation was not only appropriate, but it was exactly what Malcolm Turnbull was demanding in exchange for Liberal support for our proposal in the Senate. Without it, any emissions trading scheme would be a non-starter.

When that deal was tested in the Liberal party room, it was defeated by a single vote. Even so, enough Liberal senators crossed the floor to give the Green political party the balance of power.

Showing their true colours, Bob Brown’s senators sided with Tony Abbott and Barnaby Joyce to kill the legislation. The Green party has, to this day, been unable to adequately explain its decision to voters. If they hadn’t, Australia would now be 10 years down the path of steady decarbonisation.

For Hooke, the reality is that he never wanted an emissions trading scheme if he could avoid one. But rather than state this outright, he just insists on impossible preconditions. As for Hooke’s most beloved Howard government, John Winston would in all probability have gone even further than Labor in compensating people affected by his own proposed emissions trading scheme, given Howard’s legendary ability to bake middle-class welfare into any national budget. Just ask Peter Costello.

Hooke has, like Abbott, been one of the most destructive voices in Australian national climate change action. He also expresses zero remorse for his deceptive campaign of misinformation, in partnership with those wonderful corporate citizens at Rio, targeting my government’s efforts to introduce a profits-based tax for minerals, mirroring the petroleum resource rent tax implemented by the Hawke government in the 1980s.

Our Resource Super Profits Tax would have funded new infrastructure to address looming capacity constraints affecting the sector, as well as an across-the-board company tax cut to 28 per cent. Most importantly, it sought to fairly share the proceeds of mining profits, when they vastly exceeded industry norms – such as during commodity price booms – with the broader Australian public. Lest we forget, they actually own those resources. Rio just rents them.

In response, Hooke and his mates at Rio and BHP accumulated a $90 million war chest, and poured $22.2 million of shareholders’ funds into a political advertising campaign over six weeks.

Another $1.9 million was tipped into Liberal and National party coffers to keep conservative politicians on side. All to keep Rio and BHP happy, while ignoring the deep structural interests of the rest of our mining sector, many of whom supported our proposal.

At their height, Hooke’s television ads were screening around 33 times per day on free-to-air channels. Claims the tax would be a “hand grenade” to retirement savings were blasted by the Australian Institute of Superannuation Trustees which referred the “irresponsible” and “scaremongering” campaign to regulators.

This was not an exercise in public debate to refine aspects of the tax’s design; it was a systematic effort to use the wealth of two multinational mining companies to bludgeon the government into submission.

And when Gillard and Swan capitulated as the first act of their new government, they essentially turned over the drafting pen to Hooke to write a new rent tax that collected almost zero revenue.

The industry, however, was far from unified. Fortescue Metals Group chairman Andrew “Twiggy” Forrest understood what we were trying to achieve, having circumvented Hooke’s spin machine to deal directly with my resources minister Martin Ferguson.

We ultimately agreed that Forrest would stand alongside me and pledge to support the tax. The next day, Gillard and Swan struck. And Hooke has been a happy man ever since, even though Australia is the poorer for it.

It doesn’t matter where you sit on the political spectrum, everyone involved in public debate should hope that they’ve helped to improve the lives of ordinary people.

That is not Hooke’s legacy. Nor his interest. However much he may now seek to rationalise his conduct, Hooke’s stock-in-trade was brutal, destructive politics in direct service of BHP, Rio and the carbon lobby.

He was paid handsomely to thwart climate change action and ensure wealthy multinationals didn’t pay a dollar more in tax than was absolutely necessary. He succeeded. But I’m not sure his grandchildren will be all that proud of his destructive record.

Congratulations, Mitch.

The post AFR: How Mitch Hooke Axed the Mining Tax and Climate Action appeared first on Kevin Rudd.

LongNowThe Alchemical Brothers: Brian Eno & Roger Eno Interviewed

Long Now co-founder Brian Eno on time, music, and contextuality in a recent interview, rhyming on Gregory Bateson’s definition of information as “a difference that makes a difference”:

If a Martian came to Earth and you played her a late Beethoven String Quartet and then another written by a first-year music student, it is unlikely that she would a) understand what the point of listening to them was at all, and b) be able to distinguish between them.

What this makes clear is that most of the listening experience is constructed in our heads. The ‘beauty’ we hear in a piece of music isn’t something intrinsic and immutable – like, say, the atomic weight of a metal is intrinsic – but is a product of our perception interacting with that group of sounds in a particular historical context. You hear the music in relation to all the other experiences you’ve had of listening to music, not in a vacuum. This piece you are listening to right now is the latest sentence in a lifelong conversation you’ve been having. What you are hearing is the way it differs from, or conforms to, the rest of that experience. The magic is in our alertness to novelty, our attraction to familiarity, and the alchemy between the two.

The idea that music is somehow eternal, outside of our interaction with it, is easily disproven. When I lived for a few months in Bangkok I went to the Chinese Opera, just because it was such a mystery to me. I had no idea what the other people in the audience were getting excited by. Sometimes they’d all leap up from their chairs and cheer and clap at a point that, to me, was effectively identical to every other point in the performance. I didn’t understand the language, and didn’t know what the conversation had been up to that point. There could be no magic other than the cheap thrill of exoticism.

So those poor deluded missionaries who dragged gramophones into darkest Africa because they thought the experience of listening to Bach would somehow ‘civilise the natives’ were wrong in just about every way possible: in thinking that ‘the natives’ were uncivilised, in not recognising that they had their own music, and in assuming that our Western music was culturally detachable and transplantable – that it somehow carried within it the seeds of civilisation. This cultural arrogance has been attached to classical music ever since it lost its primacy as the popular centre of the Western musical universe, as though the soundtrack of the Austro-Hungarian Empire in the 19th Century was somehow automatically universal and superior.

Google AdsenseAdSense Reports Technical Lead Manager

The new AdSense reporting is live

Worse Than FailureCodeSOD: Wait a Minute

Hanna's co-worker implemented a new service, got it deployed, and then left for vacation someplace where there's no phones or Internet. So, of course, Hanna gets a call from one of the operations folks: "That new service your team deployed keeps crashing on startup, but there's nothing in the log."

Hanna took it on herself to check into the VB.Net code.

Public Class Service
    Private mContinue As Boolean = True
    Private mServiceException As System.Exception = Nothing
    Private mAppSettings As AppSettings

    '// ... snip ... //

    Private Sub DoWork()
        Try
            Dim aboutNowOld As String = ""
            Dim starttime As String = DateTime.Now.AddSeconds(5).ToString("HH:mm")
            While mContinue
                Threading.Thread.Sleep(1000)
                Dim aboutnow As String = DateTime.Now.ToString("HH:mm")
                If starttime = aboutnow And aboutnow <> aboutNowOld Then
                    '// ... snip ... //
                    starttime = DateTime.Now.AddMinutes(mAppSettings.pollingInterval).ToString("HH:mm")
                End If
                aboutNowOld = aboutnow
            End While
        Catch ex As Exception
            mServiceException = ex
        End Try
        If mServiceException IsNot Nothing Then
            EventLog.WriteEntry(mServiceException.ToString, Diagnostics.EventLogEntryType.Error)
            Throw mServiceException
        End If
    End Sub
End Class

Presumably whatever causes the crash is behind one of those "snip"s, but Hanna didn't include that information. Instead, let's focus on our unique way to idle.

First, we pick our starttime to be the minute 5 seconds into the future. Then we enter our work loop. Sleep for one second, and then check which minute we're on. If that minute is our starttime and this loop hasn't run during this minute before, we can get into our actual work (snipped), and then calculate the next starttime, based on our app settings.

If there are any exceptions, we break the loop, log and re-throw it- but don't do that from the exception handler. No, we store the exception in a member variable and then if it IsNot Nothing we log it out.

Hanna writes: "After seeing this I gave up immediately before I caused a time paradox. Guess we'll have to wait till she's back from the future to fix it."

It's not quite a paradox, but it's certainly far more complex than it ever needs to be. First, we have the stringly-typed date handling. That's just awful. Then, we have the once-per-second polling, but we expect pollingInterval to be in minutes. But AddMinutes takes doubles, so it could be seconds, expressed as fractional minutes. But wait, if we know how long we want to wait between executions, couldn't we just Sleep that long? Why poll every second? Does this job absolutely have to run in the first second of every minute? Even if it does, we could easily calculate that sleep time with reasonable accuracy if we actually looked at the seconds portion of the current time.

The developer who wrote this saw the problem of "execute this code once every polling interval" and just called it a day.
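If the job really does have to fire at the top of each minute, the arithmetic the article alludes to fits in a couple of lines. Here's a sketch of the idea in shell; the names and the interval are ours for illustration, not anything from the original service:

```shell
# Sketch: align to the top of the next minute once, then sleep for the
# whole interval between runs -- no once-per-second polling loop.
POLL_MINUTES=5   # stand-in for the original's pollingInterval setting

seconds_until_next_minute() {
  # 10# forces base ten so seconds like "08" aren't parsed as octal
  echo $(( 60 - 10#$(date +%S) ))
}

# In a real service loop this would be:
#   sleep "$(seconds_until_next_minute)"                     # align once
#   while true; do run_job; sleep $(( POLL_MINUTES * 60 )); done
```

One alignment sleep up front, then one long sleep per interval, and the "don't run twice in the same minute" bookkeeping disappears entirely.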

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Rondam RamblingsThey knew. They still know.

Never forget what conservatives were saying about Donald Trump before he cowed them into submission.(Sorry about the tiny size of the embedded video.  That's the default that Blogger gave me and I can't figure out how to adjust the size.  If it bothers you, click on the link above to see the original.)

Rondam RamblingsRepublicans officially endorse a Trump dictatorship

The Republican party has formally decided not to adopt a platform this year, instead passing a resolution that says essentially, "we will support whatever the Dear Leader says".  Since the resolution calls out the media for its biased reporting, I will quote the resolution here in its entirety, with the salient portions highlighted: WHEREAS, The Republican National Committee (RNC) has

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 14)

Here’s part fourteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Worse Than FailureCodeSOD: Sudon't

There are a few WTFs in today's story. Let's get the first one out of the way: Jan S downloaded a shell script and ran it as root, without reading it. Now, let's be fair, that's honestly a pretty mild WTF; we've all done something similar, and popular software tools still tell you to install them with a curl … | sh, and then sudo themselves extra permissions in the script.

The software being installed in this case is a tool for accessing Bitlocker encrypted drives from Linux. And the real WTF for this one is the install script, which we'll dig into in a moment. This is not, however, some small-scale open source project thrown together by hobbyists; it was released by Initech's "Data Recovery" business. In this case, this is the open source core of a larger data recovery product- if you're willing to muck around with low-level commands and configs, you can do it for free, but if you want a vaguely usable UI, get ready to pony up $40.

With that in mind, let's take a look at the script. We're going to do this in chunks, because nearly everything is wrong. You might think I'm exaggerating, but here's the first two lines of the script:

#!/bin/bash
home_dir="/home/"${USER}"/initech.bitlocker"

That is not how you find out the user's home directory. We'll usually use ${HOME}, or since the shebang tells us this is definitely bash, we could just use ~. Jan also points out that while a username probably shouldn't have a space, it's possible, and since the ${USER} isn't in quotes, this breaks in that case.
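For contrast, here's what those lines could look like using the shell's own facilities. This is our illustrative rewrite, not anything from the vendor; lookup_home is a name we made up:

```shell
# Use the shell's own HOME rather than assuming /home/<user>; quoting
# keeps the path intact even if it contains spaces.
home_dir="${HOME}/initech.bitlocker"

# To resolve a *different* user's home, ask the passwd database
# instead of guessing at the path:
lookup_home() {
  getent passwd "$1" | cut -d: -f6
}
```

Either approach survives both a root install and an unusual username, which is more than the original can say.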

echo ${home_dir}
install_dir=$1
if [ ! -d "${install_dir}" ]; then
install_dir=${home_dir}
if [ ! -d "${install_dir}" ]; then
echo "create dir : "${install_dir}
mkdir ${install_dir}

Who wants indentation in their scripts? And if a script supports arguments, should we tell the user about it? Of course not! Just check to see if they supplied an argument, and if they did, we'll treat that as the install directory.

As a bonus, the mkdir line protects people like Jan who run this script as root, at least if their home directory is /root, which is common. When it tries to mkdir /home/root/initech.bitlocker, the script fails there.
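A sketch of what that setup could have been, assuming the same fallback behavior (a default directory under the user's home, our naming):

```shell
# Take the install dir from $1, fall back to a default, and let
# mkdir -p create any missing parents; it also succeeds quietly if the
# directory already exists.
install_dir="${1:-${HOME}/initech.bitlocker}"
mkdir -p -- "${install_dir}"
```

mkdir -p would have sailed right past the missing /home/root parent that trips up root installs.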

echo "Install software to ${install_dir}"
cp -rf ./* ${install_dir}"/"

Once again, the person who wrote this script doesn't seem to understand what the double quotes in Bash are for, but the real magic is the next segment:

echo "Copy runtime environment ..."
sudo cp -f ./libcrypto.so.1.0.0 /usr/lib/
sudo cp -f ./libssl.so.1.0.0 /usr/lib64
sudo cp -f ./libcrypto.so.1.0.0 /usr/lib/
sudo cp -f ./libssl.so.1.0.0 /usr/lib64

Did you have libssl already installed in your system? Well now you have this version! Hope that's okay for you. We like our version of libssl and libcrypto so much we're copying them into your library directories twice. They probably meant to copy libcrypto and libssl to both lib and lib64, but messed up.

Well, that is assuming you already have a lib64 directory, because if you don't, you now have a lib64 file which contains the data from libssl.so.1.0.0.

This is the installer for a piece of software which has been released as part of a product that Initech wants to sell, and they don't successfully install it.

sudo ln -s ${install_dir}/mount.bitlocker /usr/bin/mount.bitlocker
sudo ln -s ${install_dir}/bitlocker.creator /usr/bin/create.bitlocker
sudo ln -s ${install_dir}/activate.sh /usr/bin/initech.bitlocker.active
sudo ln -s ${install_dir}/initech.mount.sh /usr/bin/initech.bitlocker.mount
sudo ln -s ${install_dir}/initech.bitlocker.sh /usr/bin/initech.bitlocker

Hey, here's an install step with no critical mistakes, assuming that no other package or tool has tried to claim those names in /usr/bin, which is probably true (Jan actually checked this using dpkg -S … to see if any packages wanted to use that path).

source /etc/os-release
case $ID in
    debian|ubuntu|devuan)
        echo "Installing dependent package - curl ..."
        sudo apt-get install curl -y
        echo "Installing dependent package - openssl ..."
        sudo apt-get install openssl -y
        echo "Installing dependent package - fuse ..."
        sudo apt-get install fuse -y
        echo "Installing dependent package - gksu ..."
        sudo apt-get install gksu -y
        ;;

Here's the first branch of our case. They've learned to indent. They've chosen to slap the -y flag on all the apt-get commands, which means the user isn't going to get a choice about installing these packages, which is mildly annoying. It's also worth noting that sourceing /etc/os-release can be considered harmful, but clearly "not doing harm" isn't high on this script's agenda.

    centos|fedora|rhel)
        yumdnf="yum"
        if test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0; then
        yumdnf="dnf"
        fi
        echo "Installing dependent package - curl ..."
        sudo $yumdnf install -y curl
        echo "Installing dependent package - openssl ..."
        sudo $yumdnf install -y openssl
        echo "Installing dependent package - fuse ..."
        sudo $yumdnf install -y fuse3-libs.x86_64
        ;;

So, maybe they just don't think if supports additional indentation? They indent the case fine. I'm not sure what their thinking is.

Speaking of if, look closely at that version check: test "$(echo "$VERSION_ID >= 22" | bc)" -ne 0.

Now, this is almost clever. If your Linux version number uses decimal values, like 18.04, you can't do a simple if [ "$VERSION_ID" -ge 22 ]…: you'd get an integer expression expected error. So using bc does make sense…ish. It would be good to check whether bc is actually installed (it probably is, but you don't know), and it might be better to actually think about the purpose of the check.

They don't actually care what version of Redhat Linux you're running. What they're checking is if your version uses yum for package management, or its successor dnf. A more reliable check would be to simply see if dnf is a valid command, and if not, fallback to yum.
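That more reliable check is a two-liner. A minimal sketch of the idea:

```shell
# Pick dnf when it exists, fall back to yum -- no version arithmetic,
# and no dependency on bc being installed.
if command -v dnf >/dev/null 2>&1; then
  yumdnf="dnf"
else
  yumdnf="yum"
fi
```

command -v is POSIX, so this works anywhere the script's shebang does, and it keeps working when the distro's version numbering scheme changes.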

Let's finish out the case statement:

    *)
        exit 1
        ;;
esac

So if your system doesn't use an apt based package manager or a yum/dnf based package manager, this just bails at this point. No error message, just an error number. You know it failed, and you don't know why, and it failed after copying a bunch of crap around your system.
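Sketched as a function (our structure, not the vendor's), the dispatch could at least tell the user why it's bailing:

```shell
# Same distro dispatch, but the failure branch names the problem on
# stderr instead of silently exiting with a bare status code.
pick_installer() {
  case "$1" in
    debian|ubuntu|devuan) echo "apt-get" ;;
    centos|fedora|rhel)   echo "yum/dnf" ;;
    *)
      echo "Error: unsupported distribution '$1'" >&2
      return 1
      ;;
  esac
}
```

A one-line echo to stderr costs nothing and saves the next person an hour of guessing.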

So first it mostly installs itself, then it checks to see if it can successfully install all of its dependencies. And if it fails, does it clean up the changes it made? You better believe it doesn't!

echo ""
echo "Initech BitLocker Loader has been installed to "${install_dir}" successfully."
echo "Run initech.bitlocker --help to learn more about Initech BitLocker Loader"

This is a pretty optimistic statement, and while yes, it has theoretically been installed to ${install_dir}, assuming that we've gotten this far, it's really installed to your /usr/bin directory.

The real extra super-special puzzle to me is that it interfaces with your package manager to install dependencies. But it also installs its own versions of libcrypto and libssl, which don't come from your package manager. Ignoring the fact that it probably installs them into the wrong places, it seems bad. Suspicious, bad, and troubling.

Jan didn't send us the uninstall script, and honestly, I assume there isn't one. But if there is one, you know it probably tries to do rm -rf /${SOME_VAR_THAT_MIGHT_BE_EMPTY} somewhere in there. Which, on reflection, is probably the safest way to uninstall this software anyway.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Krebs on SecurityFBI, CISA Echo Warnings on ‘Vishing’ Threat

The Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) on Thursday issued a joint alert to warn about the growing threat from voice phishing or “vishing” attacks targeting companies. The advisory came less than 24 hours after KrebsOnSecurity published an in-depth look at a crime group offering a service that people can hire to steal VPN credentials and other sensitive data from employees working remotely during the Coronavirus pandemic.

“The COVID-19 pandemic has resulted in a mass shift to working from home, resulting in increased use of corporate virtual private networks (VPNs) and elimination of in-person verification,” the alert reads. “In mid-July 2020, cybercriminals started a vishing campaign—gaining access to employee tools at multiple companies with indiscriminate targeting — with the end goal of monetizing the access.”

As noted in Wednesday’s story, the agencies said the phishing sites set up by the attackers tend to include hyphens, the target company’s name, and certain words — such as “support,” “ticket,” and “employee.” The perpetrators focus on social engineering new hires at the targeted company, and impersonate staff at the target company’s IT helpdesk.

The joint FBI/CISA alert (PDF) says the vishing gang also compiles dossiers on employees at the specific companies using mass scraping of public profiles on social media platforms, recruiter and marketing tools, publicly available background check services, and open-source research. From the alert:

“Actors first began using unattributed Voice over Internet Protocol (VoIP) numbers to call targeted employees on their personal cellphones, and later began incorporating spoofed numbers of other offices and employees in the victim company. The actors used social engineering techniques and, in some cases, posed as members of the victim company’s IT help desk, using their knowledge of the employee’s personally identifiable information—including name, position, duration at company, and home address—to gain the trust of the targeted employee.”

“The actors then convinced the targeted employee that a new VPN link would be sent and required their login, including any 2FA [2-factor authentication] or OTP [one-time passwords]. The actor logged the information provided by the employee and used it in real-time to gain access to corporate tools using the employee’s account.”

The alert notes that in some cases the unsuspecting employees approved the 2FA or OTP prompt, either accidentally or believing it was the result of the earlier access granted to the help desk impersonator. In other cases, the attackers were able to intercept the one-time codes by targeting the employee with SIM swapping, which involves social engineering people at mobile phone companies into giving them control of the target’s phone number.

The agencies said crooks use the vished VPN credentials to mine the victim company databases for their customers’ personal information to leverage in other attacks.

“The actors then used the employee access to conduct further research on victims, and/or to fraudulently obtain funds using varying methods dependent on the platform being accessed,” the alert reads. “The monetizing method varied depending on the company but was highly aggressive with a tight timeline between the initial breach and the disruptive cashout scheme.”

The advisory includes a number of suggestions that companies can implement to help mitigate the threat from these vishing attacks, including:

• Restrict VPN connections to managed devices only, using mechanisms like hardware checks or installed certificates, so user input alone is not enough to access the corporate VPN.

• Restrict VPN access hours, where applicable, to mitigate access outside of allowed times.

• Employ domain monitoring to track the creation of, or changes to, corporate, brand-name domains.

• Actively scan and monitor web applications for unauthorized access, modification, and anomalous activities.

• Employ the principle of least privilege and implement software restriction policies or other controls; monitor authorized user accesses and usage.

• Consider using a formalized authentication process for employee-to-employee communications made over the public telephone network, where a second factor is used to authenticate the phone call before sensitive information can be discussed.

• Improve 2FA and OTP messaging to reduce confusion about employee authentication attempts.

• Verify web links do not have misspellings or contain the wrong domain.

• Bookmark the correct corporate VPN URL and do not visit alternative URLs on the sole basis of an inbound phone call.

• Be suspicious of unsolicited phone calls, visits, or email messages from unknown individuals claiming to be from a legitimate organization. Do not provide personal information or information about your organization, including its structure or networks, unless you are certain of a person’s authority to have the information. If possible, try to verify the caller’s identity directly with the company.

• If you receive a vishing call, document the phone number of the caller as well as the domain that the actor tried to send you to and relay this information to law enforcement.

• Limit the amount of personal information you post on social networking sites. The internet is a public resource; only post information you are comfortable with anyone seeing.

• Evaluate your settings: sites may change their options periodically, so review your security and privacy settings regularly to make sure that your choices are still appropriate.

Worse Than FailureError'd: Just a Suggestion

"Sure thing Google, I guess I'll change my language to... let's see...Ah, how about English?" writes Peter G.

 

Marcus K. wrote, "Breaking news: tt tttt tt,ttt!"

 

Tim Y. writes, "Nothing makes my day more than someone accidentially leaving testing mode enabled (and yes, the test number went through!)"

 

"I guess even thinning brows and psoriasis can turn political these days," Lawrence W. wrote.

 

Strahd I. writes, "It was evident at the time that King Georges VI should have gone and asked for a V12 instead."

 

"Well, gee, ZDNet, why do you think I enabled this setting in the first place?" Jeroen V. writes.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

LongNowPeople slept on comfy grass beds 200,000 years ago

The oldest beds known to science now date back some 200,000 years: traces of silicate from woven grasses found in the back of Border Cave (in South Africa, which has a nearly continuous record of occupation dating back to 200,000 BCE).

Ars Technica reports:

Most of the artifacts that survive from more than a few thousand years ago are made of stone and bone; even wooden tools are rare. That means we tend to think of the Paleolithic in terms of hard, sharp stone tools and the bones of butchered animals. Through that lens, life looks very harsh—perhaps even harsher than it really was. Most of the human experience is missing from the archaeological record, including creature comforts like soft, clean beds.

Given recent work on the epidemic of modern orthodontic issues caused in part by sleeping with “bad oral posture” due to too-soft bedding, it seems like the bed may be another frontier for paleo re-thinking of high-tech life. (See also the controversies over barefoot running, prehistoric diets, and countless other forms of atavism emerging from our future-shocked society.) When technological innovation shuffles the “pace layers” of human existence, changing the built environment faster than bodies can adapt, sometimes comfort’s micro-scale horizon undermines the longer, slower beat of health.

Another plus to making beds of grass is their disposability and integration with the rest of ancient life:

Besides being much softer than the cave floor, these ancient beds were probably surprisingly clean. Burning dirty bedding would have helped cut down on problems with bedbugs, lice, and fleas, not to mention unpleasant smells. [Paleoanthropologist Lyn] Wadley and her colleagues suggest that people at Border Cave may even have raked some extra ashes in from nearby hearths ‘to create a clean, odor-controlled base for bedding.’

And charcoal found in the bedding layers includes bits of an aromatic camphor bush; some modern African cultures use another closely related camphor bush in their plant bedding as an insect repellent. The ash may have helped, too; Wadley and her colleagues note that ‘several ethnographies report that ash repels crawling insects, which cannot easily move through the fine powder because it blocks their breathing and biting apparatus and eventually leaves them dehydrated.’

Finding beds as old as Homo sapiens itself revives the (not quite as old) debate about what makes us human. Defining our humanity as “artists” or “ritualists” seems to weave together modern definitions of technology and craft, ceremony and expression, just as early people wove together sedges for a place to sleep. At least, they are the evidence of a much more holistic, integrated way of life — one that found every possible synergy between day and night, cooking and sleeping:

Imagine that you’ve just burned your old, stale bedding and laid down a fresh layer of grass sheaves. They’re still springy and soft, and the ash beneath is still warm. You curl up and breathe in the tingly scent of camphor, reassured that the mosquitoes will let you sleep in peace. Nearby, a hearth fire crackles and pops, and you stretch your feet toward it to warm your toes. You nudge aside a sharp flake of flint from the blade you were making earlier in the day, then drift off to sleep.

Worse Than FailureCodeSOD: A Backwards For

Aurelia is working on a project where some of the code comes from a client. In this case, it appears that the client has very good reasons for hiring an outside vendor to actually build the application.

Imagine you have some Java code which needs to take an array of integers and iterate across them in reverse, to concatenate a string. Oh, and you need to add one to each item as you do this.

You might be thinking about some combination of a map/reverse/String.join operation, or maybe a for loop with a i-- type decrementer.

I’m almost certain you aren’t thinking about this.

public String getResultString(int numResults) {
	StringBuffer sb = null;
	
	for (int result[] = getResults(numResults); numResults-- > 0;) {
		int i = result[numResults];
		if( i == 0){
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);
		}else{
			int j = i + 1; 
			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);
		}
	}
	return sb.toString();
}

I really, really want you to look at that for loop: for (int result[] = getResults(numResults); numResults-- > 0;)

Just look at that. It’s… not wrong. It’s not… bad. It’s just written by an alien wearing a human skin suit. Our initializer actually populates the array we’re going to iterate across. Our bounds check also contains the decrement operation. We don’t have a decrement clause.

Then, if i == 0 we’ll do the exact same thing as if i isn’t 0, since our if and else branches contain the same code.

Increment i, and store the result in j. Why we don’t use the ++i or some other variation to be in-line with our weird for loop, I don’t know. Maybe they were done showing off.

Then, if our StringBuffer is null, we create one, otherwise we append a ",". This is one solution to the concatenator’s comma problem. Again, it’s not wrong, it’s just… unusual.
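For comparison, here's roughly what the conventional version looks like: a plain decrementing index, with StringJoiner absorbing the comma problem. The array parameter stands in for the original's getResults helper; this is our sketch, not the client's code:

```java
import java.util.StringJoiner;

public class Results {
    // Walk the array backwards, adding one to each element, and let
    // StringJoiner insert the commas between entries.
    public static String getResultString(int[] results) {
        StringJoiner sj = new StringJoiner(",");
        for (int i = results.length - 1; i >= 0; i--) {
            sj.add(Integer.toString(results[i] + 1));
        }
        return sj.toString();
    }

    public static void main(String[] args) {
        // reversed and incremented: {0, 1, 2} -> "3,2,1"
        System.out.println(getResultString(new int[]{0, 1, 2}));
    }
}
```

One behavioral difference worth noting: on an empty array this returns "", where the original's null StringBuffer would throw a NullPointerException.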

But this brings us to the thing which is actually, objectively, honestly bad. The indenting.

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
				sb.append(j);

Look at that last line. Does that make you angry? Look more closely. Look for the curly brackets. Oh, you don’t see any? Very briefly, when I was looking at this code, I thought, “Wait, does this discard the first item?” No, it just eschews brackets and then indents wrong to make sure we’re nice and confused when we look at the code.

It should read:

			if (sb == null)
				sb = new StringBuffer();
			else
				sb.append(",");
                        sb.append(j);
[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Krebs on SecurityVoice Phishers Targeting Corporate VPNs

The COVID-19 epidemic has brought a wave of email phishing attacks that try to trick work-at-home employees into giving away credentials needed to remotely access their employers’ networks. But one increasingly brazen group of crooks is taking your standard phishing attack to the next level, marketing a voice phishing service that uses a combination of one-on-one phone calls and custom phishing sites to steal VPN credentials from employees.

According to interviews with several sources, this hybrid phishing gang has a remarkably high success rate, and operates primarily through paid requests or “bounties,” where customers seeking access to specific companies or accounts can hire them to target employees working remotely at home.

And over the past six months, the criminals responsible have created dozens if not hundreds of phishing pages targeting some of the world’s biggest corporations. For now at least, they appear to be focusing primarily on companies in the financial, telecommunications and social media industries.

“For a number of reasons, this kind of attack is really effective,” said Allison Nixon, chief research officer at New York-based cyber investigations firm Unit 221B. “Because of the Coronavirus, we have all these major corporations that previously had entire warehouses full of people who are now working remotely. As a result the attack surface has just exploded.”

TARGET: NEW HIRES

A typical engagement begins with a series of phone calls to employees working remotely at a targeted organization. The phishers will explain that they’re calling from the employer’s IT department to help troubleshoot issues with the company’s virtual private networking (VPN) technology.

The employee phishing page bofaticket[.]com. Image: urlscan.io

The goal is to convince the target either to divulge their credentials over the phone or to input them manually at a website set up by the attackers that mimics the organization’s corporate email or VPN portal.

Zack Allen is director of threat intelligence for ZeroFOX, a Baltimore-based company that helps customers detect and respond to risks found on social media and other digital channels. Allen has been working with Nixon and several dozen other researchers from various security firms to monitor the activities of this prolific phishing gang in a bid to disrupt their operations.

Allen said the attackers tend to focus on phishing new hires at targeted companies, and will often pose as new employees themselves working in the company’s IT division. To make that claim more believable, the phishers will create LinkedIn profiles and seek to connect those profiles with other employees from that same organization to support the illusion that the phony profile actually belongs to someone inside the targeted firm.

“They’ll say ‘Hey, I’m new to the company, but you can check me out on LinkedIn’ or Microsoft Teams or Slack, or whatever platform the company uses for internal communications,” Allen said. “There tends to be a lot of pretext in these conversations around the communications and work-from-home applications that companies are using. But eventually, they tell the employee they have to fix their VPN and can they please log into this website.”

SPEAR VISHING

The domains used for these pages often invoke the company’s name, followed or preceded by hyphenated terms such as “vpn,” “ticket,” “employee,” or “portal.” The phishing sites also may include working links to the organization’s other internal online resources to make the scheme seem more believable if a target starts hovering over links on the page.

Allen said a typical voice phishing or “vishing” attack by this group involves at least two perpetrators: One who is social engineering the target over the phone, and another co-conspirator who takes any credentials entered at the phishing page and quickly uses them to log in to the target company’s VPN platform in real-time.

Time is of the essence in these attacks because many companies that rely on VPNs for remote employee access also require employees to supply some type of multi-factor authentication in addition to a username and password — such as a one-time numeric code generated by a mobile app or text message. And in many cases, those codes are only good for a short duration — often measured in seconds or minutes.

But these vishers can easily sidestep that layer of protection, because their phishing pages simply request the one-time code as well.

A phishing page (helpdesk-att[.]com) targeting AT&T employees. Image: urlscan.io

Allen said it matters little to the attackers if the first few social engineering attempts fail. Most targeted employees are working from home or can be reached on a mobile device. If at first the attackers don’t succeed, they simply try again with a different employee.

And with each passing attempt, the phishers can glean important details from employees about the target’s operations, such as company-specific lingo used to describe its various online assets, or its corporate hierarchy.

Thus, each unsuccessful attempt actually teaches the fraudsters how to refine their social engineering approach with the next mark within the targeted organization, Nixon said.

“These guys are calling companies over and over, trying to learn how the corporation works from the inside,” she said.

NOW YOU SEE IT, NOW YOU DON’T

All of the security researchers interviewed for this story said the phishing gang is pseudonymously registering their domains at just a handful of domain registrars that accept bitcoin, and that the crooks typically create just one domain per registrar account.

“They’ll do this because that way if one domain gets burned or taken down, they won’t lose the rest of their domains,” Allen said.

More importantly, the attackers are careful to do nothing with the phishing domain until they are ready to initiate a vishing call to a potential victim. And when the attack or call is complete, they disable the website tied to the domain.

This is key because many domain registrars will only respond to external requests to take down a phishing website if the site is live at the time of the abuse complaint. This requirement can stymie efforts by companies like ZeroFOX that focus on identifying newly-registered phishing domains before they can be used for fraud.

“They’ll only boot up the website and have it respond at the time of the attack,” Allen said. “And it’s super frustrating because if you file an abuse ticket with the registrar and say, ‘Please take this domain away because we’re 100 percent confident this site is going to be used for badness,’ they won’t do that if they don’t see an active attack going on. They’ll respond that according to their policies, the domain has to be a live phishing site for them to take it down. And these bad actors know that, and they’re exploiting that policy very effectively.”

A phishing page (github-ticket[.]com) aimed at siphoning credentials for a target organization’s access to the software development platform Github. Image: urlscan.io

SCHOOL OF HACKS

Both Nixon and Allen said the object of these phishing attacks seems to be to gain access to as many internal company tools as possible, and to use those tools to seize control over digital assets that can quickly be turned into cash. Primarily, that includes any social media and email accounts, as well as associated financial instruments such as bank accounts and any cryptocurrencies.

Nixon said she and others in her research group believe the people behind these sophisticated vishing campaigns hail from a community of young men who have spent years learning how to social engineer employees at mobile phone companies and social media firms into giving up access to internal company tools.

Traditionally, the goal of these attacks has been gaining control over highly-prized social media accounts, which can sometimes fetch thousands of dollars when resold in the cybercrime underground. But this activity gradually has evolved toward more direct and aggressive monetization of such access.

On July 15, a number of high-profile Twitter accounts were used to tweet out a bitcoin scam that earned more than $100,000 in a few hours. According to Twitter, that attack succeeded because the perpetrators were able to social engineer several Twitter employees over the phone into giving away access to internal Twitter tools.

Nixon said it’s not clear whether any of the people involved in the Twitter compromise are associated with this vishing gang, but she noted that the group showed no signs of slacking off after federal authorities charged several people with taking part in the Twitter hack.

“A lot of people just shut their brains off when they hear the latest big hack wasn’t done by hackers in North Korea or Russia but instead some teenagers in the United States,” Nixon said. “When people hear it’s just teenagers involved, they tend to discount it. But the kinds of people responsible for these voice phishing attacks have now been doing this for several years. And unfortunately, they’ve gotten pretty advanced, and their operational security is much better now.”

A phishing page (vzw-employee[.]com) targeting employees of Verizon. Image: DomainTools

PROPER ADULT MONEY-LAUNDERING

While it may seem amateurish or myopic for attackers who gain access to a Fortune 100 company’s internal systems to focus mainly on stealing bitcoin and social media accounts, that access — once established — can be re-used and re-sold to others in a variety of ways.

“These guys do intrusion work for hire, and will accept money for any purpose,” Nixon said. “This stuff can very quickly branch out to other purposes for hacking.”

For example, Allen said he suspects that once inside of a target company’s VPN, the attackers may try to add a new mobile device or phone number to the phished employee’s account as a way to generate additional one-time codes for future access by the phishers themselves or anyone else willing to pay for that access.

Nixon and Allen said the activities of this vishing gang have drawn the attention of U.S. federal authorities, who are growing concerned over indications that those responsible are starting to expand their operations to include criminal organizations overseas.

“What we see now is this group is really good on the intrusion part, and really weak on the cashout part,” Nixon said. “But they are learning how to maximize the gains from their activities. That’s going to require interactions with foreign gangs and learning how to do proper adult money laundering, and we’re already seeing signs that they’re growing up very quickly now.”

WHAT CAN COMPANIES DO?

Many companies now make security awareness and training an integral part of their operations. Some firms even periodically send test phishing messages to their employees to gauge their awareness levels, and then require employees who miss the mark to undergo additional training.

Such precautions, while important and potentially helpful, may do little to combat these phone-based phishing attacks that tend to target new employees. Both Allen and Nixon — as well as others interviewed for this story who asked not to be named — said the weakest link in most corporate VPN security setups these days is the method relied upon for multi-factor authentication.

A U2F device made by Yubikey, plugged into the USB port on a computer.

One multi-factor option — physical security keys — appears to be immune to these sophisticated scams. The most commonly used security keys are inexpensive USB-based devices. A security key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB device and pressing a button on the device. The key works without the need for any special software drivers.

The allure of U2F devices for multi-factor authentication is that even if an employee who has enrolled a security key for authentication tries to log in at an impostor site, the company’s systems simply refuse to request the security key if the user isn’t on their employer’s legitimate website, and the login attempt fails. Thus, the second factor cannot be phished, either over the phone or Internet.

In July 2018, Google disclosed that it had not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical security keys in place of one-time codes.

Probably the most popular maker of security keys is Yubico, which sells a basic U2F Yubikey for $20. It offers regular USB versions as well as those made for devices that require USB-C connections, such as Apple’s newer Mac OS systems. Yubico also sells more expensive keys designed to work with mobile devices. [Full disclosure: Yubico was recently an advertiser on this site].

Nixon said many companies will likely balk at the price tag associated with equipping each employee with a physical security key. But she said as long as most employees continue to work remotely, this is probably a wise investment given the scale and aggressiveness of these voice phishing campaigns.

“The truth is some companies are in a lot of pain right now, and they’re having to put out fires while attackers are setting new fires,” she said. “Fixing this problem is not going to be simple, easy or cheap. And there are risks involved if you somehow screw up a bunch of employees accessing the VPN. But apparently these threat actors really hate Yubikey right now.”

Kevin RuddWashington Post: China’s thirst for coal is economically shortsighted and environmentally reckless

First published in the Washington Post on 19 August 2020

Carbon emissions have fallen in recent months as economies have been shut down and put into hibernation. But whether the world will emerge from the pandemic in a stronger or weaker position to tackle the climate crisis rests overwhelmingly on the decisions that China will take.

China, as part of its plans to restart its economy, has already approved the construction of new coal-fired power plants accounting for some 17 gigawatts of generating capacity this year, sending a collective shiver down the spines of environmentalists. This is more coal plants than it approved in the previous two years combined, and the total capacity now under development in China is larger than the remaining fleet operating in the United States.

At the same time, China has touted investments in so-called “new infrastructure,” such as electric-vehicle charging stations and rail upgrades, as integral to its economic recovery. But frankly, none of this will matter much if these new coal-fired power plants are built.

To be fair, the decisions to proceed with these coal projects largely rest in the hands of China’s provincial and regional governments and not in Beijing. However, this does not mean the central government has no power, nor that it won’t wear the reputational damage if the plants become a reality.

First, it is hard to see how China could meet one of its own commitments under the 2015 Paris climate agreement to peak its emissions by 2030 if these new plants are built. The pledge relies on China retiring much of its existing and relatively young coal fleet, which has been operational only for an average of 14 years. Bringing yet more coal capacity online now is therefore either economically shortsighted or environmentally reckless.

It would also put at risk the world’s collective long-term goal under the Paris agreement to keep temperature increases within 1.5 degrees Celsius, which the Intergovernmental Panel on Climate Change has said requires halving of global emissions between 2018 and 2030 and reaching net-zero emissions by the middle of the century.

It also is completely contrary to China’s own domestic interests, including President Xi Jinping’s desire to grow the economy, improve energy security and clean up the environment (or, as he says, to “make our skies blue again”).

But perhaps most importantly for the geopolitical hard heads in Beijing, it also risks unravelling the goodwill China has built up in recent years for staying the course on the fight against climate change in the face of the Trump administration’s retreat. This will especially be the case in the eyes of many vulnerable developing countries, including the world’s lowest-lying island nations that could face even greater risks if these plants are built.

For his part, former vice president Joe Biden has already got China’s thirst for coal in his sights. He speaks of the need for the United States to focus on how China is “exporting more dirty coal” through its support of carbon-intensive projects in its Belt and Road Initiative. Studies have found a Chinese role in more than 100 gigawatts of additional coal plants under construction across Asia and Africa, and even in Eastern Europe. It is hard to see how the first few months of a Biden administration would not make this an increasingly uncomfortable reality for Beijing at precisely the time the world would be welcoming with open arms the return of U.S. climate leadership.

As a new paper published by the Asia Society Policy Institute highlights, China’s decisions on coal will also be among the most closely watched as it finalizes its next five-year plan, due out in 2021, as well as its mid-century decarbonization strategy and enhancements to its Paris targets ahead of the 2021 United Nations Climate Change Conference in Glasgow, Scotland. And although China may also have an enormously positive story to tell — continuing to lead the world in the deployment of renewable energy in 2019 — it is China’s decisions on coal that will loom large.

(Photo: Gwendolyn Stansbury/IFPRI)

The post Washington Post: China’s thirst for coal is economically shortsighted and environmentally reckless appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Shallow Perspective

There are times where someone writes code which does nothing. There are times where someone writes code which does something, but nothing useful. This is one of those times.

Ray H was going through some JS code, and found this “useful” method.

mapRowData (data) {
  if (isNullOrUndefined(data)) return null;
  return data.map(x => x);
}

Technically, this isn’t a “do nothing” method. It converts undefined values to null, and it returns a shallow copy of an array, assuming that you passed in an array.

The fact that it can return a null value or an array is one of those little nuisances that we accept, but probably should code around (without more context, it’s probably fine if this returned an empty array on bad inputs, for example).

But Ray adds: “Where this is used, it could just use the array data directly and get the same result.” Yes, it’s used in a handful of places, and in each of those places, there’s no functional difference between the original array and the shallow copy.
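A quick check makes the “shallow” part concrete. (The isNullOrUndefined helper isn’t shown in the article, so the stand-in below is our assumption about its behavior.)

```javascript
// Hypothetical stand-in for the helper the article doesn't show.
function isNullOrUndefined(v) {
  return v === null || v === undefined;
}

function mapRowData(data) {
  if (isNullOrUndefined(data)) return null;
  return data.map(x => x);
}

const rows = [{ id: 1 }, { id: 2 }];
const copy = mapRowData(rows);

console.log(copy !== rows);         // true: map() built a fresh array object
console.log(copy[0] === rows[0]);   // true: the elements are shared, i.e. the copy is shallow
console.log(mapRowData(undefined)); // null
```

So any code that only reads the result sees exactly what it would see reading the original array, which is why the callers could use the array directly.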

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Rondam RamblingsHere we go again

Here is a snapshot of the current map of temporary flight restrictions (TFRs) issued by the FAA across the western U.S.: Almost every one of those red shapes is a major fire burning. Compare that to a similar snapshot taken two years ago at about this same time of year. The regularity of these extreme heat and fire events is starting to get really scary.

,

Kevin RuddMonocle 24 Radio: The Big Interview

INTERVIEW AUDIO
MONOCLE 24 RADIO
‘THE BIG INTERVIEW’
RECORDED LATE 2019
BROADCAST AUGUST 2020

The post Monocle 24 Radio: The Big Interview appeared first on Kevin Rudd.

Kevin RuddABC Late Night Live: US-China Relations

INTERVIEW AUDIO
RADIO INTERVIEW
ABC
LATE NIGHT LIVE
17 AUGUST 2020

Main topic: Foreign Affairs article ‘Beware the Guns of August — in Asia’

 

Image: The USS Ronald Reagan steams through the San Bernardino Strait, July 3, 2020, crossing from the Philippine Sea into the South China Sea. (Navy Petty Officer 3rd Class Jason Tarleton)

The post ABC Late Night Live: US-China Relations appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Carbon Copy

I avoid writing software that needs to send emails. It's just annoying code to build, interfacing with mailservers is shockingly frustrating, and honestly, users don't tend to like the emails that my software tends to send. Once upon a time, it was a system which would tell them it was time to calibrate a scale, and the business requirements were basically "spam them like three times a day the week a scale comes due," which shockingly everyone hated.

But Krista inherited some code that sends email. The previous developer was a "senior", but probably could have had a little more supervision and maybe some mentoring on the C# language.

One commit added this method, for sending emails:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2)
{
	try {
		if (String.IsNullOrEmpty(exportData.Email)) {
			WriteToLog("No email address - message not sent");
		} else {
			MailMessage mailMsg = new MailMessage();
			mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
			mailMsg.Subject = subject;
			mailMsg.Body = "Exported files attached";
			mailMsg.Priority = MailPriority.High;
			mailMsg.BodyEncoding = Encoding.ASCII;
			mailMsg.IsBodyHtml = true;
			if (!String.IsNullOrEmpty(exportData.EmailCC)) {
				string[] ccAddress = exportData.EmailCC.Split(';');
				foreach (string address in ccAddress) {
					mailMsg.CC.Add(new MailAddress(address));
				}
			}
			if (File.Exists(fileName1))
				mailMsg.Attachments.Add(new Attachment(fileName1));
			if (File.Exists(fileName2))
				mailMsg.Attachments.Add(new Attachment(fileName2));
			send(mailMsg);
			mailMsg.Dispose();
		}
	} catch (Exception ex) {
		WriteToLog(ex.ToString());
	}
}

That's not so bad, as these things go, though one has to wonder about parameters like fileName1 and fileName2. Do they only ever send exactly two files? Well, maybe when this method was written, but a few commits later, an overloaded version gets added:

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3)
{
	try {
		if (String.IsNullOrEmpty(exportData.Email)) {
			WriteToLog("No email address - message not sent");
		} else {
			MailMessage mailMsg = new MailMessage();
			mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
			mailMsg.Subject = subject;
			mailMsg.Body = "Exported files attached";
			mailMsg.Priority = MailPriority.High;
			mailMsg.BodyEncoding = Encoding.ASCII;
			mailMsg.IsBodyHtml = true;
			if (!String.IsNullOrEmpty(exportData.EmailCC)) {
				string[] ccAddress = exportData.EmailCC.Split(';');
				foreach (string address in ccAddress) {
					mailMsg.CC.Add(new MailAddress(address));
				}
			}
			if (File.Exists(fileName1))
				mailMsg.Attachments.Add(new Attachment(fileName1));
			if (File.Exists(fileName2))
				mailMsg.Attachments.Add(new Attachment(fileName2));
			if (File.Exists(fileName3))
				mailMsg.Attachments.Add(new Attachment(fileName3));
			send(mailMsg);
			mailMsg.Dispose();
		}
	} catch (Exception ex) {
		WriteToLog(ex.ToString());
	}
}

And then, a few commits later, someone decided that they needed to send four files, sometimes.

private void SendEmail(ExportData exportData, String subject, String fileName1, String fileName2, String fileName3, String fileName4)
{
	try {
		if (String.IsNullOrEmpty(exportData.Email)) {
			WriteToLog("No email address - message not sent");
		} else {
			MailMessage mailMsg = new MailMessage();
			mailMsg.To.Add(new MailAddress(exportData.Email, exportData.PersonName));
			mailMsg.Subject = subject;
			mailMsg.Body = "Exported files attached";
			mailMsg.Priority = MailPriority.High;
			mailMsg.BodyEncoding = Encoding.ASCII;
			mailMsg.IsBodyHtml = true;
			if (!String.IsNullOrEmpty(exportData.EmailCC)) {
				string[] ccAddress = exportData.EmailCC.Split(';');
				foreach (string address in ccAddress) {
					mailMsg.CC.Add(new MailAddress(address));
				}
			}
			if (File.Exists(fileName1))
				mailMsg.Attachments.Add(new Attachment(fileName1));
			if (File.Exists(fileName2))
				mailMsg.Attachments.Add(new Attachment(fileName2));
			if (File.Exists(fileName3))
				mailMsg.Attachments.Add(new Attachment(fileName3));
			if (File.Exists(fileName4))
				mailMsg.Attachments.Add(new Attachment(fileName4));
			send(mailMsg);
			mailMsg.Dispose();
		}
	} catch (Exception ex) {
		WriteToLog(ex.ToString());
	}
}

Each time someone discovered a new case where they wanted to include a different number of attachments, the previous developer copy/pasted the same code, with minor revisions.

Krista wrote a single version which used a paramarray, which replaced all of these versions (and any other possible versions), without changing the calling semantics.
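The shape of that fix — one variadic parameter replacing a family of numbered overloads — is sketched below in Java varargs, since the C# params keyword works the same way. The names here are ours for illustration, not from the original code; the original additionally checked File.Exists before attaching.

```java
import java.util.ArrayList;
import java.util.List;

public class AttachmentHelper {
	// One method covers two, three, four, or any number of file names,
	// so no new overload is ever needed. (Illustrative sketch only;
	// the real fix also built the MailMessage and checked File.Exists.)
	public static List<String> collectAttachments(String... fileNames) {
		List<String> attachments = new ArrayList<>();
		for (String name : fileNames) {
			if (name != null && !name.isEmpty()) {
				attachments.add(name);
			}
		}
		return attachments;
	}

	public static void main(String[] args) {
		System.out.println(collectAttachments("a.pdf", "b.pdf"));                   // prints [a.pdf, b.pdf]
		System.out.println(collectAttachments("a.pdf", "b.pdf", "c.pdf", "d.pdf")); // prints [a.pdf, b.pdf, c.pdf, d.pdf]
	}
}
```

Because the compiler packs the trailing arguments into an array, existing call sites with two, three, or four file names keep working unchanged — which is exactly why Krista's version could replace all the overloads without touching the callers.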

Though the real WTF is probably still forcing the BodyEncoding to be ASCII at this point in time. There's a whole lot of assumptions about your dataset which are probably not true, or at least not reliably true.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Rondam RamblingsIrit Gat, Ph.D. 25 November 1966 - 11 August 2020

With a heavy heart I bear witness to the untimely passing of Dr. Irit Gat last Tuesday at the age of 53. Irit was the Dean of Behavioral and Social Sciences at Antelope Valley College in Lancaster, California. She was also my younger sister. She died peacefully of natural causes. I am going to miss her. A lot. I'm going to miss her smile. I'm going to miss the way she said "Hey bro" when we

Worse Than FailureCodeSOD: Perls Can Change

Tomiko* inherited some web-scraping/indexing code from Dennis. The code started out just scanning candidate profiles for certain keywords, but grew, mutated, and eventually turned into something that also needed to download their CVs.

Now, Dennis was, as Tomiko puts it, "an interesting engineer". "Any agreed upon standard, he would aggressively oppose, and this can be seen in this code."

"This code" also happens to be in Perl, the "best" language for developers who don't like standards. And, it also happens to be connected to this infrastructure.

So let's start with the code, because this is the rare CodeSOD where the code itself isn't the WTF:

foreach my $n (0 .. @{$lines} - 1) {
	next if index($lines->[$n], 'RT::Spider::Deepweb::Controller::Factory->make(') == -1;

	# Don't let other cv_id survive.
	$lines->[$n] =~ s/,\s*cv_id\s*=>[^,)]+//;
	$lines->[$n] =~ s/,\s*cv_type\s*=>[^,)]+// if defined $cv_type;

	# Insert the new options.
	$lines->[$n] =~ s/\)/$opt)/;
}

Okay, so it's a pretty standard for-each loop. We skip lines if they contain… wait, that looks like a Perl expression- RT::Spider::Deepweb::Controller::Factory->make('? Well, let's hold onto that thought, but keep trucking on.

Next, we do a few find-and-replace operations to ensure that we Don't let other cv_id survive. I'm not really sure what exactly that's supposed to mean, but Tomiko says, "Dennis never wrote a single meaningful comment".

Well, the regexes are pretty standard character-salad expressions; ugly, but harmless. If you take this code in isolation, it's not good, but it doesn't look terrible. Except, there's that next if line. Why are we checking to see if the input data contains a Perl expression?

Because our input data is a Perl script. Dennis was… efficient. He already had code that would download the candidate profiles. Instead of adding new code to download CVs, instead of refactoring the existing code so that it was generic enough to download both, Dennis instead decided to load the profile code into memory, scan it with regexes, and then eval it.

As Tomiko says: "You can't get more Perl than that."

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Krebs on SecurityMicrosoft Put Off Fixing Zero Day for 2 Years

A security flaw in the way Microsoft Windows guards users against malicious files was actively exploited in malware attacks for two years before last week, when Microsoft finally issued a software update to correct the problem.

One of the 120 security holes Microsoft fixed on Aug. 11’s Patch Tuesday was CVE-2020-1464, a problem with the way every supported version of Windows validates digital signatures for computer programs.

Code signing is the method of using a certificate-based digital signature to sign executable files and scripts in order to verify the author’s identity and ensure that the code has not been changed or corrupted since it was signed by the author.

Microsoft said an attacker could use this “spoofing vulnerability” to bypass security features intended to prevent improperly signed files from being loaded. Microsoft’s advisory makes no mention of security researchers having told the company about the flaw, which Microsoft acknowledged was actively being exploited.

In fact, CVE-2020-1464 was first spotted in attacks used in the wild back in August 2018. And several researchers informed Microsoft about the weakness over the past 18 months.

Bernardo Quintero is the manager at VirusTotal, a service owned by Google that scans any submitted files against dozens of antivirus services and displays the results. On Jan. 15, 2019, Quintero published a blog post outlining how Windows keeps the Authenticode signature valid after appending any content to the end of Windows Installer files (those ending in .MSI) signed by any software developer.

Quintero said this weakness would be particularly acute if an attacker were to use it to hide a malicious Java file (.jar). And, he said, this exact attack vector was indeed detected in a malware sample sent to VirusTotal.

“In short, an attacker can append a malicious JAR to a MSI file signed by a trusted software developer (like Microsoft Corporation, Google Inc. or any other well-known developer), and the resulting file can be renamed with the .jar extension and will have a valid signature according Microsoft Windows,” Quintero wrote.

But according to Quintero, while Microsoft’s security team validated his findings, the company chose not to address the problem at the time.

“Microsoft has decided that it will not be fixing this issue in the current versions of Windows and agreed we are able to blog about this case and our findings publicly,” his blog post concluded.

Tal Be’ery, founder of Zengo, and Peleg Hadar, senior security researcher at SafeBreach Labs, penned a blog post on Sunday that pointed to a file uploaded to VirusTotal in August 2018 that abused the spoofing weakness, which has been dubbed GlueBall. The last time that August 2018 file was scanned at VirusTotal (Aug 14, 2020), it was detected as a malicious Java trojan by 28 of 59 antivirus programs.

More recently, others would likewise call attention to malware that abused the security weakness, including this post in June 2020 from the Security-in-bits blog.

Image: Securityinbits.com

Be’ery said the way Microsoft has handled the vulnerability report seems rather strange.

“It was very clear to everyone involved, Microsoft included, that GlueBall is indeed a valid vulnerability exploited in the wild,” he wrote. “Therefore, it is not clear why it was only patched now and not two years ago.”

Asked to comment on why it waited two years to patch a flaw that was actively being exploited to compromise the security of Windows computers, Microsoft dodged the question, saying Windows users who have applied the latest security updates are protected from this attack.

“A security update was released in August,” Microsoft said in a written statement sent to KrebsOnSecurity. “Customers who apply the update, or have automatic updates enabled, will be protected. We continue to encourage customers to turn on automatic updates to help ensure they are protected.”

Update, 12:45 a.m. ET: Corrected attribution on the June 2020 blog article about GlueBall exploits in the wild.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 13)

Here’s part thirteen of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother whom Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

LongNowKathryn Cooper’s Wildlife Movement Photography

Amazing wildlife photography by Kathryn Cooper reveals the brushwork of birds and their flocks through sky, hidden by the quickness of the human eye.

“Staple Newk” by Kathryn Cooper.

Ever since Eadweard Muybridge’s pioneering photography of animal locomotion in 01877 and 01878 (including the notorious “horse shot by pistol” frames from an era less concerned with animal experiments), the trend has been to unpack our lived experience of movement into serial, successive frames. The movie camera smears one pace layer out across another, lets the eye scrub over one small moment.

“UFO” by Kathryn Cooper.

In contrast, time-lapse and long exposure camerawork implodes the arc of moments, an integral calculus that gathers the entire gesture. Cooper’s flock photography is less the autopsy of high-speed video and more the graceful ensō drawn by a zen master.

Learn More

LongNowPuzzling artifacts found at Europe’s oldest battlefield

Bronze-Age crime scene forensics: newly discovered artifacts only deepen the mystery of a 3,300-year-old battle. What archaeologists previously thought to be a local skirmish looks more and more like a regional conflict that drew combatants in from hundreds of kilometers away…but why?

Much like the total weirdness of the Ediacaran fauna of 580 million years ago, this oldest Bronze-Age battlefield is the earliest example of its kind in the record…and firsts are always difficult to analyze:

Among the stash are also three bronze cylinders that may have been fittings for bags or boxes designed to hold personal gear—unusual objects that until now have only been discovered hundreds of miles away in southern Germany and eastern France.

‘This was puzzling for us,’ says Thomas Terberger, an archaeologist at the University of Göttingen in Germany who helped launch the excavation at Tollense and co-authored the paper. To Terberger and his team, that lends credence to their theory that the battle wasn’t just a northern affair.

Anthony Harding, an archaeologist and Bronze Age specialist who was not involved with the research, is unconvinced: ‘Why would a warrior be going round with a lot of scrap metal?’ he asks. To interpret the cache—which includes distinctly un-warlike metalworking gear—as belonging to warriors is ‘a bit far-fetched to me,’ he says.

,

Krebs on SecurityMedical Debt Collection Firm R1 RCM Hit in Ransomware Attack

R1 RCM Inc. [NASDAQ:RCM], one of the nation’s largest medical debt collection companies, has been hit in a ransomware attack.

Formerly known as Accretive Health Inc., Chicago-based R1 RCM brought in revenues of $1.18 billion in 2019. The company has more than 19,000 employees and contracts with at least 750 healthcare organizations nationwide.

R1 RCM acknowledged taking down its systems in response to a ransomware attack, but otherwise declined to comment for this story.

The “RCM” portion of its name refers to “revenue cycle management,” an industry which tracks profits throughout the life cycle of each patient, including patient registration, insurance and benefit verification, medical treatment documentation, and bill preparation and collection from patients.

The company has access to a wealth of personal, financial and medical information on tens of millions of patients, including names, dates of birth, Social Security numbers, billing information and medical diagnostic data.

It’s unclear when the intruders first breached R1’s networks, but the ransomware was unleashed more than a week ago, right around the time the company was set to release its 2nd quarter financial results for 2020.

R1 RCM declined to discuss the strain of ransomware it is battling or how it was compromised. Sources close to the investigation tell KrebsOnSecurity the malware is known as Defray.

Defray was first spotted in 2017, and its purveyors have a history of specifically targeting companies in the healthcare space. According to Trend Micro, Defray usually is spread via booby-trapped Microsoft Office documents sent via email.

“The phishing emails the authors use are well-crafted,” Trend Micro wrote. For example, in an attack targeting a hospital, the phishing email was made to look like it came from a hospital IT manager, with the malicious files disguised as patient reports.

Email security company Proofpoint says the Defray ransomware is somewhat unusual in that it is typically deployed in small, targeted attacks as opposed to large-scale “spray and pray” email malware campaigns.

“It appears that Defray may be for the personal use of specific threat actors, making its continued distribution in small, targeted attacks more likely,” Proofpoint observed.

A recent report (PDF) from Corvus Insurance notes that ransomware attacks on companies in the healthcare industry have slowed in recent months, with some malware groups even dubiously pledging they would refrain from targeting these firms during the COVID-19 pandemic. But Corvus says that trend is likely to reverse in the second half of 2020 as the United States moves cautiously toward reopening.

Corvus found that while services that scan and filter incoming email for malicious threats can catch many ransomware lures, an estimated 75 percent of healthcare companies do not use this technology.

MEJitsi on Debian

I’ve just setup an instance of the Jitsi video-conference software for my local LUG. Here is an overview of how to set it up on Debian.

Firstly create a new virtual machine to run it. Jitsi is complex and has lots of inter-dependencies. Its packages want to help you by dragging in other packages and configuring them. This is great if you have a blank slate to start with, but if you already have one component installed and running then it can break things. It wants to configure the Prosody Jabber server and a web server, and my first attempt at an install failed when it tried to reconfigure the running instances of Prosody and Apache.

Here’s the upstream install docs [1]. They cover everything fairly well, but I’ll document the configuration I wanted (basic public server with password required to create a meeting).

Basic Installation

The first thing to do is to get a short DNS name like j.example.com. People will type that every time they connect and will thank you for making it short.

Using Certbot for certificates is best. It seems that you need them for j.example.com and auth.j.example.com.

apt install curl certbot
/usr/bin/letsencrypt certonly --standalone -d j.example.com,auth.j.example.com -m you@example.com
curl https://download.jitsi.org/jitsi-key.gpg.key | gpg --dearmor > /etc/apt/jitsi-keyring.gpg
echo "deb [signed-by=/etc/apt/jitsi-keyring.gpg] https://download.jitsi.org stable/" > /etc/apt/sources.list.d/jitsi-stable.list
apt-get update
apt-get -y install jitsi-meet

When apt installs jitsi-meet and its dependencies you get asked many questions for configuring things. Most of it works well.

If you get the nginx certificate wrong or don’t have the full chain then phone clients will abort connections for no apparent reason. It seems that you need to edit /etc/nginx/sites-enabled/j.example.com.conf to use the following ssl configuration:

ssl_certificate /etc/letsencrypt/live/j.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/j.example.com/privkey.pem;

Then you have to edit /etc/prosody/conf.d/j.example.com.cfg.lua to use the following ssl configuration:

key = "/etc/letsencrypt/live/j.example.com/privkey.pem";
certificate = "/etc/letsencrypt/live/j.example.com/fullchain.pem";

It seems that you need to have an /etc/hosts entry with the public IP address of your server and the names “j.example.com j auth.j.example.com”. Jitsi also appears to use the names “speakerstats.j.example.com conferenceduration.j.example.com lobby.j.example.com conference.j.example.com internal.auth.j.example.com”, but they aren’t required for a basic setup. I guess you could add them to /etc/hosts to avoid the possibility of strange errors due to it not finding an internal host name. There are optional features of Jitsi which require some of these names, but so far I’ve only used the basic functionality.

Access Control

This section describes how to restrict conference creation to authenticated users.

The secure-domain document [2] shows how to restrict access, but I’ll summarise the basics.

Edit /etc/prosody/conf.avail/j.example.com.cfg.lua and use the following line in the main VirtualHost section:

        authentication = "internal_hashed"

Then add the following section:

VirtualHost "guest.j.example.com"
        authentication = "anonymous"
        c2s_require_encryption = false
        modules_enabled = {
            "turncredentials";
        }

Edit /etc/jitsi/meet/j.example.com-config.js and add the following line:

        anonymousdomain: 'guest.j.example.com',

Edit /etc/jitsi/jicofo/sip-communicator.properties and add the following line:

org.jitsi.jicofo.auth.URL=XMPP:j.example.com

Then run commands like the following to create new users who can create rooms:

prosodyctl register admin j.example.com

Then restart most things (Prosody at least, maybe parts of Jitsi too), I rebooted the VM.

Now only the accounts you created on the Prosody server will be able to create new meetings. You should be able to add, delete, and change passwords for users via prosodyctl while it’s running once you have set this up.
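For that ongoing user management, prosodyctl provides the matching commands. A sketch of the common operations (the account name here is just an example):

```shell
# create an account that may start meetings (prompts for a password)
prosodyctl adduser admin@j.example.com
# change that account's password
prosodyctl passwd admin@j.example.com
# remove the account
prosodyctl deluser admin@j.example.com
```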

Conclusion

Once I gave up on the idea of running Jitsi on the same server as anything else it wasn’t particularly difficult to set up. Some bits were a little fiddly and hopefully this post will be a useful resource for people who have trouble understanding the documentation. Generally it’s not difficult to install if it is the only thing running on a VM.

LongNowHow to Be in Time

Photograph: Scott Thrift.

“We already have timepieces that show us how to be on time. These are timepieces that show us how to be in time.”

– Scott Thrift

Slow clocks are growing in popularity, perhaps as a tonic for or revolt against the historical trend of ever-faster timekeeping mechanisms.

Given that bell tower clocks were originally used to keep monastic observances of the sacred hours, it seems appropriate to restore some human agency in timing and give kairos back some of the territory it lost to the minute and second hands so long ago…

Scott Thrift’s three conceptual timepieces measure with only one hand each, counting 24-hour, one-month, and one-year cycles with each revolution. Not quite 10,000 years, but it’s a consumer-grade start.

“Right now we’re living in the long-term effects of short-term thinking. I don’t think it’s possible really for us to commonly think long term if the way that we tell time is with a short-term device that just shows the seconds, minutes, and hours. We’re precluded to seeing things in the short term.”

– Scott Thrift

Worse Than FailureError'd: New Cat Nullness

"Honest! If I could give you something that had a 'cat' in it, I would!" wrote Gordon P.

 

"You'd think Outlook would have told me sooner about these required updates," Carlos writes.

 

Colin writes, "Asking for a friend, does balsamic olive oil still have to be changed every 3,000 miles?"

 

"I was looking for Raspberry Pi 4 cases on my local Amazon.co.jp when I stumbled upon a pretty standard, boring WTF. Desperate to find an actual picture of the case I was after, I changed to Amazon.com and I guess I got what I wanted," George wrote. (Here are the short versions: https://www.amazon.co.jp/dp/B07TFDFGZF and https://www.amazon.com/dp/B07TFDFGZF)

 

Kevin wrote, "Ah, I get it. Shiny and blinky ads are SO last decade. Real container advertisers nowadays get straight to the point!"

 

"I noticed this in the footer of an email from my apartment management company and well, I'm intrigued at the possibility of 'rewards'," wrote Peter C.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Kevin RuddBBC World: US-China Tensions

E&OE TRANSCRIPT
TV INTERVIEW
BBC WORLD
13 AUGUST 2020

Topics: Foreign Affairs article ‘Beware the Guns of August – in Asia’

Mike Embley
Beijing’s crackdown on Hong Kong’s democracy movement has attracted strong criticism, with both Washington and Beijing hitting key figures with sanctions and closing consulates in recent weeks. But that’s not the only issue where the two countries don’t see eye to eye: tensions have been escalating on a range of fronts, including the Chinese handling of the pandemic, the American decision to ban Huawei and Washington’s allegations of human rights abuses against Uighur Muslims in Xinjiang. So where is all this heading? Let’s try and find out as we speak to Kevin Rudd, former Australian Prime Minister, of course, now the president of the Asia Society Policy Institute. Welcome, very good to talk to you. You’ve been very vocal about China’s attitudes to democracy in Hong Kong, and also the tit-for-tat sanctions between the US and China. Where do you think all this is heading?

Kevin Rudd
Well, if our prism for analysis is where the US-China relationship goes, the bottom line is we haven’t seen this relationship in such fundamental disrepair in about half a century. And as a result, whether it’s Hong Kong, or whether it’s Taiwan or events unfolding in the South China Sea, this is pushing the relationship into greater and greater levels of crisis. What concerns those of us who study this professionally, and who know both systems of government reasonably well, both in Beijing and Washington, is that the probability of a crisis unfolding either in the Taiwan Straits or in the South China Sea is now growing. And the probability of escalation into a serious shooting match is now real. And the lesson of history is it’s very difficult to de-escalate under those circumstances.

Mike Embley
Yes, I think you’ve spoken in terms of the risk of a hot war, actual war between the US and China. Are you serious?

Kevin Rudd
I am serious, and I’ve not said this before. I’ve been a student of US-China relations for the last 35 years, and I take a genuinely sceptical approach to people who have sounded the alarms in previous periods of the relationship. But those of us who have observed this through the prism of history, I think, have got a responsibility to say to decision makers both in Washington and in Beijing right now: be careful what you wish for, because this is catapulting in a particular direction. When you look at the South China Sea in particular, there you have a huge amount of metal on metal, that is, a large number of American ships and a large number of People’s Liberation Army Navy ships, and a similar number of aircraft. The rules of engagement and the standard operating procedures of these vessels are unbeknownst to the rest of us, and we’ve had near misses before. What I’m pointing to is that if we actually have a collision, or a sinking or a crash, what then ensues in terms of crisis management on both sides? When we last had this in 2001-2002 in the Bush administration, the state of the US-China relationship was pretty good. Right now, 20 years later, it is fundamentally appalling. That’s why many of us are deeply concerned, and are sounding this concern both to Beijing and Washington.

Mike Embley
And yet you know, of course, China is such a power economically and is making its presence felt in so many places in the world. There is a sense that really China can pretty much do what it wants. How do you avoid the kind of situation you’re describing?

Kevin Rudd
Well, the government in Beijing needs to understand the importance of restraint as well, in terms of its own calculus of its own long-term national interests. China’s current course of action across a range of fronts is in fact causing a massive international reaction against China, unprecedented by the measures of the last 40 or 50 years. You now have fundamental dislocations in the relationship not just with Washington, but with Canada, with Australia, with the United Kingdom, with Japan, with the Republic of Korea, and a whole bunch of others as well, including those in various parts of continental Europe. And so, looking at this from the prism of Beijing’s own interests, there are those in Beijing who will be raising the argument: are we pushing too far, too hard, too fast? And the responsibility of the rest of us is to say to that cautionary advice within Beijing, all power to your arm in restraining China from this course of action, but also in equal measure to say to our friends in Washington, particularly in a presidential election season, where Republicans and Democrats are seeking to outflank each other to the right on China strategy, that this is no time to engage in, shall we say, symbolic acts for a domestic political purpose in the United States presidential election context, which can have real national security consequences in Southeast Asia and then globally.

Mike Embley
Mr. Rudd, you say very clearly what you hope will happen, what you hope China will realize. What do you think actually will happen? Are you optimistic, in a nutshell, or pessimistic?

Kevin Rudd
The reason for me writing the piece I’ve just done in Foreign Affairs Magazine, which is entitled “Beware The Guns of August”, for those of us obviously familiar with what happened in August of 1914, is that on balance I am pessimistic that the political cultures in both capitals right now are fully seized of the risks that they are playing with on the high seas and over Taiwan as well. Hong Kong and the matters you were referring to before frankly add further to the deterioration of the surrounding political relationship between the two countries. But in terms of incendiary actions of a national security nature, it’s events in the Taiwan Straits and it’s events on the high seas in the South China Sea which are most likely to trigger this. And to answer your question directly: right now, until we see the other side of the US presidential election, I remain on balance concerned and pessimistic.

Mike Embley
Right. Kevin Rudd, thank you very much for talking to us.

Kevin Rudd
Good to be with you.

The post BBC World: US-China Tensions appeared first on Kevin Rudd.

Kevin RuddAustralian Jewish News: Michael Gawenda and ‘The Powerbroker’

With the late Shimon Peres in 2012.

This article was first published by The Australian Jewish News on 13 August 2020.

The factional manoeuvrings of Labor’s faceless men a decade ago are convoluted enough without demonstrable misrepresentations by authors like Michael Gawenda in his biography of Mark Leibler, The Powerbroker.

Gawenda claims my memoir, The PM Years, blames the leadership coup on Leibler’s hardline faction of Australia’s Israel lobby, “plotting” in secret with Julia Gillard – a vision of “extreme, verging on conspiratorial darkness”. This is utter fabrication on his part. My simple challenge to Gawenda is to specify where I make such claims. He can’t. If he’d bothered to call me before publishing, I would have told him so.

Let me be clear: I have never claimed, nor do I believe, that Leibler or AIJAC were involved in the coup. It was conceived and executed almost entirely by factional warlords who blamed me for stymieing their individual ambitions.

It’s true my relationship with Leibler was strained in 2010 after Mossad agents stole the identities of four Australians living in Israel. Using false passports, they slipped into Dubai to assassinate a Hamas operative. They broke our laws and breached our trust.

The Mossad also jeopardised the safety of every Australian who travels on our passports in the Middle East. Unless this stopped, any Australian would be under suspicion, exposing them to arbitrary detention or worse.

More shocking, this wasn’t their first offence. The Mossad explicitly promised to stop abusing Australian passports after an incident in 2003, in a memorandum kept secret to spare Israel embarrassment. It didn’t work. They reoffended because they thought Australia was weak and wouldn’t complain.

We needed a proportional response to jolt Israeli politicians to act, without fundamentally damaging our valued relationship. Australia’s diplomatic, national security and intelligence establishments were unanimous: we should expel the Mossad’s representative in Canberra. This would achieve our goal but make little practical difference to Australia-Israel cooperation. Every minister in the national security committee agreed, including Gillard.

But obdurate elements of Australia’s Israel lobby accused us of overreacting. How could we treat our friend Israel like this? How did we know it was them? Wasn’t this just the usual murky business of espionage? According to Leibler, Diaspora leaders should “not criticise any Israeli government when it comes to questions of Israeli security”. Any violation of law, domestic or international, is acceptable. Never mind every citizen’s duty to uphold our laws and protect Australian lives.

I invited Leibler and others to dinner at the Lodge to reassure them the affair, although significant, wouldn’t derail the relationship. I sat politely as Leibler berated me. Boasting of his connections, he wanted to personally arrange meetings with the Mossad to smooth things over. We had, of course, already done this.

Apropos of nothing, Leibler then leaned over and, in what seemed to me a slightly menacing manner, suggested Julia was “looking very good in the polls” and “a great friend of Israel”. This surprised me, not least because I believed, however foolishly, that my deputy was loyal.

Leibler’s denials are absorbed wholly by Gawenda, solely on the basis of his notes. Give us a break, Michael – why would Leibler record such behaviour? It’s also meaningless that others didn’t hear him since, as often happens at dinners, multiple conversations occur around the table. The truth is it did happen, hence why I recorded it in my book. I have no reason to invent such an anecdote.

In fairness to Gillard, her eagerness to befriend Leibler reflected the steepness of her climb on Israel. She emerged from organisations that historically antagonised Israel – the Socialist Left and Australian Union of Students – and often overcompensated by swinging further towards AIJAC than longstanding Labor policy allowed.

By contrast, my reputation was well established, untainted by the anti-Israel sentiment sometimes found on the political left. A lifelong supporter of Israel and security for its people, I defied Labor critics by proudly leading Parliament in praise of the Jewish State’s achievements. I have consistently denounced the BDS campaign targeting Israeli businesses, both in office and since. My government blocked numerous shipments of potential nuclear components to Iran, and commissioned legal advice on charging president Mahmoud Ahmadinejad with incitement to genocide against the Jewish people. I’m as proud of this record as I am of my longstanding support for a two-state solution.

I have never considered that unequivocal support for Israel means unequivocal support for the policies of the Netanyahu government. For example, the annexation plan in the West Bank would be disastrous for Israel’s future security and fundamentally breach international law – a view shared by UK Conservative PM Boris Johnson. Israel, like the Australian Jewish community, is not monolithic; my concerns are shared by ordinary Israelis as well as many members of the Knesset.

Michael Gawenda is free to criticise me for things I’ve said and done (ironically, as editor of The Age, he didn’t consider me left-wing enough!), but his assertions in this account are flatly untrue.

The post Australian Jewish News: Michael Gawenda and ‘The Powerbroker’ appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: Don't Stop the Magic

Don’t you believe in magic strings and numbers being bad? From the perspective of readability and future maintenance, constants are better. We all know this is true, and we all know that it can sometimes go too far.

Douwe Kasemier has a co-worker that has taken that a little too far.

For example, they have a Java method with a signature like this:

Document addDocument(Action act, boolean createNotification);

The Action type contains information about what action to actually perform, but it will result in a Document. Sometimes this creates a notification, and sometimes it doesn’t.

Douwe’s co-worker was worried about the readability of addDocument(myAct, true) and addDocument(myAct, false), so they went ahead and added some constants:

    private static final boolean NO_NOTIFICATION = false;
    private static final boolean CREATE_NOTIFICATION = true;

Okay, now, I don’t love this, but it’s not the worst thing…

public Document doActionWithNotification(Action act) {
  return addDocument(act, CREATE_NOTIFICATION);
}

public Document doActionWithoutNotification(Action act) {
  return addDocument(act, NO_NOTIFICATION);
}

Okay, now we’re just getting silly. This is at least diminishing returns of readability, if not actively harmful to making the code clear.

    private static final int SIX = 6;
    private static final int FIVE = 5;
    public String findId(String path) {
      String[] folders = path.split("/");
      if (folders.length >= SIX && (folders[FIVE].startsWith(PREFIX_SR) || folders[FIVE].startsWith(PREFIX_BR))) {
          return folders[FIVE].substring(PREFIX_SR.length());
      }
      return null;
    }

Ah, there we go. The logical conclusion: constants for 5 and 6. And yet they didn’t feel the need to make a constant for "/"?

At least this is maintainable, so that when the value of FIVE changes, the method doesn’t need to change.


,

Krebs on SecurityWhy & Where You Should Plant Your Flag

Several stories here have highlighted the importance of creating accounts online tied to your various identity, financial and communications services before identity thieves do it for you. This post examines some of the key places where everyone should plant their virtual flags.

As KrebsOnSecurity observed back in 2018, many people — particularly older folks — proudly declare they avoid using the Web to manage various accounts tied to their personal and financial data — including everything from utilities and mobile phones to retirement benefits and online banking services. From that story:

“The reasoning behind this strategy is as simple as it is alluring: What’s not put online can’t be hacked. But increasingly, adherents to this mantra are finding out the hard way that if you don’t plant your flag online, fraudsters and identity thieves may do it for you.”

“The crux of the problem is that while most types of customer accounts these days can be managed online, the process of tying one’s account number to a specific email address and/or mobile device typically involves supplying personal data that can easily be found or purchased online — such as Social Security numbers, birthdays and addresses.”

In short, although you may not be required to create online accounts to manage your affairs at your ISP, the U.S. Postal Service, the credit bureaus or the Social Security Administration, it’s a good idea to do so for several reasons.

Most importantly, the majority of the entities I’ll discuss here allow just one registrant per person/customer. Thus, even if you have no intention of using that account, establishing one will be far easier than trying to dislodge an impostor who gets there first using your identity data and an email address they control.

Also, the cost of planting your flag is virtually nil apart from your investment of time. In contrast, failing to plant one’s flag can allow ne’er-do-wells to create a great deal of mischief for you, whether it be misdirecting your service or benefits elsewhere, or canceling them altogether.

Before we dive into the list, a couple of important caveats. Adding multi-factor authentication (MFA) at these various providers (where available) and/or establishing a customer-specific personal identification number (PIN) also can help secure online access. For those who can’t be convinced to use a password manager, even writing down all of the account details and passwords on a slip of paper can be helpful, provided the document is secured in a safe place.

Perhaps the most important place to enable MFA is with your email accounts. Armed with access to your inbox, thieves can then reset the password for any other service or account that is tied to that email address.

People who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Secondly, guard the security of your mobile phone account as best you can (doing so might just save your life). The passwords for countless online services can be reset merely by entering a one-time code sent via text message to the phone number on file for the customer’s account.

And thanks to the increasing prevalence of a crime known as SIM swapping, thieves may be able to upend your personal and financial life simply by tricking someone at your mobile service provider into diverting your calls and texts to a device they control.

Most mobile providers offer customers the option of placing a PIN or secret passphrase on their accounts to lessen the likelihood of such attacks succeeding, but these protections also usually fail when the attackers are social engineering some $12-an-hour employee at a mobile phone store.

Your best option is to reduce your overall reliance on your phone number for added authentication at any online service. Many sites now offer MFA options that are app-based and not tied to your mobile service, and this is your best option for MFA wherever possible.
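Those app-based codes are typically TOTP (RFC 6238): the app and the site share a secret, and each code is an HMAC of the current 30-second time interval. A minimal Python sketch of the derivation, using only the standard library (this illustrates the algorithm, not any particular authenticator app):

```python
import hashlib
import hmac
import struct
import time


def totp(secret, for_time=None, step=30, digits=6):
    """Derive an RFC 6238 time-based one-time password (SHA-1, as most apps use)."""
    counter = int(time.time() if for_time is None else for_time) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 reference values: with the ASCII secret "12345678901234567890"
# and Unix time 59, the 6-digit SHA-1 code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

Because the code depends only on the shared secret and the clock, it works offline on your device, with no phone number involved, which is exactly why it resists SIM swapping.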

YOUR CREDIT FILES

First and foremost, all U.S. residents should ensure they have accounts set up online at the three major credit bureaus — Equifax, Experian and Trans Union.

It’s important to remember that the questions these bureaus will ask to verify your identity are not terribly difficult for thieves to answer or guess just by referencing public records and/or perhaps your postings on social media.

You will need accounts at these bureaus if you wish to freeze your credit file. KrebsOnSecurity has for many years urged all readers to do just that, because freezing your file is the best way to prevent identity thieves from opening new lines of credit in your name. Parents and guardians also can now freeze the files of their dependents for free.

For more on what a freeze entails and how to place or thaw one, please see this post. Beyond the big three bureaus, Innovis is a distant fourth bureau that some entities use to check consumer creditworthiness. Fortunately, filing a freeze with Innovis likewise is free and relatively painless.

It’s also a good idea to notify a company called ChexSystems to keep an eye out for fraud committed in your name. Thousands of banks rely on ChexSystems to verify customers who are requesting new checking and savings accounts, and ChexSystems lets consumers place a security alert on their credit data to make it more difficult for ID thieves to fraudulently obtain checking and savings accounts. For more information on doing that with ChexSystems, see this link.

If you placed a freeze on your file at the major bureaus more than a few years ago but haven’t revisited the bureaus’ sites lately, it might be wise to do that soon. Following its epic 2017 data breach, Equifax reconfigured its systems to invalidate the freeze PINs it previously relied upon to unfreeze a file, effectively allowing anyone to bypass that PIN if they can glean a few personal details about you. Experian’s site also has undermined the security of the freeze PIN.

I mentioned planting your flag at the credit bureaus first because if you plan to freeze your credit files, it may be wise to do so after you have planted your flag at all the other places listed in this story. That’s because these other places may try to check your identity records at one or more of the bureaus, and having a freeze in place may interfere with that account creation.

YOUR FINANCIAL INSTITUTIONS

I can’t tell you how many times people have proudly told me they don’t bank online, and prefer to manage all of their accounts the old-fashioned way. I always respond that while this is totally okay, you still need to establish an online account for your financial providers, because if you don’t, someone may do it for you.

This goes doubly for any retirement and pension plans you may have. It’s a good idea for people with older relatives to help those individuals set up and manage online identities for their various accounts — even if those relatives never intend to access any of the accounts online.

This process is doubly important for parents and relatives who have just lost a spouse. When someone passes away, there’s often an obituary in the paper that offers a great deal of information about the deceased and any surviving family members, and identity thieves love to mine this information.

YOUR GOVERNMENT

Whether you’re approaching retirement, middle-aged or just starting out in your career, you should establish an account online at the U.S. Social Security Administration. Maybe you don’t believe Social Security money will actually still be there when you retire, but chances are you’re nevertheless paying into the system now. Either way, the plant-your-flag rules still apply.

Ditto for the Internal Revenue Service. A few years back, ID thieves who specialize in perpetrating tax refund fraud were mass-registering accounts at the IRS’s website in other people’s names to download key data from their prior years’ tax transcripts. While the IRS has improved its taxpayer validation and security measures since then, it’s a good idea to mark your territory here as well.

The same goes for your state’s Department of Motor Vehicles (DMV), which maintains an alarming amount of information about you whether you have an online account there or not. Because the DMV also is the place that typically issues state driver’s licenses, you really don’t want to mess around with the possibility that someone could register as you, change your physical address on file, and obtain a new license in your name.

Last but certainly not least, you should create an account for your household at the U.S. Postal Service’s Web site. Having someone divert your mail or delay delivery of it for however long they like is not a fun experience.

Also, the USPS has this nifty service called Informed Delivery, which lets residents view scanned images of all incoming mail prior to delivery. In 2018, the U.S. Secret Service warned that identity thieves have been abusing Informed Delivery to let them know when residents are about to receive credit cards or notices of new lines of credit opened in their names. Do yourself a favor and create an Informed Delivery account as well. Note that multiple occupants of the same street address can each have their own accounts.

YOUR HOME

Online accounts coupled with the strongest multi-factor authentication available also are important for any services that provide you with telephone, television and Internet access.

Strange as it may sound, plenty of people who receive all of these services in a bundle from one ISP do not have accounts online to manage their service. This is dangerous because if thieves can establish an account on your behalf, they can then divert calls intended for you to their own phones.

My original Plant Your Flag piece in 2018 told the story of an older Florida man who had pricey jewelry bought in his name after fraudsters created an online account at his ISP and diverted calls to his home phone number so they could intercept calls from his bank seeking to verify the transactions.

If you own a home, chances are you also have an account at one or more local utility providers, such as power and water companies. If you don’t already have an account at these places, create one and secure access to it with a strong password and any other access controls available.

These frequently monopolistic companies traditionally have poor to non-existent fraud controls, even though they effectively operate as mini credit bureaus. Bear in mind that possession of one or more of your utility bills is often sufficient documentation to establish proof of identity. As a result, such records are highly sought-after by identity thieves.

Another common way that ID thieves establish new lines of credit is by opening a mobile phone account in a target’s name. A little-known entity that many mobile providers turn to for validating new mobile accounts is the National Consumer Telecommunications and Utilities Exchange, or nctue.com. Happily, the NCTUE allows consumers to place a freeze on their file by calling their 800-number, 1-866-349-5355. For more information on the NCTUE, see this page.

Have I missed any important items? Please sound off in the comments below.

Worse Than FailureTeleconference Horror


In the spring of 2020, with very little warning, every school in the United States shut down due to the ongoing global pandemic. Classrooms had to move to virtual meeting software like Zoom, which was never intended to be used as the primary means of educating grade schoolers. The teachers did wonderfully with such little notice, and most kids finished out the year with at least a little more knowledge than they started. This story takes place years before then, when online schooling was seen as an optional add-on and not a necessary backup plan in case of plague.

TelEdu provided their take on such a thing in the form of a free third-party add-on for Moodle, a popular e-learning platform. Moodle provides space for teachers to upload recordings and handouts; TelEdu takes it one step further by adding a "virtual classroom" complete with a virtual whiteboard. The catch? You have to pay a subscription fee to use the free module; otherwise it's nonfunctional.

Initech decided they were on a tight schedule to implement a virtual classroom feature for their corporate training, so they went ahead and bought the service without testing it. They then scheduled a demonstration to the client, still without testing it. The client's 10-man team all joined to test out the functionality, and it wasn't long before the phone started ringing off the hook with complaints: slowness, 504 errors, blank pages, the whole nine yards.

That's where Paul comes in to our story. Paul was tasked with finding what had gone wrong and completing the integration. The most common complaint was that Moodle was being slow, but upon testing it himself, Paul found that only the TelEdu module pages were slow, not the rest of the install. So far so good. The code was open-source, so he went digging through to find out what in view.php was taking so long:

$getplan = telEdu_get_plan();
$paymentinfo = telEdu_get_payment_info();
$getclassdetail = telEdu_get_class($telEduclass->class_id);
$pricelist = telEdu_get_price_list($telEduclass->class_id);

Four calls to get info about the class, three of them to do with payment. Not a great start, but not necessarily terrible, either. So, how was the info fetched?

function telEdu_get_plan() {
    $data['task'] = TELEDU_TASK_GET_PLAN;
    $result = telEdu_get_curl_info($data);
    return $result;
}

"They couldn't possibly ... could they?" Paul wondered aloud.

function telEdu_get_payment_info() {
    $data['task'] = TELEDU_TASK_GET_PAYMENT_INFO;
    $result = telEdu_get_curl_info($data);
    return $result;
}

Just to make sure, Paul next checked what telEdu_get_curl_info actually did:


function telEdu_get_curl_info($data) {
    global $CFG;
    require_once($CFG->libdir . '/filelib.php');

    $key = $CFG->mod_telEdu_apikey;
    $baseurl = $CFG->mod_telEdu_baseurl;

    $urlfirstpart = $baseurl . "/" . $data['task'] . "?apikey=" . $key;

    if (($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) || ($data['task'] == TELEDU_TASK_GET_PLAN)) {
        $location = $baseurl;
    } else {
        $location = telEdu_post_url($urlfirstpart, $data);
    }

    $postdata = '';
    if ($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) {
        $postdata = 'task=getPaymentInfo&apikey=' . $key;
    } else if ($data['task'] == TELEDU_TASK_GET_PLAN) {
        $postdata = 'task=getplan&apikey=' . $key;
    }

    $options = array(
        'CURLOPT_RETURNTRANSFER' => true, 'CURLOPT_SSL_VERIFYHOST' => false, 'CURLOPT_SSL_VERIFYPEER' => false,
    );

    $curl = new curl();
    $result = $curl->post($location, $postdata, $options);

    $finalresult = json_decode($result, true);
    return $finalresult;
}

A synchronous remote call to another API via cURL, made on every single page view. Then the page waited for the result, which was clocking in at anywhere between 1 and 30 seconds ... each call. The result wasn't used anywhere, either. It seemed to be just a precaution in case somewhere down the line they wanted these things.
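If the plan and payment data really were needed on every page view, the standard mitigation would be to fetch each value once and reuse it. A hedged sketch in Python (the function name and payload are hypothetical stand-ins for the module's remote calls, not its actual API):

```python
from functools import lru_cache

CALLS = {"count": 0}  # instrumentation so we can see how often the "network" is hit

@lru_cache(maxsize=None)
def get_plan(api_key):
    """Hypothetical stand-in for the module's slow remote lookup.
    Memoized, so repeated lookups cost one round trip instead of four."""
    CALLS["count"] += 1  # in the real module, this line is a 1-30 second HTTP call
    return ("plan-data", api_key)

get_plan("apikey")
get_plan("apikey")
assert CALLS["count"] == 1  # the second lookup was served from the cache
```

Even a per-request cache like this would have cut the page's worst case from four slow round trips to one, though the real fix is not making unused calls at all.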

After another half a day of digging through the rest of the codebase, Paul gave up. Sales told the client that "Due to the high number of users, we need more time to make a small server calibration."

The calibration? Replacing TelEdu with BigBlueButton. Problem solved.



Krebs on SecurityMicrosoft Patch Tuesday, August 2020 Edition

Microsoft today released updates to plug at least 120 security holes in its Windows operating systems and supported software, including two newly discovered vulnerabilities that are actively being exploited. Yes, good people of the Windows world, it’s time once again to back up and patch up!

At least 17 of the bugs squashed in August’s patch batch address vulnerabilities Microsoft rates as “critical,” meaning they can be exploited by miscreants or malware to gain complete, remote control over an affected system with little or no help from users. This is the sixth month in a row Microsoft has shipped fixes for more than 100 flaws in its products.

The most concerning of these appears to be CVE-2020-1380, which is a weakness in Internet Explorer that could result in system compromise just by browsing with IE to a hacked or malicious website. Microsoft’s advisory says this flaw is currently being exploited in active attacks.

The other flaw enjoying active exploitation is CVE-2020-1464, which is a “spoofing” bug in virtually all supported versions of Windows that allows an attacker to bypass Windows security features and load improperly signed files. For more on this flaw, see Microsoft Put Off Fixing Zero Day for 2 Years.

Trend Micro’s Zero Day Initiative points to another fix — CVE-2020-1472 — which involves a critical issue in Windows Server versions that could let an unauthenticated attacker gain administrative access to a Windows domain controller and run an application of their choosing. A domain controller is a server that responds to security authentication requests in a Windows environment, and a compromised domain controller can give attackers the keys to the kingdom inside a corporate network.

“It’s rare to see a Critical-rated elevation of privilege bug, but this one deserves it,” said ZDI’s Dustin Childs. “What’s worse is that there is not a full fix available.”

Perhaps the most “elite” vulnerability addressed this month earned the distinction of being named CVE-2020-1337, and refers to a security hole in the Windows Print Spooler service that could allow an attacker or malware to escalate their privileges on a system if they were already logged on as a regular (non-administrator) user.

Satnam Narang at Tenable notes that CVE-2020-1337 is a patch bypass for CVE-2020-1048, another Windows Print Spooler vulnerability that was patched in May 2020. Narang said researchers found that the patch for CVE-2020-1048 was incomplete and presented their findings for CVE-2020-1337 at the Black Hat security conference earlier this month. More information on CVE-2020-1337, including a video demonstration of a proof-of-concept exploit, is available here.

Adobe has graciously given us another month’s respite from patching Flash Player flaws, but it did release critical security updates for its Acrobat and PDF Reader products. More information on those updates is available here.

Keep in mind that while staying up-to-date on Windows patches is a must, it’s important to make sure you’re updating only after you’ve backed up your important data and files. A reliable backup means you’re less likely to pull your hair out when the odd buggy patch causes problems booting the system.

So do yourself a favor and back up your files before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And as ever, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Cory DoctorowTerra Nullius

Terra Nullius is my March 2019 column in Locus magazine; it explores the commonalities between the people who claim ownership over the things they use to make new creative works and the settler colonialists who arrived in various “new worlds” and declared them to be empty, erasing the people who were already there as a prelude to genocide.

I was inspired by the story of Aloha Poke, in which a white dude from Chicago secured a trademark for his “Aloha Poke” midwestern restaurants, then threatened Hawai’ians who used “aloha” in the names of their restaurants (and later, by the Dutch grifter who claimed a patent on the preparation of teff, an Ethiopian staple grain that has been cultivated and refined for about 7,000 years).

MP3 Link

LongNowScientists Have a Powerful New Tool to Investigate Triassic Dark Ages

The time-honored debate between catastrophists and gradualists (those who believe major Earth changes were due to sudden violent events or happened over long periods of time) has everything to do with the coarse grain of the geological record. When paleontologists only have a series of thousand-year flood deposits to study, it’s almost impossible to say what was really going on at shorter timescales. So many of the great debates of natural history hinge on the resolution at which data can be collected, and boil down to something like, “Was it a meteorite impact that caused this extinction, or the inexorable climate changes caused by continental drift?”

One such gap in our understanding is in the Late Triassic — a geological shadow during which major regime changes in terrestrial fauna took place, setting the stage for The Age of Dinosaurs. But the curtains were closed during that scene change…until, perhaps, now:

By determining the age of the rock core, researchers were able to piece together a continuous, unbroken stretch of Earth’s history from 225 million to 209 million years ago. The timeline offers insight into what has been a geologic dark age and will help scientists investigate abrupt environmental changes from the peak of the Late Triassic and how they affected the plants and animals of the time.

Cool new detective work on geological “tree rings” from the Petrified Forest National Park (where I was lucky enough to do some revolutionary paleontological reconstruction work under Dr. Bill Parker back in 2005).

Kevin RuddCNN: South China Sea and the US-China Tech War

E&OE TRANSCRIPT
TELEVISION INTERVIEW
CNN, FIRST MOVE
11 AUGUST 2020

Topics: Foreign Affairs article; US-China tech war

Zain Asher
In a sobering assessment in Foreign Affairs magazine, the former Australian Prime Minister Kevin Rudd warns that diplomatic relations are crumbling and raise the possibility of armed conflict. Mr Rudd, who is president of the Asia Society Policy Institute, joins us live now. So Mr Rudd, just walk us through this. You believe that armed conflict is possible and, is this relationship at this point, in your opinion, quite frankly, beyond repair?

Kevin Rudd
It’s not beyond repair, but we’ve got to be blunt about the fact that the level of deterioration has been virtually unprecedented at least in the last half-century. And things are moving at a great pace in terms of the scenarios, the two scenarios which trouble us most are the Taiwan straits and the South China Sea. In the Taiwan straits, we see consistent escalation of tensions between Washington and Beijing. And certainly, in the South China Sea, the pace and intensity of naval and air activity in and around that region increases the possibility, the real possibility, of collisions at sea and collisions in the air. And the question then becomes: do Beijing and Washington really have an intention to de-escalate or then to escalate, if such a crisis was to unfold?

Zain Asher
How do they de-escalate? Is the only way at this point, or how do they reverse the sort of tensions between them? Is the main way at this point that, you know, a new administration comes in in November and it can be reset? If Trump gets re-elected, can there be de-escalation? If so, how?

Kevin Rudd
Well the purpose of my writing the article in Foreign Affairs, which you referred to before, was to, in fact, talk about the real dangers we face in the next three months. That is, before the US presidential election. We all know that in the US right now, that tensions or, shall I say, political pressure on President Trump are acute. But what people are less familiar of within the West is the fact that in Chinese politics there is also pressure on Xi Jinping for a range of domestic and external reasons as well. So what I have simply said is: in this next three months, where we face genuine political pressure operating on both political leaders, if we do have an incident, that is an unplanned incident or collision in the air or at sea, we now have a tinderbox environment. Therefore, the plans which need to be put in place between the grown-ups in the US and Chinese militaries is to have a mechanism to rapidly de-escalate should a collision occur. I’m not sure that those plans currently exist.

Zain Asher
Let’s talk about tech because President Donald Trump, as you know, is forcing ByteDance, the company that owns TikTok, to sell its assets and no longer operate in the US. The premise is that there are national security fears and also this idea that TikTok is handing over user data from American citizens to the Chinese government. How real and concrete are those fears, or is this purely politically motivated? Are the fears justified, in other words?

Kevin Rudd
As far as TikTok is concerned, this is way beyond my paygrade in terms of analysing the technological capacities of a) the company and b) the ability of the Chinese security authorities to backdoor them. What I can say is this a deliberate decision on the part of the US administration to radically escalate the technology war. In the past, it was a war about Huawei and 5G. It then became an unfolding conflict over the question of the future access to semiconductors, computer chips. And now we have, as it were, the unfolding ban imposed by the administration on Chinese-sourced computer apps, including this one, for TikTok. So this is a throwing-down of the gauntlet by the US administration. What I believe we will see, however, is Chinese retaliation. I think they will find a corporate mechanism to retaliate, given the actions taken not just against ByteDance and TikTok, but of course against WeChat. And so the pattern of escalation that we were talking about earlier in technology, the economy, trade, investment, finance, and the hard stuff in national security continues to unfold, which is why we need sober heads to prevail in the months ahead.

The post CNN: South China Sea and the US-China Tech War appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: The Concatenator

In English, there's much debate over the "Oxford Comma": in a list of items, do you put a comma between the penultimate item and the "and" before the final one? For example: "The conference featured bad programmers, Remy and TheDailyWTF readers" versus "The conference featured bad programmers, Remy, and TheDailyWTF readers."

I'd like to introduce a subtly different one: "the concatenator's comma", or if we want to be generic, "the concatenator's separator character", but that doesn't have the same ring to it. If you're planning to list items as a string, you might do something like this pseudocode:

for each item in items
    result.append(item + ", ")

This naive approach does pose a problem: we'll have an extra trailing comma. So maybe you have to add logic to decide whether you're on the first or last item, and insert (or fail to insert) commas as appropriate. Or maybe it isn't a problem: if we're generating JSON, for example, we can just leave the trailing commas. This isn't universally true, of course, but many formats will ignore extra separators. Edit: I was apparently hallucinating when I wrote this; one of the most annoying things about JSON is that you can't do this.

Like, for example, URL query strings, which don't require a "sub-delim" like "&" to have anything following it.

But fortunately for us, no matter what language we're using, there's almost certainly an API that makes it so that we don't have to do string concatenation anyway, so why even bring it up?

Well, because Mike has a co-worker that has read the docs well enough to know that PHP has a substr method, but not well enough to know it has an http_build_query method. Or even an implode method, which handles string concats for you. Instead, they wrote this:

$query = '';
foreach ($postdata as $var => $val) {
    $query .= $var .'='. $val .'&';
}
$query = substr($query, 0, -1);

This code exploits a little-observed feature of substr: a negative length reads back from the end. So this lops off that trailing "&", which is both unnecessary and one of the most annoying ways to do this.
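The three approaches look like this side by side; Python is used here only because the idea is language-agnostic, and the keys and values are hypothetical:

```python
from urllib.parse import urlencode

postdata = {"task": "getplan", "apikey": "secret"}  # hypothetical values

# 1. The naive loop, then trim the trailing separator with a negative
#    end index (the Python analogue of PHP's substr($query, 0, -1)):
query = ""
for var, val in postdata.items():
    query += f"{var}={val}&"
query = query[:-1]

# 2. The join idiom (what PHP's implode does): no trailing separator at all.
joined = "&".join(f"{var}={val}" for var, val in postdata.items())

# 3. The purpose-built API (the counterpart of PHP's http_build_query),
#    which also percent-encodes keys and values:
built = urlencode(postdata)

assert query == joined == built == "task=getplan&apikey=secret"
```

All three produce the same string for these simple values, but only the purpose-built API survives values that need percent-encoding, which is the whole point of reaching for it.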

Maybe it's not enough to RTFM, as Mike puts it, maybe you need to "RTEFM": read the entire manual.



LongNowThe Deep Sea

As detailed in the exquisite documentary Proteus, the ocean floor was until very recently a repository for the dreams of humankind — the receptacle for our imagination. But when the H.M.S. Challenger expedition surveyed the world’s deep-sea life and brought it back for cataloging by now-legendary illustrator Ernst Haeckel (who coined the term “ecology”), the hidden benthic universe started coming into view. What we found, and what we continue to discover on the ocean floor, is far stranger than the monsters we’d projected.

This spectacular site by Neal Agarwal brings depth into focus. You’ve surfed the Web; now take a few and dive all the way down to Challenger Deep, scrolling past the animals that live at every depth.

Just as The Long Now situates us in a humbling, Copernican experience of temporality, Deep Sea reminds us of just how thin of a layer surface life exists in. Just as with Stewart Brand’s pace layers, the further down you go, the slower everything unfolds: the cold and dark and pressure slow the evolutionary process, dampening the frequency of interactions between creatures, bestowing space and time for truly weird and wondrous and as-yet-uncategorized life.

Dig in the ground and you might pull up the fossils of some strange long-gone organisms. Dive to the bottom of the ocean and you might find them still alive down there, the unmolested records of an ancient world still drifting in slow motion, going about their days-without-days…

For evidence of time-space commutability, settle in for a sublime experience that (like benthic life itself) makes much of very little: just one page, one scroll bar, and one journey to a world beyond.

(Mobile device suggested: this scroll goes in, not just across…)

Learn More:

  • The “Big Here” doesn’t get much bigger than Neal Agarwal‘s The Size of Space, a new interactive visualization that provides a dose of perspective on our place in the universe.


LongNowChildhood as a solution to explore–exploit tensions

Big questions abound regarding the protracted childhood of Homo sapiens, but there’s a growing argument that it’s an adaptation to the increased complexity of our social environment and the need to learn longer and harder in order to handle the ever-rising bar of adulthood. (Just look to the explosion of requisite schooling over the last century for a concrete example of how childhood grows along with social complexity.)

It’s a tradeoff between genetic inheritance and enculturation — see also Kevin Kelly’s remarks in The Inevitable that we have entered an age of lifelong learning and the 21st Century requires all of us to be permanent “n00bs”, due to the pace of change and the scale at which we have to grapple with evolutionarily relevant sociocultural information.

New research from Past Long Now Seminar Speaker Alison Gopnik:

“I argue that the evolution of our life history, with its distinctively long, protected human childhood, allows an early period of broad hypothesis search and exploration, before the demands of goal-directed exploitation set in. This cognitive profile is also found in other animals and is associated with early behaviours such as neophilia and play. I relate this developmental pattern to computational ideas about explore–exploit trade-offs, search and sampling, and to neuroscience findings. I also present several lines of empirical evidence suggesting that young human learners are highly exploratory, both in terms of their search for external information and their search through hypothesis spaces. In fact, they are sometimes more exploratory than older learners and adults.”

Alison Gopnik, “Childhood as a solution to explore-exploit tensions” in Philosophical Transactions of the Royal Society B.
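The explore-exploit framing Gopnik references is easy to sketch as a multi-armed bandit in which the exploration rate follows a "life history" schedule: high in childhood, low in adulthood. A toy illustration (the schedule and payoffs are invented for the example, not taken from the paper):

```python
import random

def run_bandit(schedule, true_means, steps=2000, seed=42):
    """Epsilon-greedy bandit; 'schedule' sets the exploration rate over a lifetime."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    for t in range(steps):
        if rng.random() < schedule(t, steps):
            arm = rng.randrange(len(true_means))  # explore: sample broadly
        else:
            # exploit: pick the arm currently believed best
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates

def childhood(t, lifetime):
    """Broad hypothesis search early in life, goal-directed exploitation later."""
    return 0.5 if t < lifetime // 5 else 0.02

estimates = run_bandit(childhood, true_means=[0.1, 0.5, 0.9])
```

The early high-exploration phase is what lets the agent build accurate estimates of all the options before settling in to exploit the best one, which is the trade-off Gopnik argues a long childhood buys.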


Chaotic IdealismTo a Newly Diagnosed Autistic Teenager

I was recently asked by a 14-year-old who had just been diagnosed autistic what advice I had to give. This is what I said.

The thing that helped me most was understanding myself and talking to other autistic people, so you’re already well on that road.

The more you learn about yourself, the more you learn about how you *learn*… meaning that you can become better at teaching yourself to communicate with neurotypicals.

Remember though: The goal is to communicate. Blending in is secondary, or even irrelevant, depending on your priorities. If you can get your ideas from your brain to theirs, and understand what they’re saying, and live in the world peacefully without hurting anyone and without putting yourself in danger, then it does not matter how different you are or how differently you do things.

Autistic is not better and not worse than neurotypical; it’s simply different. Having a disability is a normal part of human life; it’s nothing to be proud of and nothing to be ashamed of. Disability doesn’t stop you from being talented or from becoming unusually skilled, especially with practice. Being different means that you see things from a different perspective, which means that as you grow and gain experience you will be able to provide solutions to problems that other people simply don’t see, to contribute skills that most people don’t have.

Learn to advocate for yourself. If you have an IEP, go to the meetings and ask questions about what help is available and what problems you have. When you are mistreated, go to someone you trust and ask for help; and if you can’t get help, protect yourself as best you can. Learn to stand up for yourself, to keep other people from taking advantage of you. Also learn to help other people stay safe.

Your best social connections now will be anyone who treats you with kindness. You can tell whether someone is kind by observing how they treat those they have power over when nobody, or nobody with much influence, is watching. You want people who are honest, or who only lie when they are trying to protect others’ feelings. Talk to these people; explain that you are not very good with social things and that you sometimes embarrass yourself or accidentally insult people, and that you would like them to tell you when you are doing something clumsy, offensive, confusing, or cringeworthy. Explain to these people that you would prefer to know about mistakes you are making, because if you are not told you will never be able to correct those mistakes.

Learn to apologize, and learn that an apology simply means, “I recognize I have made a mistake and shall work to correct it in the future.” An apology is not a sign of failure or an admission of inferiority. Sometimes an apology can even mean, “I have made a mistake that I could not control; if I had been able to control it, I would not have made the mistake.” Therefore, it is okay to apologize if you have simply made an honest mistake. The best apology includes an explanation of how you will fix your mistake or what you will change to keep it from happening in the future.

Learn not to apologize when you have done nothing wrong. Do not apologize for being different, for standing up for yourself or for other people, or for having an opinion others disagree with. You do not need to justify your existence. You should never give in to the pressure to say, “I am autistic, but that’s okay because I have this skill and that talent.” The correct statement is, “I am autistic, and that is okay.” You don’t need to do anything to be valuable. You just need to be human.

If someone uses you to fulfill their own desires but doesn’t give things back in return; if someone doesn’t care about your needs when you tell them; if someone can tell you are hurt and doesn’t care; then that is a person you cannot trust.

In general, you can expect your teen years to be harder than your young-adult years. As you grow and gain experience, you’ll gain skills and you’ll gather a library of techniques to help you navigate the social and sensory world, to help you deal with your emotions and with your relationships. You will never be perfect–but then, nobody is. What you’re aiming for is useful, functional skills, in whatever form they take, whether they are the typical way of doing things or not. As the saying goes: If it looks stupid but it works, it isn’t stupid.

Keep trying. Take good care of yourself. When you are tired, rest. Learn to push yourself to your limits, but not beyond; and learn where those limits are. When you are tired from something that would not tire a neurotypical, be unashamed about your need for down time. Learn to say “no” when you don’t want something, and learn to say “yes” when you want something but you are a little bit intimidated by it because it is new or complicated or unpredictable. Learn to accept failure and learn from it. Help others. Make your world better. Make your own way. Grow. Live.

You’ll be okay.

,

LongNowTraditional Ecological Knowledge

Archaeologist Stefani Crabtree writes about her work to reconstruct Indigenous food and use networks for the National Park Service:

Traditional Ecological Knowledge gets embedded in the choices that people make when they consume, and how TEK can provide stability of an ecosystem. Among Martu, the use of fire for hunting and the knowledge of the habits of animals are enshrined in the Dreamtime stories passed inter-generationally; these Dreamtime stories have material effects on the food web, which were detected in our simulations. The ecosystem thrived with Martu; it was only through their removal that extinctions began to cascade through the system.

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 12)

Here’s part twelve of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

LongNowPredicting the Animals of the Future

Jim Cooke / Gizmodo

Gizmodo asks half a dozen natural historians to speculate on who is going to be doing what jobs on Earth after the people disappear. One of the streams that runs wide and deep through this series of fun thought experiments is how so many niches stay the same through catastrophic changes in the roster of Earth’s animals. Dinosaurs die out but giant predatory birds evolve to take their place; butterflies took over from (unrelated) dot-winged, nectar-sipping giant lacewing pollinator forebears; before orcas there were flippered ocean-going crocodiles, and there will probably be more one day.

In Annie Dillard’s Pulitzer Prize-winning Pilgrim at Tinker Creek, she writes about a vision in which she witnesses glaciers rolling back and forth “like blinds” over the Appalachian Mountains. In this Gizmodo piece, Alexis Mychajliw of the La Brea Tar Pits & Museum talks about how fluctuating sea levels connected island chains or made them, fusing and splitting populations in great oscillating cycles, shrinking some creatures and giantizing others. There’s something soothing in the view from orbit that paleontologists, like other deep-time mystics, possess, embody, and transmit: a sense for the clockwork of the cosmos and its orderliness, an appreciation for the powerful resilience of life even in the face of the ephemerality of life-forms.

While everybody interviewed here has obvious pet futures owing to their areas of interest, hold all of them superimposed together and you’ll get a clearer image of the secret teachings of biology…

(This article must have been inspired deeply by Dougal Dixon’s book After Man, but doesn’t mention him – perhaps a fair turn, given Dixon was accused of plagiarizing Wayne Barlowe for his follow-up, Man After Man.)

,

MELinks July 2020

iMore has an insightful article about Apple’s transition to the ARM instruction set for new Mac desktops and laptops [1]. I’d still like to see them do something for the server side.

Umair Haque wrote an insightful article about How the American Idiot Made America Unlivable [2]. We are witnessing the destruction of a once great nation.

Chris Lamb wrote an interesting blog post about comedy shows with the laugh tracks edited out [3]. He then compares that to social media with the like count hidden, which is an interesting perspective. I’m not going to watch TV shows edited in that way (I’ve enjoyed BBT in spite of all the bad things about it) and I’m not going to try to hide like counts on social media. But it’s interesting to consider these things.

Cory Doctorow wrote an interesting Locus article suggesting that we could have full employment by a transition to renewable energy and methods for cleaning up the climate problems we are too late to prevent [4]. That seems plausible, but I think we should still get a Universal Basic Income.

The Thinking Shop has posters and decks of cards with logical fallacies and cognitive biases [5]. Every company should put some of these in meeting rooms. Also they have free PDFs to download and print your own posters.

gayhomophobe.com [6] is a site that lists powerful homophobic people who hurt GLBT people but then turned out to be gay. It’s presented in an amusing manner, people who hurt others deserve to be mocked.

Wired has an insightful article about the shutdown of Backpage [7]. The owners of Backpage weren’t nice people and they did some stupid things which seem bad (like editing posts to remove terms like “lolita”). But they also worked well with police to find criminals. The opposition to what Backpage was doing conflates sex trafficking, child prostitution, and legal consenting adult sex work. Taking down Backpage seems to be a bad thing for the victims of sex trafficking, for consenting adult sex workers, and for society in general.

Cloudflare has an interesting blog post about short-lived certificates for ssh access [8]. Instead of having users’ ssh keys stored on servers, each user has to connect to an SSO server to obtain a temporary key before connecting, so revoking an account is easy.
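The mechanism underneath that scheme is standard OpenSSH certificate signing: a CA key signs a user’s public key with a short validity window, and servers trust the CA rather than individual user keys. A minimal sketch (filenames and the principal name are illustrative, not Cloudflare’s actual tooling):

```shell
# Generate a CA key pair and a user key pair (filenames are illustrative).
ssh-keygen -t ed25519 -f ca -N '' -C 'example CA'
ssh-keygen -t ed25519 -f user -N '' -C 'example user'

# The SSO service would perform this step after authenticating the user:
# sign the user's public key, valid for only 5 minutes.
ssh-keygen -s ca -I alice -n alice -V +5m user.pub

# The signed certificate lands in user-cert.pub; servers configured to
# trust ca.pub (via TrustedUserCAKeys) accept it until it expires.
ssh-keygen -L -f user-cert.pub
```

Because the certificate expires on its own, revocation is mostly a matter of the SSO server declining to issue a new one.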

LongNowThe Digital Librarian as Essential Worker

Michelle Swanson, an Oregon-based educator and educational consultant, has written a blog post on the Internet Archive on the increased importance of digital librarians during the pandemic:

With public library buildings closed due to the global pandemic, teachers, students, and lovers of books everywhere have increasingly turned to online resources for access to information. But as anyone who has ever turned up 2.3 million (mostly unrelated) results from a Google search knows, skillfully navigating the Internet is not as easy as it seems. This is especially true when conducting serious research that requires finding and reviewing older books, journals and other sources that may be out of print or otherwise inaccessible.

Enter the digital librarian.

Michelle Swanson, “Digital Librarians – Now More Essential Than Ever” from the Internet Archive.

Kevin Kelly writes (in New Rules for the New Economy and in The Inevitable) about how an information economy flips the relative valuation of questions and answers — how search makes useless answers nearly free and useful questions even more precious than before, and knowing how to reliably produce useful questions even more precious still.

But much of our knowledge and outboard memory is still resistant to or incompatible with web search algorithms — databases spread across both analog and digital, with unindexed objects or idiosyncratic cataloging systems. Just as having map directions on your phone does not outdo a local guide, it helps to have people intimate with a library who can navigate the weird specifics. And just as scientific illustrators still exist to mostly leave out the irrelevant and make a paper clear as day (which cameras cannot do, as of 02020), a librarian is a sharp instrument that cuts straight through the extraneous info to what’s important.

Knowing what to enter in a search is one thing; knowing when it won’t come up in search and where to look amidst an analog collection is another skill entirely. Both are necessary at a time when libraries cannot receive (as many) scholars in the flesh, and what Penn State Prof Rich Doyle calls the “infoquake” online — the too-much-all-at-once-ness of it all — demands an ever-sharper reason just to stay afloat.

Learn More

  • Watch Internet Archive founder Brewster Kahle’s 02011 Long Now talk, “Universal Access to All Knowledge.”

Sam VargheseHistory lessons at a late stage of life

In 1987, I got a job in Dubai, to work for a newspaper named Khaleej (Gulf) Times. I was chosen because the interviewer was a jolly Briton who came down to Bombay to do the interview on 12 June.

Malcolm Payne, the first editor of the newspaper that had been started in 1978 by Iranian brothers named Galadari, told me that he had always wanted to come and pick some people to work at the paper. By then he had been pushed out of the editorship by the politics of both Pakistani and Indian journalists who worked there.

For some strange reason, he took a liking to me. At the end of about 45 minutes of what was a much more robust conversation than I had ever experienced in earlier job interviews, which were normally tense affairs, Payne told me, “You’re a good bugger, Samuel. I’ll see you in Dubai.”

I took it with a pinch of salt. Anyway, I reckoned that I would know within a few months whether he was pulling my leg or not. I was more focused on my upcoming wedding, which was to be scheduled shortly.

But, Payne turned out to be a man of his word. In September, I got a telegram from Dubai asking me to send copies of my passport in order that a visa could be obtained for me to work in Dubai. I had mixed emotions: on the one hand, I was happy that a chance to get out of the grinding poverty I lived in had presented itself. At the same time, I was worried about leaving my sickly mother in India; by then, she had been a widow for a few months and I was her only son.

When my mother-in-law-to-be heard about the job opportunity, she insisted that the wedding should be held before I left for Dubai. Probably she thought that once I went to the Persian Gulf, I would begin to look for another woman.

The wedding was duly fixed for 19 October and I was to leave for Dubai on 3 November.

After I landed in Dubai, I learnt about the tension that exists between most Indians and Pakistanis as a result of the partition of the subcontinent in 1947. Pakistanis are bitter because they feel that they were forced to leave for a country that had turned out to be a basket case, subsisting only because of aid from the US, and Indians felt that the Pakistanis had been the ones to force Britain, then the colonial ruler, to split the country.

Never did this enmity come to the fore more than when India and Pakistan sent their cricket teams to the UAE — Dubai is part of this country — to play in a tournament organised there by some businessman from Sharjah.

Of course, the whole raison d’être for the tournament was the Indo-Pakistan enmity; pitting teams that had a history of this sort against each other was like staging a proxy war. What’s more, there were both expatriate Indians and Pakistanis in large numbers waiting eagerly to buy tickets and pour into what was literally a coliseum.

The other teams who were invited — sometimes there was a three-way contest, at others a four-way fight — were just there to make up the numbers.

And the organisers always prayed for an India-Pakistan final.

A year before I arrived in Dubai, a Pakistani batsman known as Javed Miandad had taken his team to victory by hitting a six off the last ball; the contests were limited to 50 overs a side. He was showered with gifts by rich Pakistanis and one even gifted him some land. Such was the euphoria a victory in the former desert generated.

Having been born and raised in Sri Lanka, I knew nothing of the history of India. My parents did not clue me in either. I learnt all about the grisly history of the subcontinent after I landed in Dubai.

That enmity resulted in several other incidents worth telling, which I shall relate soon.

,

LongNowThe Unexpected Influence of Cosmic Rays on DNA

Samuel Velasco/Quanta Magazine

Living in a world with multiple spatiotemporal scales, the very small and fast can often drive the future of the very large and slow: Microscopic genetic mutations change macroscopic anatomy. Undetectably small variations in local climate change global weather patterns (the infamous “butterfly effect”).

And now, one more example comes from a new theory about why DNA on modern Earth only twists in one of two possible directions:

Our spirals might all trace back to an unexpected influence from cosmic rays. Cosmic ray showers, like DNA strands, have handedness. Physical events typically break right as often as they break left, but some of the particles in cosmic ray showers tap into one of nature’s rare exceptions. When the high energy protons in cosmic rays slam into the atmosphere, they produce particles called pions, and the rapid decay of pions is governed by the weak force — the only fundamental force with a known mirror asymmetry.

Millions if not billions of cosmic ray strikes could be required to yield one additional free electron in a [right-handed] strand, depending on the event’s energy. But if those electrons changed letters in the organisms’ genetic codes, those tweaks may have added up. Over perhaps a million years…cosmic rays might have accelerated the evolution of our earliest ancestors, letting them out-compete their [left-handed] rivals.

In other words, properties of the subatomic world seem to have conferred a benefit to the potential for innovation among right-handed nucleic acids, and a “talent” for generating useful copying errors led to the entrenched monopoly we observe today.

But that isn’t the whole story. Read more at Quanta.

,

Rondam RamblingsThe insidious problem of racism

Take a moment to seriously think about what is wrong with racism.  If you're like most people, your answer will probably be that racism is bad because it's a form of prejudice, and prejudice is bad.  This is not wrong, but it misses a much deeper, more insidious issue.  The real problem with racism is that it can be (and usually is) rationalized, and those rationalizations can turn into

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 11)

Here’s part eleven of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

LongNowDiscovery in Mexican Cave May Drastically Change the Known Timeline of Humans’ Arrival to the Americas

Human history in the Americas may be twice as long as previously believed — at least 26,500 years — according to authors of a new study at Mexico’s Chiquihuite cave and other sites throughout Central Mexico.

According to the study’s lead author Ciprian Ardelean:

“This site alone can’t be considered a definitive conclusion. But with other sites in North America like Gault (Texas), Bluefish Caves (Yukon), maybe Cactus Hill (Virginia)—it’s strong enough to favor a valid hypothesis that there were humans here probably before and almost surely during the Last Glacial Maximum.”

,

Rondam RamblingsAbortion restrictions result in more abortions

Not that this was ever in any serious doubt, but now there is actual data published in The Lancet showing that abortion restrictions increase the number of abortions: In 2015–19, there were 121.0 million unintended pregnancies annually (80% uncertainty interval [UI] 112.8–131.5), corresponding to a global rate of 64 unintended pregnancies (UI 60–70) per 1000 women aged 15–49 years. 61% (58–63)

Rondam RamblingsMark your calendars: I am debating Kent Hovind on July 9

I've recently taken up a new hobby of debating young-earth creationists on YouTube.  (It's a dirty job, but somebody's gotta do it.)  I've done two of them so far [1][2], both on a creationist channel called Standing For Truth.  My third debate will be against Kent Hovind, one of the more prominent and, uh, outspoken members of the YEC community.  In case you haven't heard of him, here's a sample

,

LongNowThe Comet Neowise as seen from the ISS

For everyone who cannot see the Comet Neowise with their own eyes this week — or just wants to see it from a higher perch — this video by artist Seán Doran combines 550 NASA images from the International Space Station into a real-time view of the comet from 250 miles above Earth’s surface and 17,500 mph.

LongNowEnormous Dormice Once Roamed Mediterranean Islands

Pleistocene dormouse Leithia melitensis was the size of a house cat. New computer-aided reconstructions show a skull as long as an entire modern dormouse.

It’s a textbook example of “island gigantism,” in which, biologists hypothesize, fewer terrestrial predators and more pressure from predatory birds selects for a much larger body size in some island organisms.

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 10)

Here’s part ten of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3