Planet Russell


Planet Debian: Michael Stapelberg: Optional dependencies don’t work

In the i3 projects, we have always tried hard to avoid optional dependencies. There are a number of reasons behind it, and as I have recently encountered some of the downsides of optional dependencies firsthand, I summarized my thoughts in this article.

What is a (compile-time) optional dependency?

When building software from source, most programming languages and build systems support conditional compilation: different parts of the source code are compiled based on certain conditions.

An optional dependency is conditional compilation hooked up directly to a knob (e.g. command line flag, configuration file, …), with the effect that the software can now be built without an otherwise required dependency.

Let’s walk through a few issues with optional dependencies.

Inconsistent experience in different environments

Software is usually not built by end users, but by packagers, at least when we are talking about Open Source.

Hence, end users don’t see the knob for the optional dependency, they are just presented with the fait accompli: their version of the software behaves differently than other versions of the same software.

Depending on the kind of software, this situation can be made obvious to the user: for example, if the optional dependency is needed to print documents, the program can produce an appropriate error message when the user tries to print a document.

Sometimes, this isn’t possible: when i3 introduced an optional dependency on cairo and pangocairo, the behavior itself (rendering window titles) worked in all configurations, but non-ASCII characters might break depending on whether i3 was compiled with cairo.

For users, it is frustrating to only discover in conversation that a program has a feature that the user is interested in, but it’s not available on their computer. For support, this situation can be hard to detect, and even harder to resolve to the user’s satisfaction.

Packaging is more complicated

Unfortunately, many build systems don’t stop the build when optional dependencies are not present. Instead, you sometimes end up with a broken build, or, even worse: with a successful build that does not work correctly at runtime.

This means that packagers need to closely examine the build output to know which dependencies to make available. In the best case, there is a summary of available and enabled options, clearly outlining what this build will contain. In the worst case, you need to infer the features from the checks that are done, or work your way through the --help output.

The better alternative is to configure your build system so that it stops when any dependency is not found, thereby making packagers acknowledge each optional dependency by explicitly disabling it.
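As a minimal sketch of this idea (assuming a hand-rolled POSIX-shell configure script; the cairo dependency, the --disable-cairo flag, and the pkg-config probe are all illustrative):

```shell
# Sketch: fail the configure step when an optional dependency is missing,
# unless the packager explicitly opts out with --disable-cairo.
check_cairo() {
    want_cairo=yes
    for arg in "$@"; do
        [ "$arg" = "--disable-cairo" ] && want_cairo=no
    done
    if [ "$want_cairo" = yes ] && ! pkg-config --exists cairo 2>/dev/null; then
        echo "error: cairo not found; pass --disable-cairo to build without it" >&2
        return 1
    fi
}

# a packager who passes the flag has explicitly acknowledged the trade-off
check_cairo --disable-cairo && echo "ok: building without cairo"
```

The point is that the missing-dependency case is an error by default; a silent fallback never happens by accident.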

Untested code paths bit rot

Code paths which are not used will inevitably bit rot. If you have optional dependencies, you need to test both the code path without the dependency and the code path with the dependency. It doesn’t matter whether the tests are automated or manual, the test matrix must cover both paths.

Interestingly enough, this principle seems to apply to software projects of all kinds (though the bit rot slows down as the pace of change slows down): one might think that important Open Source building blocks should have enough users to cover all sorts of configurations.

However, consider this example: building cairo without libxrender results in all GTK application windows, menus, etc. being displayed as empty grey surfaces. Cairo does not fail to build without libxrender, but the code path clearly is broken without libxrender.

Can we do without them?

I’m not saying optional dependencies should never be used. In fact, for bootstrapping, disabling dependencies can save a lot of work and can sometimes allow breaking circular dependencies. For example, in an early bootstrapping stage, binutils can be compiled with --disable-nls to disable internationalization.

However, optional dependencies are broken so often that I conclude they are overused. Read on and see for yourself whether you would rather commit to best practices or not introduce an optional dependency.

Best practices

If you do decide to make dependencies optional, please:

  1. Set up automated testing for all code path combinations.
  2. Fail the build until packagers explicitly pass a --disable flag.
  3. Tell users their version is missing a dependency at runtime, e.g. in --version.
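The third practice can be as simple as baking each optional feature’s state into the version string. A minimal sketch (the program name, version, and the HAVE_CAIRO variable are hypothetical; in a real build the flag would be substituted at compile time):

```shell
# Hypothetical example: HAVE_CAIRO would be set by the build system;
# here it defaults to 0 to simulate a build without the optional dependency.
version_string() {
    if [ "${HAVE_CAIRO:-0}" = "1" ]; then
        feats="+cairo"
    else
        feats="-cairo"
    fi
    echo "myprog 4.17 ($feats)"
}

version_string
# → myprog 4.17 (-cairo)
```

With this, support staff can ask a user for their --version output and immediately see which optional features their build is missing.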

Worse Than Failure: Powerful Trouble

[Image: an FSC Primergy TX200 S2 server]

Many years ago, Chris B. worked for a company that made CompactPCI circuit boards. When the spec for hot-swappable boards (i.e., boards that could be added and removed without powering down the system) came out, the company began to make some. Chris became the designated software hot-swap expert.

The company bought several expensive racks with redundant everything for testing purposes. In particular, the racks had three power supply units even though they only required two to run. The idea was that if one power supply were to fail, it could be replaced while the system was still up and running. The company also recommended those same racks to its customers.

As part of a much-lauded business deal, the company's biggest-spending customer set up a lab with many of these racks. A short while later, though, they reported a serious problem: whenever they inserted a board with the power on, it wouldn't come up properly. However, the same board would initialize without issue if it were in place when the system was first started.

Several weeks slipped by as Chris struggled to troubleshoot remotely and/or reproduce the problem locally, all to no avail. The customer, Sales, and upper management all chipped in to provide additional pressure. The deal was in jeopardy. Ben, the customer's panicked Sales representative, finally suggested a week-long business trip in hopes of seeing the problem in situ and saving his commission, er, the company's reputation. And that was how Chris found himself on an airplane with Ben, flying out to the customer site.

Bright and early Monday morning, Chris and Ben arrived at the customer's fancy lab. They met up with their designated contact—an engineer—and asked him to demonstrate the problem.

The engineer powered up an almost empty rack, then inserted a test board. Sure enough, it didn't initialize.

Chris spent a moment looking at the system. What could possibly be different here compared to our setup back home? he asked himself. Then, he spotted something that no one on the customer side had ever mentioned to him previously.

"I see you only have one of the three power supplies for the chassis in place." He pointed to the component in question. "Why is that?"

"Well, they're really loud," the engineer replied.

Chris bit back an unkind word. "Could you humor me and try again with two power supplies in place?"

The engineer hooked up a second power supply obligingly, then repeated the test. This time, the board mounted properly.

"Aha!" Ben looked to Chris with a huge grin on his face.

"So, what was the issue?" the engineer asked.

"I'm not a hardware expert," Chris prefaced, "but as I understand it, the board draws the most power whenever it's first inserted. Your single power supply wasn't sufficient, but with two in there, the board can get what it needs."

It was almost as if the rack had been designed with this power requirement in mind—but Chris kept the sarcasm bottled. He was so happy and relieved to have finally solved the puzzle that he had no room in his mind for ill will.

"You're a miracle-worker!" Ben declared. "This is fantastic!"

In the end, functionality won out over ambiance; the fix proved successful on the customer's other racks as well. Ben was so pleased, he treated Chris to a fancy dinner that evening. The pair spent the rest of the week hanging around the customer's lab, hoping to be of some use before their flight home.


Planet Debian: François Marier: Installing Ubuntu 18.04 using both full-disk encryption and RAID1

I recently set up a desktop computer with two SSDs using software RAID1 and full-disk encryption (i.e. LUKS). Since this is not a supported configuration in the Ubuntu desktop installer, I had to use the server installation medium.

This is my version of these excellent instructions.

Server installer

Start by downloading the alternate server installer and verifying its signature:

  1. Download the required files:

     wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/ubuntu-18.04.2-server-amd64.iso
     wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS
     wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS.gpg
    
  2. Verify the signature on the hash file:

     $ gpg --keyid-format long --keyserver hkps://keyserver.ubuntu.com --recv-keys 0xD94AA3F0EFE21092
     $ gpg --verify SHA256SUMS.gpg SHA256SUMS
     gpg: Signature made Fri Feb 15 08:32:38 2019 PST
     gpg:                using RSA key D94AA3F0EFE21092
     gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>" [undefined]
     gpg: WARNING: This key is not certified with a trusted signature!
     gpg:          There is no indication that the signature belongs to the owner.
     Primary key fingerprint: 8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092
    
  3. Verify the hash of the ISO file:

     $ sha256sum ubuntu-18.04.2-server-amd64.iso 
     a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5  ubuntu-18.04.2-server-amd64.iso
     $ grep ubuntu-18.04.2-server-amd64.iso SHA256SUMS
     a2cb36dc010d98ad9253ea5ad5a07fd6b409e3412c48f1860536970b073c98f5 *ubuntu-18.04.2-server-amd64.iso
    

Then copy it to a USB drive:

dd if=ubuntu-18.04.2-server-amd64.iso of=/dev/sdX

and boot with it.

Inside the installer, use manual partitioning to:

  1. Configure the physical partitions first.
  2. Configure the RAID arrays second.
  3. Configure the encrypted partitions last.

Here's the exact configuration I used:

  • /dev/sda1 is 512 MB and used as the EFI partition
  • /dev/sdb1 is 512 MB but not used for anything
  • /dev/sda2 and /dev/sdb2 are both 4 GB (RAID)
  • /dev/sda3 and /dev/sdb3 are both 512 MB (RAID)
  • /dev/sda4 and /dev/sdb4 use up the rest of the disk (RAID)

I only set /dev/sda1 as the EFI partition because I found that adding a second EFI partition would break the installer.

I created the following RAID1 arrays:

  • /dev/sda2 and /dev/sdb2 for /dev/md2
  • /dev/sda3 and /dev/sdb3 for /dev/md0
  • /dev/sda4 and /dev/sdb4 for /dev/md1

I used /dev/md0 as my unencrypted /boot partition.

Then I created the following LUKS partitions:

  • md1_crypt as the / partition using /dev/md1
  • md2_crypt as the swap partition (4 GB) with a random encryption key using /dev/md2

Post-installation configuration

Once your new system is up, sync the EFI partitions using dd:

dd if=/dev/sda1 of=/dev/sdb1

and create a second EFI boot entry:

efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'

Ensure that the RAID drives are fully synced by keeping an eye on /proc/mdstat and then reboot, selecting "ubuntu2" in the UEFI/BIOS menu.
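For reference, a sync in progress shows up in /proc/mdstat roughly like this (a sample excerpt with illustrative device names and numbers); the arrays are safe once the progress lines are gone and each array shows [UU]:

```shell
# Sample /proc/mdstat excerpt during a resync (illustrative values):
mdstat='md1 : active raid1 sdb4[1] sda4[0]
      486279168 blocks super 1.2 [2/2] [UU]
      [==>..................]  resync = 12.6% (61500000/486279168) finish=35.1min speed=201600K/sec'

# a crude check usable in a wait loop: any resync/recovery line means "not done"
printf '%s\n' "$mdstat" | grep -Eq 'resync|recover' && echo "still syncing"
```

On a live system you would of course read the real file, e.g. with `watch cat /proc/mdstat`.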

Once you have rebooted, remove the following package to speed up future boots:

apt purge btrfs-progs

To switch to the desktop variant of Ubuntu, install these meta-packages:

apt install ubuntu-desktop gnome

then use debfoster to remove unnecessary packages (in particular the ones that only come with the default Ubuntu server installation).

Fixing booting with degraded RAID arrays

Since I have run into RAID startup problems in the past, I expected having to fix up a few things to make degraded RAID arrays boot correctly.

I did not use LVM since I didn't really feel the need to add yet another layer of abstraction on top of my setup, but I found that the lvm2 package must still be installed:

apt install lvm2

with use_lvmetad = 0 in /etc/lvm/lvm.conf.

Then in order to automatically bring up the RAID arrays with 1 out of 2 drives, I added the following script in /etc/initramfs-tools/scripts/local-top/cryptraid:

 #!/bin/sh
 PREREQ="mdadm"
 prereqs()
 {
     echo "$PREREQ"
 }
 case $1 in
 prereqs)
     prereqs
     exit 0
     ;;
 esac

 mdadm --run /dev/md0
 mdadm --run /dev/md1
 mdadm --run /dev/md2

before making that script executable:

chmod +x /etc/initramfs-tools/scripts/local-top/cryptraid

and refreshing the initramfs:

update-initramfs -u -k all

Disable suspend-to-disk

Since I use a random encryption key for the swap partition (to avoid a second password prompt at boot time), suspend-to-disk cannot work, so I disabled it by putting the following in /etc/initramfs-tools/conf.d/resume:

RESUME=none

and by adding noresume to the GRUB_CMDLINE_LINUX variable in /etc/default/grub before applying these changes:

update-grub
update-initramfs -u -k all

Test your configuration

With all of this in place, you should be able to do a final test of your setup:

  1. Shut down the computer and unplug the second drive.
  2. Boot with only the first drive.
  3. Shut down the computer and plug the second drive back in.
  4. Boot with both drives and re-add the second drive to the RAID array:

     mdadm /dev/md0 -a /dev/sdb3
     mdadm /dev/md1 -a /dev/sdb4
     mdadm /dev/md2 -a /dev/sdb2
    
  5. Wait until the RAID is done re-syncing and shut down the computer.

  6. Repeat steps 2-5 with the first drive unplugged instead of the second.
  7. Reboot with both drives plugged in.

At this point, you have a working setup that will gracefully degrade to a one-drive RAID array should one of your drives fail.


Planet Debian: Charles Plessy: Register your media types to the IANA!

As the maintainer of the mime-support package in Debian, I would like to give kudos to Petter Reinholdtsen, who just opened a ticket at the IANA to create a text/vnd.sosi media type. May his example be followed by others!

Planet Debian: Enrico Zini: Identity links

Krebs on Security: Legal Threats Make Powerful Phishing Lures

Some of the most convincing email phishing and malware attacks come disguised as nastygrams from a law firm. Such scams typically notify recipients that they are being sued, and instruct them to review the attached file and respond within a few days — or else. Here’s a look at a recent spam campaign that peppered more than 100,000 business email addresses with fake legal threats harboring malware.

On or around May 12, at least two antivirus firms began detecting booby-trapped Microsoft Word files that were sent along with some variation of the following message:

{Pullman & Assoc. | Wiseman & Assoc.| Steinburg & Assoc. | Swartz & Assoc. | Quartermain & Assoc.}

Hi,

The following {e-mail | mail} is to advise you that you are being charged by the city.

Our {legal team | legal council | legal departement} has prepared a document explaining the {litigation | legal dispute | legal contset}.

Please download and read the attached encrypted document carefully.

You have 7 days to reply to this e-mail or we will be forced to step forward with this action.

Note: The password for the document is 123456

The template above was part of a phishing kit being traded on the underground, and the user of this kit decides which of the options in brackets actually get used in the phishing message.

Yes, the spelling/grammar is poor and awkward (e.g., the salutation), but so is the overall antivirus detection rate of the attached malicious Word document. This phishing kit included five booby-trapped Microsoft Word documents to choose from, and none of those files are detected as malicious by more than three of the five dozen or so antivirus products that scanned the Word docs on May 22 — 10 days after they were spammed out.

According to both Fortinet and Sophos, the attached Word documents include a trojan that is typically used to drop additional malware on the victim’s computer. Previous detections of this trojan have been associated with ransomware, but the attackers in this case can use the trojan to install malware of their choice.

Also part of the phishing kit was a text document containing some 100,000 business email addresses — most of them ending in Canadian (.ca) domains — although there were also some targets at companies in the northeastern United States. If only a tiny fraction of the recipients of this scam were unwary enough to open the attachment, it would still be a nice payday for the phishers.

The law firm domain spoofed in this scam — wpslaw.com — now redirects to the Web site for RWC LLC, a legitimate firm based in Connecticut. A woman who answered the phone at RWC said someone had recently called to complain about a phishing scam, but beyond that the firm didn’t have any knowledge of the matter.

As phishing kits go, this one is pretty basic and not terribly customized or convincing. But I could see a kit that tried only slightly harder to get the grammar right and more formally address the recipient doing quite well: Legitimate-looking legal threats have a way of making some people act before they think.

Don’t be like those people. Never open attachments in emails you were not expecting. When in doubt, toss it out. If you’re worried it may be legitimate, research the purported sender(s) and reach out to them over the phone if need be. And resist the urge to respond to these spammers; doing so may only serve to encourage further “mailious” correspondence.

KrebsOnSecurity would like to thank Hold Security for a heads up on this phishing kit.

Cryptogram: Visiting the NSA

Yesterday, I visited the NSA. It was Cyber Command's birthday, but that's not why I was there. I visited as part of the Berklett Cybersecurity Project, run out of the Berkman Klein Center and funded by the Hewlett Foundation. (BERKman hewLETT -- get it? We have a web page, but it's badly out of date.)

It was a full day of meetings, all unclassified but under the Chatham House Rule. Gen. Nakasone welcomed us and took questions at the start. Various senior officials spoke with us on a variety of topics, but mostly focused on three areas:

  • Russian influence operations, both what the NSA and US Cyber Command did during the 2018 election and what they can do in the future;

  • China and the threats to critical infrastructure from untrusted computer hardware, both the 5G network and more broadly;

  • Machine learning, both how to ensure a ML system is compliant with all laws, and how ML can help with other compliance tasks.

It was all interesting. Those first two topics are ones that I am thinking and writing about, and it was good to hear their perspective. I find that I am much more closely aligned with the NSA about cybersecurity than I am about privacy, which made the meeting much less fraught than it would have been if we were discussing Section 702 of the FISA Amendments Act, Section 215 of the USA Freedom Act (up for renewal next year), or any 4th Amendment violations. I don't think we're past those issues by any means, but they make up less of what I am working on.

Planet Debian: Jonathan Wiltshire: RC candidate of the day (2)

Sometimes the list of release-critical bugs is overwhelming, and it’s hard to find something to tackle.

Today’s invitation is to review and perhaps upload the patch included in bug #928883.

Planet Debian: Thomas Goirand: Wrote a Debian mirror setup puppet module in 3 hours

As I needed the functionality, I wrote this:

https://salsa.debian.org/openstack-team/puppet/puppet-module-debian-archvsync

The matching Debian package has been uploaded and is now in the NEW queue. Thanks a lot to Waldi for packaging ftpsync, which I’m using.

Comments and contributions are welcome.

Cryptogram: Fingerprinting iPhones

This clever attack allows someone to uniquely identify a phone when you visit a website, based on data from the accelerometer, gyroscope, and magnetometer sensors.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Overall, our attack has the following advantages:

  • The attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you.
  • The attack takes less than one second to generate a fingerprint.
  • The attack can generate a globally unique fingerprint for iOS devices.
  • The calibration fingerprint never changes, even after a factory reset.
  • The attack provides an effective means to track you as you browse across the web and move between apps on your phone.

* Following our disclosure, Apple has patched this vulnerability in iOS 12.2.

Research paper.

Planet Debian: Petter Reinholdtsen: Nikita version 0.4 released - free software archive API server

This morning, a new release of Nikita Noark 5 core project was announced on the project mailing list. The Nikita free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.4 since version 0.3, see the email link above for links to a demo site:

  • Roll out OData handling to all endpoints where applicable
  • Changed the relation key for "ny-journalpost" to the official one.
  • Better link generation on outgoing links.
  • Tidy up code and make code and approaches more consistent throughout the codebase
  • Update rels to be in compliance with updated version in the interface standard
  • Avoid printing links on empty objects as they can't have links
  • Small bug fixes and improvements
  • Start moving generation of outgoing links to @Service layer so access control can be used when generating links
  • Log exception that was being swallowed so it's traceable
  • Fix name mapping problem
  • Update templated printing so templated should only be printed if it is set true. Requires more work to roll out across entire application.
  • Remove Record->DocumentObject as per domain model of n5v4
  • Add ability to delete lists filtered with OData
  • Return NO_CONTENT (204) on delete as per interface standard
  • Introduce support for ConstraintViolationException exception
  • Make Service classes extend NoarkService
  • Make code base respect X-Forwarded-Host, X-Forwarded-Proto and X-Forwarded-Port
  • Update CorrespondencePart* code to be more in line with Single Responsibility Principle
  • Make package name follow directory structure
  • Make sure Document number starts at 1, not 0
  • Fix issues discovered by FindBugs
  • Update from Date to ZonedDateTime
  • Fix wrong tablename
  • Introduce Service layer tests
  • Improvements to CorrespondencePart
  • Continued work on Class / Classificationsystem
  • Fix feature where authors were stored as storageLocations
  • Update HQL builder for OData
  • Update OData search capability from webpage

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Worse Than Failure: CodeSOD: Do Fiasco

Consuela works with a senior developer who has been with the company since its founding, has the CEO’s ear, and basically can do anything they want.

These days, what they want to do is code entirely locally on their machine, hand the .NET DLL off to Consuela for deployment, and then complain that their fancy code is being starved for hardware resources.

Recently, they started to complain that the webserver was using 100% of the CPU resources, so obviously the solution was to upgrade the webserver. Consuela popped open ILSpy and decompiled the DLL. For those unfamiliar with .NET, its binaries are supremely decompilable: source code “compiles” into an Intermediate Language (IL) which is JIT-compiled at runtime.

The code, now with symbols reinserted, looked like this:

private static void startLuck()
{
    luck._timer = new Timer(30000.0);
    luck._timer.Elapsed += delegate
    {
        try
        {
            luck.doFiasco().ConfigureAwait(true);
        }
        catch (Exception)
        {
        }
    };
    luck._timer.Enabled = true;
    luck._timer.Start();
    Console.WriteLine("fabricating...");
    Console.WriteLine(DateTime.UtcNow);
    while (true)
    {
        bool flag = true;
    }
}

The .NET Timer class invokes its Elapsed delegate every interval; in this case, it will invoke the delegate block every 30,000 milliseconds. This is not an uncommon way to launch a thread which periodically does something, but the way in which they ensured that this new thread never died is… interesting.

The while (true) loop not only pegs the CPU, but it also ensures that calls to startLuck are blocking calls, which more or less defeats the purpose of using a Timer in the first place.

Consuela pointed out the reason this code was using 100% of the CPU. First, the senior developer demanded to know how she had gotten the code, as it was only stored on their machine. After explaining decompilation, the developer submitted a new DLL, this time run through an obfuscator before handing it off. Even with obfuscation, it was easy to spot the while (true) loop in the IL.


Planet Debian: David Bremner: Dear UNB: please leave my email alone.

1 Background

Apparently motivated by recent phishing attacks against @unb.ca addresses, UNB's Integrated Technology Services unit (ITS) recently started adding banners to the body of email messages. Despite (cough) several requests, they have been unable and/or unwilling to let people opt out of this. Recently ITS has reduced the size of the banner; this does not change the substance of what is discussed here. In this blog post I'll try to document some of the reasons this practice reduces the utility of my UNB email account.

2 What do I know about email?

I have been using email since 1985 1. I have administered my own UNIX-like systems since the mid 1990s. I am a Debian Developer 2. Debian is a mid-sized organization (there are more Debian Developers than UNB faculty members) that functions mainly via email (including discussions and a bug tracker). I maintain a mail user agent (informally, an email client) called notmuch 3. I administer my own (non-UNB) email server. I have spent many hours reading RFCs 4. In summary, my perspective might be different from that of an enterprise email administrator, but I do know something about the nuts and bolts of how email works.

3 What's wrong with a helpful message?

3.1 It's a banner ad.

I don't browse the web without an ad-blocker and I don't watch TV with advertising in it. Apparently the main source of advertising in my life is a service provided by my employer. Some readers will probably dispute my description of a warning label inserted by an email provider as "advertising". Note that it is information inserted by a third party to promote their own (well-intentioned) agenda, and inserted in an intentionally attention-grabbing way. Advertisements from charities are still advertisements. Preventing phishing attacks is important, but so are an almost countless number of priorities of other units of the University. For better or worse, those units are not so far able to insert messages into my email. As a thought experiment, imagine inserting a banner into every PDF file stored on UNB servers reminding people of the fiscal year end.

3.2 It makes us look unprofessional.

Because the banner is contained in the body of email messages, it almost inevitably ends up in replies. This lets funding agencies, industrial partners, and potential graduate students know that we consider them potentially hostile entities. Suggesting that people should edit their replies is not really an acceptable answer, since it offloads the work of maintaining the previous level of functionality onto each user of the system.

3.3 It doesn't help me

I have an archive of 61270 email messages received since 2003. Of these 26215 claim to be from a unb.ca address 5. So historically about 42% of the mail to arrive at my UNB mailbox is internal 6. This means that warnings will occur in the majority of messages I receive. I think the onus is on the proposer to show that a warning that occurs in the large majority of messages will have any useful effect.

3.4 It disrupts my collaboration with open-source projects

Part of my job is to collaborate with various open source projects. A prominent example is Eclipse OMR 7, the technological driver for a collaboration with IBM that has brought millions of dollars of graduate student funding to UNB. Git is now the dominant version control system for open source projects, and one popular way of using git is via git-send-email 8

Adding a banner breaks the delivery of patches by this method. In a previous experiment I did about a month ago, it "only" caused the banner to end up in the git commit message. Those of you familiar with software development will know that this is roughly the equivalent of walking out of the bathroom with toilet paper stuck to your shoe. You'd rather avoid it, but it's not fatal. The current implementation breaks things completely by quoted-printable re-encoding the message. In particular '=' gets transformed to '=3D', as in the following

-+    gunichar *decoded=g_utf8_to_ucs4_fast (utf8_str, -1, NULL);
-+    const gunichar *p = decoded;
++    gunichar *decoded=3Dg_utf8_to_ucs4_fast (utf8_str, -1, NULL);

I'm not currently sure if this is a bug in git or some kind of failure in the re-encoding. It would likely require an investment of several hours of time to localize that.
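The '=' to '=3D' rewrite is standard quoted-printable escaping (RFC 2045), which is easy to reproduce with Python's quopri module from the standard library:

```shell
# Reproduce the re-encoding: quoted-printable escapes every literal '=' as '=3D'
printf 'const gunichar *p = decoded;' \
  | python3 -c 'import sys, quopri; sys.stdout.buffer.write(quopri.encodestring(sys.stdin.buffer.read()))'
# the '=' in the assignment comes out as '=3D'
```

Whatever inserts the banner is evidently re-encoding the whole message body this way, which is why source code passing through it no longer applies cleanly.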

3.5 It interferes with the use of cryptography.

Unlike many people, I don't generally read my email on a phone. This means that I don't rely on the previews that are apparently disrupted by the presence of a warning banner. On the other hand I do send and receive OpenPGP signed and encrypted messages. The effects of the banner on signed and encrypted messages are similar, so I'll stick to discussing signed messages here. There are two main ways of signing a message. The older method, still unfortunately required in some situations, is called "inline PGP". The signed region is re-encoded, which causes gpg to issue a warning about a buggy MTA 9, namely gpg: quoted printable character in armor - probably a buggy MTA has been used. This is not exactly confidence inspiring. The more robust and modern standard is PGP/MIME. Here the insertion of a banner does not provoke warnings from the cryptography software, but it does make it much harder to read the message (before and after screenshots are given below). Perhaps more importantly it changes the message from one which is entirely signed or encrypted 10, to one which is partially signed or encrypted. Such messages were instrumental in the EFAIL exploit 11 and will probably soon be rejected by modern email clients.


Figure 1: Intended view of PGP/MIME signed message


Figure 2: View with added banner

Footnotes:

1. On Multics, when I was a high school student.

4. IETF Requests for Comments, which define most of the standards used by email systems.

5. Possibly overcounting some spam as UNB-originating email.

6. In case it's not obvious, dear reader: communicating with the world outside UNB is part of my job.

8. Some important projects function exclusively that way. See https://git-send-email.io/ for more information.

9. Mail Transfer Agent.

Author: David Bremner

Created: 2019-05-22 Wed 17:04


,

Planet Debian: Enrico Zini: Privilege links

Planet Debian: Molly de Blanc: remuneration

I am a leader in free software. As evidence for this claim, I like to point out that I once finagled an invitation to the Google OSCON luminaries dinner, and was once invited to a Facebook party for open source luminaries.

In spite of my humor, I am a leader and have taken on leadership roles for a number of years. I was in charge of guests of honor (and then some) at Penguicon for several years at the start of my involvement in FOSS. I’m a delegate on the Debian Outreach team. My participation in Debian A-H is a leadership role as well. I’m president of the OSI Board of Directors. I’ve given keynote presentations on two continents, and talks on four. And that’s not even getting into my paid professional life. My compensated labor has been nearly exclusively for nonprofits.

Listing my credentials in such concentration feels a bit distasteful, but sometimes I think it’s important. Right now, I want to convey that I know a thing or two about free/open source leadership. I’ve even given talks on that.

Other than my full-time job, my leadership positions come without material remuneration — that is to say I don’t get paid for any of them — though I’ve accepted many a free meal and have had travel compensated on a number of occasions. I am not interested in getting paid for my leadership work, though I have come to believe that more leadership positions should be paid.

One of my criticisms of unpaid project/org leadership positions is that they are so time-consuming that the only people who can do the jobs are:

  • students
  • contractors
  • unemployed
  • those with few to no other responsibilities
  • those with very supportive partners
  • those with very supportive employers
  • those who don’t need much sleep
  • those with other forms of financial privilege

I have few responsibilities beyond some finicky plants and Bash (my cat). I also have extremely helpful roommates and modern technology (e.g. automatic feeders) that assist with these things while traveling. I can spend my evenings and weekends holed up in my office plugging away on my free software work. I have a lot of freedom and flexibility — economic, social, professional — that affords me this opportunity. Very few of us do.

This is a problem! One solution is to pay more leadership positions; another is to have these projects hire someone in an executive director-like capacity and turn their leadership roles into advisory roles; or to replace the positions with committees (the problem with the latter is that most committees still have/need a leader).

Diversity is good.

The time requirements for leadership roles severely limit the pool of potential participants. This limits the perspectives and experiences brought to the positions — and diversity in experience is widely considered to be good. People from underrepresented backgrounds generally overlap with marginalized communities — including ethnic, geographic, gender, race, and socio-economic minorities.

Volunteer work is not “more pure.”

One of the arguments for not paying people for these positions is that their motives will be more pure if they are doing it as a volunteer — because they aren’t “in it for the money.” I would argue that your motives can be less pure if you aren’t being paid for your labor.

In mission-driven nonprofits, you want as much of your funding as possible to come from individual or community donors rather than corporate sponsors. You want the number of individual and community donors and members to be greater than that of your sponsors. You want to ensure you have enough money that should a corporate sponsor drop you (or you drop them), you are still in a sustainable position. You want to do this so that you are not beholden to any of your corporate or government sponsors. Freeing yourself from corporate influence allows you to focus on the mission of your work.

When searching for a volunteer leader, you need to look at them as a mission-driven nonprofit. Ask: What are their conflicts of interest? What happens if their employers pull away their support? What sort of financial threats are they susceptible to?

In a capitalist system, when someone is being paid for their labor, they are able to prioritize that labor. Adequate compensation enables a person to invest more fully in their work. When your responsibilities as the leader of a free software project, for which you are unpaid, come into direct conflict with the interests of your employer, who is going to win?

Note, however, that it’s important to make sure the funding to pay your leadership does not come with strings attached so that your work isn’t contingent upon any particular sponsor or set of sponsors getting what they want.

It’s a lot of work. Like, a lot of work.

By turning a leadership role into a job (even a part-time one), the associated labor can be prioritized over other labor. Many volunteer leadership positions require the same commitment as a part-time job, and some can be close to if not actually full-time jobs.

Someone’s full-time employer needs to be supportive of their volunteer leadership activities. I have some flexibility in the schedule for my day job, so I can plan meetings at times that work for people who are at their own day jobs or in different time zones. Not everyone has this flexibility when they have a full-time job that isn’t their leadership role. Many people in leadership roles — I know past presidents of the OSI and previous Debian Project Leaders who will attest to this — are only able to do so because their employer allows them to shift their work schedule in order to do their volunteer work. Even when you’re “just” attending meetings, you’re doing so either with your employer giving you the time off, or by using your PTO.

A few final thoughts.

Many of us live in capitalist societies. One of the ways you show respect for someone’s labor is by paying them for it. This isn’t to say I think all FOSS contributions should be paid (though some argue they ought to be!), but that certain things require levels of dedication that go significantly above and beyond that which is reasonable. Our free software leaders are incredible, and we need to change how we recognize that.

(Please note that I don’t feel as though I should be paid for any of my leadership roles and, in fact, have reasons why I believe they should be unpaid.)

Planet Debian: Jonathan Wiltshire: RC candidate of the day (1)

Sometimes the list of release-critical bugs is overwhelming, and it’s hard to find something to tackle.

So I invite you to have a go at #928040, which may only be a case of reviewing and uploading the included patch.

TED: A first glimpse at the TEDSummit 2019 speaker lineup

At TEDSummit 2019, more than 1,000 members of the TED community will gather for five days of performances, workshops, brainstorming, outdoor activities, future-focused discussions and, of course, an eclectic program of TED Talks — curated by TED Global curator Bruno Giussani, pictured above. (Photo: Marla Aufmuth / TED)

With TEDSummit 2019 just two months away, it’s time to unveil the first group of speakers that will take to the stage in Edinburgh, Scotland, from July 21-25.

Three years ago, more than 1,000 members of the TED global community convened in Banff, Canada, for the first-ever TEDSummit. We talked about the fracturing state of the world, the impact of technology and the accelerating urgency of climate change. And we drew wisdom and inspiration from the speakers — and from each other.

These themes are equally pressing today, and we’ll bring them to the stage in novel, more developed ways in Edinburgh. We’ll also address a wide range of additional topics that demand attention — looking not only for analysis but also antidotes and solutions. To catalyze this process, half of the TEDSummit conference program will take place outside the theatre, as experts host an array of Discovery Sessions in the form of hands-on workshops, activities, debates and conversations.

Check out a glimpse of the lineup of speakers who will share their future-focused ideas below. Some are past TED speakers returning to give new talks; others will step onto the red circle for the first time. All will help us understand the world we currently live in.

Here we go! (More will be added in the coming weeks):

Amanda Levete, architect

Anna Piperal, digital country expert

Bob Langert, corporate changemaker

Carl Honoré, author

Carole Cadwalladr, investigative journalist

Diego Prilusky, immersive media technologist

Eli Pariser, organizer and author

Fay Bound Alberti, historian

George Monbiot, thinker and author

Hajer Sharief, youth inclusion activist

Howard Taylor, child safety advocate

Jochen Wegner, editor and dialogue creator

Kelly Wanser, geoengineering expert

Laura Safer Espinoza, workers’ rights advocate

Ma Yansong, architect

Marco Tempest, technology magician

Margaret Heffernan, business thinker

María Neira, global public health official

Mariana Lin, AI personalities writer

Mariana Mazzucato, economist

Marwa Al-Sabouni, architect

Nick Hanauer, capitalism redesigner

Nicola Jones, science writer

Nicola Sturgeon, First Minister of Scotland

Omid Djalili, comedian

Patrick Chappatte, editorial cartoonist

Pico Iyer, global author

Poet Ali, musician

Rachel Kleinfeld, violence scholar

Raghuram Rajan, former central banker

Rose Mutiso, energy for Africa activist

Sandeep Jauhar, cardiologist

Sara-Jane Dunn, computational biologist

Sheperd Doeleman, black hole scientist

Sonia Livingstone, social psychologist

Susan Cain, quiet revolutionary

Tim Flannery, carbon-negative tech scholar

Tshering Tobgay, former Prime Minister of Bhutan


With them, a number of artists will also join us at TEDSummit, including:

Djazia Satour, singer

ELEW, pianist and DJ

KT Tunstall, singer and songwriter

Min Kym, virtuoso violinist

Radio Science Orchestra, space-music orchestra

Yilian Cañizares, singer and songwriter


Registration for TEDSummit is open for active members of our various communities: TED conference members, Fellows, past TED speakers, TEDx organizers, Educators, Partners, Translators and more. If you’re part of one of these communities and would like to attend, please visit the TEDSummit website.

Planet Debian: Enrico Zini: Self-care links

Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 204 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 4 hours (out of 14 hours allocated, thus carrying over 10 hours to May).
  • Adrian Bunk did 8 hours (out of 8 hours allocated).
  • Ben Hutchings did 31.25 hours (out of 17.25 hours allocated plus 14 extra hours from March).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 17 hours (out of 17.25 hours allocated, thus carrying over 0.25h to May).
  • Emilio Pozuelo Monfort did 8 hours (out of 17.25 hours allocated + 6 extra hours from March, thus carrying over 15.25h to May).
  • Hugo Lefeuvre did 17.25 hours.
  • Jonas Meurer did 14 hours (out of 14 hours allocated).
  • Markus Koschany did 17.25 hours.
  • Mike Gabriel did 11.5 hours (out of 17.25 hours allocated, thus carrying over 5.75h to May).
  • Ola Lundqvist did 5.5 hours (out of 8 hours allocated + 1.5 extra hours from last month, thus carrying over 4h to May).
  • Roberto C. Sanchez did 1.75 hours (out of 12 hours allocated, thus carrying over 10.25h to May).
  • Sylvain Beucler did 17.25 hours.
  • Thorsten Alteholz did 17.25 hours.

Evolution of the situation

This month, after a two-year break, Jonas Meurer once again became an active LTS contributor. Still, we continue to look for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The number of sponsors did not change. There are 58 organizations sponsoring 215 work hours per month.

The security tracker currently lists 33 packages with a known CVE and the dla-needed.txt file has 31 packages needing an update.

Thanks to our sponsors



Cryptogram: How Technology and Politics Are Changing Spycraft

Interesting article about how traditional nation-based spycraft is changing. Basically, the Internet makes it increasingly possible to generate a good cover story; cell phone and other electronic surveillance techniques make tracking people easier; and machine learning will make all of this automatic. Meanwhile, Western countries have new laws and norms that put them at a disadvantage relative to other countries. And finally, much of this has gone corporate.

Long Now: A History of Land Art in the American West, Part III

Source: The Center for Land Use Interpretation.

As installation begins at the Texas site for Long Now’s monumental 10,000 Year Clock, it’s worth taking a step back to examine the Clock’s larger artistic context and its place in the history of Land Art in the American West.

Long Now’s staff and many of the individuals working on the project and serving on our board have drawn inspiration for the 10,000 Year Clock, and its placement in the remote landscape of West Texas, from these Land Artists and their great works. 

In Part I of our exploration of Land Art in the American West, we covered the birth of the movement in the 01960s and some of the seminal works created by Robert Smithson, Michael Heizer, Nancy Holt and James Turrell, which expanded the definition of art and opened up new possibilities for the location of artworks. Drawn to the desert for its long vistas, compelling terrain, beautiful light and dark night skies, these artists pushed through the boundaries of art in their day to create monumental works that explored the expansiveness of earth and time.

In Part II of our series, we moved out of the 01960s to explore the work of 3 artists who created their major works during the 01970s and 01980s. We see a shift with these artists to a focus on complete control over the exhibition of their work and meticulous curation of the viewer’s experience coupled with a goal of permanence of the artwork in situ. Marfa, Texas is only about 80 miles from our Clock Site, making Donald Judd’s work there especially relevant to us.

In Part III, we survey the contemporary practice around Land Art, which—though no longer a contained art movement—is still inspired and informed by the work of the early Land Artists. We touch on the activities of the Nevada Museum of Art’s Center for Art + Environment, with exhibits, archives and a major conference; the Land Arts of the American West field program, covering 8,000 miles and multiple sites over 2 months; and the Center for Land Use Interpretation, with their comprehensive database of sites and mobile exhibits. We also explore some of Steve Rowell’s work, which exemplifies some of the new strategies and directions artists have taken to inform and reveal our relationship with the landscape. We close Part III with a list of resources for further exploration of Land Art in the American West.


Black Rock Desert. Source: Nevada Film Office.

The Nevada Museum of Art is located in Reno, at the edge of the arid Great Basin. The Truckee River flows by the museum and turns north towards Black Rock Desert, emptying its waters into Pyramid Lake, the remnant of a vast Pleistocene lake. The dusty playas, naked geology and open horizons of the region continue to attract artists who seek to engage with landscape. This setting and the museum’s history help to explain why the museum has become such an important institution for Land Art.

The oldest cultural institution in the state of Nevada, the Nevada Museum of Art was founded in 1931 as the Nevada Art Gallery by Dr. James E. Church, a Professor of German and Classics at the University of Nevada, Reno. Church was the first on record to summit 10,776-foot Mount Rose and build a snow survey station on the mountain; he was intricately connected to the area’s natural resources and an early example of the Museum’s interest in art and environment. — Nevada Museum website

Sierra Nevada: An Adaptation is a fifty-year project and conceptual map. Source: Nevada Museum of Art.

One year after the first Art + Environment Conference in 02008, the museum established the Center for Art + Environment, “an internationally recognized research center that supports the practice, study and awareness of creative interactions between people and their natural, built, and virtual environments.” With the establishment of the Center came a permanent gallery space, a research library and an archive, all dedicated to displaying and encouraging art that engages our environment.

The Center for Art + Environment’s list of exhibitions illustrates how the scope of an area of art — which we can no longer truly call a “Land Art movement” — has transformed and expanded since Robert Smithson and Michael Heizer first roamed the West. Many of the featured artists are directly addressing environmental issues such as climate change and resource scarcity. The Canary Project has photographed locations where scientists are studying the effects of climate change, creating a collection of images and installations entitled “A History of the Future.” In Sierra Nevada: An Adaptation, Helen Mayer Harrison and Newton Harrison presented a conceptual map that proposed “a series of long-term ecological responses to recorded temperature increases in the Sierra Nevada.” The map is just the first manifestation of a fifty-year project.

In addition to this explicit environmental activism and scientific collaboration, many of these modern projects have more to do with engaging and re-interpreting landscape and our relationship to it, and less to do with the large-scale earthen sculptures of early Land Art. The Center for Art + Environment’s first Artists|Writers|Environments Grant went to Amy Franceschini and Michael Taussig for This is Not a Trojan Horse, which consisted of a mobile, human-powered horse sculpture that traveled the Italian countryside investigating why working farmers still practice their traditional vocation:

The large-scale, mobile architecture and interactive sculpture collected traces of rural practices: seeds, tools, interviews, recipes and products to enliven the imaginations of farmers and locals through discourse and artistic production. The project was designed as a vehicle for social and material exchange at a pivotal moment in the Abruzzo region, when modes of traditional agricultural production are being challenged by large-scale corporate farming trends as well as new sustainable directions. — Nevada Museum website

“This is Not a Trojan Horse” by Amy Franceschini. Source: Nevada Museum of Art.

The 02011 Art + Environment Conference included presentations by many of the artists behind these exhibitions, as well as photographers, musicians, authors, the director of the Chinati Foundation in Marfa, science fiction writer Bruce Sterling, and The Long Now Foundation’s Executive Director Alexander Rose.

One Art + Environment project aimed to reflect on the terrestrial earth in its entirety. On December 3rd, 02018, Trevor Paglen’s Orbital Reflector launched into low orbit as part of the payload on SpaceX’s Falcon 9 rocket. The 100-foot-long, diamond-shaped mylar balloon was intended to be the world’s first space sculpture. It would be visible to the naked eye, appearing as a slowly-moving star in the sky. Paglen saw the project as a “catalyst” for asking what it means to be on this planet. Unfortunately, due to satellite tracking issues and the U.S. government shutdown, the work of art went unrealized.

Sketch of Trevor Paglen’s Orbital Reflector. Source: Trevor Paglen.

In addition to the exhibition space, the Center for Art + Environment’s library and archive are available to the public. The library serves as an art history resource for the community and museum staff, focused on works pertaining to the natural world and the environment. In his introduction to the 02011 Conference’s ‘Field Guide,’ the Center’s Director William Fox wrote specifically about the reason behind their archive:

Why does the Center for Art + Environment collect archives? It has been estimated that up to 97% of the world’s art is destroyed within one hundred years of its making. We’re not familiar with the best classical Greek statues because most of them are destroyed, buried, or underwater. We’ve lost a century of Dutch painting due to war, and countless Asian artworks are gone because of dynastic upheavals and tragic looting. Archive collections offer a momentary stay against decay and loss, but are an important opportunity for researchers to learn from the past — even as the present accelerates away from it. — 02011 A+E Conference Field Guide

Michael Heizer’s “City” from above. Photo by Paul Saffo.

The archive includes a collection of materials relating to the early land artists Michael Heizer and Walter De Maria. In fact, one of the Center’s early exhibitions was focused on this collection, indicating the enduring importance and relevance of the artists who created Double Negative and The Lightning Field. One of Heizer’s concepts from the late 01960s came to fruition in 02012 when a 340-ton boulder was placed above a trench in front of the Los Angeles County Museum of Art for his piece Levitated Mass. Heizer’s work-in-progress City is also located in Nevada, where it has been under construction since the early 01970s. Long Now board member Paul Saffo recently spotted City during a flight to San Francisco and took the picture, at left, from the airplane.

The archive has also acquired materials from the Land Arts of the American West field program, an educational program in which participants travel through the American West exploring the landscape and the ways in which humans have interacted with it. The Center also supports a number of research fellows, including past Interval speaker Jonathon Keats.


Land Arts of the American West was started by Bill Gilbert at the University of New Mexico in 02000, and since 02002 has also been operated by Chris Taylor, first at the University of Texas at Austin and more recently at Texas Tech University. Each year they pack up the vans and take a group of students on a journey of more than 8,000 miles through the American West, visiting sites that range from canyons and mines to seminal Land Art sculptures and Native American ruins. In the introduction to the book that Taylor and Gilbert published about the program in 02009, they describe the project as “a field program designed to explore the large array of human responses to a specific landscape over an extended period of time.”¹ 

Kite photography with David Gregor, Cabinetlandia, east of Deming, New Mexico. Source: Land Arts of the American West.

While the Land Arts program has an indisputably educational function, it is also a work of art in and of itself. Participants engage with the landscape as they travel through it by producing site-specific and ephemeral artwork (in keeping with the program’s no-trace ethic), and follow up with gallery exhibitions once they return from the field. Photographs of many of these works can be seen on the Land Arts website. The practice of displaying artwork that is associated with site-specific land art in an urban gallery is one that began in early Land Art to address the problem of reaching an audience that cannot necessarily visit the actual (often remote) location of a work. Robert Smithson wrote about this duality and defined it as a dialectic of the site and the non-site. Describing an early piece, he writes:

The Non-Site (an indoor earthwork) is a three dimensional logical picture that is abstract, yet it represents an actual site in N.J. (The Pine Barrens Plains). It is by this dimensional metaphor that one site can represent another site which does not resemble it — this is The Non-Site. To understand this language of sites is to appreciate the metaphor between the syntactical construct and the complex of ideas, letting the former function as a three dimensional picture which doesn’t look like a picture. — A Provisional Theory of Non-Sites

The creative aspect of Land Arts is a key component of the program. Chris Taylor writes that “within an expanding definition of landscape, producing work in the field, regardless of scale, is paramount in translating both interpretation and action — in moving from passive awareness to active investment.”² Land Art, he adds, is work that inspires and makes other works.

The notion that Land Arts is itself a work of Land Art meshes well with Taylor’s perspective on the topic, influenced in part by the writings of John Brinckerhoff Jackson. Jackson was a writer and scholar who published Landscape magazine in the 01950s and broadened the scope of geography as a practice.

Jackson’s work, which dominated the first five issues of the magazine, was grounded in what he would later call the vernacular: an interest in the commonplace or everyday landscape, and Jackson expressed an innate confidence in the ability of people of small means to make significant changes, by no means all bad, in their surroundings. — Wikipedia

This all-encompassing interest in the landscape is represented in the yearly itineraries for the program. The group’s 02017 destinations included Cebolla Canyon, Mimbres River, White Sands, Chaco Canyon, Brokeoff Mountains, Cabinetlandia, and Jackpile Mine in New Mexico; Cedar Mesa, Spiral Jetty, Sun Tunnels and CLUI in Utah; Double Negative in Nevada; the North Rim of the Grand Canyon and the Chiricahua Mountains in Arizona; and Marfa, Texas. Along the way, unintentional works both small and large might be found; even an empty tin can half-buried in the dust beneath a sagebrush is worth investigating and interpreting.

Ann Reynolds explaining Robert Smithson’s proposal for the bottom of Bingham Canyon Mine, Utah. Source: Land Arts of the American West.

One thing that these sites have in common is their location within the desert landscape of the American West. The pedagogical value of the desert landscape is important for Taylor as a teacher, and Land Artists and miners alike appreciate the accessibility of the land’s geology — embodying deep time in a tangible way. Even works such as Spiral Jetty and Double Negative, constructed in the 01970s, can now illustrate to a visitor the transformative effects of time on an earthen body. In 02007, when Chris Taylor took the Land Arts concept abroad, he went to Chile’s Atacama Desert — one of the driest places on the planet. “Legibility,” he writes, “is the hallmark of arid regions Land Arts works within. Laid bare, these places allow open readings where we can map the intersection of geomorphology and human construction. Arid lands are unable to hold secrets. They challenge our ability to remember and create enduring language.”³ 

Being present in the landscape is a crucial step to appreciating those qualities. Images of the American West can be impressive, but they are inevitably exclusive, framing a particular part of a view and omitting the rest. Taylor likens standing in the desert to being in an IMAX theater — the experience is immersive in a way that a photograph on a wall cannot replicate.

Celeste Martinez celestial vaulting at Muley Point, Utah. Source: Land Arts of the American West.

During the Land Arts’ annual voyage, the group meets with knowledgeable guests who hold seminars on various sites and topics. Among frequent collaborators are William Fox, the aforementioned director of the Center for Art + Environment, and Matt Coolidge of the Center for Land Use Interpretation, who we discuss later in this article. They also visit artisans such as Mary Lewis Garcia of the Acoma Pueblo, who demonstrates traditional pottery making for the students. 02011 brought a new sort of guest, filmmaker Sam Douglas of Big Beard Films. Douglas made a film about the Land Arts of the American West program, Through the Repellent Fence (02017).

Potential expansion of the program includes creating a more focused undergraduate curriculum, the opportunity for postgraduate work and residencies for visiting professionals. There is also a deepening connection to the people they have met and worked with in the field. As Land Arts continues and evolves, both Chris Taylor and Bill Gilbert see the program anchored by their commitment to a hands-on method of inquiry.


James Turrell, Stone Sky, 02005. Source: Galerie Magazine.

With the goal of making perception itself his medium, some of James Turrell’s work immerses viewers in a textureless space. The human mind seeks patterns, visual or otherwise, and when confronted with what appears to be a perfectly smooth surface, attempts to fill the void with something meaningful. In this way, we project and observe something of ourselves onto the artwork; we see ourselves seeing. To some, the desert presents a similar opportunity. Early Land Artists saw a place to take minimalist sculptural practices to extremes, but they also began to think about our relationship to the landscape and how to represent it. In such stark settings, time’s effects, geological forces, and human intervention are all much more apparent than they would be in a busily verdant or urban environment. Seeking to incorporate these effects into their work, Land Artists maintained that viewers had to experience the landscape and the setting in person to truly understand their work.

James Turrell.

Much as Turrell’s stark surfaces compel viewers to consider their own agency in perceiving light and objects, the starkness of the desert landscape compels viewers of Land Art to consider the agents shaping the environment. While this relationship isn’t always the primary focus of Land Art, it has become a common theme in the contemporary practice of the genre. Geographers, anthropologists, landscape architects, and others have long evaluated the human/landscape relationship, but usually in a formalized way. Land Artists, on the other hand, take part in what Matt Coolidge calls a non-disciplinary scholarship. Coolidge is among the founders of The Center for Land Use Interpretation, a group of artists and researchers documenting landscape and attempts to interpret it. Or, as they put it:

The Center for Land Use Interpretation is a research and education organization interested in understanding the nature and extent of human interaction with the earth’s surface, and in finding new meanings in the intentional and incidental forms that we individually and collectively create. We believe that the manmade landscape is a cultural inscription, that can be read to better understand who we are, and what we are doing. — The Center for Land Use Interpretation

Pilot peak with bombing crater, Wendover, Utah. Photo by Chris Taylor. Source: CLUI.

In its own non-disciplinary way, CLUI is a broad exploration of humanity’s relationship to nature and landscape. They maintain several archives (living, accessible ones you can browse on their website) of interventions on the land, and produce shows and exhibits at their headquarters in Southern California and in mobile exhibition units all over the United States. Their Land Use Database is an extensive index of unique sites, places where land is used in experimental, controversial, secret or otherwise noteworthy ways:

Some sites included in the database are works by government agencies involved in geo-transformative activities, such as the Department of Energy, the Bureau of Reclamation, the Army Corps of Engineers, and the Department of Defense. Also included are industrially altered landscapes, such as especially noteworthy mining sites, features of transportation systems, and field test facilities for a variety of high-impact technologies. The database includes museums and displays related to land use, and one of the most thorough listings of land art sites available. — Land Use Database

Transportation Technology Center, Colorado. Source: CLUI.

While one would be hard-pressed to call the Transportation Technology Center of the Colorado plains a work of Land Art (it’s a remote, restricted-access facility with 48 miles of train tracks for crash-testing and studying new train designs, among other things), knowing of its existence offers the opportunity to consider the vast effort our nation has made in order to physically link itself via rail. Beyond just a unique use of land, the TTC has played a role in shaping our use of land all over the continent. The Lightning Field and other pieces of Land Art are documented in the Land Use Database as well and provide similar opportunities for considering our use of the planet’s surface.

In addition to the Land Use Database, CLUI maintains the Morgan Cowles Archive:

The Morgan Cowles Archive is the principal collection of images at the Center for Land Use Interpretation, as well as the initiative to preserve and present them to the public. The archive draws from over 100,000 images of thousands of places taken by people working for the CLUI since the inception of the organization in 1994.

The Center’s American Land Museum is a nation-wide, distributed network of exhibition sites:

The physical form of the individual museum locations will differ according to site considerations and available development resources. The primary “exhibit” at each location is, naturally, the immediate landscape of the location itself. Collectively the individual exhibit sites comprise the American Land Museum, a museum both situated in and made up of the landscapes of America.

Though its description might seem a bit cryptic, the American Land Museum is a project that is meant to encourage actively seeing one’s surrounding landscape. Like many projects by CLUI, it seeks to present places without judgment in order to encourage viewers to form their own opinions about places and their relationship to them. The Center catalogues and exhibits artistic and utilitarian interventions on the landscape with the faux-formality of a very serious, clinical institution. CLUI’s belief is that by cultivating a bit of ambiguity and disorientation in this way, they can encourage viewers to actively orient themselves to the landscape.


Many years ago, on a visit to the Museum of Jurassic Technology in Los Angeles, artist Steve Rowell noticed a nondescript office nearby and, intrigued by its sign, wandered in. He found himself in the headquarters of the Center for Land Use Interpretation and felt an immediate resonance with the Center’s vision. Several years later and newly a Californian, he started using the resources offered by the Center to help him familiarize himself with his new surroundings. He kept in touch, going on some of the tours they offer, and was hired in 02001 to help redesign their website.

The Ely Tracking Station site in Nevada was torn down in 02012. Source: CLUI.

While working with the Center, Rowell traveled with one of its founders, Matt Coolidge, to a site north of Ely, Nevada where NASA used to conduct monitoring of a research program focused on hypersonic flight. Some of the craft that were tracked by the station had flown to the very edges of space and Rowell found the age and distant reach of the location an inspiration. Much of the artwork Rowell has done since then, with CLUI and otherwise, has focused on themes represented by that site: the technological expressions of power, desolate landscapes, extreme distances and isolation, and ground-based support for activities in the air.

Steve Rowell doesn’t consider his work to be Land Art, but he has been inspired by the form and many of the questions it poses and has overlapped geographically with many of the artists and sites we’ve previously discussed.

TX AUX IN, 02008. Source: Simparch.

In 02008 he created a piece in Marfa with Simparch called TX AUX IN, which featured field recordings from desert-based military listening stations, aircraft testing facilities and radar stations. The recordings are played for a listener inside an old camper trailer that’s been converted to keep a maximum of light and sound out, creating an immersive audio experience. He’s also a Research Director with Office of Experiments, who state:

Our aim is to develop autonomous resources such as archives, databases, publications and field guides, through which we can draw material evidence and interpretive speculation on the fabric of sites, spaces and events. In doing so, we hope to open and create alternative public resources that will inform the broader imaginary, perception, engagement and critical response to the scale, time base and structures of the rational world. — Office of Experiments

Long Now worked closely with Steve Rowell around the opportunity to visit the Svalbard Seed Vault; that visit led both to new audio and visual pieces by Rowell and added to Long Now’s growing collection of information on underground construction around the world.

Wendover Air Base. Source: CLUI.

Rowell’s work is a good example of what Chris Taylor has described as Land Art’s proclivity to generate other works. Interventions on the landscape aren’t always made with aesthetic or artistic intentions, but from Spiral Jetty to the Wendover Airfield, from the Mojave to the Arctic, humanity’s marks on the planet are open to a wide array of interpretations which can continue to inform and reveal our relationship with the landscape. Land Art helped to diffuse discussions of that relationship beyond strict disciplinary boundaries and Steve Rowell’s work (along with that of CLUI and others) further expands the frame by also drawing inspiration from these non-artistic works.


Construction of the 10,000 Year Clock. Source: Long Now.

One can trace a direct line from the early large scale Land Art pieces and artists of the 1960s, through the refinement of the experience of the art as exemplified by Judd, De Maria and others in the 1970s and 80s to the multi-faceted approaches to art in the landscape today. These earlier works and explorations—many of them still visible or being worked on today—continue to inform and inspire not only the practice of the new generation of landscape artists, but how Long Now creates the experience of visiting the 10,000 Year Clock in the mountain. 


Written by Austin Brown, Danielle Engelman, and Alex Mensing. Edited and updated by Ahmed Kabil.

We would like to extend many thanks to Chris Taylor of the Land Arts of the American West program, Matt Coolidge of The Center for Land Use Interpretation, and artist Steve Rowell for sharing their knowledge and perspectives with us over the course of several telephone interviews.

Footnotes:

[1] Taylor, Chris and Bill Gilbert, Land Arts of the American West, University of Texas Press, Austin: 02009, p. 25.

[2] Taylor, Chris, et al. Incubo Atacama Lab. Incubo, Santiago, Chile: 02008, p. 11.

[3] Ibid., p. 11.



Worse Than FailureCodeSOD: Tern Failure into a Success

Oliver Smith stumbled across a small but surprising bug when running some automated tests against a mostly clean code-base. Specifically, they were trying to break things by playing around with different compiler flags and settings. And they did, though in a surprising case.

bool long_name_that_maybe_distracted_someone()
{
  return (execute() ? CONDITION_SUCCESS : CONDITION_FAILURE);
}

Note the return type of the method is boolean. Note that execute must also return boolean. So once again, we’ve got a ternary which exists to turn a boolean value into a boolean value, which we’ve seen so often it’s barely worth mentioning. But there’s an extra, hidden assumption in this code: specifically, that CONDITION_SUCCESS and CONDITION_FAILURE are actually defined to be true or false.

CONDITION_SUCCESS was in fact #defined to be 1. CONDITION_FAILURE, on the other hand, was #defined to be 2.
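A minimal C reproduction (with execute() stubbed out to simulate a failure, and the #defines assumed from the article) shows why nobody noticed:

```c
/* Squeezed through the bool return type, CONDITION_FAILURE (2) is
 * just as "true" as CONDITION_SUCCESS (1): any nonzero value
 * collapses to true on conversion to bool, so the wrapper reports
 * success no matter what execute() did. */
#include <stdbool.h>

#define CONDITION_SUCCESS 1
#define CONDITION_FAILURE 2

static bool execute(void)
{
    return false;   /* simulate the failing case */
}

static bool long_name_that_maybe_distracted_someone(void)
{
    /* Returns 2 on failure -- which the bool return type turns into true. */
    return (execute() ? CONDITION_SUCCESS : CONDITION_FAILURE);
}
```

So the function is not merely redundant; it can never report a failure at all.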

Worse, while long_name_that_maybe_distracted_someone is itself a wrapper around execute, it was in turn called by another wrapper, which also mapped true/false to CONDITION_SUCCESS/CONDITION_FAILURE. Oddly, however, that method itself always returned CONDITION_SUCCESS, even if there was some sort of failure.

This code had been sitting in the system for years, unnoticed, until Oliver and company had started trying to find the problem areas in their codebase.


Planet Linux AustraliaMichael Still: A nerd snipe, in which I learn to read gerber files


So, I had the realisation last night that the biggest sunk cost with getting a PCB made in China is the shipping. The boards are about 50 cents each, and then it's $25 for shipping (US dollars of course). I should therefore be packing as many boards into a single order as possible to reduce the shipping cost per board.

I have a couple of boards on the trot at the moment: my RFID attendance tracker project (called GangScan), and I’ve just decided to actually get my numitrons working and whipped up a quick breakout board for those. You’ll see more about that one later I’m sure.

I decided to ask my friends in Canberra if they needed any boards made, and one friend presented with a set of Gerber CAM files and nothing else. That’s a pain because I need to know the dimensions of the board for the quoting system. Of course, I couldn’t find a tool to do extract that for me with a couple of minutes of Googling, so… I decided to just learn to read the file format.

Gerber is well specified, with a quite nice specification available online. So it wasn’t too hard to dig out the dimensions layer from the zipped gerber files and then do this:

Contents of file Meaning Dimensional impact
G04 DipTrace 3.3.1.2* Comment
G04 BoardOutline.gbr* Comment
%MOIN*% File is in inch units
G04 #@! TF.FileFunction,Profile* Comment
G04 #@! TF.Part,Single* Comment
%ADD11C,0.005512*% Defines an aperture. D11 is a circle with diameter 0.005512 inches
%FSLAX26Y26*% Resolution is 2.6, i.e. there are 2 integer places and 6 decimal places
G04* Comment
G70* Historic way of setting units to inches
G90* Historic way of setting coordinates to absolute notation
G75* Sets quadrant mode graphics state parameter to ‘multi quadrant’
G01* Sets interpolation mode graphics state parameter to ‘linear interpolation’
G04 BoardOutline* Comment
%LPD*% Sets the object polarity to dark
X394016Y394016D2* Set current point to 0.394016, 0.394016 (in inches) Top left is 0.394016, 0.394016 inches
D11* Draw the previously defined tiny circle
Y1194016D1* Draw a vertical line to 1.194016 inches Board is 1.194016 inches tall
X1931366Y1194358D1* Draw a line to 1.931366, 1.194358 inches Board is 1.931366 inches wide (and not totally square)
Y394358D1* Draw a vertical line to 0.394358 inches
X394016Y394016D1* Draw a line to 0.394016, 0.394016 inches
M02* End of file

So this board is effectively 3cm by 5cm.
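If you wanted to automate this, a rough sketch in C (handling only the modal X/Y/D draw commands seen in this file, not full Gerber) might look like the following. Note that the outline sits offset from the origin: the maximum coordinates give roughly 4.9cm x 3.0cm (the "3cm by 5cm" figure above), while the outline itself only spans about 1.54 x 0.80 inches.

```c
/* Rough sketch: extract a bounding box from Gerber draw commands,
 * assuming inch units and the 2.6 coordinate format declared by
 * %FSLAX26Y26*% (six decimal places). Not a real Gerber parser. */
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

struct bbox { double minx, miny, maxx, maxy; };

static struct bbox gerber_extents(const char **cmds, size_t n)
{
    const double scale = 1e-6;  /* six decimal places per FSLAX26Y26 */
    double x = 0, y = 0;
    struct bbox b = { 1e12, 1e12, -1e12, -1e12 };

    for (size_t i = 0; i < n; i++) {
        /* Coordinates are modal: a command may update X, Y, or both,
         * and any axis not mentioned keeps its previous value. */
        const char *px = strchr(cmds[i], 'X');
        const char *py = strchr(cmds[i], 'Y');
        if (px) x = atol(px + 1) * scale;
        if (py) y = atol(py + 1) * scale;
        if (x < b.minx) b.minx = x;
        if (x > b.maxx) b.maxx = x;
        if (y < b.miny) b.miny = y;
        if (y > b.maxy) b.maxy = y;
    }
    return b;
}
```

Feeding it the five coordinate commands from the table yields a bounding box from (0.394016, 0.394016) to (1.931366, 1.194358) inches.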

A nice little nerd snipe to get the morning going.


,

Sam VargheseJournalists Savva and Karvelas knew the polling was wrong. Yet they kept quiet. Why?

Over the weekend, the Australian federal election ended in a manner that was the exact opposite of that expected by the public if one were to go by the opinion polls – Newspoll and Ipsos – that ran in the major media outlets. Both predicted a win for Labor. The result, as you are well aware, could not have been more different.

But surprisingly there were some people who were aware that the polling was not correct and kept mum about it. [Watch this video from 11:29].

ABC journalist Patricia Karvelas mentioned during the network's election coverage that she had been told of internal Labor Party polling indicating that the reality was different. On the Insiders program on Sunday, she said Labor sources had told her that internal polling showed things in Queensland were quite different to what was being reported in public.

And Niki Savva, a journalist who writes for The Australian, said on the same program that she had been told similar things by the Liberals; that their internal polling was totally different from what the public polls were saying and that there was no reason to fear they would lose any seats in that state.

Yet both these journalists kept quiet about it. One would have thought that this was a story well worth writing and one that the public should know about. In the case of the ABC, the public pays Karvelas’ wages and thus the obligation to report something so newsworthy was all the more pressing.

I think this is due to what I call an incestuous relationship between political reporters and politicians. It is not the first time it has happened. The public are the mugs in such cases.

Last year, the fact that Barnaby Joyce was having an affair with Vicki Campion, a media adviser of his, was revealed by the Telegraph. The reporter concerned earned a lot of righteous criticism from many others in the profession.

People like Katharine Murphy of the Guardian and Jacqueline Maley of the Sydney Morning Herald knew about it and kept mum. Another journalist, Julia Baird of the ABC, tried to spin the story as one that showed how a woman in Joyce’s position would have been treated – much worse, was her opinion. But none of the three said a word.

The story was in the public interest, because Joyce and Campion are both paid from the public purse. When their affair became an issue, Joyce had her moved around to the offices of his National Party mates, Matt Canavan and Damian Drum, at salaries that went as high as $190,000.

At the time, Joyce was no ordinary politician – he was the deputy prime minister and thus acted as head of government whenever the prime minister was out of the country. Thus anything that affected his functioning was of interest to the public as he could make decisions that affected them.

A third such case concerns Peter Costello and John Howard. In 2005, journalists Michael Brissenden (ABC), Tony Wright (Fairfax) and Paul Daley (freelance) were at a dinner with former treasurer Peter Costello at which he told them he had set next April (2006) as the absolute deadline “that is, mid-term,” for John Howard to stand aside; if not, he would challenge him.

Costello was said by Brissenden to have declared that a challenge “will happen then” if “Howard is still there”. “I’ll do it,” he said. He said he was “prepared to go the backbench”. He said he’d “carp” at Howard’s leadership “from the backbench” and “destroy it” until he “won” the leadership.

But the three journalists kept mum about what would have been a big scoop, because Costello’s press secretary asked them not to write the yarn.

There was a great deal of speculation in the run-up to the 2007 election as to whether Howard would step down; one story in July 2006 said there had been an unspoken 1994 agreement between him and Costello to vacate the PM’s seat and make way for Costello to get the top job.

Had the three journalists at that 2005 dinner gone ahead and reported the story — as journalists are supposed to do — it is unlikely that Howard would have been able to carry on as he did. It would have forced Costello to challenge for the leadership or quit. In short, it would have changed the course of politics.

But Brissenden, Daley and Wright kept mum.

These are all cases that impact on the public knowing what they should, by right, know.

Planet Linux AustraliaJames Morris: Linux Security Summit 2019 North America: CFP / OSS Early Bird Registration

The LSS North America 2019 CFP is currently open, and you have until May 31st to submit your proposal. (That’s the end of next week!)

If you’re planning on attending LSS NA in San Diego, note that the Early Bird registration for Open Source Summit (which we’re co-located with) ends today.

You can of course just register for LSS on its own, here.

CryptogramThe Concept of "Return on Data"

This law review article by Noam Kolt, titled "Return on Data," proposes an interesting new way of thinking of privacy law.

Abstract: Consumers routinely supply personal data to technology companies in exchange for services. Yet, the relationship between the utility (U) consumers gain and the data (D) they supply -- "return on data" (ROD) -- remains largely unexplored. Expressed as a ratio, ROD = U / D. While lawmakers strongly advocate protecting consumer privacy, they tend to overlook ROD. Are the benefits of the services enjoyed by consumers, such as social networking and predictive search, commensurate with the value of the data extracted from them? How can consumers compare competing data-for-services deals? Currently, the legal frameworks regulating these transactions, including privacy law, aim primarily to protect personal data. They treat data protection as a standalone issue, distinct from the benefits which consumers receive. This article suggests that privacy concerns should not be viewed in isolation, but as part of ROD. Just as companies can quantify return on investment (ROI) to optimize investment decisions, consumers should be able to assess ROD in order to better spend and invest personal data. Making data-for-services transactions more transparent will enable consumers to evaluate the merits of these deals, negotiate their terms and make more informed decisions. Pivoting from the privacy paradigm to ROD will both incentivize data-driven service providers to offer consumers higher ROD, as well as create opportunities for new market entrants.

Planet DebianNeil Williams: New directions

It's been a difficult time, the last few months, but I've finally got some short updates.

First, in two short weeks I will be gainfully employed again, at UltraSoC as Senior Software Tester, developing test framework solutions for SoC debugging, including on RISC-V. Despite vast numbers of discussions with a long list of recruitment agencies, success came from a face to face encounter at a local Job Fair. Many thanks to Cambridge Network for hosting the event.

Second, I've finally accepted that https://www.codehelp.co.uk was too old to retain and I'm simply redirecting the index page to this blog. The old codehelp site hasn't kept up with new technology and the CSS handles modern screen resolutions particularly badly. I don't expect that many people were finding the PHP and XML content useful, let alone the now redundant WML content. In time, I'll add redirects to the other codehelp.co.uk pages.

Third, my job hunting has shown that the centralisation of decentralised version control is still a thing. As far as recruitment is concerned, if the code isn't visible on GitHub, it doesn't exist. (It's not the recruitment agencies asking for GitHub links, it is the company HR departments themselves.) So I had to add a bunch of projects to GitHub and there's a link now in the blog.

Time to pick up some Debian work again, well after I pay a visit or two to the Cambridge Beer Festival 2019, of course.

LongNowInterval Highlight: Stewart Brand on Reviving the Mammoth Steppe

Stewart Brand explains the theory behind the Pleistocene Park project, which for the last 3 decades has been placing grazing animals on Siberia’s tundra to recreate the mammoth steppe habitat of the Pleistocene epoch.

From the Conversation at The Interval, “Siberia: A Journey to the Mammoth Steppe.”

Planet DebianDirk Eddelbuettel: digest 0.6.19

Overnight, digest version 0.6.19 arrived on CRAN. It will get uploaded to Debian in due course.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects.

This version contains two new functions adding new digest functionality. First, Dmitriy Selivanov added a fast and vectorized digest2int to convert (arbitrary) strings into 32 bit integers using one-at-a-time hashing. Second, Kendon Bell, over a series of PRs, put together a nice implementation of spookyhash as a first streaming hash algorithm in digest. So big thanks to both Dmitriy and Kendon.

No other changes were made.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianCandy Tsai: How I Got In The Outreachy 2019 March-August Internship – The Application Process

Blah: Introduction

Really excited to be accepted for the project “Debian Continuous Integration: user experience improvements” (referred to as debci in this post) of the 2019 March-August round of the Outreachy internship! A huge thanks to my company and my manager Frank for letting me do this even though I mentioned it out of the blue. Thanks to the Women Techmakers community for letting me know this program exists.

There are already blog posts that introduce the program, such as:

  1. How I got into Mozilla’s Outreachy open source internship program
  2. My Pathway to Outreachy with Mozilla
  3. Outreachy: What? How? Why?

To me, the biggest difference between Outreachy and Google Summer of Code (GSoC) is that you don’t have to be a student in order to apply.

This post won’t go into the details of “What is Outreachy”, and will instead focus on the process, where everyone will have a different story. This is my version, and I hope that you can find yours in the near future!


Goals: The Why

What I like about Outreachy’s application process is that it definitely lets you think about why you want to apply. For me, things were pretty simple and straightforward:

  • Experience what it is like to work remotely
  • Use my current knowledge to contribute to open source
  • Learn something different from my current job

Actually the most important reason that I kind of feel bad mentioning here is that I felt like leaving the male-dominated tech space for a bit. My colleagues are really nice and friendly, but… it’s hard to put into words.

Mindset: Start Right Away

The two main reasons I failed in the past:

  1. Hesitation
  2. Spent too much time browsing the project list

Hesitation

I have known about Outreachy since 2017, but because it requires you to make a few contributions in order to apply, any bit of hesitation will result in a late start. It was a bit scary to approach the project mentors, and I thought my code had to be perfect in order to make a contribution. The truth is, without discussion, you might not know the details of the issue, hence you can’t even start coding. Almost every accepted applicant mentions the importance of starting early. To be precise, just start on the day applications open.

Spent too much time browsing the project list

Another reason that kept me from starting right away was that I had been browsing the project list for too long. Since the project list is not complete on the first day, projects that I might be more interested in can join the list as time passes. Past projects can be referenced to get a better picture of which organizations were involved, but it is never a 100% sure bet. Also, the organizations participating in the March-August round differ from those in the December-March round. To avoid starting too late, the strategy I used was to choose two projects to contribute to: one in the initial phase (the first week or so), and another during the following weeks.

Strategy: Choose 2 Projects

Choosing how many projects to work on really depends on the time you have available. The main idea of this strategy was to eliminate the cause of spending too much time on browsing the project list. Since I already had a full-time job at the time, I really had to weigh my priorities. To be honest, I barely had time to work on the second project.

On the day the project list was announced, I quickly assessed my skills against the projects available and decided to try applying for Mozilla. Yep, you heard me right, my first choice wasn’t Debian because Mozilla seemed more familiar to me. Then I instantly realized that there was a flood of applicants also applying for Mozilla. All of the newcomer issues were claimed, and it all happened in just a matter of days!

I started to look for other projects that were also in line with my goals, which led me to debci. I had never used Ruby in a project, nor the Debian system. On the other hand, I’m familiar with the other skills listed in the requirements, so some of my knowledge could still be utilized.

The second project was announced at a later stage and came from Openstack. Had to admit it was a little too hard for me to setup the Ironic baremetal environment so wasn’t able to put in much.

Plan: Learn About the Community

An important aspect through the application process was to get in touch with the community. Although Debci and Openstack Ironic both use IRC chat, it feels very different. From a wiki search, it seems Openstack is backed by a corporate entity while Debian by volunteers.

Despite the difference, both communities were pretty active with Openstack involving more members. As long as the community was active and friendly, it fits the vision I was looking for.

Execution: Communication

Looking back at the contribution process, it actually took more time than I initially imagined. The whole application process consists of three stages:

  1. Initial application: has to wait a few days for it to be approved
  2. Record Contributions: the main part
  3. Final application: final dash

Except for the initial application, which can be done by myself, the rest involves communicating with others. Communication differs a lot compared to an office environment. My first merge request (a.k.a. pull request) had a ton of comments, and I couldn’t understand what the comments were suggesting at first. Things became clear after some discussion, and I was really excited to have it merged. This was huge for me since it all happened online, with a bit of a time lag; in an office environment, a colleague would just come around for a face-to-face discussion.

TL;DR

I had no idea I had written so many words, so I guess I will stop for now. Up until now, I haven’t mentioned much about writing code, and that’s because you will feel for yourself whether you can get through the process. So the TL;DR version of this post is:

  • Do not hesitate, just do it
  • Start as soon as applications are open
  • Do not lurk around the project list
  • Get in touch with the community and mentors
  • Communicate about the issues

Really excited to begin this Outreachy journey with debci and grateful for this opportunity. Stay tuned for more articles about the project itself!

Planet DebianBits from Debian: Lenovo Platinum Sponsor of DebConf19

lenovologo

We are very pleased to announce that Lenovo has committed to supporting DebConf19 as a Platinum sponsor.

"Lenovo is proud to sponsor the 20th Annual Debian Conference." said Egbert Gracias, Senior Software Development Manager at Lenovo. "We’re excited to see, up close, the great work being done in the community and to meet the developers and volunteers that keep the Debian Project moving forward!”

Lenovo is a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office solutions and data center solutions.

With this commitment as Platinum Sponsor, Lenovo is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Lenovo, for your support of DebConf19!

Become a sponsor too!

DebConf19 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf19 website at https://debconf19.debconf.org.

Planet DebianKeith Packard: itsybitsy-snek

ItsyBitsy Snek — snek on the Adafruit ItsyBitsy

I got an ItsyBitsy board from Adafruit a few days ago. This board is about as minimal an Arduino-compatible device as I can imagine. All it's got is an Atmel ATmega 32U4 SoC, one LED, and a few passive components.

I'd done a bit of work with the 32u4 under AltOS a few years ago when Bdale and I built a 'companion' board called TeleScience for TeleMetrum to try and measure rocket airframe temperatures in flight. So, I already had some basic drivers for some of the peripherals, including a USB driver.

USB Adventures

The 32u4 USB hardware is simple, and actually fairly easy to use. The AltOS driver used a separate thread to manage the setup messages on endpoint 0. I didn't imagine I'd have space for threading on this device, so I modified that USB driver to manage setup processing from the interrupt handler. I'd done that on a bunch of other USB parts, so while it took longer than I'd hoped, I did manage to get it working.

Then I spent a whole bunch of time reducing the code size of this driver. It started at about 2kB and is now almost down to 1kB. It's a bit less robust now; hosts sending odd setup messages may get unexpected results.

The last thing I did was to add a FIFO for OUT data. That's because we want to be able to see ^C keystrokes even while Snek is executing code.
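For illustration, a FIFO of that flavor might look like the following sketch (the names and sizes here are illustrative, not taken from the actual snek source): the USB interrupt handler enqueues received bytes, and the interpreter polls for ^C between statements.

```c
/* Illustrative single-producer/single-consumer ring buffer for USB
 * OUT data: fifo_put runs in the endpoint interrupt, fifo_get in
 * the main loop. FIFO_SIZE is a power of two so wraparound is a
 * simple mask. */
#include <stdint.h>
#include <stdbool.h>

#define FIFO_SIZE 64

static volatile uint8_t fifo[FIFO_SIZE];
static volatile uint8_t fifo_head, fifo_tail;

/* Called from the USB OUT endpoint interrupt. */
static bool fifo_put(uint8_t c)
{
    uint8_t next = (fifo_head + 1) & (FIFO_SIZE - 1);
    if (next == fifo_tail)
        return false;       /* full: drop (or NAK the endpoint) */
    fifo[fifo_head] = c;
    fifo_head = next;
    return true;
}

/* Called from the main loop / interpreter; returns -1 when empty. */
static int fifo_get(void)
{
    if (fifo_tail == fifo_head)
        return -1;
    uint8_t c = fifo[fifo_tail];
    fifo_tail = (fifo_tail + 1) & (FIFO_SIZE - 1);
    return c;
}
```

With head and tail each written by only one side, no locking is needed on a single-core MCU.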

Reset as longjmp

On the ATmega 328P, to reset Snek, I just reset the whole chip. Nice and clean. With integrated USB, I can't reset the chip without losing the USB connection, and that would be pretty annoying. Resetting Snek's state back to startup would take a pile of code, so instead, I gathered all of the snek-related .data and .bss variables by changing the linker script. Then, I wrote a reset function that does pretty much what the libc startup code does and then jumps back to main:

snek_poly_t
snek_builtin_reset(void)
{
    /* reset data */
    memcpy_P(&__snek_data_start__,
         (&__text_end__ + (&__snek_data_start__ - &__data_start__)),
          &__snek_data_end__ - &__snek_data_start__);

    /* reset bss */
    memset(&__snek_bss_start__, '\0', &__snek_bss_end__ - &__snek_bss_start__);

    /* and off we go! */
    longjmp(snek_reset_buf, 1);
    return SNEK_NULL;
}

I still need to write code to reset the GPIO pins.
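The matching setjmp lives in main. As a rough, host-testable sketch of the pattern (everything here except snek_reset_buf is invented for illustration): the reset function wipes the interpreter's state and then longjmps back to the setjmp established at startup, as if the program had just begun.

```c
/* Host-side sketch of the reset-as-longjmp pattern. A counter
 * stands in for the .data/.bss regions that the real code
 * memcpy/memsets back to their initial values. */
#include <setjmp.h>

jmp_buf snek_reset_buf;

static int interpreter_state;           /* stand-in for snek's state */

/* Analogous to snek_builtin_reset(): wipe state, jump to main. */
static void builtin_reset(void)
{
    interpreter_state = 0;              /* "reset the bss" */
    longjmp(snek_reset_buf, 1);         /* back to setjmp in main */
}

static int run_statement(void)
{
    if (++interpreter_state >= 3)       /* pretend the user ran reset() */
        builtin_reset();
    return interpreter_state;
}
```

One portability wrinkle: any local in main that changes between the setjmp and the longjmp must be declared volatile, or its value is indeterminate after the jump.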

Development Environment

To flash firmware to the device, I stuck the board into a proto board and ran jumpers from my AVRISP cable to the board.

Next, I hooked up a FTDI USB to Serial converter to the 32u4 TX/RX pins. Serial is always easier than USB, and this was certainly the case here.

Finally, I dug out my trusty Beagle USB analyzer. This lets me see every USB packet going between the host and the device and is invaluable for debugging USB issues.

You can see all of these pieces in the picture above. They're sitting on top of a knitting colorwork pattern of snakes and pyramids, which I may have to make something out of.

Current Status

Code for this part is on the master branch, which is available on my home machine as well as github:

I think this is the last major task to finish before I release snek version 1.0. I really wanted to see if I could get snek running on this tiny target. It's nearly there; I want to squeeze a few more things onto this chip.

Planet DebianPetter Reinholdtsen: MIME type "text/vnd.sosi" for SOSI map data

As part of my involvement in the work to standardise a REST based API for Noark 5, the Norwegian archiving standard, I spent some time the last few months to try to register a MIME type and PRONOM code for the SOSI file format. The background is that there is a set of formats approved for long term storage and archiving in Norway, and among these formats, SOSI is the only format missing a MIME type and PRONOM code.

What is SOSI, you might ask? To quote Wikipedia: SOSI is short for Samordnet Opplegg for Stedfestet Informasjon (literally "Coordinated Approach for Spatial Information", but more commonly expanded in English to Systematic Organization of Spatial Information). It is a text based file format for geo-spatial vector information used in Norway. Information about the SOSI format can be found in English from Wikipedia. The specification is available in Norwegian from the Norwegian mapping authority. The SOSI standard, which originated in the beginning of the nineteen eighties, was the inspiration for and formed the basis of the XML based Geography Markup Language.

I have so far written a pattern matching rule for the file(1) unix tool to recognize SOSI files, submitted a request to the PRONOM project to have a PRONOM ID assigned to the format (reference TNA1555078202S60), and today sent a request to IANA to register the "text/vnd.sosi" MIME type for this format (reference IANA #1143144). If all goes well, in a few months, anyone implementing the Noark 5 Tjenestegrensesnitt API specification should be able to use an official MIME type and PRONOM code for SOSI files. In addition, anyone using SOSI files on Linux should be able to automatically recognise the format, and web sites handing out SOSI files can begin providing a more specific MIME type. So far, SOSI files have been handed out from web sites using the "application/octet-stream" MIME type, which is just a nice way of stating "I do not know". Soon, we will know. :)
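As a purely illustrative sketch (not necessarily the rule actually submitted), a magic(5) entry for file(1) could match the .HODE head record that a SOSI text file is assumed here to begin with:

```
# Illustrative magic(5) entry, assuming a SOSI file begins with
# the ".HODE" head record; the submitted rule may differ.
0	string	.HODE	SOSI map data
!:mime	text/vnd.sosi
```

With such an entry compiled into the magic database, file --mime-type would report text/vnd.sosi instead of the generic application/octet-stream.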

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Worse Than FailureRepresentative Line: Destroying the Environment

Andrew H sends a line that isn't, on its own, terribly horrifying.

Utilities.isTestEnvironment = !"prd".equals(environment);

The underlying reason for this line is more disturbing: they've added code to their product which should only run in the test/dev environments. Andrew doesn't elaborate on what that code is, but what it has done is create situations where they can no longer test production behavior in the test environment, since in test the code goes down different paths. Andrew's fix was to make this flag configurable, but it reminds me of some code I dealt with in the bad old days of the late 2000s.
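A configurable version of such a flag might look like this sketch (in Java, to match the representative line; the property name and override mechanism are my assumptions, not Andrew's actual fix):

```java
import java.util.Properties;

public class Utilities {
    public static boolean isTestEnvironment;

    // Hypothetical sketch of a configurable flag: an explicit override
    // in configuration wins over the environment name, so testers can
    // force production code paths in the test environment when needed.
    static void configure(Properties config, String environment) {
        String override = config.getProperty("force.environment");
        String effective = (override != null) ? override : environment;
        isTestEnvironment = !"prd".equals(effective);
    }
}
```

Without the override property this behaves exactly like the representative line; with it, a test box can be made to act like production for a single run.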

The company I was with at the time had just started migrating to .NET, and was stubbornly insistent about not actually learning to do things correctly, and just pretending that it worked just like VB6. They also still ran 90% of their business through a mainframe with only two developers who actually knew how to do anything on that system.

This created all sorts of problems. For starters, no one actually knew how to create or run a test environment in the mainframe system. There was only a production environment. This meant that, by convention, if you wanted to test sending an invoice or placing an order, you needed to do the following:

  1. Get one of the two mainframe developers on the phone
  2. Prep the invoice/order in your application, and make sure the word "TEST" appears in the PO number field
  3. Make sure you're not near one of the mainframe processing windows which will automatically submit the invoice/order
  4. Submit the order
  5. Wait for the mainframe developer to confirm it arrived
  6. The mainframe developer deletes it before the next processing window

Much like in Andrew's case, our new .NET applications needed to not talk to the mainframe when in test, but they needed to actually talk to the mainframe in production. "Fortunately" for us, one of the first libraries someone wrote was an upgrade of a COM library they used in Classic ASP, called "Environment". In your code, you just called Environment.isProd or Environment.isTest, as needed.

I used those standard calls, but I didn't think too much about how they worked until I needed to test invoice sending from our test environment. You see, normally, changes to invoice processing would be quickly put in production, tested, and then reverted. This change, however, needed to be signed off on before the end of the month, and we were in the middle of month-end processing, which is when the users were hammering the system really hard to get their data in before the processing deadline.

So, I said to myself, "I mean, the Environment library must just be looking at a config flag or something, right?"

Well, sort of. You see, when any new server was provisioned, a text file would get dropped in C:\env.txt. It contained either "PROD" or "TEST". The Environment library just read that file when your application launched, and set its flags accordingly.
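In other words, the library boiled down to something like this sketch (written in Java here for illustration; the original was a .NET library, and the exact shape is assumed): the marker file is read once at startup and the answer cached, which is why nothing short of a custom build could change it at runtime.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of how the Environment library worked: read a
// marker file once at application launch and cache the result in flags.
// Nothing ever re-reads the file, so flipping the answer later means
// either editing the file and restarting, or shipping a custom build.
public class EnvironmentFlags {
    public final boolean isProd;
    public final boolean isTest;

    public EnvironmentFlags(Path markerFile) throws IOException {
        String contents = Files.readString(markerFile).trim();
        isProd = contents.equalsIgnoreCase("PROD");
        isTest = contents.equalsIgnoreCase("TEST");
    }
}
```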

Given the choice of doing a quick custom build of Environment.isProd which always was true, or fighting with the ops team to get them to change the contents of the text file, I took the path of least resistance and made a custom build. Testing proceeded, and eventually, I managed to convince the organization that we should start using the built-in .NET configuration files to set our environment settings.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Planet Linux AustraliaMichael Still: Gangscan 0.6 boards

Share

So I’ve been pottering away for a while on getting the next version of the gang scan boards working. These ones are much nicer: thicker tracks for signals, better labelling, support for a lipo battery charge circuit, a prototype audio circuit, and some LEDs to indicate status. I had them fabbed at the same place as last time, although the service was much faster this time around.

A gang scan 0.6 board

I haven’t got as far as assembling a board yet — I need to get some wire thin enough for the vias before I can do that. I’ll let you know how I go though.

Share

,

Planet DebianLouis-Philippe Véronneau: Am I Fomu ?

A few months ago at FOSDEM 2019 I got my hands on a pre-production version of the Fomu, a tiny open-hardware FPGA board that fits in your USB port. Building on the smash hit of the Tomu, the Fomu uses an iCE40UP5K FPGA instead of an ARM core.

I've never really been into hardware hacking, and much like hacking on the Linux kernel, messing with wires and soldering PCBs always intimidated me. From my perspective, playing around with the Fomu looked like a nice way to test the water without drowning in it.

Since the bootloader wasn't written at the time, when I first got my Fomu hacker board there was no easy way to test if the board was working. Lucky for me, Giovanni Mascellani was around and flashed a test program on it using his Raspberry Pi and a bunch of hardware probes. I was really impressed by the feat, but it also seemed easy enough that I could do it.

My flashing jig

Back at home, I ordered a Raspberry Pi, bought some IC hooks and borrowed a soldering iron from my neighbour. It had been a while since I had soldered anything! Last time I did I was 14 years old and trying to save a buck making my own fencing mask and body cords...

My goal was to test foboot, the new DFU-compatible bootloader recently written by Sean Cross (xobs) to make flashing programs on the board more convenient. Replicating Giovanni's setup, I flashed the Fomu Raspbian image on my Pi and compiled the bootloader.

It took me a good 15 minutes to connect the IC hooks to the board, but I was successfully able to flash foboot on the Fomu! The board now greets me with:

[ 9751.556784] usb 8-2.4: new full-speed USB device number 31 using xhci_hcd
[ 9751.841038] usb 8-2.4: New USB device found, idVendor=1209, idProduct=70b1, bcdDevice= 1.01
[ 9751.841043] usb 8-2.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 9751.841046] usb 8-2.4: Product: Fomu Bootloader (0) v1.4-2-g1913767
[ 9751.841049] usb 8-2.4: Manufacturer: Kosagi

I don't have a use case for the Fomu yet, but I am sure by the time the production version ships out, people will have written interesting programs I can flash on it. In the meantime, it'll blink slowly in my laptop's USB port.

Planet DebianJoey Hess: 80 percent

I added dh to debhelper a decade ago, and now Debian is considering making use of dh mandatory. Not being part of Debian anymore, I'm in the position of needing to point out something important about it anyway. So this post is less about pointing in a specific direction than about giving a different angle to think about things.

debhelper was intentionally designed as a 100% solution for simplifying building Debian packages. Any package it's used with gets simplified and streamlined and made less of a bother to maintain. The way debhelper succeeds at 100% is not by doing everything, but by being usable in little pieces that build up to a larger, more consistent whole, yet can just as well be used sparingly.

dh was intentionally not designed to be a 100% solution, because it is not a collection of little pieces, but a framework. I first built an 80% solution, which is the canned sequences of commands it runs plus things like dh_auto_build that guess at how to build any software. Then I iterated to get closer to 100%. The main iteration was override targets in the debian/rules file, to let commands be skipped or run out of order or with options. That closed dh's gap by a further 80%.
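For concreteness, a minimal dh-style debian/rules shows both pieces: the canned sequence behind a catch-all target, and an override target adjusting a single step (the configure flag here is a placeholder, not from any particular package):

```makefile
#!/usr/bin/make -f

# The catch-all target hands every build step to dh, which runs its
# canned sequence (dh_auto_configure, dh_auto_build, dh_auto_test, ...).
%:
	dh $@

# An override target replaces one step in the sequence; arguments after
# "--" are passed through to the underlying build system.
override_dh_auto_configure:
	dh_auto_configure -- --enable-foo
```

Packages that fit the canned sequence need only the first two lines; everything else is closing the gap via overrides.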

So, dh is probably somewhere around a 96% solution now. It may have crept closer still to 100%, but it seems likely there is still a gap, because it was never intended to completely close the gap.

Starting at 100% and incrementally approaching 100% are very different design choices. The end results can look very similar, since in both cases it can appear that nearly everyone has settled on doing things in the same way. I feel though, that the underlying difference is important.

PS: It's perhaps worth re-reading the original debhelper email and seeing how much my original problems with debstd would also apply to dh if its use were mandatory!

Cory DoctorowLos Angeles! Come see me at Exposition Park library this Thursday, talking about Big Tech, monopolies, mind control and the right of technological self-determination

From 6PM-7:30PM this Thursday, May 23, I’m presenting at the Exposition Park Library (Dr. Mary McLeod Bethune Regional Library, 3900 S Western Ave, Los Angeles, CA 90062) on the problems of Big Tech and how the problems of monopolization (in tech and every other industry) are supercharged by the commercial surveillance industry — and what we can do about it. It’s part of the LA Public Library’s “Book to Action” program and it’s free to attend — I hope to see you there!

Planet DebianDirk Eddelbuettel: RQuantLib 0.4.9: Another small update

A new version 0.4.9 of RQuantLib reached CRAN and Debian. It completes the change of some RQuantLib internals to follow an upstream change in QuantLib: we can now seamlessly switch between shared_ptr<> from Boost and from C++11. Luigi wrote about the how and why in an excellent blog post that is part of a larger (and also excellent) series of posts on QuantLib internals.

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.9 (2019-05-15)

  • Changes in RQuantLib code:

    • Completed switch to QuantLib::ext namespace wrappers for either shared_ptr use started in 0.4.8.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianAndrew Cater: systemd.unit=rescue.target

Just another quick one-liner: a Grub config argument which I had to dig for, but which is really useful when this sort of thing happens.

Faced with a server that was rebooting after an upgrade and dropping to the systemd emergency target:

Rebooting and adding

systemd.unit=rescue.target

to the end of the Linux command line in the Grub config as the machine booted, and then pressing F10, allowed me to drop into a full-featured rescue environment with read/write access to the disk and sort out the partial upgrade mess.
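For reference, the edited boot entry ends up looking roughly like this (kernel image and root device are placeholders, not from the machine in question):

```
linux /boot/vmlinuz-4.19.0-5-amd64 root=UUID=... ro quiet systemd.unit=rescue.target
```

The systemd.unit= parameter simply overrides the default boot target for that one boot, so no on-disk configuration needs to change.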

Planet DebianDavid Kalnischkies: Newbie contributor: A decade later

Time flies. On this day, 10 years ago, a certain someone sent in his first contribution to Debian in Debbugs#433007: --dry-run can mark a package manually installed (in real life). What follows is me babbling randomly about what led to, and what happened after, that first patch.

That wasn't my first contribution to open source: I implemented (more like copy-pasted) mercurial support in the VCS plugin of the editor I was using back in 2008: Geany – I am pretty sure my code has been completely replaced by now, I just remain named in THANKS, which is very nice considering I am not a user anymore. My contributions to apt were coded in vim(-nox) already.

It was the first time I put my patch under public scrutiny though – my contribution to geanyvc was by private mail to the plugin maintainer – and not by just anyone, but by the venerable masters operating in a mailing list called deity@.

I had started looking into apt code earlier and had even written some patches for me without actually believing that I would go as far as handing them in. Some got in anyhow later, like the first commit with my name, dated May the 7th, allowing codenames to be used in pinning, which dates the manpage changes as being written on the 4th. When exactly I really started with apt is lost to history by now, but today (a decade ago) I got serious: I joined IRC and the mailing list, and commented on the bugreport mentioned above. I even pushed my branch of random things I had done to apt to Launchpad (which back then was hosting the bzr repository).

The response was overwhelming. The bugreport has no indication of it, but Michael jumped at me. I realized only later that he was the only remaining active team member in the C++ parts. Julian was mostly busy with Python at the time, and Christian turned out to be Mr. l10n with duties all around Debian. The old guard had left, as had the old-old guard before them.

I got quickly entangled in everything. Michael made sure I got invited by Canonical to UDS-L in November of 2009 – 6 months after saying hi. I still can't really believe that 21y old me made his first-ever flight across the ocean to Dallas, Texas (USA) because some people on the internet invited him over. So there was I, standing in front of the airport with the slow realisation that while I had been busy being scared about the flight, the week and everything, I never really had worried about how to get from the airport to the hotel. An inner monologue started: "You got this, you just need the name of the hotel and look for a taxi. You wrote the name down, right? No? Okay, you can remember the name anyhow, right? Just say it and … why are you so silent? Say it! … Goddammit, you are …" – "David?" interrupted my inner voice. Of all people in the world, I happened to meet Michael for the first time right in front of the airport. "Just as planned, you meany inner voice", I was kidding myself after getting in a taxi with a few more people.

I met so many people over the following days! It was kinda scary, very taxing for an introvert, but also 100% fun. I also met the project that would turn me from promising newbie contributor into APT developer via Google Summer of Code 2010: MultiArch. There was a session about it, and this time around it should really happen. I was sitting in the back, hiding but listening closely. Thankfully nobody had called me out, as I was scared: I can't remember who it was, but someone said that in dpkg MultiArch could be added in two weeks. Nobody had to say it; for me it was clear that this meant APT would be the blocker, as that most definitely would not happen in two weeks. Not even months. More like years, if at all. What was I to do? Cut my losses and run? Na, sunk cost fallacy be damned. I hadn't lost anything, I had learned and enjoyed plenty of things granted to me by supercow, and that seemed like a good opportunity to give back.

But there was so much to do. The cache had to grow dynamically (remember "mmap ran out of room" and feel old), commandline interfaces needed to be adapted, the resolver… oh my god, the resolver! And to top it all off, APT had no tests to speak of. So after the UDS I started tackling them all: my weekly reports for GSoC 2010 provide a glimpse into the abyss, but lots happened before and after as well. Many of the decisions I made back then are still powering APT. The shell scripting framework I wrote to be able to perform some automatic testing of apt, as I quickly got tired of manual testing, consists as of today of 255 scripts run not only by me but by many CI services including autopkgtest. It probably prevented me from introducing thousands of regressions over the years. It did grow into kind of a monster (2000+ lines of POSIX shellscript providing the test framework alone), it can be a bit slow (it can take more than the default 30min on salsa; for me locally it is about 3 minutes), and it has a strange function naming convention (all lowercase, no separator: e.g. insertinstalledpackage). Nobody said you can't make mistakes.

And I made them all: First bug caused by me. First regression with complaints hitting d-devel. First security bug. It was always scary. It still is, especially as simple probability kicks in and the numbers increase, combined with seemingly more hate generated on the internet: The last security bug had people identify me as purposefully malicious. All my contributions should be removed – reading that made me smile.

Lots and lots of things happened since my first patch. git tells me that 174+ people contributed to APT over the years. The all-time top 5 contributors (as of today) are:

  • 2904 commits by Michael Vogt (active mostly as wizard)
  • 2647 commits by David Kalnischkies (active)
  • 1304 commits by Arch Librarian (all retired, see note)
  • 1008 commits by Julian Andres Klode (active)
  • 641 commits by Christian Perrier (retired)

Note that "Arch Librarian" isn't a person, but a conversion artefact: Development started in 1998 in CVS, which was later converted to arch (which eventually turned into bzr), and this CVS→arch conversion preserved the names of the initial team only as CVS call signs in the commit messages. Many of them hence belong to Jason Gunthorpe (jgg). Christian's commits, meanwhile, are oftentimes imports of po files for others, but there is still lots of work involved in this, so that spot is well earned, even if nowadays with git we have the possibility of attributing the translator not only in the changelog but also as author in the commit.

There is a huge gap after the top 5 with runner up Matt Zimmerman with 116 counted commits (but some Arch Librarian commits are his, too). And that gap for me to claim the throne isn't that small either, but I am working on it… 😉︎ I have also put enough distance between me and Julian that it will still take a while for him to catch up even if he is trying hard at the moment.

The next decade will be interesting: Various changes are queuing up in the master branch for a major break in ABI and API and a bunch of new stuff is still in the pipeline or on the drawing board. Some of these things I patched in all these years ago never made it into apt so far: I intend to change that this decade – you are supposed to have read this in "to the moon" style and erupt in a mighty cheer now so that you can't hear the following – time permitting, as so far this is all talk on my part.

The last year(s) saw me not contributing as much as I would have liked due to – pardon my French – crazy shit I will hopefully be able to leave behind this (or at least next) year. I hadn't thought it would show that drastically in the stats, but looking back it is kinda obvious:

  • In year 2009 David made 167 commits
  • In year 2010 David made 395 commits
  • In year 2011 David made 378 commits
  • In year 2012 David made 274 commits
  • In year 2013 David made 161 commits
  • In year 2014 David made 352 commits
  • In year 2015 David made 333 commits
  • In year 2016 David made 381 commits
  • In year 2017 David made 110 commits
  • In year 2018 David made 78 commits
  • In year 2019 David made 18 commits so far

Let's make that number great again this year, as I finally applied and got approved as a DD in 2016 (I didn't want to apply earlier), and decreasing contributions since then (completely unrelated, but still) aren't a proper response! 😉︎

Also: I enjoyed the many UDSes, the DebConfs and other events I got to participate in in the last decade and hope there are many more yet to come!

tl;dr: Looking back at the last decade made me realize that a) I seem to have a high luck stat, b) too few people contribute to apt given that I remain the newest team member and c) I love working on apt for all the things which happened due to it. If only I could do that full-time like I did as part of summer of code…

P.S.: The series APT for … will return next week with a post I had promised months ago.

Planet Linux AustraliaMichael Still: Trigs map

Share

A while ago I had a map of all the trig points in the ACT and links to the posts I’d written during my visits. That had atrophied over time. I’ve just spent some time fixing it up again, and it’s now at https://www.madebymikal.com/trigs_map.html — I hope it’s useful to someone else.

Share

Rondam RamblingsIf a fetus is a person...

Carliss Chatman raises some very interesting questions about the logical consequences of fetal personhood. To which I would like to add: if life begins at conception, and hence an embryo is a person, can I adopt a frozen embryo and write them off as a dependent on my taxes?  No, seriously, I want to know.  This could be very lucrative.

,

Rondam RamblingsThe mother of all buyer's remorse

[Part of an ongoing series of exchanges with Jimmy Weiss.] Jimmy Weiss responded to my post on teleology and why I reject Jimmy's wager (not to be confused with Pascal's wager) nearly a month ago.  I apologize to Jimmy and anyone who has been waiting with bated breath for my response (yeah, right) for the long delay.  Somehow, life keeps happening while I'm not paying attention. So, finally, to

Krebs on SecurityAccount Hijacking Forum OGusers Hacked

Ogusers[.]com — a forum popular among people involved in hijacking online accounts and conducting SIM swapping attacks to seize control over victims’ phone numbers — has itself been hacked, exposing the email addresses, hashed passwords, IP addresses and private messages for nearly 113,000 forum users.

On May 12, the administrator of OGusers explained an outage to forum members by saying a hard drive failure had erased several months’ worth of private messages, forum posts and prestige points, and that he’d restored a backup from January 2019. Little did the administrators of OGusers know at the time, but that May 12 incident coincided with the theft of the forum’s user database, and the wiping of forum hard drives.

On May 16, the administrator of rival hacking community RaidForums announced he’d uploaded the OGusers database for anyone to download for free.

The administrator of the hacking community Raidforums on May 16 posted the database of passwords, email addresses, IP addresses and private messages of more than 113,000 users of Ogusers[.]com.

“On the 12th of May 2019 the forum ogusers.com was breached [and] 112,988 users were affected,” the message from RaidForums administrator Omnipotent reads. “I have uploaded the data from this database breach along with their website source files. Their hashing algorithm was the default salted MD5 which surprised me, anyway the website owner has acknowledged data corruption but not a breach so I guess I’m the first to tell you the truth. According to his statement he didn’t have any recent backups so I guess I will provide one on this thread lmfao.”

The database, a copy of which was obtained by KrebsOnSecurity, appears to hold the usernames, email addresses, hashed passwords, private messages and IP address at the time of registration for approximately 113,000 users (although many of these nicknames are likely the same people using different aliases).

The publication of the OGuser database has caused much consternation and drama for many in the community, which has become infamous for attracting people involved in hijacking phone numbers as a method of taking over the victim’s social media, email and financial accounts, and then reselling that access for hundreds or thousands of dollars to others on the forum.

Several threads on OGusers quickly were filled with responses from anxious users concerned about being exposed by the breach. Some complained they were already receiving phishing emails targeting their OGusers accounts and email addresses. 

Meanwhile, the official Discord chat channel for OGusers has been flooded with complaints and expressions of disbelief at the hack. Members vented their anger at the main forum administrator, who uses the nickname “Ace,” claiming he altered the forum functionality after the hack to prevent users from removing their accounts. One user on the Discord chat summed it up:

“Ace be like:

-not replace broken hard drives, causing the site to time warp back four months
– not secure website, causing user info to be leaked
– disable selfban so people can’t leave”

It’s difficult not to admit feeling a bit of schadenfreude in response to this event. It’s gratifying to see such a comeuppance for a community that has largely specialized in hacking others. Also, federal and state law enforcement investigators going after SIM swappers are likely to have a field day with this database, and my guess is this leak will fuel even more arrests and charges for those involved.

Planet Linux AustraliaMichael Still: Trail run: Tuggeranong Stone Wall loop

Share

The Tuggeranong Stone Wall is a 140 year old boundary between two former stations. It’s also a nice downhill start to a trail run. This loop involves starting at the Hyperdome, following the wall down, and then continuing along to Pine Island before returning. Partially shaded, and with facilities at the Hyperdome and Pine Island. 6km, and 68m vertically.

Share

Planet Linux AustraliaMichael Still: Trail run: Lake Tuggeranong to Kambah Pool (return)

Share

This wasn’t the run I’d planned for this day, but here we are. This runs along the Centenary Trail between Kambah Pool and Lake Tuggeranong. Partially shaded, but also on the quiet side of the ridge line, where you can’t tell that you’re near the city. Don’t take the tempting river ford; there is a bridge a little further downstream! 14.11km and 296m of vertical ascent.

Be careful of mountain bikers on this popular piece of single track. You’re allowed to run here, but some cyclists don’t leave much time to notice other track users.

Share

Planet Linux AustraliaMichael Still: Trail run: Barnes and ridgeline

Share

A first attempt at running to the Barnes and Brett trigs, this didn’t work out quite as well as I’d expected (I ran out of time before I’d hit Brett trig). The area wasn’t as steep as I’d expected, being mostly rolling grazing land with fire trails. Lots of gates and no facilities, but stunning views of southern Canberra from the ridgeline. 11.11km and 421m of vertical ascent.

Share

Planet Linux AustraliaMichael Still: Trail run: Pine Island South to Point Hut with a Hill

Share

This one is probably a little bit less useful to others, as the loop includes a bit more of the suburb than is normal. That said, you could turn this into a suburb avoiding loop quite easily. A nice 11.88km run with a hill climb at the end. A total ascent of 119 metres. There isn’t much shade along the run, but there is some in patches. There are bathrooms at Point Hut and Pine Island.

Be careful of mountain bikers on this popular piece of single track. You’re allowed to run here, but some cyclists don’t leave much time to notice other track users.

Share

Planet Linux AustraliaMichael Still: Trail run: Cooleman Ridge

Share

This run includes Cooleman and Arawang trig points. Not a lot of shade, but a pleasant run. 9.86km and 264m of vertical ascent.

Share

Planet Linux AustraliaMichael Still: Trail running guide: Tuggeranong

Share

I’ve been running on trails more recently (I’m super bored with roads and bike paths), but running on trails makes load management harder — often I’m looking for a run of approximately XX length with no more than YY vertical ascent. So I was thinking, maybe I should just write down the runs that I do, so that over time I create a menu of options for when I need them.

This page documents my Tuggeranong runs.

Name | Distance (km) | Vertical Ascent (m) | Notes | Posts
Cooleman Ridge | 9.78 | 264 | Cooleman and Arawang Trigs. Not a lot of shade and no facilities. | 25 April 2019
Pine Island South to Point Hut with a Hill | 11.88 | 119 | A nice Point Hut and Pine Island loop with a hill climb at the end. Toilets at Point Hut and Pine Island. Not a lot of shade. Beware of mountain bikes! | 21 February 2019
Barnes and ridgeline | 11.11 | 421 | Not a lot of shade and no facilities, but stunning views of southern Canberra. | 2 May 2019
Lake Tuggeranong to Kambah Pool (return) | 14.11 | 296 | Partial shade and great views, but beware the mountain bikes! | 11 May 2019
Tuggeranong Stone Wall loop | 6 | 68 | Partial shade and facilities at the Hyperdome and Pine Island. | 27 April 2019

Share

,

Planet DebianDebian XMPP Team: Debian XMPP Team Starts a Blog

The Debian XMPP Team, the people who package Dino, Gajim, Mcabber, Movim, Profanity, Prosody, Psi+, Salut à Toi, Taningia, and a couple of other packages related to XMPP a.k.a. Jabber for Debian, have this blog now. We will try to post interesting stuff here — when it's ready!

CryptogramFriday Squid Blogging: On Squid Intelligence

Two links.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramWhy Are Cryptographers Being Denied Entry into the US?

In March, Adi Shamir -- that's the "S" in RSA -- was denied a US visa to attend the RSA Conference. He's Israeli.

This month, British citizen Ross Anderson couldn't attend an awards ceremony in DC because of visa issues. (You can listen to his recorded acceptance speech.) I've heard of two other prominent cryptographers who are in the same boat. Is there some cryptographer blacklist? Is something else going on? A lot of us would like to know.

Worse Than FailureError'd: Professionals Wanted

"Searching for 'Pink Tile Building Materials' in Amazon results in a few 'novelty' items sprinkled in, which, to me, isn't a huge surprise," Brian G. wrote, "But, upon closer inspection...professional installation you say?"

 

"Well, at least they're being honest," Josh wrote.

 

Brian writes, "You know, I wonder if 'date math' would qualify as business technology? If it doesn't, they should probably make an exception."

 

"Spotted in Belgium, I can only assume this is Belgian for 'Lorem Ipsum'," writes Robin G.

 

Wouter writes, "Cool! Mozilla has invented time travel just to delete this old Firefox screenshot."

 

"To cancel or...cancel...that is the question...really...that's the question," Peter wrote.

 

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Krebs on SecurityFeds Target $100M ‘GozNym’ Cybercrime Network

Law enforcement agencies in the United States and Europe today unsealed charges against 11 alleged members of the GozNym malware network, an international cybercriminal syndicate suspected of stealing $100 million from more than 41,000 victims with the help of a stealthy banking trojan by the same name.

The locations of alleged GozNym cybercrime group members. Source: DOJ

The indictments unsealed in a Pennsylvania court this week stem from a slew of cyber heists carried out between October 2015 and December 2016. They’re also related to the 2016 arrest of Krasimir Nikolov, a 47-year-old Bulgarian man who was extradited to the United States to face charges for allegedly cashing out bank accounts that were compromised by the GozNym malware.

Prosecutors say Nikolov, a.k.a. “pablopicasso,” “salvadordali,” and “karlo,” was a key player in the GozNym crime group who used stolen online banking credentials captured by GozNym malware to access victims’ online bank accounts and attempt to steal their money through electronic funds transfers into bank accounts controlled by fellow conspirators.

According to the indictment, the GozNym network exemplified the concept of ‘cybercrime as a service,’ in that the defendants advertised their specialized technical skills and services on underground, Russian-language, online criminal forums. The malware was dubbed GozNym because it combines the stealth of a previous malware strain called Nymaim with the capabilities of the powerful Gozi banking trojan.

The feds say the ringleader of the group was Alexander Konovolov, 35, of Tbilisi, Georgia, who controlled more than 41,000 victim computers infected with GozNym and recruited various other members of the cybercrime team.

Vladimir Gorin, a.k.a. “Voland,” “mrv,” and “riddler,” of Orenburg, Russia, allegedly was a malware developer who oversaw the creation, development, management, and leasing of GozNym.

The indictment alleges 32-year-old Eduard Malancini, a.k.a. “JekaProf” and “procryptgroup” from Moldova, specialized in “crypting” or obfuscating the GozNym malware to evade detection by antivirus software.

Four other men named in the indictment were accused of recruiting and managing “money mules,” willing or unwitting people who can be used to receive stolen funds on behalf of the criminal syndicate. One of those alleged mule managers — Farkhad Rauf Ogly Manokhim (a.k.a. “frusa”) of Volgograd, Russia — was arrested in 2017 in Sri Lanka on an international warrant from the United States, but escaped and fled back to Russia while on bail awaiting extradition.

Also charged was 28-year-old Muscovite Konstantin Volchkov, a.k.a. “elvi,” who allegedly provided the spamming service used to disseminate malicious links that tried to foist GozNym on recipients who clicked.

The malicious links referenced in those spam emails were served via the Avalanche bulletproof hosting service, a distributed, cloud-hosting network that for seven years was rented out to hundreds of fraudsters for use in launching malware and phishing attacks. Avalanche was dismantled in Dec. 2016 by a similar international law enforcement action.

The alleged administrator of the Avalanche bulletproof network — 36-year-old Gennady Kapkanov from Poltava, Ukraine — has eluded justice in prior scrapes with the law: During the Avalanche takedown in Dec. 2016, Kapkanov fired an assault rifle at Ukrainian police who were trying to raid his apartment.

After that incident, Ukrainian police arrested Kapkanov and booked him on cybercrime charges. But a judge later ordered him to be released, saying the prosecution had failed to file the proper charges. The Justice Department says Kapkanov is now facing prosecution in Ukraine for his role in providing bulletproof hosting services to the GozNym criminal network.

The five Russian nationals charged in the case remain at large. The FBI has released a “wanted” poster with photos and more details about them. The Justice Department says it is working with authorities in Georgia, Ukraine and Moldova to build prosecutions against the defendants in those countries.

Nikolov entered a guilty plea in federal court in Pittsburgh on April 10, 2019, on charges relating to his participation in the GozNym conspiracy. He is scheduled to be sentenced on Aug. 30, 2019.

It’s good to see this crime network being torn apart, even if many of its key members have yet to be apprehended. These guys caused painful losses for many companies — mostly small businesses — that got infected with their malware. Their activities and structure are remarkably similar to those of the “Jabberzeus” crime gang in Ukraine that siphoned $70 million (out of an attempted $220 million) from hundreds of U.S.-based small to mid-sized businesses several years ago.

The financial losses brought about by that gang’s string of cyberheists — or at least the few dozen heists documented in my series Target: Small Business — often caused victim companies to lay off employees, and in some cases go out of business entirely.

A copy of the GozNym indictment is here (PDF).

CryptogramMore Attacks against Computer Automatic Update Systems

Last month, Kaspersky discovered that Asus's live update system was infected with malware, an operation it called Operation Shadowhammer. Now we learn that six other companies were targeted in the same operation.

As we mentioned before, ASUS was not the only company used by the attackers. Studying this case, our experts found other samples that used similar algorithms. As in the ASUS case, the samples were using digitally signed binaries from three other Asian vendors:

  • Electronics Extreme, authors of the zombie survival game called Infestation: Survivor Stories,
  • Innovative Extremist, a company that provides Web and IT infrastructure services but also used to work in game development,
  • Zepetto, the South Korean company that developed the video game Point Blank.

According to our researchers, the attackers either had access to the source code of the victims' projects or they injected malware at the time of project compilation, meaning they were in the networks of those companies. And this reminds us of an attack that we reported on a year ago: the CCleaner incident.

Also, our experts identified three additional victims: another video gaming company, a conglomerate holding company and a pharmaceutical company, all in South Korea. For now we cannot share additional details about those victims, because we are in the process of notifying them about the attack.

Me on supply chain security.

CryptogramAnother Intel Chip Flaw

Remember the Spectre and Meltdown attacks from last year? They were a new class of attacks against complex CPUs, finding side channels in optimization techniques that allow hackers to steal information. Since their discovery, researchers have found additional similar vulnerabilities.

A whole bunch more have just been discovered.

I don't think we're finished yet. A year and a half ago I wrote: "But more are coming, and they'll be worse. 2018 will be the year of microprocessor vulnerabilities, and it's going to be a wild ride." I think more are still coming.

Planet DebianMatrix on Debian blog: Welcome to Matrix on Debian blog

This is the first blog post on this Matrix on Debian blog. The Debian Matrix team will be regularly posting here updates on the progress of the packaging work we do, and the overall status of Matrix.org software in Debian.

Come chat to us in the Matrix room (#debian-matrix:matrix.org) we created!

Planet DebianJonathan Dowland: PhD Proposal

For my PhD, I'm currently working on my "1st" year Progression Report. The last formal deliverable I produced was my Project Proposal a (calendar) year ago. I've just realised I hadn't shared that here, so here we go, in the hope that this is interesting and/or useful to someone: PhD Proposal - Jon Dowland.pdf

When I started working on that, I cast around for examples of other people's, and I've attempted to do much the same for my Progression Report. In both cases I was unable to find many examples of other people's proposals or reports. The exact format of these things is likely specific to your particular institution, or even your academic unit within it, so a document produced for one institution's expectations might not be directly applicable to another. (I didn't want to directly apply such a thing, of course.) And if you do find a sample, you have no idea whether it was judged to be a particularly good or bad one by those who received it (though you can make your own judgements). This is true of my own Proposal too.

In a "normal", full-time PhD, you would likely produce a proposal within a few months of starting, and your first Progression Report towards the end of your first academic (not calendar) year: so, a mere 6 months or so later. Since I am doing things part-time, this is all stretched out: I submitted the proposal in March last year, and my Progression Report is due next month, in June. Looking back at the Proposal now (for the first time in a while, I must admit), it's remarkable to me how "far" the formulation of my goals from then is compared to now.

Once I've had my Progression report passed I hope to share it here, too.

Planet DebianJonathan Dowland: RHEL8-based OpenShift OpenJDK containers

I'm pleased to announce that something I've been working on for the last 6 or so months is now public: Red Hat Enterprise Linux 8-based OpenJDK containers, for use with OpenShift. There are two flavours, one with OpenJDK 1.8 (8) and another for OpenJDK 11. These join the existing two RHEL7-based images.

If you have a Red Hat OpenShift subscription, follow the instructions in this Errata to update your image streams. The new images are named:

registry.redhat.io/openjdk/openjdk-8-rhel8
registry.redhat.io/openjdk/openjdk-11-rhel8

Last week Red Hat announced the Universal Base Image initiative: RHEL-based container images, intended to be a suitable base for other images to build upon, and available without a Red Hat subscription under a new EULA.

Our new OpenShift OpenJDK RHEL8 containers are built upon the UBI, as are (I believe) any RHEL8-based containers, but are not currently available under the UBI EULA as we incorporate content from the regular RHEL8 repositories not present in the UBI. If a UBI-based OpenJDK image, distributed under the UBI terms, would be interesting to you, please get in touch! What could it look like? Small, or kitchen-sink? Would you want builder content in it, or pure run-time? What environment would you want to use it in: OpenShift, or something else?

Worse Than FailureCodeSOD: True Confession: Without a Map

Today, we have a special true confession from Bruce, who wrote some bad code, but at least knows it’s bad.

Bruce is a C# developer. Bruce is not a web developer. Someone around the office, though, had read an article about how TypeScript was closer to “real” languages, like C#, and asked Bruce to do some TypeScript work.

Now, in C# parlance, your key/value pair data structure is called a Dictionary. So, when Bruce got stuck on how to store key/value pairs in TypeScript, he googled “typescript dictionary”, and got no useful results.

Disappointed, Bruce set out to remedy this absence:

export class KeyValuePair<TKey,TValue> {
    Key: TKey;
    Value: TValue;
    constructor (key: TKey, value: TValue) {
        this.Key = key;
        this.Value = value;
    }
}
export class Dictionary<TKey, TValue>{
    private Collection: Array<KeyValuePair<TKey, TValue>>
    private IndexMap: Map<TKey, number>
    private index: number;
    public tryAdd(key: TKey, value: TValue): boolean {
        if (this.containsKey(key)) {
            return false;
        } else {
            var kv = new KeyValuePair(key, value);
            this.IndexMap.set(kv.Key, this.Collection.push(kv) - 1);
            return true;
        }
    }
    public tryRemove(key: TKey): boolean {
        var i = this.indexOf(key);
        if (i == -1) {
            return false;
        } else {
            this.Collection.splice(i, 1);
            this.reMap(i, key);
            return true;
        }
    }
   public indexOf(key: TKey): number {
        if (this.containsKey(key)) {
            return this.IndexMap.get(key);
        } else {
            return -1;
        }
    }
    public containsKey(key: TKey): boolean {
        if (this.IndexMap.has(key)) {
            return true;
        } else {
            return false;
        }
    }
   private reMap(index: number, key: TKey) {
        this.index = index;
        this.IndexMap.delete(key);
        this.IndexMap.forEach((value: number, key: TKey) => {
            if (value > this.index) {
                this.IndexMap.set(key, value - 1);
            }
        });
    }

//the rest is recreating C# dictionary methods: getKeys(),getValues(), clear(), etc.
}

The dictionary implementation stores an array of key/value pairs. Now, it’d be expensive to search every item in the collection to find the appropriate key/value pair, so Bruce knew he needed a faster way to locate a key’s position. So he used a Map to store pairs of keys and indexes into the array.

He spent an entire afternoon coding this masterpiece before realizing that Maps stored key/value pairs… just like a dictionary.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

TEDTED original podcast The TED Interview kicks off Season 2

TED returns with the second season of The TED Interview, a long-form podcast series that features Chris Anderson, head of TED, in conversation with leading thinkers. The podcast is an opportunity to reconnect with renowned speakers and dive deeper into their ideas within a different global climate. This season’s guests include Bill Gates, Monica Lewinsky, Tim Ferriss, Susan Cain, Yuval Noah Harari, David Brooks, Amanda Palmer, Kai-Fu Lee, Sylvia Earle, Andrew McAfee and Johann Hari. Plus, a bonus episode with Roger McNamee that was recorded live at TED2019.

Listen to the first episode with Bill Gates now on Apple Podcasts.

In its first season, The TED Interview played host to extraordinary conversations — such as the writer Elizabeth Gilbert on the death of her partner, Rayya Elias; Sir Ken Robinson on the education revolution; and Ray Kurzweil on what the future holds for humanity.

Season two builds on this success with new ideas from some of TED’s most compelling speakers. Listeners can look forward to hearing from Bill Gates on the future of technology and philanthropy; musician Amanda Palmer on how the future of creativity means asking for what you want; Susan Cain on introversion; and other notable past speakers.

“Ideas are not static — they don’t land perfectly formed in an unchanging world,” said Chris Anderson. “As times change, opinions shift and new research is published, ideas must be iterated on. The TED Interview is a remarkable platform where past speakers can further explain, amplify, illuminate and, in some cases, defend their thinking. Season two listeners can expect a front-row seat as we continue to explore the theory behind some of TED’s most well-known talks.”

The TED Interview launches today and releases new episodes every Wednesday. It is available on Apple Podcasts, the TED Android app or wherever you like to listen to podcasts. Season 2 features 12 episodes, each roughly an hour long. Collectively, the Season 2 speakers have garnered over 100 million views through their TED Talks.

The TED Interview is proudly sponsored by Klick Health, the world’s largest independent health agency. They use data, technology and creativity to help patients and healthcare professionals learn about and access life-changing therapies.

TED’s content programming extends beyond its signature TED Talk format with six original podcasts. Overall TED’s podcasts were downloaded over 420 million times in 2018 and have been growing 44% year-over-year since 2016. Among others, The TED Interview joins notable series like Sincerely, X, where powerful ideas are shared anonymously, which recently launched its second season exclusively on the Luminary podcast app.

Krebs on SecurityA Tough Week for IP Address Scammers

In the early days of the Internet, there was a period when Internet Protocol version 4 (IPv4) addresses (e.g. 4.4.4.4) were given out like cotton candy to anyone who asked. But these days companies are queuing up to obtain new IP space from the various regional registries that periodically dole out the prized digits. With the value of a single IP hovering between $15 and $25, those registries are now fighting a wave of shady brokers who specialize in securing new IP address blocks under false pretenses and then reselling to spammers. Here’s the story of one broker who fought back in the courts, and lost spectacularly.

On May 14, South Carolina U.S. Attorney Sherri Lydon filed criminal wire fraud charges against Amir Golestan, alleging he and his Charleston, S.C. based company Micfo LLC orchestrated an elaborate network of phony companies and aliases to secure more than 735,000 IPs from the American Registry for Internet Numbers (ARIN), a nonprofit which oversees IP addresses assigned to entities in the U.S., Canada, and parts of the Caribbean.

Interestingly, Micfo itself set this process in motion late last year when it sued ARIN. In December 2018, Micfo’s attorneys asked a federal court in Virginia to issue a temporary restraining order against ARIN, which had already told the company about its discovery of the phony front companies and was threatening to revoke some 735,000 IP addresses. That is, unless Micfo agreed to provide more information about its operations and customers.

At the time, many of the IP address blocks assigned to Micfo had been freshly resold to spammers. Micfo ultimately declined to provide ARIN the requested information, and as a result the court denied Micfo’s request (the transcript of that hearing is instructive and amusing).

But by virtue of the contract Micfo signed with ARIN, any further dispute had to be settled via arbitration. On May 13, that arbitration panel ordered Micfo to pay $350,000 for ARIN’s legal fees and to cough up any of those 735,000 IPs the company hadn’t already sold.

According to the criminal indictment in South Carolina, in 2017 and 2018 Golestan sold IP addresses using a third party broker:

“Golestan sold 65,536 IPv4 addresses for $13 each, for a total of $851,896,” the indictment alleges. “Golestan also organized a second transaction for another 65,536 IP addresses, for another approximately $1 million. During this same time period, Golestan had a contract to sell 327,680 IP addresses at $19 per address, for a total of $6.22 million” [this last transaction would be blocked.]

The various front companies alleged to have been run by Micfo and Amir Golestan.

Mr. Golestan could not be immediately reached for comment. Golestan’s attorney in Micfo’s lawsuit against ARIN declined to comment on either the criminal charges or the arbitration outcome. Calls to nearly a dozen of the front companies named in the dispute mostly just rang and rang with no answer, or went to voicemail boxes that were full.

Stephen Ryan is a Washington, D.C.-based attorney who represented ARIN in the dispute filed by Micfo. Ryan said this was the first time ARIN’s decision to revoke IP address space resulted in a court battle — let alone arbitration.

“We have revoked addresses for fraud before, but that hasn’t previously resulted in litigation,” Ryan said. “The interesting thing here is that they litigated this for five months.”

According to a press release by ARIN, “Micfo obtained and utilized 11 shelf companies across the United States, and intentionally created false aliases purporting to be officers of those companies, to induce ARIN into issuing the fraudulently sought IPv4 resources and approving related transfers and reassignments of these addresses. The defrauding party was monetizing the assets obtained in the transfer market, and obtained resources under ARIN’s waiting list process.”

“This was an elaborate operation,” said Ryan, a former federal prosecutor. “All eleven of these front companies for Micfo are still up on the Web, where you see all these wonderful people who allegedly work there. And meanwhile we were receiving notarized affidavits in the names of people that were false. It made it much more interesting to do this case because it created 11 states where they’d violated the law.”

The criminal complaint against Golestan and Micfo (PDF) includes 20 counts of wire fraud associated with the phony companies allegedly set up by Micfo.

John Levine, author of The Internet for Dummies and a member of the security and stability advisory committee at ICANN, said ARIN does not exactly have a strong reputation for going after the myriad IP address scammers allegedly operating in a similar fashion as Micfo.

“It is definitely the case that for a long time ARIN has not been very aggressive about checking the validity of IP address applications and transfers, and now it seems they are somewhat better than they used to be,” Levine said. “A lot of people have been frustrated that ARIN doesn’t act more like a regulator in this space. Given how increasingly valuable IPv4 space is, ARIN has to be more vigilant because the incentive for crooks to do this kind of thing is very high.”

Asked if ARIN would have the stomach and budget to continue the fight if other IP address scammers fight back in a similar way, Ryan said ARIN would not back down from the challenge.

“If we find a scheme or artifice to defraud and it’s a substantial number of addresses and its egregious fraud, then yes, we have a reserve set aside for litigation and we can and will use it for cases like this,” Ryan said, adding that he’d welcome anyone with evidence of similar schemes to come forward. “But a better strategy is not to issue it and never have to go back and revoke it, and we’re good at that now.”

Planet DebianMarkus Koschany: My Free Software Activities in April 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • This was a very quiet month compared to pre-freeze time. I reported three security vulnerabilities for Teeworlds (#927152) which were later fixed by Dylan Aïssi. Thank you.
  • I also reviewed and sponsored a new revision of OpenMW for Bret Curtis. I’m not sure why he didn’t ask the release team for an unblock but there may be a reason.

Debian Java

  • I fixed a security vulnerability in robocode (#926088) and asked for an unblock.
  • I corrected a mistake in solr-tomcat and learned that if you want to override a service file of another package (tomcat9), the conf file has to be installed into /etc/systemd/system/tomcat9.service.d/ instead of /etc/systemd/system/tomcat9.d. *sigh*
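
For anyone hitting the same thing, the drop-in mechanism looks like this (the directory name is the part that matters; the setting shown is purely illustrative, not from solr-tomcat):

```
# /etc/systemd/system/tomcat9.service.d/override.conf
[Service]
# Any [Service] directive placed here supplements the packaged
# tomcat9.service without modifying it. Example setting only:
Environment=JAVA_OPTS=-Xmx512m
```

A systemctl daemon-reload is needed afterwards for systemd to pick up the change.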

Misc

  • Last month I wrote about the challenges of the ublock-origin addon (#926586). We came to the conclusion that we can no longer provide one version for Firefox and Chromium, but that we don’t have to create two binary packages either. Now we use symlinks and two different directories, and hopefully this will solve all the troubles we had before. It is not a great solution but hopefully we can maintain the addon without relying on patches. Thanks to Michael Meskes who implemented the changes. I will probably upload a new version to experimental in May, so that people can try it out and report back.

Debian LTS

This was my thirty-eighth month as a paid contributor and I have been paid to work 17.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 29.04.2019 until 05.05.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in rebar, filezilla, lucene-solr, librecad, apparmor, phpbb3, jakarta-jmeter, jetty8, jetty, php-imagick and node-tar.
  • DLA-1753-2. Issued a regression update for proftpd-dfsg after it became clear that neither version 1.3.5.e nor 1.3.6 was a way forward for addressing the memory leaks, because those versions also introduced new bugs that affected sftp setups negatively (#926719). I resolved these problems by backporting the patches for the memory leaks and by reverting to version 1.3.5 again.
  • DLA-1773-1. Issued a security update for signing-party fixing 1 CVE.
  • DLA-1774-1. Issued a security update for otrs2 fixing 1 CVE.
  • DLA-1775-1. Issued a security update for phpbb3 fixing 1 CVE.
  • DLA-1776-1. Issued a security update for librecad fixing 1 CVE.
  • DLA-1785-1. Issued a security update for imagemagick together with Hugo Lefeuvre (3 CVE) fixing 50 CVE in total.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 "Wheezy". This was my eleventh month and I have been paid to work 14.5 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 15.04.2019 until 21.04.2019 and I triaged CVE in openjdk7, php5 and libvirt.
  • ELA-72-2. Issued a regression update for jasper which corrected the patch for CVE-2018-19542.
  • ELA-109-1. Issued a security update for jquery fixing 1 CVE.
  • ELA-111-1. Issued a security update for linux and linux-latest fixing 24 CVE.
  • ELA-117-1. Issued a security update for apache2 fixing 2 CVE and investigated four more CVE which I triaged as not-affected.

Thanks for reading and see you next time.

Cory Doctorow“What does it mean to keep the internet free?” An in-depth discussion with Why? on North Dakota Public Radio

A couple of weeks ago, I recorded a long, in-depth discussion on the subject of “What does it mean to keep the internet free” with Jack Russell Weinstein from Why?, the Institute for Philosophy in Public Life’s program on North Dakota Public Radio (MP3). Weinstein and I ranged pretty far and wide about what internet freedom really means, what threatens it, and how we can defend it.

Planet DebianJonathan McDowell: Go Baby Go

I’m starting a new job next month and their language of choice is Go. Which means I have a good reason to finally get around to learning it (far too many years after I saw Marga talk about it at DebConf). For that I find I need a project - it’s hard to find the time to just do programming exercises, whereas if I’m working towards something it’s a bit easier. Naturally I decided to do something home automation related. In particular I bought a couple of Xiaomi Mijia Temperature/Humidity sensors a while back which also report via Bluetooth. I had a set of shell scripts polling them every so often to get the details, but it turns out they broadcast the current status every 2 seconds. Passively listening for that is a better method as it reduces power consumption on the device - no need for a 2 way handshake like with a manual poll. So, the project: passively listen for BLE advertisements, make sure they’re from the Xiaomi device and publish them via MQTT every minute.
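
The overall shape of that loop - keep the latest reading per sensor, flush a snapshot once a minute - can be sketched in plain Go, with the BLE listener and the MQTT client stubbed out behind a callback (all names here are illustrative, not taken from the actual tool):

```go
package main

import (
	"sync"
	"time"
)

// store keeps the most recent reading per sensor address; BLE
// advertisements arrive every couple of seconds, far more often
// than we want to publish.
type store struct {
	mu     sync.Mutex
	latest map[string]float64
}

func (s *store) record(addr string, temp float64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.latest == nil {
		s.latest = make(map[string]float64)
	}
	s.latest[addr] = temp // later advertisements overwrite earlier ones
}

// snapshot copies the current state so the publisher never holds the
// lock while talking to the broker.
func (s *store) snapshot() map[string]float64 {
	s.mu.Lock()
	defer s.mu.Unlock()
	out := make(map[string]float64, len(s.latest))
	for k, v := range s.latest {
		out[k] = v
	}
	return out
}

// publishEvery flushes a snapshot at each tick; publish stands in for
// the real MQTT call.
func publishEvery(s *store, interval time.Duration, publish func(map[string]float64)) {
	for range time.Tick(interval) {
		publish(s.snapshot())
	}
}
```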

One thing that puts me off new languages is when they have a fast moving implementation - telling me I just need to fetch the latest nightly to get all the features I’m looking for is a sure fire way to make me hold off trying something. Go is well beyond that stage, so I grabbed the 1.11 package from Debian buster. That’s only one release behind current, so I felt reasonably confident I was using a good enough variant. For MQTT the obvious choice was the Eclipse Paho MQTT client. Bluetooth was a bit trickier - there were more options than I expected (including one by Paypal), but I settled on go-ble (sadly now in archived mode), primarily because it was the first one where I could easily figure out how to passively scan without needing to hack up any of the library code.

With all those pieces it was fairly easy to throw together something that does the required steps in about 200 lines of code. That seems comparable to what I think it would have taken in Python, and to a large extent the process felt a lot closer to writing something in Python than in C.

Now, this wasn’t a big task in any way, but it was a real problem I wanted to solve and it brought together various pieces that helped provide me with an introduction to Go. I’ve a lot more to learn, but I figure I should write up my initial takeaways. There’s no mention of goroutines or channels or things like that - I’m aware of them, but I haven’t yet had a reason to use them so don’t have an informed opinion at this point.

I should point out I read Rob Pike’s Go at Google talk first, which helped understand the mindset behind Go a lot - it’s not trying to solve the same problem as Rust, for example, but very much tailored towards a set of the problems that Google see with large scale software development. Also I’m primarily coming from a background in C and C++ with a bit of Perl and Python thrown in.

The Ecosystem is richer than I expected

I was surprised at the variety of Bluetooth libraries available to me. For a while I wasn’t sure I was going to find one that could do what I needed without hackery, but most of the Python BLE modules have the same problem.

Static binaries are convenient

Go builds a mostly static binary - my tool links only against various libraries from libc, with the Bluetooth and MQTT Go modules statically linked into the executable file. With my distro-minded head on I object to this; it means I need a complete rebuild in case of any modification to the underlying modules. However the machine I’m running the tool on is different from the machine I developed on, and there’s no doubt that being able to copy a single binary over, rather than having to worry about all the supporting bits as well, is a real time saver.

The binaries are huge

This is the flip side of static binaries, I guess. My tool is a 7.6MB binary file. That’s not a problem on my AMD64 server, but even though Go seems to have Linux/MIPS support I doubt I’ll be running things built using it on my OpenWRT router. Memory usage seems sane enough, but that size of file is a significant chunk of the available flash storage for small devices.

Module versioning isn’t as horrible as I expected

A few years back I attended a Go talk locally and asked a question about module versioning and the fact that by default modules were pulled directly from Git repositories, seemingly without any form of versioning. The speaker admitted that their example code had in fact failed to compile the previous day because of a change upstream that changed an API. These days things seem better; I was pointed at go mod and in particular setting GO111MODULE=on for my 1.11 compiler, and when I first built my code Go created a go.mod with a set of versioned dependencies. I’m still wary of build systems that automatically grab code from the internet, and the pinning of versions conflicts with an ability to be able to automatically rebuild and pick up module security fixes, but at least there seems to be some thought going into this these days.
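
For reference, the first-build workflow looks roughly like this (the module path and version strings below are placeholders, not the tool's real ones):

```
$ GO111MODULE=on go build     # first build writes go.mod with pinned versions

$ cat go.mod
module example.com/mijia-mqtt           // placeholder module path

require (
    github.com/eclipse/paho.mqtt.golang vX.Y.Z    // placeholder versions
    github.com/go-ble/ble vX.Y.Z
)
```

Subsequent builds reuse the pinned versions until you explicitly update them, which is what protects you from the API-changed-overnight problem.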

I love maps

Really this is more a generic thing I miss when I write C. Perl hashes, Python dicts, Go maps. An ability to easily stash things by arbitrary reference without having to worry about reallocation of the holding structure. I haven’t delved into other features Go has over C particularly yet so I’m sure there’s more to take advantage of, but maps are a good start.
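
The kind of thing that takes a hand-rolled hash table in C is a short idiomatic pattern in Go (illustrative example, not from the tool):

```go
package main

// countBy tallies occurrences by arbitrary string key; the map grows as
// needed, with no realloc-style bookkeeping on the caller's side.
func countBy(items []string) map[string]int {
	counts := make(map[string]int)
	for _, it := range items {
		counts[it]++ // missing keys read as the zero value, so no init dance
	}
	return counts
}
```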

The syntax is easy enough

The syntax for Go felt comfortable enough to me. I had to look a few bits and pieces up, but nothing grated. go fmt is a nice touch; I like the fact that modern languages are starting to have a well defined preferred style. It’s a long time since I wrote any Pascal, but as a C programmer things made sense.

I’m still not convinced about garbage collection

One of the issues I hit while developing my tool was that it would sit and spin and take more and more memory. This turned out to be a combination of some flaky Bluetooth hardware returning odd responses, and my failure to handle the returned error message. Ultimately this resulted in a resource leak causing the growing memory use. This would still have been possible without garbage collection, but I think not having to think about memory allocation/deallocation made me more complacent. Relying on the garbage collector to free up resources means you have to be sure nothing is holding a reference any more (even if it won’t use it). I think it will take further time with Go development to fully make my mind up, but for now I’m still wary.
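A toy illustration of that failure mode (invented names, not the actual tool): if responses that came back with an error are still appended to a list, the collector can never reclaim them, and memory grows even though nothing was "leaked" in the C sense.

```go
package main

import (
	"errors"
	"fmt"
)

// response stands in for a parsed hardware reply; the payload makes the
// cost of keeping one alive visible.
type response struct{ payload [1024]byte }

// read simulates flaky hardware: it returns data and, sometimes, an error.
func read(flaky bool) (*response, error) {
	if flaky {
		return &response{}, errors.New("odd response")
	}
	return &response{}, nil
}

// collect polls n times. With checkErrors false (the bug), every response
// stays referenced forever; with it true, errored ones are dropped and the
// GC is free to reclaim them.
func collect(n int, checkErrors bool) []*response {
	var kept []*response
	for i := 0; i < n; i++ {
		r, err := read(i%2 == 0) // every other read is flaky
		if checkErrors && err != nil {
			continue // drop the reference; the GC can now free it
		}
		kept = append(kept, r)
	}
	return kept
}

func main() {
	fmt.Println(len(collect(10, false)), len(collect(10, true))) // prints "10 5"
}
```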

Code, in the unlikely event it’s helpful to anyone, is on GitHub.

CryptogramWhatsApp Vulnerability Fixed

WhatsApp fixed a devastating vulnerability that allowed someone to remotely hack a phone by initiating a WhatsApp voice call. The recipient didn't even have to answer the call.

The Israeli cyber-arms manufacturer NSO Group is believed to be behind the exploit, but of course there is no definitive proof.

If you use WhatsApp, update your app immediately.

Cory DoctorowNaked Capitalism reviews Radicalized

Naked Capitalism is one of my favorite sites, both for its radical political commentary and the vigorous discussions that follow from it; now, John Siman has posted a review of my latest book, Radicalized, which collects four intensely political science fiction stories about our present day and near future.

Siman’s review frames Radicalized as a critique of neoliberalism, which is just right: from the story Unauthorized Bread, about the use of DRM-locked appliances to make the lives of refugees in subsidized housing miserable, to the title story Radicalized, which supposes that men who watch their loved ones die slowly after they’re denied treatment by their insurers might start murdering health insurance execs and the politicians they’ve purchased, and that if the men doing the killing are white and respectable enough, America might not immediately brand them as terrorists.

And the proof of the devil’s active existence in the Neoliberal USA is in the details, which Doctorow gets right in a way that is enrapturing in its precision: Here is the alluring beauty that arises from staring squarely at and studying what is most abhorrent.

The reigning devil is, of course, the Neoliberal dispensation by which the USA has been consumed for going on four decades now, in whose workshop the country has been purposefully divided into an oligarchy consisting of the billionaires and their Creative Class, hipsterocratic lieutenants on the one hand, and the lumpen deplorables and the immigrants of ambiguous documentation and the vestigial middle class and the poor of many colors on the other.

And Doctorow sees all this — or at least describes it — better than just about anybody.

“Who Says Violence Doesn’t Solve Anything?” A Review of Radicalized: Four Tales of Our Present Moment by Cory Doctorow

LongNowWhat Trees Tell Us

The rings of centuries-old trees are offering scientists a more complete picture of climate change and the role of humans in causing it.

Trees, it seems, are giant organic recording devices that contain information about past climate, civilizations, ecosystems and even galactic events, much of it many thousands of years old.

In recent years, the techniques for extracting information from tree rings have been honed and expanded. New technologies and techniques are able to pry a much deeper and wider range of information out of trees.

Via The New York Times

Planet DebianSteinar H. Gunderson: Bug fest

Yesterday:

It's a great time to be alive! Honorable mention goes to Alabama's new abortion laws. :-/

Worse Than FailureA Problem in the Backend

Gary works at a medium-sized company: big enough that workers specialize in their duties, but small enough to know people in other departments, which makes turf wars a little more close and personal. Danger close. Most of the departments see themselves as part of a team, but a few individuals see themselves as McCarthy, convinced that they alone will save the company and defeat its enemies (who are all spies sent to destroy the company from the inside).

One of these individuals is named Eric. Eric is close to a Kevin. Eric is the front-end developer, and neither likes nor cares about what happens on the backend. Whenever Eric has an issue, he blames the backend. CSS rendering glitch? Backend problem. Browser crash? That’s a backend problem. Slow UI, even when all the data is cached client-side? Definitely a backend problem. Gary used to get mad, but now knows that Eric is so dumb that he doesn’t even know how dumb he is.

Eric grates on people’s nerves. Since nothing is his problem, he doesn’t seem to have any work, so he bugs backend developers like Gary with terrible jokes. A typically Eric joke:

“Do you know why they call back end developers back end,” Eric asked.
“No?” Gary questioned.
“Because you look like your back end!”
“ …?…ha”

Another typically Eric joke is taping up backend developers’ work-in-progress on the men’s restroom stall. Gary knows it was Eric, because he came to ask for the connection details for the color printer (the printer nearest Eric is grayscale only).

The first telling is almost funny. The second telling is less so. By the 100th joke, told while Gary is trying to debug a large Perl regex, Gary is far less inclined to be sympathetic.

Eric and Gary have had a couple of danger close incidents. The first involved an issue where Eric didn’t read the API naming conventions. He wrote front-end code with a different naming convention, and insisted that Gary change a variable name in the API. That variable is referenced twice on his front end and in over 10,000 lines of backend code.

The most recent danger close incident involved the Big Boss. The Big Boss knows how Eric can be, so he generally gives Eric some time to find out the problem, but features need to ship. Eventually, the Big Boss calls a meeting.

“Hey Gary, Eric, I don’t know whose fault it is, but when I log in as a superuser, log out and log back in as a limited user, I still see data for a superuser. This is an information security issue. Now I’m not sure which part is causing this, but would you know?” asked Big Boss.

“I’m sure it’s the backend,” Eric proclaimed.

“100% sure it’s a backend issue?” the Big Boss asked, to help give Eric an out.

“I only display what the backend returns. It must be what they are returning to me and not checking the user credentials,” Eric stated as the law of the universe. This recalled another joke of his: the front end is a pipe to the backend and the backend is the crap that the developers put into it.

“So you are positive that it’s a back end issue?” Big Boss asked.

“I mean I can even show you right now how to test it. If I’m sending different identities to the backend, then the backend should reply with different sets of data,” said Eric.

Eric grabbed the boss’s mouse and started clicking around on his computer. After a moment, he started shouldering the boss further away from his computer.

“Aha, see? It’s definitely different identities, but we see the same set of data, which as I’ve said is FROM THE BACKEND. You have to fix the backend, Gary,” Eric said.

Gary watched this, silently. He already knew exactly what was happening, and was just waiting to hang Eric out to dry. “Hey boss, can you try a couple things? Can you disable the browser cache and refresh the page?”

The Boss cleared and disabled the cache and refreshed. The Boss logged back in several more times, under different identities, and everything was correct. They re-enabled the cache, and the buggy behavior came back. Clearly, the front end was configured to aggressively cache on the client side, and Gary said as much.

“But it needs to be that way for performance…,” whined Eric. “Because the backend is so slow!”

“Well, it fixes the issue,” the Big Boss said. “So, Eric, fix that front-end caching issue, please.”


Planet DebianKeith Packard: snek-neopixel

Snek and Neopixels


Adafruit sells a bunch of things using the Neopixel name that incorporate Worldsemi WS2812B full-color LEDs with built-in drivers. These devices use a 1-wire link to program a 24-bit rgb value and can be daisy-chained to connect as many devices as you like using only one GPIO.

Bit-banging Neopixels

The one-wire protocol used by Neopixels has three signals:

  • Short high followed by long low for a 0 bit
  • Long high followed by a short low for a 1 bit
  • Really long low for a reset code

Short pulses are about 400ns, long pulses are around 800ns. The reset pulse is anything over about 50us.

I'd like to use some nice clocked signal coming out of the part to generate these pulses. A SPI output would be ideal; set the bit rate to 400ns and then send three SPI bits for each LED bit, either 100 or 110. Alas, none of the boards I've got connect the Neopixels to a pin that can be used for SPI MOSI.
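Sketched out, that 3-for-1 expansion is simple (illustrative Go rather than the firmware's C; the function name is mine): each LED bit becomes the SPI pattern 110 or 100 at a 400ns SPI bit clock.

```go
package main

import "fmt"

// expandByte converts one WS2812B data byte into three SPI bytes, MSB
// first: a "1" bit becomes 110 (long high, short low) and a "0" bit
// becomes 100 (short high, long low), each SPI bit lasting ~400ns.
func expandByte(b byte) [3]byte {
	var bits uint32
	for i := 7; i >= 0; i-- {
		if (b>>uint(i))&1 == 1 {
			bits = bits<<3 | 6 // 0b110
		} else {
			bits = bits<<3 | 4 // 0b100
		}
	}
	return [3]byte{byte(bits >> 16), byte(bits >> 8), byte(bits)}
}

func main() {
	out := expandByte(0xFF)
	fmt.Printf("%08b %08b %08b\n", out[0], out[1], out[2]) // prints "11011011 01101101 10110110"
}
```

Clocking three such SPI bytes per LED byte, followed by more than 50us of idle low, would reproduce the waveform without any cycle-counted GPIO code or disabled interrupts.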

As a fallback, I tried using DMAC to toggle the GPIO outputs. Alas, on the SAMD21G part included in these boards, the DMAC controller can't actually write to the GPIO control registers. There's a missing connection inside the chip.

So, like all of the examples I found, I fell back to driving the GPIO registers directly with the processor, relying on a carefully written sequence of operations to get the timing within the tolerance required by the Neopixels. I have to disable interrupts during this process to avoid messing up the timing though.

Current Snek Neopixel API

I looked at the Circuit Python Neopixel API to see if there was anything I could adapt for Snek. That API uses 3-element tuples for the R,G,B values, and then places those in a list, one for each pixel in the chain. That seemed like a good idea. However, that API also has a lot of allocation churn, with new colors being created in newly allocated lists. Doing that with Snek would probably be too slow as Snek uses a garbage collector for allocation.

So, we'll allow mutable lists inside of a list or tuple, then Neopixel colors can be changed by modifying the value within the per-Neopixel lists.

Snek doesn't have objects, so we'll just create a function to send color data for a list of Neopixels out a pin. We'll use the existing Snek GPIO function, talkto, to select the pin. Finally, I'm using color values from 0-1 instead of 0-255 to make this API work more like the other analog interfaces.

> pixels = ([0.2, 0, 0],)
> talkto(NEOPIXEL)
> neopixel(pixels)

That makes the first Neopixel a not-quite-blinding red. Now we can turn it green with:

> pixels[0][0] = 0
> pixels[0][1] = 0.2
> neopixel(pixels)

You can, of course, use tuples like with Circuit Python:

> pixels = [(0.2, 0, 0)]
> talkto(NEOPIXEL)
> neopixel(pixels)
> pixels[0] = (0, 0.2, 0)
> neopixel(pixels)

This does allocate a new list though.

Snek on Circuit Playground Express

As you can see in the pictures above, Snek is running on the Adafruit Circuit Playground Express. This board has a bunch of built-in hardware. At this point, I've got the buttons, switches, lights and analog input sensors (temperature and light intensity) all working. I don't have the motion sensor or audio bits going. I'll probably leave those pieces until after Snek v1.0 has been released.


Planet DebianLouis-Philippe Véronneau: TLS SIP support on the Cisco SPA112 ATA

A few days ago, my SIP provider (the ever reliable VoIP.ms) rolled out TLS+SRTP support. As much as I like their service, it was about time.

Following their wiki, I was able to make my Android smartphone work with TLS. Sadly, the native Android SIP stack does not support TLS, so I had to migrate to Linphone. It's a small price to pay for greatly increased security, but the Linphone interface and integration with the rest of the OS isn't as good.

I did have a lot of trouble getting my old Cisco SPA112 ATA working with TLS, though. Although I had set up the device correctly, I couldn't get it to register.

As always, the VoIP.ms support staff was incredibly helpful and reproduced the error I was getting in their lab[1]. Apparently, the trouble stems from the latest firmware (1.4.1 SR3). After downgrading to 1.4.1 SR1, I was able to have the device register successfully with TLS.

Note that since SRTP is mandatory with TLS on VoIP.ms's servers, you'll need to activate the Secure Call Serv option in the Line 1 menu and the Secure Call Setting in the User 1 menu, in addition to changing the protocol and the port.

If like me you had the device running a more recent firmware version and want to downgrade, you will have to disable the HTTPS web interface since the snakeoil certificate used interferes with the firmware upgrade process.

2019-05-14 update

One of the changes in 1.4.1 SR3 firmware is that the SPA112 now validates TLS certificates, as per issue CSCvm49157 in the release notes. The problem I had with being unable to register the device was being caused by a missing Let’s Encrypt root certificate in its certificate store.

Thanks to Michael Davie for pointing this out to me! It turns out VoIP.ms also did their job and updated their documentation to include a section on adding a new root CA cert to the device. Sadly, the link they provide on their wiki is a plain HTTP one. I'd recommend you use the LE Root CA directly: https://letsencrypt.org/certs/isrgrootx1.pem.txt

One last thing: if like me you wondered what the heck was the new beep beep sound during the call, it turns out it's the "Secure Call Indication Tone". You can turn it off by following these instructions.


  1. Yes, you heard that right: they have a lab on hand with tons of devices so that they can help you debug your problems live. 

Cory DoctorowLA! Come see me this Saturday at the Nebula Awards Conference, and next Thursday at Exposition Park Library!


This Saturday, May 18, I’ll be appearing at the Nebula Awards Conference, at the Marriott Warner Center in Woodland Hills: I’ll be participating in the 1:30PM mass signing in the Grand Ballroom and then I’ll be on the “Megatrends for the Near Future” panel at 4PM in A/B Salon.

And then on Thursday, May 23rd, I’ll be at the Exposition Park Regional Library as part of the Los Angeles Public Library’s Book to Action program, speaking on algorithmic manipulation, monopolies and technological self-determination from 6PM–7:30PM.

CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Krebs on SecurityMicrosoft Patches ‘Wormable’ Flaw in Windows XP, 7 and Windows 2003

Microsoft today is taking the unusual step of releasing security updates for unsupported but still widely-used Windows operating systems like XP and Windows 2003, citing the discovery of a “wormable” flaw that the company says could be used to fuel a fast-moving malware threat like the WannaCry ransomware attacks of 2017.

The May 2017 global malware epidemic WannaCry affected some 200,000 Windows systems in 150 countries. Source: Wikipedia.

The vulnerability (CVE-2019-0708) resides in the “remote desktop services” component built into supported versions of Windows, including Windows 7, Windows Server 2008 R2, and Windows Server 2008. It also is present in computers powered by Windows XP and Windows 2003, operating systems for which Microsoft long ago stopped shipping security updates.

Microsoft said the company has not yet observed any evidence of attacks against the dangerous security flaw, but that it is trying to head off a serious and imminent threat.

“While we have observed no exploitation of this vulnerability, it is highly likely that malicious actors will write an exploit for this vulnerability and incorporate it into their malware,” wrote Simon Pope, director of incident response for the Microsoft Security Response Center.

“This vulnerability is pre-authentication and requires no user interaction,” Pope said. “In other words, the vulnerability is ‘wormable,’ meaning that any future malware that exploits this vulnerability could propagate from vulnerable computer to vulnerable computer in a similar way as the WannaCry malware spread across the globe in 2017. It is important that affected systems are patched as quickly as possible to prevent such a scenario from happening.”

The WannaCry ransomware threat spread quickly across the world in May 2017 using a vulnerability that was particularly prevalent among systems running Windows XP and older versions of Windows. Microsoft had already released a patch for the flaw, but many older and vulnerable OSes were never updated. Europol estimated at the time that WannaCry spread to some 200,000 computers across 150 countries.

CVE-2019-0708 does not affect Microsoft’s latest operating systems — Windows 10, Windows 8.1, Windows 8, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, or Windows Server 2012.

More information on how to download and deploy the update for CVE-2019-0708 is here.

All told, Microsoft today released 16 updates targeting at least 79 security holes in Windows and related software — nearly a quarter of them earning Microsoft’s most dire “critical” rating. Critical bugs are those that can be exploited by malware or ne’er-do-wells to break into vulnerable systems remotely, without any help from users.

One of those critical updates fixes a zero-day vulnerability — (CVE-2019-0863) in the Windows Error Reporting Service — that’s already been seen in targeted attacks, according to Chris Goettl, director of product management for security vendor Ivanti.

Other Microsoft products receiving patches today include Office and Office 365, SharePoint, .NET Framework and SQL Server. Once again — for the fourth time this year — Microsoft is patching yet another critical flaw in the Windows component responsible for assigning Internet addresses to host computers (a.k.a. “Windows DHCP client”).

“Any unauthenticated attacker who can send packets to a DHCP server can exploit this vulnerability,” to deliver a malicious payload, notes Jimmy Graham at Qualys.

Staying up-to-date on Windows patches is good. Updating only after you’ve backed up your important data and files is even better. A good backup means you’re not pulling your hair out if the odd buggy patch causes problems booting the system. So do yourself a favor and backup your files before installing any patches.

Note that Windows 10 likes to install patches all in one go and reboot your computer on its own schedule. Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

As per usual, Adobe has released security fixes for Flash Player and Acrobat/Reader. The Flash Player update fixes a single, critical bug in the program. Adobe’s Acrobat/Reader update plugs at least 84 security holes.

Microsoft Update should install the Flash fix by default, along with the rest of this month’s patch bundle. Fortunately, the most popular Web browser by a long shot — Google Chrome — auto-updates Flash but also is now making users explicitly enable Flash every time they want to use it. By the summer of 2019 Google will make Chrome users go into their settings to enable it every time they want to run it.

Firefox also forces users with the Flash add-on installed to click in order to play Flash content; instructions for disabling or removing Flash from Firefox are here. Adobe will stop supporting Flash at the end of 2020.

As always, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

Planet DebianMolly de Blanc: advice

Recently I was asked two very good questions about being involved in free/open source software: How do you balance your paid/volunteer activities? What sort of career advice do you have for people looking to get involved professionally?

I liked answering these in part because I have very little to do with the software side, and also because, much like many technical volunteers, my activities between my volunteer work and my paid work have been similar-to-identical over the years.

How do you balance paid/volunteer activities?

My answer at the time was, effectively: I set aside clearly defined time to work on my different activities, usually once a week — generally on Sundays. I check my email a few times a day, and respond to things that are immediate within a few hours, but I handle the bulk of my work at one time. The Anti-harassment team has a regularly scheduled meeting/work time during which we handle the bulk of our necessary labor. I’ve learned to say no, I’ve learned how to delegate, and I’ve learned how to say “I’m not going to be able to finish this thing I said I could do, how can we as a team make sure it gets completed.”

This works for me because 1) I’ve put a lot of work into developing my confidence and the skills needed for working collaboratively; and 2) my biggest responsibilities outside of my job (and free software, in general) are taking care of plants and having bash. (Note: Bash is my cat.) I don’t have children or a partner. I have a band and climbing partners, but these things, much like my free software activities, are time constrained. My band meets for practice at the same time each week; I sneak in moments to play a song or run through scales during the rest of the week. I climb with the same people at the same times each week. With my fancy new job, I work remotely and am able to now even work at the climbing gym, and take little breaks to run through a few bouldering problems.

Because of all these factors — my limited and optional responsibilities towards others (I travel a lot for free software, and miss band practice and climbing sometimes, for example) — I have been able to take up leadership positions in Debian and the open source community at large. Because of my job, I was able to take on even more responsibility at the OSI. I’ve held leadership positions in my unpaid work for over ten years now, since I was a student and able to use my lack of responsibilities beyond my studies (and student job) to focus on helping to stack chairs for open source. (Note: “Stack chairs” is Molly for “perform often unseen labor, often for events.”)

As an aside, one of my criticisms about unpaid project/org leader positions is that it means that the people who can do the jobs are:

  • students
  • contractors
  • unemployed
  • those with few to no other responsibilities
  • those with very supportive partners
  • those with very supportive employers
  • those who don’t need much sleep.

I’ve slowly been swayed into the belief that many (not all) leadership positions should be paid, grant funded, come with a stipend, or be led by committee. More on this in a future blog post.

In summary: learn to tell other people you can’t do things and work on those scheduling skillz.

What sort of career advice do you have for people looking to get involved professionally?

This question was asked in an evening of panels and one thing that really stood out to me was many of the panelists saying — in response to completely different questions — that they no longer cold apply for jobs, or that all of their jobs have come from social connections, or that they just don’t apply to jobs (and only go work for places where they have been given soft offers or are invited to go straight to an interview stage).

An acquaintance of mine once said to me: I don’t believe in luck, I believe in social connections.

Our social connections form complex causal graphs, which lead to many, if not all, of the good things that happen in our lives. I got my first job in free software not because of my cool skillz, but because I happened to hang out with a friend of someone who had a friend looking for an intern.

I actually have gotten (two) jobs where I cold applied — but in both cases the people were interested in me because of a certain social connection I had — whether they realized that or not. Even my job in college, at the school library, came because I had a friend who worked there.

Telling people to network really is general job advice that works for everyone in every field of endeavor.

If you’re an introvert (like myself!) one of the best ways to form social connections is through public speaking. When you give a talk at a conference not only are you building up your personal brand and letting other people know about your skills, competency, and expertise, but you’re also giving people something to talk to you about — and they will talk to you. Giving a talk is like putting a sign on yourself saying “Come talk to me about X,” when X is something you’re actually passionate about. It’s great because you don’t have to put yourself out there to talk to strangers — strangers come to you!

Public speaking also increases your visibility in the community — this is good if you want a job. That way, when someone sees your CV/resume your name will stand out because they’ll remember seeing it before. They might not remember your talk, or maybe they didn’t even attend your talk, but they will remember seeing your name. Having a section on your CV that lists presentations you’ve given helps you stand out from everyone else because it shows you can share information well and are actually interested in what you do. Where you speak and have spoken is a shibboleth for where you see yourself in the community and what values you have: Seeing “Open Source Bridge” tells me that you’re interested in communities and building spaces where everyone is a welcome participant; OSCON and PyCon convey confidence because you know you’re opening yourself up to a potentially big audience; local meetups and conferences share a value of wanting to participate in and build up your local community; international events say that you really understand that we’re looking at a global scale here.

We also just learn to communicate better when we speak publicly. We learn better ways to share ideas, how to turn thoughts into a cohesive narrative, and how to appear confident even if we’re not.

Building off of that, learning to write is extremely, extremely important. There is an art to written communication, whether it’s a brief letter between colleagues, presentations, comments in code or other documentation, blog posts, cover letters, etc. Communicating well through writing will take you so far, especially as more jobs, especially in tech, become increasingly focused on using chat tools for collaboration.

All of the things that are true for public speaking are also true for writing well: it helps you become a recognized and valued member of your community. When I was a community manager I loved the developers (and translators and doc writers and…) who were interested in writing blog posts, participating in community Q&A/round table sessions, etc., because they were the ones who made us an approachable project, who made us a great place that included people whether they were getting paid to work on the project or not.

Anyone can learn to be a passable developer (or fill in your specific role here), and anyone can learn to be a passable writer. Someone who chooses to do both is special and who I want on my team.

In summary: Learn to talk to strangers, learn public speaking, learn to write.

The people asking me these questions were, I believe, developers or at least people with technical skill sets rather than administrative, community, organizational, social, etc skill sets. This advice holds true across the spectrum of paid labor.

One person came up to me later and explained that they had been working in generating content, but wanted to switch to a more organizational role managing the aggregation and sharing of content. They asked me how they could make that transition, and my advice was exactly the same: learn to talk to people so you can learn who has opportunities and learn to communicate well because it will help you stand out and also just make your life a lot easier.

I’d also like to briefly point out that ehash gave some great answers geared towards technical roles, and I hope will share them in some public forum.

CryptogramCryptanalysis of SIMON-32/64

A weird paper was posted on the Cryptology ePrint Archive (working link is via the Wayback Machine), claiming an attack against the NSA-designed cipher SIMON. You can read some commentary about it here. Basically, the authors claimed an attack so devastating that they would only publish a zero-knowledge proof of their attack. Which they didn't. Nor did they publish anything else of interest, near as I can tell.

The paper has since been deleted from the ePrint Archive, which feels like the correct decision on someone's part.

Worse Than FailureCodeSOD: Transport Layer Stupidity

Keith H’s boss came by his cube.

“Hey, you know how our insurance quote site has TLS enabled by default?”

“Yes,” Keith said. The insurance quote site was a notoriously kludgy .NET 4.5.1 web app, with no sort of automated deployment and two parallel development streams: one tracked in Git, and one done by editing files and compiling right on the production server.

“Yes, well, we need to turn that off. ‘Compliance reasons’.”

This created a number of problems for Keith. There was no way to know for sure what code was in Git and what was in production and how they didn’t match. Worse, they relied on reCAPTCHA, which required TLS. So Keith couldn’t just turn it off globally, he needed to disable it for inbound client connections but enable it for outbound connections.

Which he did. And everything was fine, until someone used the “Save as PDF” function, which took the page on the screen and saved it as a PDF to the user’s machine.

protected void btnInvoke_Click(object sender, EventArgs e)
        {
            var url = util.getUrl("Quote", "QuoteLetterPDF.aspx");
            url = url + "?QuoteId=" + hdnCurrentQuoteId.Value;
            var pdfBytes = UtilityManager.ConvertURLToPDF(url);
            // send the PDF document as a response to the browser for download
            var response = HttpContext.Current.Response;
            response.Clear();
            response.AddHeader("Content-Type", "application/pdf");
            response.AddHeader("Content-Disposition",
                String.Format("attachment; filename=QuoteLetter.pdf; size={0}", pdfBytes.Length));
            response.BinaryWrite(pdfBytes);
            // Note: it is important to end the response, otherwise the ASP.NET
            // web page will render its content to PDF document stream
            response.End();
        }

public static byte[] ConvertURLToPDF(string url)
        {
            // ...redacted for brevity
            byte[] pdfBytes = null;
            var uri = new Uri(url);
            var encryptedParameters = Encrypt(uri.Query);
            var encryptedUrl = uri.Scheme + "://" + uri.Authority + uri.AbsolutePath + "?pid=" + encryptedParameters;
            var htmlData = GetHtmlStringFromUrl(encryptedUrl);
            pdfBytes = pdfConverter.GetPdfBytesFromHtmlString(htmlData);
            return pdfBytes;
        }

 public static string GetHtmlStringFromUrl(string url)
        {
            string htmlData = string.Empty;
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            System.Net.ServicePointManager.ServerCertificateValidationCallback =
        ((sender, certificate, chain, sslPolicyErrors) =>
        {
            return true;
        });
            var response = (HttpWebResponse)request.GetResponse();
                if (response.StatusCode == HttpStatusCode.OK)
                {
                    Stream receiveStream = response.GetResponseStream();
                    StreamReader readStream = null;
                    if (response.CharacterSet == null)
                    {
                        readStream = new StreamReader(receiveStream);
                    }
                    else
                    {
                        readStream = new StreamReader(receiveStream, Encoding.GetEncoding(response.CharacterSet));
                    }
                    htmlData = readStream.ReadToEnd();
                    response.Close();
                    readStream.Close();
                }
            return htmlData;
        }

There’s a lot of code here. btnInvoke_Click is the clearly named “Save as PDF” button’s callback handler. What it does, via ConvertURLToPDF and GetHtmlStringFromUrl, is… send a request to a different ASPX file in the same application. It downloads the HTML, and then passes the HTML off to a PDF converter which renders it into a PDF.

For reasons which are unclear, it encrypts the parameters which it's passing in the query string. These requests never go across the network, and even if they did, it's generally more reasonable to pass those parameters in the request body, which would be encrypted via TLS.

And it does send this request using TLS! However, as Keith disabled support for incoming TLS requests, this doesn’t work.

Which is funny, because TLS being disabled is pretty much the only way in which this request could fail. An invalid certificate wouldn’t, because of this callback:

            System.Net.ServicePointManager.ServerCertificateValidationCallback =
        ((sender, certificate, chain, sslPolicyErrors) =>
        {
            return true;
        });

Keith re-enabled TLS throughout the application, “compliance” reasons be damned.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Cory Doctorowre:publica 2019 – Cory Doctorow: It’s monopolies, not surveillance.

,

Planet DebianBenjamin Mako Hill: The Shifting Dynamics of Participation in an Online Programming Community

Informal online learning communities are one of the most exciting and successful ways to engage young people in technology. As the most successful example of the approach, over 40 million children from around the world have created accounts on the Scratch online community where they learn to code by creating interactive art, games, and stories. However, despite its enormous reach and its focus on inclusiveness, participation in Scratch is not as broad as one would hope. For example, reflecting a trend in the broader computing community, more boys have signed up on the Scratch website than girls.

In a recently published paper, I worked with several colleagues from the Community Data Science Collective to unpack the dynamics of unequal participation by gender in Scratch by looking at whether Scratch users choose to share the projects they create. Our analysis took advantage of the fact that less than a third of projects created in Scratch are ever shared publicly. By never sharing, creators never open themselves to the benefits associated with interaction, feedback, socialization, and learning—all things that research has shown participation in Scratch can support.

Overall, we found that boys on Scratch share their projects at a slightly higher rate than girls. Digging deeper, we found that this overall average hid an important dynamic that emerged over time. The graph below shows the proportion of Scratch projects shared for male and female Scratch users’ 1st created projects, 2nd created projects, 3rd created projects, and so on. It reflects the fact that although girls share less often initially, this trend flips over time. Experienced girls share much more often than boys!

Proportion of projects shared by gender across experience levels, measured as the number of projects created, for 1.1 million Scratch users. Projects created by girls are less likely to be shared than those by boys until about the 9th project is created. The relationship is subsequently reversed.

We unpacked this dynamic using a series of statistical models estimated using data from over 5 million projects by over a million Scratch users. This set of analyses echoed our earlier preliminary finding—while girls were less likely to share initially, more experienced girls shared projects at consistently higher rates than boys. We further found that initial differences in sharing between boys and girls could be explained by controlling for differences in project complexity and in the social connectedness of the project creator.

Another surprising finding is that users who had received more positive peer feedback, at least as measured by receipt of “love its” (similar to “likes” on Facebook), were less likely to share their subsequent projects than users who had received less. This relation was especially strong for boys and for more experienced Scratch users. We speculate that this could be due to a phenomenon known in the music industry as “sophomore album syndrome” or “second album syndrome”—a term used to describe a musician who has had a successful first album but struggles to produce a second because of increased pressure and expectations caused by their previous success.


This blog post (published first on the Community Data Science Collective blog) and the paper are collaborative work with Emilia Gan and Sayamindu Dasgupta. You can find more details about our methodology and results in the text of our paper, “Gender, Feedback, and Learners’ Decisions to Share Their Creative Computing Projects” which is freely available and published open access in the Proceedings of the ACM on Human-Computer Interaction 2 (CSCW): 54:1-54:23.

Planet DebianHolger Levsen: 20190513-minidebconf-hamburg-beds+cfp

Some beds, some talk slots and many seats still available for the Mini-DebConf in Hamburg in June 2019

Moin!

We still have 14 affordable beds available for the MiniDebConf Hamburg 2019, which will take place in Hamburg (Germany) from June 5 to 9, with three days of DebCamp-style hacking, followed by two days of talks, workshops and more hacking. If you were unsure about coming because of accommodation, please reconsider and come around! (And please mail me directly if you would like to sleep in a bed on site.)

It's going to be awesome. You should all come! Register now!

Moar talks wanted

We also would like to receive more talk submissions at cfp@minidebconfhamburg.debian.net - please consider presenting your work. The DebConf videoteam will be present to preserve your presentation :)

Suggested topics include:

  • Packaging
  • Security
  • Debian usability
  • Cloud and containers
  • Automating with Debian
  • Debian social
  • New technologies & infrastructure (gitlab, autopkgtest, dgit, debomatic, etc)

This list is not exhaustive, and anything not listed here is welcome, as long as it's somehow related to Debian. If in doubt, propose a talk and we'll give feedback.

We will have talks on Saturday and Sunday, the exact slots are yet to be determined. We expect submissions and talks to be held in English, as this is the working language in Debian and at this event.

Moar info

Is available on the event wiki page.

Looking forward to seeing you in Hamburg!

LongNowTranslating the Big Bang into Blackfoot

Meet Corey Gray and Sharon Yellowfly, a mother-son duo translating astrophysics into the Native American language Siksika (Blackfoot).

On April 1, scientists will officially restart their search for gravitational waves after a year spent making improvements to massive twin detectors. Discoveries should soon start rolling in, and when they do, there’s a good chance the news will be translated into a Native American language called Blackfoot, or Siksika.

That’s thanks to Corey Gray, who works at the Laser Interferometer Gravitational-Wave Observatory (LIGO) site in Washington state. He has been collaborating with his mom to translate this cutting-edge field of science into an endangered language spoken by just thousands of people worldwide.

Via NPR.

CryptogramReverse Engineering a Chinese Surveillance App

Human Rights Watch has reverse engineered an app used by the Chinese police to conduct mass surveillance on Turkic Muslims in Xinjiang. The details are fascinating, and chilling.

Boing Boing post.

Planet DebianJoachim Breitner: Artsy desktop background

Friends of mine took part in a competition where they had to present an art project of theirs using a video. At some point we had the plan of creating a time lapse video of a drawing being created, and for that mounted a camera above the drawing desk.

With paper

In the end we did not actually use the video, but it turns out that the still from the beginning (with blank paper) and the end of the video (no paper) are pretty nice, too. So I am sharing them here, in case anyone wants to use them as a desktop background or what not.

Without paper

Feel free to re-use these photos under the terms of the Creative Commons Attribution 4.0 International License.

Worse Than FailureCodeSOD: The National Integration

Sergio works for a regional public administration. About a decade ago, the national government passed some laws or made some regulations, and then sent a software package to all the regional administrations. This software package was built to aggregate reports from the local databases into a single, unified, consistent interface on a national website.

Of course, each regional group did things their own way, so the national software package needed to be customized to work. Also, each regional administration had their own reporting package which already did some of this, and which they liked better, because they already knew how it worked. In the case of Sergio's employer, even more important: their organization's logo was prominently displayed.

Of course, there was also the plain old stubbornness of an organization being told it has to do something when it really doesn't want to do that thing. In that situation, organizations have all the enthusiasm of a five-year-old being told to brush their teeth or eat their vegetables.

The end result was that the people tasked with doing the integration and customization didn't want to be doing that, and since the organization as a whole didn't want to do anything, they weren't exactly putting their top-tier resources on the project. The integration task was doled out to the people who literally couldn't be trusted to do anything else, but couldn't be fired.

Shockingly, national noticed a huge number of errors coming from their software, and after a few months of constant failures and outages, Sergio was finally tasked with cleaning up the mess.

private ReportsWSStub stub = null;

public ReportHelper() {
    try {
        if (null == stub) {
            // URL del WS
            String url = InicializacionInformes.URL_WS_CLIENTE_INFORMES;
            System.out.println("URL " + url);
            stub = (ReportsWSStub) new ReportsWSStub(url);
            log.info("Report's 'stub' has been initialized");
            log.info("URL for the Report's stub " + url);
            log.info("stub._getServiceClient() " + stub._getServiceClient());
            ...
        }
    } catch (Exception e) {
        log.error(" Exception", e);
        System.out.println(" Exception" + e.getMessage());
    }
}

Here, we have the constructor for the ReportHelper class. It might be better named as a wrapper, since its entire job is to wrap around the ReportsWSStub object. It's important to note that this object is useless without a valid instance of ReportsWSStub.

With that in mind, there are all sorts of little things which pop out. First, note the if (null == stub) check. That's a backwards way to write it, which sort of sets the tone for the whole block. More than just backwards: it's pointless. This is the constructor. The stub variable was initialized to null (also unnecessarily). The variable can't be anything but null right now.

Then we go on and mix log calls with System.out.println calls; since this is a JEE component running on a webserver, those printlns fly off into the void. They're clearly left-over debugging lines which shouldn't have been checked in.

The key problem in this code, however, is that while it logs the exception it gets when trying to initialize the stub, it doesn't do anything else. This means that stub could still be null at the end of the constructor. But this object is useless without a stub, which means you now have an unusable object. At least it hopefully gives you good errors later?

public Report getReport(String id , String request) throws IOException {
    ...
    try {
        stub._getServiceClient().getOptions().setProperty(
            Constants.Configuration.ENABLE_MTOM, Constants.VALUE_TRUE);
        stub._getServiceClient().getOptions().setProperty(
            Constants.Configuration.CACHE_ATTACHMENTS, Constants.VALUE_TRUE);
        stub._getServiceClient().getOptions().setProperty(
            Constants.Configuration.FILE_SIZE_THRESHOLD, "10000");
        stub._getServiceClient().getOptions()
            .setTimeOutInMilliSeconds(100000);
        stub._getServiceClient().getOptions().setProperty(
            org.apache.axis2.transport.http.HTTPConstants.CHUNKED, Boolean.FALSE);
        requestInforme.setGetReport(requestType);
        if(stub == null){
            log.info("##################### Stub parameters are null!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!");
        } else {
            log.info("#####################Stub parameters are not null!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!");
        }
        responseInformes = stub.getReport(requestInforme);
    } catch (Exception e) {
        //...
    }
}

As you can see, they've helpfully added an if (stub == null) check to see if stub is null, and log an appropriate message. And if stub isn't null, we have that else block to log a message stating that the parameters are not null, but with the same enthusiasm and exclamation points as the error. And of course, we're only trying to log that the stub is null, not, y'know, do anything different. "We know it's a problem, but we're doing absolutely nothing to change our behavior."

That's okay, though, because if stub is null, we'll never log that error anyway, because we'll get a NullReferenceException instead.


Planet DebianDirk Eddelbuettel: RcppAnnoy 0.0.12

A new release of RcppAnnoy is now on CRAN.

RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik Bernhardsson. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours—originally developed to drive the famous Spotify music discovery algorithm.

This release brings several updates: seed settings follow up on changes in the previous release 0.0.11, and this is also documented in the vignette thanks to James Melville; more documentation was added thanks to Adam Spannbauer; unit tests now use the brand-new tinytest package; and vignette building was decoupled from package building. All the changes in this version are summarized with appropriate links below:

Changes in version 0.0.12 (2019-05-12)

  • Allow setting of seed (Dirk in #41 fixing #40).

  • Document setSeed (James Melville in #42 documenting #41).

  • Added documentation (Adam Spannbauer in #44 closing #43).

  • Switched unit testing to the new tinytest package (Dirk in #45).

  • The vignette is now pre-made and included as-is in the Sweave document, reducing the number of suggested packages.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianBastian Venthur: Dotenv CLI

As a small weekend project I wrote my own dotenv CLI. Dotenv-CLI is a simple package that provides the dotenv command. It reads the .env file from the current directory, puts the contents into the environment, and then executes the given command.

Example .env file:

BASIC=basic basic
export EXPORT=foo
EMPTY=
INNER_QUOTES=this 'is' a test
INNER_QUOTES2=this "is" a test
TRIM_WHITESPACE= foo
KEEP_WHITESPACE="  foo  "
MULTILINE="multi\nline"
# some comment

becomes:

$ dotenv env
BASIC=basic basic
EXPORT=foo
EMPTY=
INNER_QUOTES=this 'is' a test
INNER_QUOTES2=this "is" a test
TRIM_WHITESPACE=foo
KEEP_WHITESPACE=  foo
MULTILINE=multi
line

where env is a simple command that outputs the current environment variables. A dotenv CLI comes in handy if you follow the 12 factor app methodology or just need to run a program locally with specific environment variables set.
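
The parsing rules that example illustrates (comments and blank lines skipped, an `export ` prefix ignored, unquoted whitespace trimmed, double quotes preserving whitespace and expanding `\n`) can be sketched in a few lines. This is a hypothetical illustration written in Rust for brevity; the real dotenv-cli is Python, and `parse_dotenv` is an invented name, not its API:

```rust
use std::collections::HashMap;

/// Hypothetical sketch of .env parsing, roughly matching the rules shown
/// in the example above: skip comments/blanks, drop an "export " prefix,
/// trim unquoted values, and let double quotes keep whitespace and turn
/// a literal `\n` into a newline.
fn parse_dotenv(input: &str) -> HashMap<String, String> {
    let mut vars = HashMap::new();
    for line in input.lines() {
        let line = line.trim();
        // skip blank lines and comments
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        // an optional "export " prefix is ignored
        let line = line.strip_prefix("export ").unwrap_or(line);
        if let Some((key, value)) = line.split_once('=') {
            let value = value.trim();
            let value = if value.len() >= 2 && value.starts_with('"') && value.ends_with('"') {
                // double quotes preserve inner whitespace and expand \n
                value[1..value.len() - 1].replace("\\n", "\n")
            } else {
                value.to_string()
            };
            vars.insert(key.trim().to_string(), value);
        }
    }
    vars
}

fn main() {
    let env = parse_dotenv("BASIC=basic basic\n# some comment\nexport EXPORT=foo");
    assert_eq!(env["BASIC"], "basic basic");
    assert_eq!(env["EXPORT"], "foo");
}
```

The actual tool also has to cope with edge cases this sketch ignores (single quotes, escaped quotes, values containing `=`), which is a good reason to use the packaged implementation rather than rolling your own.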

While dotenv-cli is certainly not the first dotenv implementation, this one is written in Python and has no external dependencies except Python itself. It also provides a bash completion, so you can prefix any command with dotenv while still being able to use completion:

$ dotenv make <TAB>
all      clean    docs     lint     release  test

Since there are also other popular shells out there, and I already struggled writing the simple bash completion script, it would be very nice if some more experienced zsh, fish, etc. users could help me out by providing completions for them as well.

dotenv-cli is available on Debian and Ubuntu:

$ sudo apt-get install python3-dotenv-cli

and on PyPi as well:

$ pip install dotenv-cli

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.400.3.0

armadillo image

The recent 0.9.400.2.0 release of RcppArmadillo required a bug-fix release. Conrad followed up on Armadillo 9.400.2 with 9.400.3, which we packaged (and tested extensively, as usual). It is now on CRAN and will get to Debian shortly.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 597 other packages on CRAN.

A brief discussion of possible issues under 0.9.400.2.0 is at this GitHub issue ticket. The list of changes in 0.9.400.3.0 is below:

Changes in RcppArmadillo version 0.9.400.3.0 (2019-05-09)

  • Upgraded to Armadillo release 9.400.3 (Surrogate Miscreant)

    • check for symmetric / hermitian matrices (used by decomposition functions) has been made more robust

    • linspace() and logspace() now honour requests for generation of vectors with zero elements

    • fix for vectorisation / flattening of complex sparse matrices

The previous changes in 0.9.400.2.0 were:

Changes in RcppArmadillo version 0.9.400.2.0 (2019-04-28)

  • Upgraded to Armadillo release 9.400.2 (Surrogate Miscreant)

    • faster cov() and cor()

    • added .as_col() and .as_row()

    • expanded .shed_rows() / .shed_cols() / .shed_slices() to remove rows/columns/slices specified in a vector

    • expanded vectorise() to handle sparse matrices

    • expanded element-wise versions of max() and min() to handle sparse matrices

    • optimised handling of sparse matrix expressions: sparse % (sparse +- scalar) and sparse / (sparse +- scalar)

    • expanded eig_sym(), chol(), expmat_sym(), logmat_sympd(), sqrtmat_sympd(), inv_sympd() to print a warning if the given matrix is not symmetric

    • more consistent detection of vector expressions

Courtesy of CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.


Planet DebianBits from Debian: New Debian Developers and Maintainers (March and April 2019)

The following contributors got their Debian Developer accounts in the last two months:

  • Jean-Baptiste Favre (jbfavre)
  • Andrius Merkys (merkys)

The following contributors were added as Debian Maintainers in the last two months:

  • Christian Ehrhardt
  • Aniol Marti
  • Utkarsh Gupta
  • Nicolas Schier
  • Stewart Ferguson
  • Hilmar Preusse

Congratulations!

Planet DebianIan Jackson: Rust doubly-linked list

I have now released (and published on crates.io) my doubly-linked list library for Rust.

Of course in Rust you don't usually want a doubly-linked list. The VecDeque array-based double-ended queue is usually much better. I discuss this in detail in my module's documentation.
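
To see why, a minimal sketch of the operations people usually reach for a linked list to get, done with the standard VecDeque instead (the `demo` function is just for illustration):

```rust
use std::collections::VecDeque;

// VecDeque is a growable ring buffer: O(1) amortized push/pop at BOTH
// ends plus O(1) indexing, with far better cache behaviour than
// per-node heap allocations.
fn demo() -> Vec<i32> {
    let mut q: VecDeque<i32> = VecDeque::new();
    q.push_back(1);
    q.push_back(2);
    q.push_front(0); // cheap at the front too
    assert_eq!(q[1], 1); // random access, which a linked list can't do in O(1)
    q.pop_front(); // drops the 0
    q.into_iter().collect()
}

fn main() {
    assert_eq!(demo(), vec![1, 2]);
}
```

What VecDeque cannot do is keep a stable handle to a node, or have one element sit on several lists at once, which is exactly the niche this crate targets.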

Why a new library


Weirdly, there is a doubly linked list in the Rust standard library but it is good for literally nothing at all. Its API is so limited that you can always do better with a VecDeque. There's a discussion (sorry, requires JS) about maybe deprecating it.

There's also another doubly-linked list available, but despite being an 'intrusive' list (in C terminology), it only supports one link per node and insists on owning the items you put into it. I needed several links per node for my planar graph work, and I needed Rc-based ownership.

Indeed, given my analysis of when a doubly-linked list is needed rather than a VecDeque, I think it will nearly always involve something like Rc too.
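
To make the ownership point concrete, here's a hypothetical sketch of one node shared between two lists via Rc. Two plain Vecs stand in for the lists, and `shared_node_demo` is an invented name; rc-dlist-deque's real node and link types are more involved:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// One node, two lists: each list holds an Rc handle to the same
// RefCell-wrapped payload, so the node stays alive as long as any
// list still references it, and mutations are visible everywhere.
fn shared_node_demo() -> (String, usize) {
    let node = Rc::new(RefCell::new(String::from("shared node")));
    let list_a = vec![Rc::clone(&node)];
    let list_b = vec![Rc::clone(&node)];

    // Mutate through one list's handle...
    list_a[0].borrow_mut().push_str(" (edited)");
    // ...and observe the change through the other list.
    let seen_from_b = (*list_b[0].borrow()).clone();

    (seen_from_b, Rc::strong_count(&node))
}

fn main() {
    let (text, count) = shared_node_demo();
    assert_eq!(text, "shared node (edited)");
    assert_eq!(count, 3); // `node` itself plus one handle per list
}
```

A real doubly-linked list additionally has to store prev/next links per list inside each node, which is where the runtime link selection mentioned above comes in.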

My module


You can read the documentation online.

It provides the facilities I needed, including lists where each node can be on multiple lists with runtime selection of the list link within each node. It's not threadsafe (so Rust will stop you using it across multiple threads) and would be hard to make threadsafe, I think.

Notable wishlist items: entrypoints for splitting and joining lists, and good examples in the documentation. Both of these would be quite easy to add.

Further annoyance from Cargo


As I wrote earlier, because I am some kind of paranoid from the last century, I have hit cargo on the head so that it doesn't randomly download and run code from the internet.

This is done with stuff in my ~/.cargo/config. Of course this stops me actually accessing the real public repository (cargo automatically looks for .cargo/config in all parent directories, not just in $PWD and $HOME). No problem - I was expecting to have to override it.

However there is no way to sensibly override a config file!

So I have had to override it in a silly way: I made a separate area on my laptop which belongs to me but which is not underneath my home directory. Whenever I want to run cargo publish, I copy the crate to be published to that other area, which is not a direct or indirect subdirectory of anything containing my usual .cargo/config.

Cargo really is quite annoying: it has opinions about how everything is and how everything ought to be done. I wouldn't mind that, but unfortunately when it happens to be wrong it is often lacking a good way to tell it what should be done instead. This kind of thing is a serious problem in a build system tool.

Edited 2019-05-11: minor grammar fix.



Planet DebianIan Jackson: Rust doubly-linked list, redux

I have declared rc-dlist-deque, my doubly-linked list library for Rust, to be 1.0.0. Little has changed, apart from the version number and some documentation updates.

In particular, I thought I would expand on my previous comments to the effect that you don't want a doubly linked list in Rust.

I've added a survey of the existing doubly linked list crates. (Please click through; I would prefer not to put a copy here in this blog which I would then also have to update if I update the table...)

Most of these crates, sadly, are not really useful. Perhaps people have been publishing their training exercises? (A doubly linked list makes a really bad Rust training exercise, too...)


,

Planet DebianKeith Packard: snek-amusement

Snek and the Amusement Park

(you can click on the picture to watch the model in action)

Here's an update to my previous post about Snek in a balloon. We also hooked up a Ferris wheel and controlled them both with the same Arduino Duemilanove compatible board. This one has sound so you can hear how quiet the new Circuit Cube motors are.

CryptogramFriday Squid Blogging: Cephalopod Appreciation Society Event

Last Wednesday was a Cephalopod Appreciation Society event in Seattle. I missed it.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDIn Case You Missed It: Highlights from TED2019

Twelve mainstage sessions, two rocking sessions of talks from TED Fellows, a special session of TED Unplugged, a live podcast recording and much more amounted to an unforgettable week at TED2019. (Photo: Marla Aufmuth / TED)

If we learned anything at TED2019, it’s that life doesn’t fit into simple narratives, and that there are no simple answers to the big problems we’re facing. But we can use those problems, our discomfort and even our anger to find the energy to make change.

Twelve mainstage sessions, two rocking sessions of talks from TED Fellows, a special session of TED Unplugged, a live podcast recording and much more amounted to an unforgettable week. Any attempt to summarize it all will be woefully incomplete, but here’s a try.

What happened to the internet? Once a place of so much promise, now a source of so much division. Journalist Carole Cadwalladr opened the conference with an electrifying talk on Facebook’s role in Brexit — and how the same players were involved in the 2016 US presidential election. She traced the contours of the growing threat social media poses to democracy and called out the “gods of Silicon Valley,” naming names — one of whom, Jack Dorsey, the CEO of Twitter, sat down to talk with TED’s Chris Anderson and Whitney Pennington Rodgers the following day. Dorsey acknowledged problems with harassment on the platform and explained some of the work his team is doing to make it better.

Hannah Gadsby broke comedy. Her words, and she makes a compelling case in one of the most talked-about moments of the conference. Look for her talk release on April 29.

Humanity strikes back! Eight huge Audacious Project–supported ideas launched at TED this year. From a groundbreaking project at the Center for Policing Equity to work with police and communities and to collect data on police behavior and set goals to make it more fair … to a new effort to sequester carbon in soil … and more, you can help support these projects and change the world for good.

10 years of TED Fellows. Celebrating a decade of the program in two sessions of exuberant talks, the TED Fellows showed some wow moments, including Brandon Clifford‘s discovery of how to make multi-ton stones “dance,” Arnav Kapur‘s wearable device that allows for silent speech and Skylar Tibbits‘s giant canvas bladders that might save sinking islands. At the same time, they reminded us some of the pain that can exist behind breakthroughs, with Brandon Anderson speaking poignantly about the loss of his life partner during a routine traffic stop — which inspired him to develop a first-of-its-kind platform to report police conduct — and Erika Hamden opening up about her team’s failures in building FIREBall, a UV telescope that can observe extremely faint light from huge clouds of hydrogen gas in and around galaxies.

Connection is a superpower. If you haven’t heard of the blockbuster megahit Crazy Rich Asians, then, well, it’s possible you’re living under a large rock. Whether or not you saw it, the film’s director, Jon M. Chu, has a TED Talk about connection — to his family, his culture, to film and technology — that goes far beyond the movie. The theme of connection rang throughout the conference: from Priya Parker’s three easy steps to turn our everyday get-togethers into meaningful and transformative gatherings to Barbara J. King’s heartbreaking examples of grief in the animal kingdom to Sarah Kay’s epic opening poem about the universe — and our place in it.

Meet DigiDoug. TED takes tech seriously, and Doug Roble took us up on it, debuting his team’s breakthrough motion capture tech, which renders a 3D likeness (known as Digital Doug) in real time — down to Roble’s facial expressions, pores and wrinkles. The demo felt like one of those shifts, where you see what the future’s going to look like. Outside the theater, attendees got a chance to interact with DigiDoug in VR, talking on a virtual TED stage with Roble (who is actually in another room close by, responding to the “digital you” in real time).

New hope for political leadership. There was no shortage of calls to fix the broken, leaderless systems at the top of world governments throughout the conference. The optimists in the room won out during Michael Tubbs’s epic talk about building new civic structures. The mayor of Stockton, California (and the youngest ever of a city with more than 100,000 people), Tubbs shared his vision for governing strategies that recognize systems that place people in compromised situations — and that view impoverished and violent communities with compassion. “When we see someone different from us, they should not reflect our fears, our anxieties, our insecurities, the prejudices we have been taught, our biases. We should see ourselves. We should see our common humanity.”

Exploring the final frontier. A surprise appearance from Sheperd Doeleman, head of the Event Horizon Telescope — whose work produced the historic, first-ever image of a black hole that made waves last week — sent the conference deep into space, and it never really came back. Astrophysicist Juna Kollmeier, head of the Sloan Digital Sky Survey, shared her work mapping the observable universe — a feat, she says, that we’ll complete in just 40 years.  “Think about it. We’ve gone from arranging clamshells to general relativity in a few thousand years,” she says. “If we hang on 40 more, we can map all the galaxies.” And in the Fellows talks, Moriba Jah, a space environmentalist and inventor of the orbital garbage monitoring software AstriaGraph, showed how space has a garbage problem. Around half a million objects, some as small as a speck of paint, orbit the Earth — and there’s no consensus on what’s in orbit or where.

Go to sleep. A lack of sleep can lead to more than drowsiness and irritability. Matt Walker shared how it can be deadly as well, leading to an increased risk of Parkinson’s, cancer, heart attacks and more. “Sleep is the Swiss army knife of health,” he says. “It’s not an optional lifestyle luxury. Sleep is a non-negotiable biological necessity. It is your life support system, and it is mother nature’s best effort yet at immortality.”

The amazing group of speakers who shared their world-changing ideas on the mainstage at TED2019: Bigger Than Us, April 15 – 19, 2019 in Vancouver, BC, Canada. (Photo: Bret Hartman / TED)

Krebs on SecurityNine Charged in Alleged SIM Swapping Ring

Eight Americans and an Irishman have been charged with wire fraud this week for allegedly hijacking mobile phones through SIM-swapping, a form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device. From there, the attackers simply start requesting password reset links via text message for a variety of accounts tied to the hijacked phone number.

All told, the government said this gang — allegedly known to its members as “The Community” — made more than $2.4 million stealing cryptocurrencies and extorting people for restoring access to social media accounts that were hijacked after a successful SIM-swap.

Six of those charged this week in Michigan federal court were alleged to have been members of The Community of serial SIM swappers. They face a fifteen-count indictment, including charges of wire fraud, conspiracy and aggravated identity theft (a charge that carries a mandatory two-year sentence). A separate criminal complaint unsealed this week charges three former employees of mobile phone providers with collaborating with The Community’s members.

Several of those charged have been mentioned by this blog previously. In August 2018, KrebsOnSecurity broke the news that police in Florida arrested 25-year-old Pasco County, Fla. city employee Ricky Joseph Handschumacher, charging him with grand theft and money laundering. As I reported in that story, “investigators allege Handschumacher was part of a group of at least nine individuals scattered across multiple states who for the past two years have drained bank accounts via an increasingly common scheme involving mobile phone SIM swaps.”

This blog also has featured several stories about the escapades of Ryan Stevenson, a 26-year-old West Haven, Conn. man who goes by the hacker name “Phobia.” Most recently, I wrote about how Mr. Stevenson earned a decent number of bug bounty rewards and public recognition from top telecom companies for finding and reporting security holes in their Web sites — all the while secretly operating a service that leveraged these same flaws to sell their customers’ personal data to people who were active in the SIM swapping community.

One of the six men charged in the conspiracy — Colton Jurisic, 20, of Dubuque, Iowa — is better known by his hacker aliases “Forza” and “ForzaTheGod.” In December 2016, KrebsOnSecurity heard from a woman who had her Gmail, Instagram, Facebook and LinkedIn accounts hijacked after a group of individuals led by Forza taunted her on Twitter as they took over her phone account.

“They failed to get [her three-letter Twitter account name, redacted] because I had two-factor authentication turned on for twitter, combined with a new phone number of which they were unaware,” the source said in an email to KrebsOnSecurity in 2016. “@forzathegod had the audacity to even tweet me to say I was about to be hacked.”

Also part of the alleged Community of SIM swappers are Conor Freeman, 20, of Dublin, Ireland; Reyad Gafar Abbas, 19, of Rochester, New York; and Garrett Endicott, 21, of Warrensburg, Missouri.

The three men criminally accused of working with the six through their employment at mobile phone stores are Fendley Joseph, 28, of Murrietta, Calif.; Jarratt White, 22, and Robert Jack, 22, both from Tucson, Ariz. Joseph was a Verizon employee; White and Jack both worked at AT&T stores.

If convicted of conspiracy to commit wire fraud, each defendant faces a statutory maximum penalty of 20 years in prison. The wire fraud charges each carry a statutory maximum penalty of 20 years in prison as well.

Last month, 20-year-old college student and valedictorian Joel Ortiz became the first person ever to be sentenced for SIM swapping, pleading guilty and accepting a ten-year prison term for stealing more than $5 million in cryptocurrencies from victims and then spending it lavishly at elaborate club parties in Las Vegas and Los Angeles.

A copy of the indictment against the six men is here (PDF). The complaint against the former mobile company employees is here (PDF).

Planet DebianJonathan Carter: #debian-meeting revival

Picture: Wasps participating in a BoF during DebConf15 in Heidelberg.

As part of my DPL campaign I suggested that we have more open community meetings, and also that we have more generic open team meetings in a well-known public channel. Fortunately, that idea doesn’t really need a DPL to implement it, and on top of that our new DPL (Sam Hartman) supports the initiative. We do have a #debian-meeting IRC channel that’s been dormant for years, so we’re reviving it for these kinds of meetings.

Today we had our first session, the first meeting on that channel since 2011 (almost 8 years!). The topic was “Meet the new DPL and ask him anything!”. It was announced on some of the Debian channels, most notably on Bits from Debian. I played it safe by not announcing it too widely, because we don’t yet have much in the way of moderation, and if we had to deal with many trolls it would have been tough. This was also really early for people in the Americas (6am East Coast), so future sessions will be staggered across different times and days of the week. The session was a bit quieter than I expected, but Sam gave really nice answers and I learned a few new things, so it all worked out OK; I would rather start small and build on it than have it be chaotic and a mess.

In 2017 I started a community channel called #debian-til (TIL standing for “Today I Learned”), where people share interesting Debian-related things they have learned. It started with a handful of people and took a year to grow to a hundred, but I’m very happy with how that worked out and how the culture of that channel has evolved. I’m hoping that #debian-meeting can also grow and evolve into something useful and fun for our community, instead of being only a channel to schedule meetings in.

You can view the full logs of the meeting stored by meetbot in html or plain text.

Upcoming meetings:

Date and Time         Title
2019-06-03 12:00 UTC  Learn all about Debian-Edu
2019-06-17 19:00 UTC  Brainstorming for 100 paper cuts project kick-off

More details about both those meetings should be available soon. For the latest information, refer to the Debian Meeting wiki page. You can also subscribe to Bits from Debian (RSS/Atom) for the latest community news.

Anyone can schedule a meeting on the debian-meeting wiki page, so if you’re considering using it for a team meeting or any other kind of session idea, then please go ahead and do so!

Planet DebianMolly de Blanc: OSI Update: May 2019

A brick building with a wooden sign that says "Come in we're open source!"

At the most recent Open Source Initiative face-to-face board meeting I was elected president of the board of directors. In the spirit of transparency, I wanted to share a bit about my goals and my vision for the organization over the next five years. These thoughts are my own, not reflecting official organization policy or plans. They do not speak to the intentions nor desires of other members of the board. I am representing my own thoughts, and where I’d like to see the future of the OSI go.

A little context on the OSI

You can read all about the history of “open source” and the OSI, so I will spare you the history lesson for now. I believe the following are the organization’s main activities:

There are lots of other things the OSI does, both to support the above activities and in addition to them. As I mentioned in my 2019 election campaign, most of what we do ranges from “niche interesting” to “onerous,” with “boring” and “tedious” also on that list. We table at events, give talks, write and approve budgets, answer questions, have meetings, maintain our own pet projects, read mailing lists, keep up with the FLOSS/tech news, tweet, host events, and a number of other things I am inevitably forgetting.

The OSI, along with the affiliate and individual membership, defines the future of open source, through the above activities and then some.

Why I decided to run for president

I’ve been called an ideologue, an idealist, a true believer, a wonk, and a number of other things — flattering, embarrassing, and offensive — concerning my relationship to free and open source software. I recently said that “user freedom is the hill I will die on, and let the carrion birds feast on my remains.” While we are increasingly discussing the ethical considerations of technology we need to also raise awareness of the ways user freedom and software freedom are entwined with the ethical considerations of computing. These philosophies need to be in the foundational design of all modern technologies in order for us to build technology that is ethical.

I have a vision for the way the OSI should fit into the future of technology, I think it’s a good vision, and I thought that being president would be a good way to help move that forward. It also gave me a very concrete and candid opportunity to share my hopes for the present and the future with my fellow board directors, to see where they agree and where they dissent, and to collaboratively build a cohesive organizational mission.

So, what is my vision?

I have two main goals for my presidency: 1) strategic growth of the organization while encouraging sustainability and 2) re-examining and revising the license approval process where necessary.

I have a five point list of things I would like to see be true for the OSI over the next five years:

  • Organizational relevance: The OSI should continue its important mission of stewarding the OSD, the license list, and the integrity of the term open source.
  • Provide expert guidance on open source: Have others approach us for opinions and advice, and be looked to as an authority on issues and questions.
  • Coordinate contact within the community: Have a role connecting people with others within the community in order to share expertise and become better open source citizens.
  • A clear, effective license approval process: Have a clear licensing process, comprised of experts in the field of licensing, with a range of opinions and points of view, in order to create and maintain a healthy list of open source licenses.
  • Support growing projects: Provide strategic assistance wherever OSI is best placed to do so. For example, providing fiscal sponsorship where we are uniquely qualified to help a project flourish.

An additional disclaimer

As I mentioned above, these are my thoughts and opinions, and do not represent plans for the organization nor the opinions of the rest of the board. They are some things I think would be nice to see. After all, according to the bylaws my actual privileges and responsibilities as president are to “preside over all board meetings,” accept resignations, and call “special meetings.”

CryptogramCryptanalyzing a Pair of Russian Encryption Algorithms

A pair of Russia-designed cryptographic algorithms -- the Kuznyechik block cipher and the Streebog hash function -- have the same flawed S-box that is almost certainly an intentional backdoor. It's just not the kind of mistake you make by accident, not in 2014.

Planet DebianJonathan Dowland: Debian Buster and Wayland

The next release of Debian OS (codename "Buster") is due very soon. It's currently in deep freeze, with no new package updates permitted unless they fix Release Critical (RC) bugs. The RC bug count is at 123 at the time of writing: this is towards the low end of the scale, consistent with being at a late stage of the freeze.

As things currently stand, the default graphical desktop in Buster will be GNOME, using the Wayland desktop technology. This will be the first time that Debian has defaulted to Wayland, rather than Xorg.

For major technology switches like this, Debian has traditionally taken a very conservative approach to adoption, with a lot of reasoned debate by lots of developers. The switch to systemd by default is an example of this (and here's one good example of LWN coverage of the process we went through for that decision).

Switching to Wayland, however, has not gone through a process like this. In fact it's happened as a result of two entirely separate decisions:

  1. The decision that the default desktop environment for Debian should be GNOME (here's some notes on this decision being re-evaluated for Jessie, demonstrating how rigorous this was)

  2. The GNOME team's decision that the default GNOME session should be Wayland, not Xorg, consistent with upstream GNOME.

In isolation, decision #2 can be justified in a number of ways: within the limited scope of the GNOME desktop environment, Wayland works well; the GNOME stack has been thoroughly tested; and it’s now the default upstream.

But in a wider context than just the GNOME community, there are still problems to be worked out. This all came to my attention because for a while the popular Synaptic package manager was to be ejected from Debian for not working under Wayland. That bug has now been worked around to prevent removal (although it's still not functional in a Wayland environment). Tilda was also at risk of removal under the same rationale, and there may be more such packages that I am not aware of.

In the last couple of weeks I switched my desktop over to Wayland in order to get a better idea of how well it worked. It's been a mostly pleasant experience: things are generally very good, and I'm quite excited about some of the innovative things that are available in the Wayland ecosystem, such as the Sway compositor/window manager and interesting experiments like a re-implementation of Plan 9's rio called wio. However, in this short time I have hit a few fairly serious bugs, including #928030 (desktop and session manager lock up immediately if the root disk fills) and #928002 (drag and drop from Firefox to the file manager locks up all X-based desktop applications), which have led me to believe that things are not well integrated enough — yet — to be the default desktop technology in Debian. I believe that a key feature of Debian is that we incorporate tools and technologies from a very wide set of communities, and you can expect to mix and match GNOME apps with KDE ones or esoteric X-based applications, old or new, or terminal-based apps, etc., to get things done. That's at least how I work, and it is one of the major attractions of Debian as a desktop distribution. I argue this case in #927667.

I think we should default to GNOME/Xorg for Buster, and push to default to Wayland for the next release. If we are clear that this is a release goal, hopefully we can get wider project engagement and testing, and ensure that the whole Debian ecosystem is more tightly integrated and a solid experience.

If you are running a Buster-based desktop now, please consider trying GNOME/Wayland and seeing whether the things you care about work well in that environment. If you find any problems, please file bugs, so we can improve the experience, no matter the outcome for Buster.
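As an aside, a quick way to tell which technology your current session uses is the XDG_SESSION_TYPE environment variable (a minimal sketch, assuming a logind-aware display manager such as GDM exports it):

```shell
# Print the current session type as reported by logind-aware display
# managers; typical values are "wayland", "x11" or "tty". If the
# variable isn't set, fall back to "unknown" rather than failing.
session_type() {
    echo "${XDG_SESSION_TYPE:-unknown}"
}

session_type
```

If this prints “x11” inside a GNOME session on Buster, you are still running on Xorg.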

LongNowLaura Welcher speaks at Rhizome 7 x 7

Last month, Long Now’s Laura Welcher was part of the group of artists and technologists that were invited to Rhizome’s Seven on Seven conference in New York City. Welcher and artist Hayal Pozanti presented their art project WantNot. From Art News:

“Fate brought us together,” Laura Welcher, the director of operations at the Long Now Foundation, said admiringly of her partner that day, the artist Hayal Pozanti. United by a shared interest in linguistics, the two self-classified “language nerds” presented a new work—WantNot, a glyph of sorts intended to communicate mutual respect to future humans (or aliens) who might come upon it. They 3D-printed their curious object, which resembles an abstract bird with its head bent, using terracotta, a biodegradable substance. And they shared the news that anyone who wanted to could do the same, thanks to a website they built. “We encourage the idea of not getting too attached to wanting WantNot,” Pozanti said.

Read the Art News writeup in full here.

Worse Than FailureError'd: From Error to Disaster

"They're a SEO company, so I'm pretty sure they know what they're doing," Björn E. wrote.

 

"When your toddler has to study for their ITIL certificate, the least you can do is give them a nice desk to study on," writes Sven V.

 

Esox L. wrote, "They claim to be in top 5%...but where would they be if they set name?"

 

"I guess you should never say 'never' when it comes to Microsoft Dynamics 365," writes Andrew.

 

Philip B. writes, "Say what you like, CNN, but there's the headline."

 

"So, wait, if all the images are for a single taxi...and the Google CAPTCHA is looking for taxis (plural)...you know, I must be robot or something," Peter G. wrote.

 



CryptogramAnother NSA Leaker Identified and Charged

In 2015, the Intercept started publishing "The Drone Papers," based on classified documents leaked by an unknown whistleblower. Today, someone who worked at the NSA, and then at the National Geospatial-Intelligence Agency, was charged with the crime. It is unclear how he was initially identified. It might have been this: "At the agency, prosecutors said, Mr. Hale printed 36 documents from his Top Secret computer."

The article talks about evidence collected after he was identified and searched:

According to the indictment, in August 2014, Mr. Hale's cellphone contact list included information for the reporter, and he possessed two thumb drives. One thumb drive contained a page marked "secret" from a classified document that Mr. Hale had printed in February 2014. Prosecutors said Mr. Hale had tried to delete the document from the thumb drive.

The other thumb drive contained Tor software and the Tails operating system, which were recommended by the reporter's online news outlet in an article published on its website regarding how to anonymously leak documents.

Planet DebianThomas Goirand: OpenStack-cluster-installer in Buster

I’ve been working on this for more than a year, and finally I am achieving my goal. I wrote an OpenStack cluster installer that is fully in Debian, and it is running in production for Infomaniak.

Note: I originally wrote this blog post a few weeks ago, though it was pending validation from my company (to make sure I wouldn’t disclose company business information).

What is it?

As per the package description and the package name, OCI (OpenStack Cluster Installer) is software to provision an OpenStack cluster automatically, with a “push button” interface. The OCI package depends on a DHCP server, a PXE (tftp-hpa) boot server, a web server, and a puppet-master.

Once computers in the cluster boot for the first time over the network (PXE boot), a Debian live system squashfs image is served by OCI (via Apache) to act as a discovery image. This live system then reports the hardware features of the booted machine back to OCI (CPU, memory, HDDs, network interfaces, etc.). The computers can then be installed with Debian from that live system. During this process, a puppet-agent is configured so that it will connect to the puppet-master of OCI. Upon first boot, OpenStack services are then installed and configured, depending on the server’s role in the cluster.
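To give a flavour of the discovery step (a hypothetical sketch, not OCI’s actual agent or report format), such an agent boils down to collecting a few facts from /proc and friends and sending them back to the server:

```shell
# Hypothetical sketch in the spirit of OCI's dash-based discovery agent
# (this is NOT OCI's actual script): gather a few hardware facts from
# the live system and print them as JSON. The real agent reports such
# facts back to the OCI server over HTTP.
report_hardware() {
    cpus=$(nproc)
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    printf '{"cpus": %s, "mem_kb": %s}\n' "$cpus" "$mem_kb"
}

report_hardware
```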

OCI is fully packaged in Debian, including all of the Puppet modules and so on. So just doing “apt-get install openstack-cluster-installer” is enough to bring in absolutely all dependencies, and no other artifacts are needed. This is very important: one only needs a local Debian mirror to install an OpenStack cluster. No external components need to be downloaded from the internet.

OCI setting up a Swift cluster

At the beginning of OCI’s life, we first used it at Infomaniak (my employer) to set up a Swift cluster. Swift is the object store of OpenStack. It is a perfect solution for a (very) large backup system.

Think of a massive highly available cluster, with a capacity reaching petabytes, storing millions of objects/files 3 times (for redundancy). Swift can virtually scale to infinity as long as you size your ring correctly.

The Infomaniak setup is also redundant at the data center level, as our cluster spans 2 data centers, with at least one copy of everything stored in each data center (the location of the 3rd copy depends on many things, and explaining it is not in the scope of this post).

If one wishes to use Swift, it’s OK to start with 7 machines: 3 controller machines (holding the Keystone authentication, and a bit more), at least 1 swift-proxy machine, and 3 storage nodes. Though for redundancy purposes, it is IMO not good enough to start with only 3 storage nodes: if one fails, the proxy server will fall into timeouts waiting for the 3rd storage node. So 6 storage nodes feel like a better minimum. They don’t have to be top-notch servers, though: a cluster made of refurbished old hardware with only a few disks can do it, if you don’t need to store too much data.

Setting up an OpenStack compute cluster

Though Swift was the first thing OCI did for us, it can now do way more than just Swift. Indeed, it can also set up a full OpenStack cluster with Nova (compute), Neutron (networking) and Cinder (network block devices). We also started using all of that, set up by OCI, at Infomaniak. Here’s the list of services currently supported:

  • Keystone (identity)
  • Heat (orchestration)
  • Aodh (alarming)
  • Barbican (key/secret manager)
  • Nova (compute)
  • Glance (VM images)
  • Swift (object store)
  • Panko (event)
  • Ceilometer (resource monitoring)
  • Neutron (networking)
  • Cinder (network block device)

On the backend, OCI can use LVM or Ceph for Cinder, and local storage or Ceph for Nova instances.

Full HA redundancy

The nice thing is, absolutely every component set up by OCI is deployed in a highly available way. Each machine of the OpenStack control plane runs an instance of the components: all of the OpenStack controller components, a MariaDB server that is part of the Galera cluster, etc.

HAProxy is also set up on all controllers, in front of all of the REST API servers of OpenStack. And finally, the web address that final clients connect to is in fact a virtual IP, which can move from one server to another thanks to corosync. Routing to that VIP can be done either over L2 (i.e. a static address on a local network), or over BGP (useful if you need multi-datacenter redundancy). So if one of the controllers goes down, it’s not such a big deal: HAProxy will detect this within seconds, and if that server held the virtual IP (matching the API endpoint), the IP will move to one of the other servers.
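To illustrate the idea, here is roughly what such a setup looks like for one API (a hedged sketch with made-up addresses and paths, not OCI’s generated configuration; the Keystone API conventionally listens on port 5000):

```
frontend keystone_api
    # The VIP managed by corosync; whichever controller holds it
    # terminates SSL here.
    bind 203.0.113.10:5000 ssl crt /etc/haproxy/api.pem
    mode http
    default_backend keystone_servers

backend keystone_servers
    mode http
    # Health-check each controller's Keystone instance; a dead
    # controller is taken out of rotation within seconds.
    option httpchk GET /v3
    server controller1 10.0.0.11:5000 check ssl verify none
    server controller2 10.0.0.12:5000 check ssl verify none
    server controller3 10.0.0.13:5000 check ssl verify none
```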

Full SSL transport

One of the things OCI does when installing Debian is set up a PKI (i.e. SSL certificates signed by a local root CA), so that everything in the cluster is transported over SSL. HAProxy, of course, terminates SSL, but it also connects to the different API servers over SSL. All connections to the RabbitMQ servers are performed over SSL as well. If one wishes, it’s possible to replace the self-signed SSL certificates before the cluster is deployed, so that the OpenStack API endpoint can be exposed on a public address.
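The mechanics of such a local PKI can be sketched with plain openssl commands (file names and subjects here are illustrative, not what OCI generates):

```shell
# Hedged illustration, not OCI's actual code: a throwaway local CA.
# 1. Self-signed root CA, valid 10 years.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=oci-local-root-ca" -keyout ca.key -out ca.crt
# 2. Key and certificate signing request for the API endpoint.
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=api.cluster.example.com" -keyout server.key -out server.csr
# 3. Sign the CSR with the local CA.
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 825 -out server.crt
# 4. Check that the certificate chains back to the CA.
openssl verify -CAfile ca.crt server.crt
```

The last command should report that server.crt verifies against the local CA; distributing ca.crt to every node is what lets all intra-cluster connections be validated.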

OCI as a quite modular system

If one decides to use Ceph for storage, then for every compute node of the cluster, it is possible to choose either Ceph for the storage of /var/lib/nova/instances, or local storage. In the latter case, using RAID is strongly advised, to avoid any possible loss of data. It is possible to mix both types of compute node storage in a single cluster, and to create server aggregates so it is later possible to decide which type of compute server to run a workload on.

If Ceph is part of the cluster, then on every compute node the cinder-volume and cinder-backup services will be provisioned. They will be used to control the Cinder volumes of the Ceph cluster. Even though the network block storage itself will not run on the compute machines, this makes sense: the number of these processes needs to scale at the same time as the number of compute nodes. Also, on compute servers the Ceph secret is already set up via libvirt, so it was convenient to re-use this.

As for Glance, if you have Ceph, it will be used as the backend. If not, Glance will use Swift. And if you don’t have a Swift cluster either, it will fall back to the normal file backend, with a simple rsync from the first controller to the others. In such a setup, only the first controller is used for glance-api. The other controllers also run glance-api, but HAProxy doesn’t use them, as we really want the images to be stored on the first controller so they can be rsynced to the others. In practice, it’s not such a big deal, because the images are in the cache of the compute servers when in use anyway.

If one sets up Cinder volume nodes, then cinder-volume and cinder-backup will be installed there, and the system will automatically know that there’s Cinder with an LVM backend. Both Cinder over LVM and Cinder over Ceph can be set up on the same cluster (I never really tried this, though I don’t see why it wouldn’t work; normally, both backends will simply be available).

OCI in Buster vs current development

Lots of new features are being added to OCI. These, unfortunately, won’t make it into Buster, though the Buster release has just enough to be able to provision a working OpenStack cluster.

Future features

What I envision for OCI is making it able to provision a cluster ready to serve as a public cloud. This means having all of the resource accounting set up, as well as CloudKitty (OpenStack’s resource rating engine). I’ve already played a bit with this, and it should be out fast. Then the only missing bit before going public will be billing of the rated resources, which obviously has to be done in-house, and doesn’t need to live within the OpenStack cluster itself.

The other thing I am planning to do is add more and more services. Currently, even though OCI can set up a fully working OpenStack, it is still a basic one. I do want to add advanced features like Octavia (load balancer as a service), Magnum (Kubernetes cluster as a service), Designate (DNS), Manila (shared filesystems) and much more if possible. The number of available projects is really big, so it will probably keep me busy for a very long time.

At this point, what OCI also misses is a custom Debian installer ISO image that would include absolutely everything. It shouldn’t be hard to write, though I lack the basic knowledge of how to do this. Maybe I will work on it at this summer’s DebConf. In the end, it could be a Debian pure blend (i.e. a fully integrated distro-in-the-distro system, just like debian-edu or debian-med). It’d be nice if this ISO image could include all of the packages for the cluster, so that no external resources would be needed. Setting up an OpenStack cluster with no internet connectivity at all would then become possible. In fact, only the API endpoint on port 443 and the virtual machines need internet access; your management network shouldn’t be connected (it’s much safer this way).

No, 80 engineers didn’t burn out in the process of implementing OCI

One thing that makes me proud is that I wrote all of this OpenStack installer nearly alone (truth: it leverages all the work of puppet-openstack; it wouldn’t have been possible without it…). That’s unique in the (small) OpenStack world. At companies like my previous employer, or a famous company working on RPM-based distros, this kind of product is the work of dozens of engineers. I heard that Red Hat has nearly 100 employees working on TripleO. This was possible because I tried to keep OCI in the spirit of “keep it simple, stupid”: it does only what’s needed, implemented in the simplest way possible, so that it is easy to maintain.

For example, the hardware discovery agent is made of 63 lines of POSIX shell script (that is: not even bash… but dash), while I’ve seen others use really over-engineered stuff, like heavy Ruby or Python modules. Ironic-inspector, for example, in the Rocky release, is made of 98 files, for a total of 17974 lines. I really wonder what they are doing with all of this (I didn’t dare to look). Of one thing I’m sure: what I did is really enough for OCI’s needs, and I don’t want to run a 250+ MB initrd as the discovery system. OCI’s live-build based discovery image, loaded over the web rather than PXE, is way smarter.

In the same spirit, the part that does the bare-metal provisioning is the same shell script that I wrote to create the official Debian OpenStack images. It was about 700 lines of shell script to install Debian on a .qcow2 image; it’s now about 1500 lines, and made of a single file. That’s the smallest footprint you’ll ever find. However, it still does all that’s needed, and probably even more.

In comparison, Fuel had a super-complicated scheduler, written in Ruby, used to provision a full cluster with a single click of a button. There’s no such thing in OCI, because I believe it’s a useless gadget. With OCI, a user simply needs to remember the order for setting up a cluster: Cephmon nodes first, then CephOSD nodes, then the controllers, and finally, in no particular order, the compute, swiftproxy, swiftstore and volume nodes. That’s really not a big deal to leave to the final user, as one is not expected to set up multiple OpenStack clusters every day. And even so, with the “ocicli” tool it shouldn’t be hard to automate these final bits. But I would consider this a useless gadget.

While every company jumped onto the micro-services-in-containers thing, I continue to believe, even now, that this is useless, and mostly driven by the needs of marketing people who have features to sell. Running OpenStack directly on bare metal is already hard, and the complexity added by running OpenStack services in Docker is useless: it doesn’t bring any feature. I’ve been told that it makes upgrades easier; I very much doubt it. Upgrades are complex for other reasons than just upgrading the running services themselves: one needs to upgrade the cluster components in a given order, and scheduling this isn’t easy.

So this is how I managed to write an OpenStack installer alone, in less than a year, without compromising on features: I wrote things simply, and avoided the over-engineering I saw at all levels in other products.

OpenStack Stein is coming

I’ve just pushed to Debian Experimental, and to https://buster-stein.debian.net/debian, the latest release of OpenStack (code name: Stein), which was released upstream on the 10th of April (yesterday, as I write these lines). I’ve been able to install Stein on top of Debian Buster, and I could start VMs on it: it’s all working as expected after a few changes in the puppet manifests of OCI. What’s needed now is testing upgrades from Stretch + Rocky to Buster + Stein. Normally, puppet-openstack can do that. Let’s see…

Want to know more?

Read on… the README.md is on https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer

Last words, last thanks

This concludes a bit more than a year of development. None of this would have been possible without my employer, Infomaniak, giving me total freedom in how I implement things for going into production. So a big thanks to them, and also for being a platinum sponsor of this year’s DebConf in Brazil.

Also a big thanks to the whole of the OpenStack project, including (but not limited to) the Infra team and the puppet-openstack team.

Planet DebianMichal Čihař: Weblate blog moved

I've been publishing updates about Weblate on my blog for the past seven years. Now the project has grown up enough to deserve its own space to publish posts. Please update your bookmarks and RSS readers to the new location, directly on the Weblate website.

The Weblate website will receive more updates in the upcoming weeks; I'm really looking forward to these.

New address for Weblate blog is https://weblate.org/news/.

New address for the RSS feed is https://weblate.org/feed/.

Filed under: Debian English SUSE Weblate

CryptogramAmazon Is Losing the War on Fraudulent Sellers

Excellent article on fraudulent seller tactics on Amazon.

The most prominent black hat companies for US Amazon sellers offer ways to manipulate Amazon's ranking system to promote products, protect accounts from disciplinary actions, and crush competitors. Sometimes, these black hat companies bribe corporate Amazon employees to leak information from the company's wiki pages and business reports, which they then resell to marketplace sellers for steep prices. One black hat company charges as much as $10,000 a month to help Amazon sellers appear at the top of product search results. Other tactics to promote sellers' products include removing negative reviews from product pages and exploiting technical loopholes on Amazon's site to lift products' overall sales rankings.

[...]

AmzPandora's services ranged from small tasks to more ambitious strategies to rank a product higher using Amazon's algorithm. While it was online, it offered to ping internal contacts at Amazon for $500 to get information about why a seller's account had been suspended, as well as advice on how to appeal the suspension. For $300, the company promised to remove an unspecified number of negative reviews on a listing within three to seven days, which would help increase the overall star rating for a product. For $1.50, the company offered a service to fool the algorithm into believing a product had been added to a shopper's cart or wish list by writing a super URL. And for $1,200, an Amazon seller could purchase a "frequently bought together" spot on another marketplace product's page that would appear for two weeks, which AmzPandora promised would lead to a 10% increase in sales.

This was a good article on this from last year. (My blog post.)

Amazon has a real problem here, primarily because trust in the system is paramount to Amazon's success. As much as they need to crack down on fraudulent sellers, they really want articles like these to not be written.

Slashdot thread. Boing Boing post.

Worse Than FailureCodeSOD: What For?

Pretty much every language has many ways to do loops/iteration. for and while and foreach and do while and function application and recursion and…

It’s just too many. Mike inherited some code which cleans up this thicket of special cases for iteration and just uses one pattern to solve every iteration problem.

// snip. Preceding code correctly sets $searchArray as an array of strings.
$searchCount = count($searchArray);
if ($searchCount > 0) {
	$checked = 0;
	while ($checked != $searchCount) {
		$thisOne = $searchArray[$checked];
		// snip 86 lines of code 
		$checked++;
	}
}

Gone are the difficult choices, like “should I use a for or a foreach?” and instead, we have just a while loop. And that was the standard pattern, all through the codebase. while(true) useWhile();, as it were.
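For contrast, here is a minimal sketch (with hypothetical data standing in for the snipped code that builds `$searchArray`) putting the codebase's while-counter pattern next to the idiomatic `foreach`, which handles the bounds check and index bookkeeping for you:

```php
<?php
// Hypothetical stand-in for the snipped code that sets $searchArray.
$searchArray = ["alpha", "beta", "gamma"];

// The codebase's pattern: manual counter, manual bounds check, and an
// explicit guard for the empty-array case.
$viaWhile = [];
$searchCount = count($searchArray);
if ($searchCount > 0) {
    $checked = 0;
    while ($checked != $searchCount) {
        $viaWhile[] = strtoupper($searchArray[$checked]);
        $checked++;
    }
}

// The idiomatic equivalent: foreach needs no counter and no guard,
// because it simply does nothing for an empty array.
$viaForeach = [];
foreach ($searchArray as $thisOne) {
    $viaForeach[] = strtoupper($thisOne);
}

var_dump($viaWhile === $viaForeach); // bool(true)
```

Both loops produce the same result; the `foreach` just has fewer moving parts to get wrong (and no `!=` comparison that turns into an infinite loop if `$checked` ever skips past `$searchCount`).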

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaSimon Lyall: Audiobooks – April 2019

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker

Amazingly good book, well argued and with lots of information. The only downside is that he talks to some diagrams [downloadable] at times. Highly recommended. 9/10

A History of Britain, Volume : Fate of Empire 1776 – 2000 by Simon Schama

I didn’t enjoy this all that much. The author tried to use various lives to illustrate themes, but both the themes and the biographies suffered. Huge areas were also left out. 6/10

Where Did You Get This Number? : A Pollster’s Guide to Making Sense of the World by Anthony Salvanto

An overview of (mostly) political polling and its history. Lots of examples from the 2016 US election campaign. Light but interesting. 7/10

Squid Empire: The Rise and Fall of the Cephalopods by Danna Staaf

Pretty much what the title says. I got a little lost with all the similarly named species, but the general story was interesting enough and not too long. 6/10

Apollo in the Age of Aquarius by Neil M. Maher

The story of the back and forth between NASA and the 60s counterculture from the civil rights struggle and the antiwar movement to environmentalism and feminism. Does fairly well. 7/10


Share

,

TEDA new mission to mobilize 2 million women in US politics … and more TED news

TED2019 may be past, but the TED community is busy as ever. Below, a few highlights.

Amplifying 2 million women across the U.S. Activist Ai-jen Poo, Black Lives Matter co-founder Alicia Garza and Planned Parenthood past president Cecile Richards have joined forces to launch Supermajority, which aims to train 2 million women in the United States to become activists and political leaders. To scale, the political hub plans to partner with local nonprofits across the country; as a first step, the co-founders will embark on a nationwide listening tour this summer. (Watch Poo’s, Garza’s and Richards’ TED Talks.)

Sneaker reseller set to break billion-dollar record. Sneakerheads, rejoice! StockX, the sneaker-reselling digital marketplace led by data expert Josh Luber, will soon become the first company of its kind with a billion-dollar valuation, thanks to a new round of venture funding.  StockX — a platform where collectible and limited-edition sneakers are bought and exchanged through real-time bidding — is an evolution of Campless, Luber’s site that collected data on rare sneakers. In an interview with The New York Times, Luber said that StockX pulls in around $2 million in gross sales every day. (Watch Luber’s TED Talk.)

A move to protect iconic African-American photo archives. Investment expert Mellody Hobson and her husband, filmmaker George Lucas, filed a motion to acquire the rich photo archives of iconic African-American lifestyle magazines Ebony and Jet. The archives are owned by the recently bankrupt Johnson Publishing Company; Hobson and Lucas intend to gain control over them through their company, Capital Holdings V. The collections include over 5 million photos of notable events and people in African American history, particularly during the Civil Rights Movement. In a statement, Capital Holdings V said: “The Johnson Publishing archives are an essential part of American history and have been critical in telling the extraordinary stories of African-American culture for decades. We want to be sure the archives are protected for generations to come.” (Watch Hobson’s TED Talk.)

10 TED speakers chosen for the TIME100. TIME’s annual round-up of the 100 most influential people in the world includes climate activist Greta Thunberg, primatologist and environmentalist Jane Goodall, astrophysicist Sheperd Doeleman and educational entrepreneur Fred Swaniker — also Nancy Pelosi, the Pope, Leana Wen, Michelle Obama, Gayle King (who interviewed Serena Williams and now co-hosts CBS This Morning, home to TED segments), and Jeanne Gang. Thunberg was honored for her work igniting climate change activism among teenagers across the world; Goodall for her extraordinary life work of research into the natural world and her steadfast environmentalism; Doeleman for his contribution to the Harvard team of astronomers who took the first photo of a black hole; and Swaniker for the work he’s done to educate and cultivate the next generation of African leaders. Bonus: TIME100 luminaries are introduced in short, sharp essays, and this year many of them came from TEDsters including JR, Shonda Rhimes, Bill Gates, Jennifer Doudna, Dolores Huerta, Hans Ulrich Obrist, Tarana Burke, Kai-Fu Lee, Ian Bremmer, Stacey Abrams, Madeleine Albright, Anna Deavere Smith and Margarethe Vestager. (Watch Thunberg’s, Goodall’s, Doeleman’s, Pelosi’s, Pope Francis’, Wen’s, Obama’s, King’s, Gang’s and Swaniker’s TED Talks.)

Meet Sports Illustrated’s first hijab-wearing model. Model and activist Halima Aden will be the first hijab-wearing model featured in Sports Illustrated’s annual swimsuit issue, debuting May 8. Aden will wear two custom burkinis, modestly designed swimsuits. “Being in Sports Illustrated is so much bigger than me,” Aden said in a statement, “It’s sending a message to my community and the world that women of all different backgrounds, looks, upbringings can stand together and be celebrated.” (Watch Aden’s TED Talk.)

Scotland’s post-surgical deaths drop by a third, and checklists are to thank. A study indicated a 37 percent decrease in post-surgical deaths in Scotland since 2008, which it attributed to the implementation of a safety checklist. The 19-item list created by the World Health Organization is supposed to encourage teamwork and communication during operations. The death rate fell to 0.46 per 100 procedures between 2000 and 2014, analysis of 6.8 million operations showed. Dr. Atul Gawande, who introduced the checklist and co-authored the study, published in the British Journal of Surgery, said to the BBC: “Scotland’s health system is to be congratulated for a multi-year effort that has produced some of the largest population-wide reductions in surgical deaths ever documented.” (Watch Gawande’s TED Talk.) — BG

And finally … After the actor Luke Perry died unexpectedly of a stroke in February, he was buried according to his wishes: on his Tennessee family farm, wearing a suit embedded with spores that will help his body decompose naturally and return to the earth. His Infinity Burial Suit was made by Coeio, led by designer, artist and TED Fellow Jae Rhim Lee. Back in 2011, Lee demo’ed the mushroom burial suit onstage at TEDGlobal; now she’s focused on testing and creating suits for more people. On April 13, Lee spoke at Perry’s memorial service, held at Warner Bros. Studios in Burbank; Perry’s daughter revealed his story in a thoughtful Instagram post this past weekend. (Watch Lee’s TED Talk.) — EM

TEDPreviewed at TED: Microfluidics sweat analysis from Gatorade

Gx patches at Sweat It Out, sponsored by Gatorade at TED2019: Bigger Than Us. April 15 – 19, 2019, Vancouver, BC, Canada. Photo: Dian Lofton / TED

Imagine if, after your next workout, you could see not only how much you sweat, but what you sweat — and how to replenish what’s missing. That’s the promise of a new sweat analysis patch from Gatorade, shown in preview form at TED2019.

How it works: You place the small, flexible patch on your arm before a workout. Then the microfluidics inside the patch get to work. As Tucker Fort, a partner at Gatorade collaborator Smart Design, explains: “It measures what your sweat rate is, and the electrolyte content of your sweat.” The channels in the patch turn color to indicate what they’re sensing. (The microfluidics tech is developed in collaboration with Epicore Biosystems.) Afterwards, you snap a picture of the patch with the Gx app, which uses image processing to interpret the data for you.

“With those data points in your profile,” says Fort, “we’re able to make recommendations for you based on how your body performs, and suggest what you should drink before and during your workout, and to recover.” Recommendations will change day to day, based on factors like the weather and the duration of your workout.

What to do with this data? Well, Gatorade’s got you covered. Once you’ve got your patch data, the Gx app — set to be available in 2020 — will help you select a personalized Gatorade hydration plan that recommends the right amount of fluid, electrolytes and carbohydrates to match your data. The personalized drink options are contained in small pods of concentrated Gatorade, each about the size of a tangerine. You pick your personalized pod of concentrate, pierce it onto a special reusable water bottle, and mix the concentrate with 30 ounces of fresh water. As Fort says, “It’s a totally new form factor for delivering a sports drink.”

You can’t get this patch+pod system just yet as a consumer, says Fort; “we’re going through the final scientific tests with sports scientists before we scale commercially.” But all week during TED, lucky attendees could try the patches during morning fitness events presented by Gatorade, ranging from early-morning runs to yoga, tai chi and an active class called, appropriately, Sweat It Out.

Click to view slideshow.

CryptogramLeaked NSA Hacking Tools

In 2016, a hacker group calling itself the Shadow Brokers released a trove of 2013 NSA hacking tools and related documents. Most people believe it is a front for the Russian government. Since then, the vulnerabilities and tools have been used by both governments and criminals, and have seriously called into question the NSA's ability to secure its own cyberweapons.

Now we have learned that the Chinese used the tools fourteen months before the Shadow Brokers released them.

Does this mean that both the Chinese and the Russians stole the same set of NSA tools? Did the Russians steal them from the Chinese, who stole them from us? Did it work the other way? I don't think anyone has any idea. But this certainly illustrates how dangerous it is for the NSA -- or US Cyber Command -- to hoard zero-day vulnerabilities.

LongNowTrevor Paglen’s Orbital Reflector Goes “Unrealized”

On December 3rd, 02018, Trevor Paglen’s Orbital Reflector launched into low orbit as part of the payload on SpaceX’s Falcon 9 rocket. The 100-foot-long diamond shaped mylar balloon was intended to be the world’s first space sculpture. It would be visible to the naked eye, appearing as a slowly-moving star in the sky. Paglen saw the project as a “catalyst” for asking what it means to be on this planet.

Unfortunately, Paglen’s vision won’t come to pass. Last week, the Nevada Museum of Art (co-producer of the project) confirmed that Orbital Reflector is “now officially being recognized as an unrealized artwork”:

Two unanticipated events occurred: 1) Due to the unprecedented number of satellites on the rocket, the U.S. Air Force was unable to distinguish between them and could not assign tracking numbers to many of them. Without a tracking number to verify location and position, the FCC could not give approval for inflation; and 2) The FCC was unavailable to move forward quickly due to the U.S. government shutdown.

Paglen also views Orbital Reflector’s demise as an unintended consequence of the U.S. government shutdown that President Donald J. Trump initiated to secure funding for building a border wall with Mexico.

“I blame it completely on the government shutdown,” the artist said. “In order to deploy the balloon, you have to coordinate with the FCC, the military and NASA, but the FCC and the part of the military we need to deal with were both shut down so there was literally nobody we could call to get the approval for deployment.”

At 35 days, the government shutdown was the longest in United States history. By the time the FCC was back up and running, the satellite communication functions that would initiate the inflation and deployment of the Orbital Reflector balloon had stopped working. The satellite was built to last only for the interval in which FCC approval was expected; it was never intended to withstand more than a month in the heat of the sun in space.

Despite the failure to deploy, Paglen wrote in a Medium post that he nonetheless considered the Orbital Reflector project as a success for provoking conversation:

If the project’s goal was to provoke a conversation about the politics of space, it has been nothing less than a stellar success. And the story of OR has become an embodiment of those politics: the Trump administration’s insistence on building a wall between the United States and Mexico led to the demise of a spacecraft whose purpose was to question these very kinds of politics.

He also believes that there’s still a possibility that Orbital Reflector will one day be realized:

I don’t know whether OR will ever deploy and inflate its reflective structure. It might, it might not. Ironically perhaps, the best chance of Orbital Reflector’s reflector deploying may come from a second system failure. If the spacecraft further degrades, any number of damaged components might inadvertently trigger the inflation sequence without warning. With this being the state of affairs, I think of Orbital Reflector’s current state as being in a state of unknown possibility, like an unopened present circling through the night sky. And I, for one, will keep my eyes on the stars, knowing that at any moment, a new one might spring to life.

Learn More:

CryptogramMalicious MS Office Macro Creator

Evil Clippy is a tool for creating malicious Microsoft Office macros:

At BlackHat Asia we released Evil Clippy, a tool which assists red teamers and security testers in creating malicious MS Office documents. Amongst others, Evil Clippy can hide VBA macros, stomp VBA code (via p-code) and confuse popular macro analysis tools. It runs on Linux, OSX and Windows.

The VBA stomping is the most powerful feature, because it gets around antivirus programs:

VBA stomping abuses a feature which is not officially documented: the undocumented PerformanceCache part of each module stream contains compiled pseudo-code (p-code) for the VBA engine. If the MS Office version specified in the _VBA_PROJECT stream matches the MS Office version of the host program (Word or Excel) then the VBA source code in the module stream is ignored and the p-code is executed instead.

In summary: if we know the version of MS Office of a target system (e.g. Office 2016, 32 bit), we can replace our malicious VBA source code with fake code, while the malicious code will still get executed via p-code. In the meantime, any tool analyzing the VBA source code (such as antivirus) is completely fooled.

Worse Than FailureEditor's Soapbox: The Master is Simplicity

When I was in college, as part of the general course requirements we had to take Probability and Statistics. The first time around I found it to be an impenetrable concept beyond my grasp, and I flunked. Since it was a requirement, I took it again and barely skated by. Joy; I had cleared the hurdle!

By that time, it had become clear to me that I was going into a field that required a whole lot more understanding of P&S than I had acquired. Since I wanted to be properly prepared, I signed up for a free summer school course to try it once more.

This time, the professor was a crotchety old German mathematician. He would not allow us to record the lectures. We were told not to bring the textbook to class as the turning of pages distracted him. We were not even allowed to take notes. Every class began with Good morning, Pencils Down! He firmly believed that you had not mastered a skill unless you could explain it in simple words to a complete neophyte, by merely describing it in non-technical terms that they already understood.

Based upon my prior two attempts at this subject, after two classes I was convinced that this was going to be a waste of time. After all, if I could barely understand it with the textbook and notes, what chance did I have like this? But I had already signed up and committed the time, so I stuck with it.

To my shock-surprise-awe, he managed to verbally paint a picture through which the concepts became crystal clear; not just to me, but to everyone in the class. I had no trouble acing all the homework assignments and tests, and my entire notebook for the course consisted of:

  Probability and Statistics MTH 203 - Summer 1977
  Textbook: ...
  Lecture recordings not allowed.
  Textbook not permitted in class.
  Notes not allowed.

The man was truly a master of his craft; arguably one of the very best teachers I ever had in my life. Unfortunately, they forced him to retire the following semester. Unable to teach, he was bored, lost his passion and died shortly thereafter.

From then on, I adopted his philosophy of The Master is in Simplicity and strived to incorporate it into everything I touched. If I couldn't just hand something that I had built to someone else with minimal explanation, then I had done something wrong and strived to fix it before moving on. Even in those cases where, for managerial/political/cow-orker reasons beyond my control, I was forced to implement something in an incredibly stupid way, I would at least break it up internally and follow the (by now, very old) rules:

  • one procedure should not extend beyond what you can see on the screen
  • classes should generally do one logical thing

Project mismanagement and ridiculous time constraints usually tested my resolve, but I promised myself early on that I would never turn in a project half-arsed for the sake of a deadline. If it wasn't done simply, then it wasn't done and not ready to be deployed. Period.

Sure, I've had to create some incredibly complex things to work around an assortment of WTF, but I always made sure that what and why were graphically documented (in both class- and block-level comments) out of simple respect for whoever came after me on the project.

Looking back on the forty years and all the projects since that time, I realized that I mostly stuck to that promise, thanks to the philosophy of that one teacher.

OTOH, I can't even count the projects I inherited that were so convoluted that it took longer to properly untangle the mess than it would have taken to rewrite the whole thing.

Have you ever had a teacher/mentor that shaped you in some profound way?

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Krebs on SecurityWhat’s Behind the Wolters Kluwer Tax Outage?

Early in the afternoon on Friday, May 3, I asked a friend to relay a message to his security contact at CCH, the cloud-based tax division of the global information services firm Wolters Kluwer in the Netherlands. The message was that the file directories containing new versions of CCH’s software were open and writable by any anonymous user, and that suspicious files in those directories indicated some user(s) had abused that access.

Shortly after that report, the CCH file directory for tax software downloads was taken offline. As of this publication, several readers have reported outages affecting multiple CCH Web sites. These same readers reported being unable to access their clients’ tax data in CCH’s cloud because of the ongoing outages. A Reddit thread is full of theories.

One of the many open and writable directories on CCH’s site before my report on Friday.

I do not have any information on whether my report about the world-writable file server had anything to do with the outages going on now at CCH. Nor did I see any evidence that any client data was exposed on the site.

What I did see in those CCH directories were a few odd PHP and text files, including one that seemed to be promoting two different and unrelated Russian language discussion forums.

I sent Wolters Kluwer an email asking how long the file server had been so promiscuous (allowing anyone to upload files to the server), and what the company was doing to validate the integrity of the software made available for download by CCH tax customers.

Marisa Westcott, vice president of marketing and communications at Wolters Kluwer, told KrebsOnSecurity on Friday that she would “check with the team to see if we can get some answers to your questions.”

But subsequent emails and phone calls have gone unreturned. Calls to the company’s main support number (800-739-9998) generate the voice message, “We are currently experiencing technical difficulties. Please try your call again later.”

On Tuesday morning, Wolters Kluwer released an update on the extensive outage via Twitter, saying:

“Since yesterday, May 6, we are experiencing network and service interruptions affecting certain Wolters Kluwer platforms and applications. Out of an abundance of caution, we proactively took offline a number of other applications and we immediately began our investigation and remediation efforts. The secure use of our products and services is our top priority. we have ben able to restore network and services for a number – but not all — of our systems.”

Accounting Today reports today that a PR representative from Wolters Kluwer Tax & Accounting, which makes the CCH products, confirmed the outage was the result of a malware attack:

“On Monday May 6, we started seeing technical anomalies in a number of our platforms and applications,” the statement given to Accounting Today reads. “We immediately started investigating and discovered the installation of malware. As a precaution, in parallel, we decided to take a broader range of platforms and applications offline. With this action, we aimed to quickly limit the impact this malware could have had, giving us the opportunity to investigate the issue with assistance from third-party forensics consultants and work on a solution. Unfortunately, this impacted our communication channels and limited our ability to share updates. On May 7, we were able to restore service to a number of applications and platforms.”

Accounting Today says the limited ability to share updates angered CCH users, many of whom took to social media to air their grievances against a cloud partner they perceive to be ill-prepared for maintaining ongoing service and proper security online.

“Despite CCH stating that a number of applications and platforms were up and running today, May 7, several users on a Reddit thread on the topic have stated that as of this morning in Florida, Maine, Texas, Pittsburgh and South Carolina, their CCH systems are still down,” Accounting Today wrote.

Special thanks to Alex Holden of Hold Security for help in notifying CCH.

Update, May 9, 10:26 a.m. ET: Updated this story to include the latest statement from Wolters Kluwer:

“On Monday May 6, our monitoring system alerted us to technical anomalies in a few of our applications and platforms. We immediately started investigating and detected the installation of malware. When we detected the malware, we proactively took a broad range of platforms, specifically including the CCH tax software applications, offline to protect our customers’ data and isolate the malware. The service interruptions our customers experienced are the result of our aggressive, precautionary efforts.”

“On May 7, we were able to begin restoring service to a number of applications and platforms. At this time, we have brought CCH Axcess, CCH SureTax, CCH AnswerConnect, and CCH Intelliconnect back online. Our process and protocols assure a high degree of confidence in the security of our applications and platforms before they are brought back online. We have seen no evidence that customer data and systems were compromised or that there was a breach of confidentiality of that data.”

“At this time, we have notified law enforcement and our investigation is ongoing. We regret any inconvenience this has caused, and we are fully committed to restoring remaining services as quickly as possible for our customers.”

CryptogramLocked Computers

This short video explains why computers regularly came with physical locks in the late 1980s and early 1990s.

The one thing the video doesn't talk about is RAM theft. When RAM was expensive, stealing it was a problem.

Cory Doctorow“Steering With the Windshield Wipers”: why nothing we’re doing to fix Big Tech is working


My latest Locus column is “Steering with the Windshield Wipers,” and it ties together the growth of Big Tech with the dismantling of antitrust law (which came about thanks to Robert Bork’s bizarre alternate history of antitrust, a theory so ridiculous that it never would have gained traction except that it promised to make rich people a lot richer).

The problems of Big Tech are almost all the results of how big they are, not the fact that they’re doing tech. But all of our regulatory responses to Big Tech — copyright filters, automated moderation laws, etc — are about specifying the technology that these companies must use, not making the companies smaller so that their mistakes don’t carry so much weight and so that their self-interested preferences aren’t so readily turned into laws.

40 years ago, Robert Bork and Ronald Reagan ripped the steering wheel out of our industrial policy’s vehicle and since then, we’ve been “steering” with everything else, because that’s all we have. But just because the windshield wipers work and the steering wheel doesn’t, it doesn’t follow that the wipers can steer the car.

A lack of competition rewards bullies, and bullies have insatiable appetites. If your kid is starving because they keep getting beaten up for their lunch money, you can’t solve the problem by giving them more lunch money – the bullies will take that money too. Likewise: in the wildly unequal Borkean inferno we all inhabit, giving artists more copyright will just enrich the companies that control the markets we sell our works into – the media companies, who will demand that we sign over those rights as a condition of their patronage. Of course, these companies will be subsequently menaced and expropriated by the internet distribution companies. And while the media companies are reluctant to share their bounties with us artists, they reliably expect us to share their pain – a bad quarter often means canceled projects, late payments, and lower advances.

And yet, when a lack of competition creates inequities, we do not, by and large, reach for pro-competitive answers. We are the fallen descendants of a lost civilization, destroyed by Robert Bork in the 1970s, and we have forgotten that once we had a mighty tool for correcting our problems in the form of pro-competitive, antitrust enforcement: the power to block mergers, to break up conglomerates, to regulate anticompetitive conduct in the marketplace.

Steering with the Windshield Wipers [Cory Doctorow/Locus]

(Image: Reuters)

Worse Than FailureCodeSOD: Interpolat(interpolation)on

C# has always had some variation on “string interpolation”, and starting with C# version 6, it gained a dedicated operator to make it easier. Now, you can write something like $"{foo} has a {bar}", which can be a very easy-to-read way of constructing formatted strings. In this example, {foo} and {bar} will be replaced by the values of the variables with the same names.

C#’s implementation is powerful. Pretty much any valid C# expression can be placed inside of those {}. Unfortunately for Petr, that includes the string interpolation operator, as a co-worker’s code demonstrates…

string query = $@"SELECT someColumn1, someColumn2, someColumn3 FROM myTable
	WHERE (TimeStamp >= '{startTime}' AND TimeStamp < '{endTime}')
	{(filterByType ? $"AND (DataType IN ({filetrOptions.Type}))" : string.Empty)}"; // Continued in this style for few more rows

This interpolated string contains an interpolated string. And a ternary. And it’s for constructing a SQL query dynamically as a string. There’s an argument to be made that this code is literally fractal in its badness, as it nests layers of badness within itself.
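To see the nesting mechanics in isolation, here is a minimal, self-contained sketch (variable names and values are hypothetical, not from Petr's codebase). Note that the real fix for this code is a parameterized query, not cleverer interpolation:

```csharp
using System;

class InterpolationDemo
{
    // Mirrors the pattern above: an interpolated verbatim string ($@"...")
    // whose interpolation hole contains a ternary, which itself contains
    // another interpolated string.
    public static string BuildQuery(string startTime, string endTime,
                                    bool filterByType, string type)
    {
        return $@"SELECT someColumn1 FROM myTable
WHERE (TimeStamp >= '{startTime}' AND TimeStamp < '{endTime}')
{(filterByType ? $"AND (DataType IN ({type}))" : string.Empty)}";
    }

    static void Main()
    {
        // With the filter enabled, the inner interpolated string is emitted:
        Console.WriteLine(BuildQuery("2019-05-01", "2019-05-02", true, "3"));
    }
}
```

Passing `filterByType: false` drops the clause entirely; either way, the values are spliced straight into the SQL text, which is exactly why this style invites injection.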

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Cory DoctorowHow the diverse internet became a monoculture


I appeared on this week’s Canadaland podcast (MP3) with Jesse Brown to talk about the promise of the internet 20 years ago, when it seemed that we were headed for an open, diverse internet with decentralized power and control, and how we ended up with an internet composed of five giant websites filled with screenshots from the other four. Jesse has been covering this for more than a decade (I was a columnist on his CBC podcast Search Engine, back in the 2000s) and has launched a successful independent internet business with Canadaland, but as he says, the monopolistic gentrification of the internet is heading for podcasting like a meteor.

CryptogramFirst Physical Retaliation for a Cyberattack

Israel has acknowledged that its recent airstrikes against Hamas were a real-time response to an ongoing cyberattack. From Twitter:

CLEARED FOR RELEASE: We thwarted an attempted Hamas cyber offensive against Israeli targets. Following our successful cyber defensive operation, we targeted a building where the Hamas cyber operatives work.

HamasCyberHQ.exe has been removed. pic.twitter.com/AhgKjiOqS7

– Israel Defense Forces (@IDF) May 5, 2019

I expect this sort of thing to happen more -- not against major countries, but by larger countries against smaller powers. Cyberattacks are too much of a nation-state equalizer otherwise.

Another article.

EDITED TO ADD (5/7): Commentary.

,

Planet Linux Australiasthbrx - a POWER technical blog: Visual Studio Code for Linux kernel development

Here we are again - back in 2016 I wrote an article on using Atom for kernel development, but I didn't stay using it for too long, instead moving back to Emacs. Atom had too many shortcomings - it had that distinctive Electron feel, which is a problem for a text editor - you need it to be snappy. On top of that, vim support was mediocre at best, and even as a vim scrub I would find myself trying to do things that weren't implemented.

So in the meantime I switched to spacemacs, which is a very well integrated "vim in Emacs" experience, with a lot of opinionated (but good) defaults. spacemacs was pretty good to me but had some issues - disturbingly long startup times, mediocre completions and go-to-definitions, and integrating any module into spacemacs that wasn't already integrated was a big pain.

After that I switched to Doom Emacs, which is like spacemacs but faster and closer to Emacs itself. It's very user configurable but much less user friendly, and I didn't really change much as my elisp-fu is practically non-existent. I was decently happy with this, but there were still some issues, some of which are just inherent to Emacs itself - like no actually usable inbuilt terminal, despite having (at least) four of them.

Anyway, since 2016 when I used Atom, Visual Studio Code (henceforth referred to as Code) came along and ate its lunch, using the framework (Electron) that was created for Atom. I did try it years ago, but I was very turned off by its Microsoft-ness and its seeming lack of distinguishing features from Atom, and it didn't feel like a native editor at all. Since it's massively grown in popularity since then, I decided I'd give it a try.

Visual Studio Code

Vim emulation

First things first for me is getting a vim mode going, and Code has a pretty good one of those. The key feature for me is that there's Neovim integration for Ex-commands, filling a lot of shortcomings that come with most attempts at vim emulation. In any case, everything I've tried to do that I'd do in vim (or Emacs) has worked, and there are a ton of options and things to tinker with. Obviously it's not going to do as much as you could do with Vimscript, but it's definitely not bad.

Theming and UI customisation

As far as the editor goes - it's good. A ton of different themes, and you can change the colour of pretty much everything in the config file or in the UI, including icons for the sidebar. There's a huge sore point though: you can't customise the interface outside the editor pretty much at all. There's an extension for loading custom CSS, but it's out of the way, finicky, and if I wanted to write CSS I wouldn't have become a kernel developer.

Extensibility

Extensibility is definitely a strong point; the ecosystem of extensions is good. All the language extensions I've tried have been very fully featured, with a ton of different options and integration into language-specific linters and build tools. This is probably Code's strongest feature - the breadth of the extension ecosystem and the level of quality found within.

Kernel development

Okay, let's get into the main thing that matters - how well does the thing actually edit code. The kernel is tricky. It's huge, it has its own build system, and in my case I build it with cross compilers for another architecture. Also, y'know, it's all in C and built with make, not exactly great for any kind of IDE integration.

The first thing I did was check out the vscode-linux-kernel project by GitHub user "amezin", which is a great starting point. All you have to do is clone the repo, build your kernel (building with a cross compiler works fine too), and run the Python script to generate the compile_commands.json file. Once you've done this, go-to-definition (gd in vim mode) works pretty well. It's not flawless, but it does go cross-file, and will pop up a nice UI if it can't figure out which file you're after.

Code has good built-in git support, so actions like staging files for a commit can be done from within the editor. Ctrl-P lets you quickly navigate to any file with fuzzy-matching (which is impressively fast for a project of this size), and Ctrl-Shift-P will let you search commands, which I've been using for some git stuff.

git command completion in Code

There are some rough edges, though. Code, like so many modern editors, is built around the "one window per project" concept - so to get things working the way you want, you open your kernel source as the current project. This makes it a pain to just open something else to edit, like some script, or to check the value of something in firmware, or to chuck something in your bashrc.

Auto-triggering builds on change isn't something that makes a ton of sense for the kernel, and it's not present here. The kernel support in the repo above is decent, but it's not going to get you close to what more modern languages can get you in an editor like this.

Oh, and it has a powerpc assembly extension, but I didn't find it anywhere near as good as the one I "wrote" for Atom (I just took the x86 one and switched the instructions), so I'd rather use the C mode.

Terminal

Code has an actually good inbuilt terminal that uses your login shell. You can bring it up with Ctrl-`. The biggest gripe I have always had with Emacs is that you can never have a shell that you can actually do anything in, whether it's eshell or shell or term or ansi-term, you try to do something in it and it doesn't work or clashes with some Emacs command, and then when you try to do something Emacs-y in there it doesn't work. No such issue is present here, and it's a pleasure to use for things like triggering a remote build or doing some git operation you don't want to do with commands in the editor itself.

Not the most important feature, but I do like not having to alt-tab out and lose focus.

Well...is it good?

Yeah, it is. It has shortcomings, but installing Code and using the repo above to get started is probably the simplest way to get a competent kernel development environment going, with more features than most kernel developers (probably) have in their editors. Code is open source and so are its extensions. It'd be the first thing I'd recommend to new developers who aren't already super invested in vim or Emacs, and it's worth a try if you have gripes with your current environment.

Sociological ImagesThe Stakes of Steak

In the United States, men have higher rates of life-threatening health conditions than women — including uncontrolled high blood pressure and heart disease. Recent research published in Socius shows they are also less likely than women to consider becoming vegetarian, and changing these eating habits may be important for their health and for the environment.

To learn more about meat and masculinity, researchers Sandra Nakagawa and Chloe Hart conducted experiments to test whether a threat to masculinity influences men’s affinity for meat. In one experiment, the researchers told some men their answers from a previous gender identity survey fell in the “average female” range, while telling others theirs fell into the “average male” range. The authors expected men who received “average female” results to feel like their masculinity was in question, and possibly express stronger attachment to meat on later surveys.

Men who experienced a threat to their masculinity showed more attachment to meat than those who did not experience the threat. They were also more likely to say they needed meat to feel full and were less likely to consider switching to a diet with no meat. This study shows how gendered assumptions about diet matter for how men think about maintaining their health, highlighting the standards men feel they must meet — and eat.

Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.

(View original at https://thesocietypages.org/socimages)

CryptogramProtecting Yourself from Identity Theft

I don't have a lot of good news for you. The truth is there's nothing we can do to protect our data from being stolen by cybercriminals and others.

Ten years ago, I could have given you all sorts of advice about using encryption, not sending information over email, securing your web connections, and a host of other things -- but most of that doesn't matter anymore. Today, your sensitive data is controlled by others, and there's nothing you can personally do to affect its security.

I could give you advice like don't stay at a hotel (the Marriott breach), don't get a government clearance (the Office of Personnel Management hack), don't store your photos online (Apple breach and others), don't use email (many, many different breaches), and don't have anything other than an anonymous cash-only relationship with anyone, ever (the Equifax breach). But that's all ridiculous advice for anyone trying to live a normal life in the 21st century.

The reality is that your sensitive data has likely already been stolen, multiple times. Cybercriminals have your credit card information. They have your social security number and your mother's maiden name. They have your address and phone number. They obtained the data by hacking any one of the hundreds of companies you entrust with the data -- and you have no visibility into those companies' security practices, and no recourse when they lose your data.

Given this, your best option is to turn your efforts toward trying to make sure that your data isn't used against you. Enable two-factor authentication for all important accounts whenever possible. Don't reuse passwords for anything important -- and get a password manager to remember them all.

Do your best to disable the "secret questions" and other backup authentication mechanisms companies use when you forget your password -- those are invariably insecure. Watch your credit reports and your bank accounts for suspicious activity. Set up credit freezes with the major credit bureaus. Be wary of email and phone calls you get from people purporting to be from companies you do business with.

Of course, it's unlikely you will do a lot of this. Pretty much no one does. That's because it's annoying and inconvenient. This is the reality, though. The companies you do business with have no real incentive to secure your data. The best way for you to protect yourself is to change that incentive, which means agitating for government oversight of this space. This includes proscriptive regulations, more flexible security standards, liabilities, certification, licensing, and meaningful labeling. Once that happens, the market will step in and provide companies with the technologies they can use to secure your data.

This essay previously appeared in the Rochester Review, as part of an alumni forum that asked: "How do you best protect yourself from identity theft?"

Worse Than FailureCodeSOD: If I Failed

Let's simply start with some code, provided by Adam:

static bool Failed(bool value) { return value; }

Now, you look at this method, and you ask yourself, "what use is this?" Well, scattered through the codebase, you can see it in use:

bool bOK;
bOK = someProcessWhichMightFail();
return bOK ? true : Failed(false);

Adam went through the commit history and was able to get a sense of what the developer was actually trying to do. You see, in some places, there are many reasons why the call might fail. So by wrapping a method around the kind of failure, you had a sense of why it failed, for example, it could be return bOK ? true : FILE_NOT_FOUND(false);

Now, you know why it failed. Well, you know why if you read the code. As this is C++, one could have communicated failure states using exceptions, which would be more clear. If you wanted to stick to return codes for some calling convention reason, one could use enums. Or a pile of #defines. Pretty much anything but this.


,

Planet Linux AustraliaMichael Still: Ignition!


Whilst the chemistry was sometimes over my head, this book is an engaging summary of the history of US liquid rocket fuels during the height of the cold war. Fun to read and interesting as well. I enjoyed it.

Ignition!
John Drury Clark
Technology & Engineering
1972
214 pages


,

Planet Linux AustraliaDavid Rowe: Codec2 and FreeDV Update

Quite a lot of Codec2/FreeDV development is going on this year, so much that I have been neglecting the blog! Here is an update…

Github, Travis, and STM32

Early in 2019, the number of active developers had grown to the point where we needed more sophisticated source control, so in March we moved the Codec 2 project to GitHub. One feature I’m enjoying is the collaboration and messaging between developers.

Danilo (DB4PLE) immediately had us up and running with Travis, a tool that automatically builds our software every time it is pushed. This has been very useful in spotting build issues quickly, and reducing the amount of “human in the loop” manual testing.

Don (W7DMR), Danilo, and Richard (KF5OIM) have been doing some fantastic work on the cmake build and test system for the stm32 port of 700D. A major challenge has been building the same code on desktop platforms without breaking the embedded stm32 version, which has tight memory constraints.

We now have a very professional build and test system, and can run sophisticated unit tests from anywhere in the world on remote stm32 development hardware. A single “cmake test all” command can build and run a suite of automated tests on the x86 and stm32 platforms.

The fine stm32 work by Don will soon lead to new firmware for the SM1000, and FreeDV 700D is already running on radios that support the UHSDR firmware.

FreeDV in the UK

Mike (G4ABP) contacted me with some fine analysis of the FreeDV modems on the UK NVIS channel. Mike is part of a daily UK FreeDV net, which was experiencing some problems with loss of sync on FreeDV 700C. Together we have found (and fixed) bugs in FreeDV 700C and 700D.

The UK channel is interesting: high SNR (>10dB), but at times high Doppler spread (>3Hz), which the earlier FreeDV 700C modem may deal with better due to its high sampling rate of the channel phase. In contrast, FreeDV 700D has been designed for moderate Doppler (1Hz), but heavily optimised for low SNR operation. More investigation is required here, with off-air samples, to run any potential issues to ground.

I would like to hear from you if you have problems getting FreeDV 700D to work with strong signals! This could be a sign of fast fading “breaking” the modem. By working together, we can improve FreeDV.

FreeDV in Argentina

Jose, LU5DKI, is part of an active FreeDV group in Argentina. They have a Facebook page for their Radio Club Coronel Pringles LU1DIL that describes their activities. They are very happy with the low SNR and interference rejecting capabilities of FreeDV 700D:

Regarding noise FREEDV IS IMMUNE TO NOISE, coincidentally our CLUB is installed in a TELEVISION MONITORING CENTER, where the QRN by the monitors and computers is very intense, it is impossible to listen to a single SSB station, BUT FREEDV LISTENS PERFECTLY IN 40 METERS

Roadmap for 2019

This year I would like to get FreeDV 2020 released, and FreeDV 700D running on the SM1000. A bonus would be some improvement in the speech quality for the lower rate modes.

Reading Further

FreeDV 2020 First On Air Tests
Porting a LDPC Decoder to a STM32 Microcontroller
Universal Ham Software Defined Radio Github page

Cory DoctorowA wonderful review for Radicalized in the Winnipeg Free Press

Joel Boyce:

The tagline of Cory Doctorow’s latest release is “dystopia is now.” In four novellas, the Canadian ex-pat ably covers a broad swath of pressing social concerns ranging from police racism to affordable American health care through an only slightly science-fictional lens.

No prior volume has so perfectly encapsulated who Doctorow is or what he thinks we should be worrying about as this one does. In the past, a new reader might have had to read lots of long essays about Makerspaces and net neutrality and the Digital Millennium Copyright Act on his website to get the whole picture.

But now, the answer to the question of where to start with Doctorow can be answered with “right here.”

Previous novels Little Brother and Homeland were like instruction manuals for millennial and generation Z activists, written in the shadow of George W. Bush’s war on terror and the 2008 financial crisis, respectively. They represented moments in time when government curtailment of civil liberties and economic oppression by corporate interests seemed to demand a response.

But that response — a particular brand of socialist and technogeek activism that blends community organizing with internet crowd-sourcing — is even better encapsulated in Unauthorized Bread, in which a young newcomer to the United States risks everything to bust open the operating system of her smart toaster so that she, and an entire building full of refugees, can actually afford to eat.

Read the rest

Planet Linux AustraliaDavid Rowe: FreeDV 2020 First On Air Tests

Brad (AC0ZJ), Richard (KF5OIM) and I have been putting together the pieces required for the new FreeDV 2020 mode, which uses LPCNet Neural Net speech synthesis technology developed by Jean-Marc Valin. The goal of this mode is 8kHz audio bandwidth in just 1600 Hz of RF bandwidth. FreeDV 2020 is designed for HF channels where SSB is an “armchair copy” – SNRs of better than 10dB and slow fading.

FreeDV 2020 uses the fine OFDM modem ported to C by Steve (K5OK) for the FreeDV 700D mode. Steve and I have modified this modem so it can operate at the higher bit rate required for FreeDV 2020. In fact, the modem can now be configured on the command line for any bandwidth and bit rate that you like, and even adapt the wonderful LDPC FEC codes developed by Bill (VK5DSP) to suit.

Brad is working on the integration of the FreeDV 2020 mode into the FreeDV GUI program. It’s going well, and he has made 1200 mile transmissions across the US to a SDR using the Linux version. Brad has also done some work on making FreeDV GUI deal with USB sound cards that come and go in different order.

Mark, VK5QI has just made a 3200km FreeDV transmission from Adelaide, South Australia to a KiwiSDR in the Bay of Islands, New Zealand. He decoded it with the partially working OSX version (we do most of our development on Ubuntu Linux).

I’m surprised as I didn’t think it would work so well over such long paths! There’s a bit of acoustic echo from Mark’s shack but you can get an idea of the speech quality compared to SSB. Thanks Mark!

For the adventurous, the freedv-gui source code 2020 development branch is here. We are currently performing on air tests with the Linux version, and Brad is working on the Windows build.

Reading Further

Steve Ports an OFDM modem from Octave to C
Bill’s (VK5DSP) Low SNR Blog

,

CryptogramFriday Squid Blogging: Squid Skin "Inspires" New Thermal Sheeting

Researchers are making space blankets using technology based on squid skin. Honestly, it's hard to tell how much squid is actually involved in this invention.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDMeyer Sound at TED, from the stage to the stars


Small but mighty speakers from Meyer Sound helped bring rich sound to the sonically challenging front-row seats of TED2019: Bigger Than Us, April 15–19, 2019, Vancouver, BC, Canada. Photo: Bret Hartman / TED

Given John Meyer’s roots in the Bay Area’s 1960s radio and music scenes, and his innovations for just about every acoustic application — electronically dampening ambient noise in loud rooms, building 3D Cirque du Soleil soundscapes, and helping develop the Grateful Dead’s revolutionary “Wall of Sound” — it’s not surprising to spot his team behind the scenes at TED. With his state-of-the-art audio production platforms and speaker systems, Meyer and his colleagues at Meyer Sound have significantly improved TED’s music and voice reproduction game, and opened the door to a world of new sonic possibilities at TED’s events — including an on-site audio refuge at TED2019 to provide conference-goers with a serene space to digest heavy ideas.

Meyer is a living legend, and accordingly, I caught up with him as he’s revisiting one of his most legendary projects: the sound design of Apocalypse Now, which first toured the US in 1979 using Meyer’s subsonic speaker system. Director Francis Ford Coppola wanted audiences to literally feel every explosion in the film, and he tapped Meyer to provide special subwoofers that would reach down to 30 cycles per second (or Hz) — near the lower threshold of human hearing — to provide that impact. For the film’s 40th-anniversary screening at the Beacon Theatre in New York City, Meyer’s speakers sank even lower, to a gut-rumbling 13 Hz.

“Sound can change your emotion more than any other tool that’s ever existed,” Meyer says. “The movie people know this, because they change the sound to change the mood of a scene. They’ve known this for 50 years; neuroscience is just studying this now. And we know that low frequencies — which we’re doing for Apocalypse Now — create emotion.”

This exploratory and thoughtful approach to sound and all its possibilities forms the cornerstone of Meyer Sound (which Meyer and his wife, Helen, founded in Berkeley in 1979), and it’s enshrined in their motto: “Thinking sound.” “‘Thinking sound’ embodies our philosophy of making sound something that matters for everyone in all situations,” Meyer explains. “Sound is a crucial contributor to quality of life, because it is all around us all of the time.” By developing new technologies, Meyer Sound constantly seeks to “create audio solutions that heighten the quality and enjoyment of each of these kinds of sonic experiences.”


Meet Mina Sabet, TED’s director of production/video operations. It’s her job to make TED’s custom-built theater look and sound better year after year. TED2019: Bigger Than Us. April 15–19, 2019, Vancouver, BC, Canada. Photo: Dian Lofton / TED

If this kind of thinking sounds familiar, it’s because it dovetails perfectly with the values of TED’s production team, for whom sound and video are equal ingredients in an ideal conference experience. Mina Sabet, TED’s Director of Production and Video Operations, sought to up the ante of TED’s audio production — and Meyer Sound was a “clear choice” to reboot the sound system for the 2019 Vancouver conference.

Building a PA system that blends into the background, doesn’t block anyone’s view of the stage, and yet still provides adequate sound coverage is a daunting task. According to Sabet, “One specific red flag we noticed when sitting in the theater was that our front rows” — specifically couches arranged at the front of the theater — “did not have a full audio experience.” The existing speakers were high overhead, creating a sonic void at the front of the hall. Loudspeakers must compete with lighting rigs and video projectors for ceiling real estate, and they had lost that battle. Speakers in the aisles are both hazardous and, well, ugly.

The solution was both innovative and comically obvious — hide speakers under the furniture. Sabet says that Meyer Sound’s “UP4-Slim speaker could fit nicely under the couch, face the people in the couches, and never be visible to the audience or our cameras. It was a perfect fit.” From there, the team optimized the rest of the room — as Meyer’s business manager John Monitto says, “making sure that we had equal coverage between all the seats, and just really making it a dynamic space… completely blanketing the seats with sound.”


This quiet simulcast room became a chillout lounge between sessions of talks, thanks to a tranquil sound environment from Meyer Sound. TED2019: Bigger Than Us. April 15–19, 2019, Vancouver, BC, Canada. Photo: Dian Lofton / TED

Once Meyer Sound had conquered the challenges in the main theater, they rewired the simulcast rooms to provide relaxed, uncrowded viewing spaces away from the main theater. As they explored the theme of relaxation, the teams began to wonder — how could they design a space that is not only a great place to listen to the conference, but also a meditative environment where attendees could really lose themselves and quietly observe the torrent of ideas they’d just experienced? More important, how could the production team exploit Meyer Sound’s powerful sound design suites — which can enable small halls to sound like cathedrals or caverns, or muffle echoes to make large spaces sound tiny — to their fullest potential?

As Monitto tells it, “TED had brought us the idea of a room that has two purposes: one, it’s a simulcast space [where] you can watch a talk happening live. [Two], between those sessions, when there’s not somebody on a stage or they’re not presenting material, there’s a place to go to be able to just chill out. And that’s what this room was all about. They brought us a theme of ‘Under the stars,’ and they wanted us to run with it.” And so the “Under the stars” room was born, centered around an interactive ceiling installation that would display the constellations of different cultures with the wave of a baton.

Monitto continues: “We did something really creative — creating an outdoor theme, with an audio soundscape that allowed you to just kind of chill out and relax.” By manipulating high-quality recordings of wind, water, insects and birds flying overhead with Spacemap — an audio matrix that maps up to 288 input sources to output locations — the Meyer Sound team created the illusion of an outdoor cinema under the stars, with sounds not only drifting between speakers, but also soaring overhead and far away. “It just was a real nice place to hang out,” Monitto says.

Leveraging sound to redefine spaces and moods within the conference venue is just the beginning — TED and Meyer Sound have a wide spectrum of challenges and possibilities ahead of them. Using their boundless curiosity, ingenuity, and creativity, both teams seek to redefine the aesthetic boundaries of their events — and seeking to master data-driven tools to achieve this is perhaps the most daunting task of all. As John Meyer puts it, “We [can analyze sound], but it’s like analyzing food — it’s hard. Analyzing whiskey or anything like that with chemistry is hard to figure out. Does it taste good?” As they enter their multi-year partnership, TED and Meyer hope to deliver complex, rich, and five-star flavors to audiences in their theater and in rooms at TED’s flagship conference in Vancouver for years to come.


A meditative soundscape and a ceiling full of stars turned this simulcast space into a calm, relaxing environment, thanks to sound design from Meyer Sound. TED2019: Bigger Than Us. April 15–19, 2019, Vancouver, BC, Canada. Photo: Dian Lofton / TED

TEDTEDsters are optimists who get great ideas in the shower: the Brightline data experience at TED2019


Attendees line up to vote on where great ideas are born: at the office, or in the shower? (Spoiler: see headline.) They’re interacting with a data portals installation, presented by Brightline Initiative at TED2019: Bigger Than Us. April 15–19, 2019, Vancouver, BC, Canada. Photo: Marla Aufmuth / TED

TED2019 opened in Vancouver on April 15 with the ambitious theme of “Bigger than us.” For the next five days, attendees were treated to a lively buffet of topics and speakers, with more than 70 talks, Q&As, performances, workshops and discovery sessions. And that was just the official schedule.

As any attendee can tell you, the conversations inspired by the events are just as smart and stimulating, and they’re a major draw for the people who return year after year to the conference. Brightline Initiative, a TED partner, wondered: Could they create an installation that could highlight this important aspect and provide a playful peek inside TEDsters’ minds?

Their answer to this question took shape in two dynamic pieces. Scattered around the Vancouver Convention Center (VCC) were three sets of data-collection portals. Each set consisted of a pair of side-by-side gates, similar to the security gates found at an airport. Every day, a different question was posted above each set of gates — three questions a day x 5 days meant 15 different questions were posed during the week.

The most popular question of the conference was “Where are great ideas born?” Choices: “in the shower” and “at the office.” Shower got 518 votes; office, 98. People voted by stepping up to the gate of their preferred answer, and as they walked through, a counter advanced — to the pleasing sound of plastic dots clicking — and a new total appeared atop the front of the gate.

The tallies from the three sets of portals were shown on a scoreboard at the Brightline main exhibit on the VCC’s ground floor. But those scores were just a garnish to the centerpiece of the space: a supernaturally glowing wall, or “moodbeam.” This eye-catching piece, and the gates too, were built by Domestic Data Streamers, a Barcelona-based data communication firm, in collaboration with Brightline Initiative.

Next to the moodbeam were clear plastic tiles in three colors, which conveyed three distinct feelings. Yellow meant “I’m optimistic”; orange, “I’m hopeful but we better start now”; and blue, “I’m concerned.” Attendees chose a tile that corresponded with how they felt, wrote on it the subject on their minds or the action they were taking on an issue, and slotted it into the backlit wall.


How the “moodbeam” works: pick an idea, decide if you’re optimistic, guardedly hopeful or pessimistic, and cast your vote. It’s part of a project from Domestic Data Streamers, presented by Brightline Initiative at TED2019: Bigger Than Us. April 15 – 19, 2019, Vancouver, BC, Canada. Photo: Dian Lofton / TED

 

The moodbeam was filled in from left to right over the course of the conference, serving as a giant mood ring for TED2019. By the end, “I’m optimistic” finished on top, with “I’m hopeful but we better start now” close behind and “I’m concerned” trailing a bit further back.

Qingqing Han, head of partnerships at Brightline, says, “The reason we’re doing the social space is to help people better reflect” — on the talks and speakers, on the gates’ questions, and on how people compare to other attendees. She adds, “It’s also a way to help people remind themselves that action is important,” something that is central to Brightline’s mission (“from thinking to doing” is one of the initiative’s taglines).

Attendee Fajir Amin adds an idea to the “moodbeam” installation at TED2019. The board was designed by Domestic Data Streamers and presented by Brightline Initiative at TED2019: Bigger Than Us. April 15–19, 2019, Vancouver, BC, Canada. Photo: Lawrence Sumulong / TED

“Our installation here is a dialogue with TED attendees,” says Miquel Santasusana, chief operations officer at Domestic Data Streamers. Their company first used the gates at a Spanish music festival, where concertgoers were given light-hearted choices such as Khaleesi or Jon Snow, Dumbledore or Gandalf. “You can’t stop anyone in the festival and ask them something; you have to do it in a way that is fast and simple,” he says. “So we decided to use the flows of the people from one stage to another.”

The TED Conference is another fast-moving crowd that flows among venues and spaces, and voting via the gates wouldn’t require extra time or effort from them. In fact, says Domestic Data Streamers CEO Pau Garcia (watch his TEDxBarcelona talk), “I’ve seen people here going through the gates in a circle because they didn’t want to decide — so they chose both of them.” As a result, “this shouldn’t be taken as statistically significant information to analyze TEDsters,” says Santasusana. “At the end, it’s not the numbers that matter; it’s about starting a discussion.”

Here are the highly unscientific results for the five most-answered gate questions (after the shower vs. office one), listed in ascending order of popularity:

5. There’s more wisdom in …
the Internet, 93
Traditions, 290

4. Who do you share ideas with?
Everyone, 219
Trusted circle, 199

3. The world needs more …
Artists, 284
Engineers, 148

2. The future of humanity is in …
Creating, 271
Adapting, 168

1. The ideas at TED inspire me to …
Think deeper, 238
Take action, 231

Casting a decisive vote for heart-driven decisionmaking, an attendee steps through a data portal, presented by Brightline Initiative at TED2019: Bigger Than Us. April 15–19, 2019, Vancouver, BC, Canada. Photo: Dian Lofton / TED

Krebs on SecurityFeds Bust Up Dark Web Hub Wall Street Market

Federal investigators in the United States, Germany and the Netherlands announced today the arrest and charging of three German nationals and a Brazilian man as the alleged masterminds behind the Wall Street Market (WSM), one of the world’s largest dark web bazaars that allowed vendors to sell illegal drugs, counterfeit goods and malware. Now, at least one former WSM administrator is reportedly trying to extort money from WSM vendors and buyers (supposedly including Yours Truly) — in exchange for not publishing details of the transactions.

The now-defunct Wall Street Market (WSM). Image: Dark Web Reviews.

A complaint filed Wednesday in Los Angeles alleges that the three defendants, who currently are in custody in Germany, were the administrators of WSM, a sophisticated online marketplace available in six languages that allowed approximately 5,400 vendors to sell illegal goods to about 1.15 million customers around the world.

“Like other dark web marketplaces previously shut down by authorities – Silk Road and AlphaBay, for example – WSM functioned like a conventional e-commerce website, but it was a hidden service located beyond the reach of traditional internet browsers, accessible only through the use of networks designed to conceal user identities, such as the Tor network,” reads a Justice Department release issued Friday morning.

The complaint alleges that for nearly three years, WSM was operated on the dark web by three men who engineered an “exit scam” last month, absconding with all of the virtual currency held in marketplace escrow and user accounts. Prosecutors say they believe approximately $11 million worth of virtual currencies was then diverted into the three men’s own accounts.

The defendants charged in the United States and arrested in Germany on April 23 and 24 include a 23-year-old resident of Kleve, Germany; a 31-year-old resident of Wurzburg, Germany; and a 29-year-old resident of Stuttgart, Germany. The complaint charges the men with two felony counts – conspiracy to launder monetary instruments, and distribution and conspiracy to distribute controlled substances. These three defendants also face charges in Germany.

Signs of the dark market seizure first appeared Thursday when WSM’s site was replaced by a banner saying it had been seized by the German Federal Criminal Police Office (BKA).

The seizure message that replaced the homepage of the Wall Street Market on May 2.

Writing for ZDNet’s Zero Day blog, Catalin Cimpanu noted that “in the midst of all of this, one of the site’s moderators – named Med3l1n – began blackmailing WSM vendors and buyers, asking for 0.05 Bitcoin (~$280), and threatening to disclose to law enforcement the details of WSM vendors and buyers who made the mistake of sharing various details in support requests in an unencrypted form.”

In a direct message sent to my Twitter account this morning, a Twitter user named @FerucciFrances who claimed to be part of the exit scam demanded 0.05 bitcoin (~$286) to keep quiet about a transaction or transactions allegedly made in my name on the dark web market.

“Make it public and things gonna be worse,” the message warned. “Investigations goes further once the whole site was crawled and saved and if you pay, include the order id on the dispute message so you can be removed. You know what I am talking about krebs.”

A direct message from someone trying to extort money from me.

I did have at least one user account on WSM, although I don’t recall ever communicating on the forum with any other users, and I certainly never purchased or sold anything there. Like most other accounts on dark web shops and forums, it was created merely for lurking. I asked @FerucciFrances to supply more evidence of my alleged wrongdoing, but he has not yet responded.

The Justice Department said the MED3LIN moniker belongs to a fourth defendant linked to Wall Street Market — Marcos Paulo De Oliveira-Annibale, 29, of Sao Paulo, Brazil — who was charged Thursday in a criminal complaint filed in the U.S. District Court in Sacramento, California.

Oliveira-Annibale also faces federal drug distribution and money laundering charges for allegedly acting as a moderator on WSM who, according to the charges, mediated disputes between vendors and their customers, and acted as a public relations representative for WSM by promoting it on various sites.

Prosecutors say they connected MED3LIN to his offline identity thanks to photos and other clues he left behind online years ago, suggesting once again that many alleged cybercriminals are not terribly good at airgapping their online and offline selves.

“We are on the hunt for even the tiniest of breadcrumbs to identify criminals on the dark web,” said McGregor W. Scott, United States Attorney for the Eastern District of California. “The prosecution of these defendants shows that even the smallest mistake will allow us to figure out a cybercriminal’s true identity. As with defendant Marcos Annibale, forum posts and pictures of him online from years ago allowed us to connect the dots between him and his online persona ‘Med3l1n.’ No matter where they live, we will investigate and prosecute criminals who create, maintain, and promote dark web marketplaces to sell illegal drugs and other contraband.”

A copy of the Justice Department’s criminal complaint in the case is here (PDF).

Cory DoctorowOttawa! I’m speaking tomorrow at the Writers Festival (and then Re:publica in Berlin and Comicpalooza in Houston!)

Tomorrow night at 7:30PM, I’m giving a presentation about my new book, Radicalized, as part of the Ottawa Writers Festival, at Christ Church Cathedral (414 Sparks St.) — I haven’t spoken in Ottawa for years (maybe a decade?!) so I’m really looking forward to it.

From there, I’m heading to Berlin for May 7, where I’m keynoting at the Re:publica conference with a talk about surveillance and monopolies, followed by a launch and signing for the German edition of my novella Unauthorized Bread (I’m doing a smaller AMA earlier in the day about the aftermath of the catastrophic European Copyright Directive vote).

On May 8th, I’m speaking at Otherland, Berlin’s science fiction and fantasy bookstore at 8PM.

Then I’m off to Houston for a weekend at Comicpalooza, including a panel about copyright on May 10 at 12:30PM; presenting a keynote talk on May 11 at 12PM; and then another copyright panel on May 12 at 10:30AM.

Hope to see you!

Krebs on SecurityCredit Union Sues Fintech Giant Fiserv Over Security Claims

A Pennsylvania credit union is suing financial industry technology giant Fiserv, alleging that “baffling” security vulnerabilities in the company’s software are “wreaking havoc” on its customers. The credit union said the investigation that fueled the lawsuit was prompted by a 2018 KrebsOnSecurity report about glaring security weaknesses in a Fiserv platform that exposed personal and financial details of customers across hundreds of bank Web sites.

Brookfield, Wisc.-based Fiserv [NASDAQ:FISV] is a Fortune 500 company with 24,000 employees and $5.8 billion in earnings last year. Its account and transaction processing systems power the Web sites for hundreds of financial institutions — mostly small community banks and credit unions.

In August 2018, in response to inquiries by KrebsOnSecurity, Fiserv fixed a pervasive security and privacy hole in its online banking platform. The authentication weakness allowed bank customers to view account data for other customers, including account number, balance, phone numbers and email addresses.

In late April 2019, Fiserv was sued by Bessemer System Federal Credit Union, a comparatively tiny financial institution with just $38 million in assets. Bessemer said it was moved by that story to launch its own investigation into Fiserv’s systems, and it found a startlingly simple flaw: Fiserv’s platform would let anyone reset the online banking password for a customer just by knowing their account number and the last four digits of their Social Security number.

Bessemer claims Fiserv’s systems let anyone reset a customer’s online banking password just by knowing their SSN and account number.

Recall that in my Aug 2018 report, Fiserv’s own systems were exposing online banking account numbers for its customers. Thus, an attacker would only need to know the last four digits of a target’s SSN to reset that customer’s password, according to Bessemer. And that information is for sale in multiple places online and in the cybercrime underground for a few bucks per person.

Bessemer further alleges Fiserv’s systems had no checks in place to prevent automated attacks that might let thieves rapidly guess the last four digits of the customer’s SSN — such as limiting the number of times a user can submit a login request, or imposing a waiting period after a certain number of failed login attempts.
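The controls Bessemer says were missing are easy to make concrete. Below is a minimal, hypothetical sketch of a per-account failed-login throttle of the kind the lawsuit describes (attempt limits plus a waiting period); this is purely illustrative and does not reflect Fiserv's actual systems or fix. Note why it matters here: the last four digits of an SSN give an attacker at most 10,000 guesses, which is trivial without rate limiting.

```python
import time

class LoginThrottle:
    """Hypothetical per-account throttle: after max_attempts failures,
    block further attempts until lockout_secs have elapsed. Without a
    check like this, the last four SSN digits are a 10,000-guess search."""

    def __init__(self, max_attempts=5, lockout_secs=900):
        self.max_attempts = max_attempts
        self.lockout_secs = lockout_secs
        self._failures = {}  # account -> (failure_count, first_failure_time)

    def allow_attempt(self, account, now=None):
        now = time.time() if now is None else now
        count, since = self._failures.get(account, (0, now))
        if now - since >= self.lockout_secs:
            self._failures.pop(account, None)  # window expired: reset
            return True
        return count < self.max_attempts

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        count, since = self._failures.get(account, (0, now))
        self._failures[account] = (count + 1, since)
```

A real deployment would also need per-IP limits, persistent shared state, and enforcement on every server (the complaint alleges some servers skipped the check entirely), but even this sketch stops unbounded guessing.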

The lawsuit says the fix Fiserv scrambled to put in place after Bessemer complained was “pitifully deficient and ineffective”:

“Fiserv attempted to fortify Bessemer’s online banking website by requiring users registering for an account to supply a member’s house number. This was ineffective because residential street addresses can be readily found on the internet and through other public sources. Moreover, this information can be guessed through a trial-and-error process. Most alarmingly, this security control was purely illusory. Because some servers were not enforcing this security check, it could be readily bypassed.”

Bessemer says instead of fixing these security problems and providing the requested assurances that information was being adequately safeguarded, Fiserv issued it a “notice of claims,” alleging the credit union’s security review of its own online banking system gave rise to civil and criminal claims.

The credit union says Fiserv demanded it not disclose information relating to the security review to any third parties, “including Fiserv’s other clients (who presumably were affected with the same security problems at their financial institutions) as well as media sources.”

Fiserv did not immediately respond to requests for comment. But Fiserv spokesperson Ann Cave was quoted in several publications saying, “We believe the allegations have no merit and will respond to the claims as part of the legal process.”

Charles Nerko, the attorney representing Bessemer in the lawsuit, said that to protect its members, the credit union is replacing its core processing vendor, although he would not specify where the credit union might be taking its business.

According to FedFis.com, Fiserv is by far the top bank core processor, with more than 37 percent market share. And it’s poised to soon get much bigger.

In January 2019, Fiserv announced it was acquiring payment processing giant First Data in a $22 billion all-stock deal. The deal is expected to close in the second half of 2019, pending an antitrust review by the U.S. Justice Department.

That merger, should it go through, may not bode well for Fiserv’s customers, argues Paul Schaus of American Banker.

“Banks should take this trend as a warning sign,” Schaus wrote. “Rather than delivering new innovations that banks and their customers crave, legacy vendors are looking to remain relevant by acquiring existing products and services that expand their portfolios into new areas of financial services. As emerging technologies grow more critical to everyday business, these legacy vendors, which banks have deep longstanding relationships with, likely won’t be on the leading edge in every product or channel. Instead, financial institutions will need to seek out newer vendors that have deeper commitments and focus in cutting-edge technologies that will drive industry change.”

CryptogramCybersecurity for the Public Interest

The Crypto Wars have raged off and on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption, to access devices and communications of terrorists and criminals. On the other side is almost every cryptographer and computer security expert, repeatedly explaining that there's no way to provide this capability without also weakening the security of every user of those devices and communications systems.

It's an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism -- as practiced by the Internet companies that are already spying on everyone -- matters. So do society's underlying security needs. There is a security benefit to giving access to law enforcement, even though it would inevitably and invariably also give that access to others. However, there is also a security benefit of having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?

The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals -- that occasionally become law -- that are technological disasters.

This isn't sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand -- and are involved in -- policy. We need public-interest technologists.

Let's pause at that term. The Ford Foundation defines public-interest technologists as "technology practitioners who focus on social justice, the common good, and/or the public interest." A group of academics recently wrote that public-interest technologists are people who "study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good." Tim Berners-Lee has called them "philosophical engineers." I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it's not the best term -- and I know not everyone likes it -- but it's a decent umbrella term that can encompass all these roles.

We need public-interest technologists in policy discussions. We need them on congressional staff, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved in not only the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world is critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn't want to formulate public policy about artificial intelligence and robotics without a security technologist involved.

Public-interest technology isn't new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.

This might all seem like a lot, but it's really not. There aren't enough people doing it, there aren't enough people who know it needs to be done, and there aren't enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.

There are many barriers. There's a report titled A Pivotal Moment that includes this quote: "While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress."

That quote speaks to the three places for intervention. One: the supply side. There just isn't enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with "clinics" at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they've learned and done.

Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: executive, legislative, and judiciary branches. President Obama's US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.

Three: the marketplace. We need job boards, conferences, and skills exchanges -- places where people on the supply side can learn about the demand.

Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.

This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth of a normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.

This is bigger than computer security. Technology now permeates society in a way it didn't just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There's a pervasive myth in Silicon Valley that technology is politically neutral. It's not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn't matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: "What should be governed by the state, and what should be governed by the market?" This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: "How much of our lives should be governed by technology, and under what terms?" In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy, to creating tools, to building the future. The world needs all of our help.

This essay previously appeared in the January/February 2019 issue of IEEE Security & Privacy. I maintain a public-interest tech resources page here.

LongNowNew Podcast: Siberia Salon

In our opening Conversation at The Interval for 02019, Stewart Brand, Kevin Kelly and Executive Director Alexander Rose discuss a 02018 research trip that witnessed the ongoing restoration of a part of Siberia back to its Pleistocene-era ecosystem. The team brought back DNA samples to evaluate for mammoth de-extinction, and lots of photos, video, and stories of a place where climate change and arctic deep time can be witnessed at once.

Listen here.

Worse Than FailureError'd: Rise of the Sandwich Based Economy

"When I ordered on Hasbro Pulse's site, I don't remember paying using a large sandwich, but...here we are," writes Zeke.

"Yep, it's Follow Up Friday again, soon to be followed by Bug Fix Saturday," Michael P. wrote.

"Great! Avast found something...and the rest is for me to solve," Bellons wrote.

Scott writes, "Statements like this can be easily reworded as 'we aren't perfect, but we definitely suck less than our competitors'."

"Yes, I'm pretty sure I wish to do one or the other," Drake C. writes.

Greg P. writes, "When I see quotes like these, it triggers me a little and I sometimes find myself moved to tears...As in I'm going to lose my afternoon fixing something weird in Production."


TEDIn Case You Missed It: Highlights from day 4 of TED2019

Legendary artist and stage designer Es Devlin takes us on a tour of the mind-blowing sets she’s created for Beyoncé, Adele, U2 and others. She speaks at TED2019: Bigger Than Us, on April 18, 2019 in Vancouver, BC, Canada. (Photo: Bret Hartman / TED)

Day 4 of TED2019 played on some of the more powerful forces in the world: mystery, play, connection, wonder and awe. Some themes and takeaways from a jam-packed day:

Sleep is the Swiss Army knife of health. The less you sleep, the shorter your life expectancy and the higher your chance of getting a life-threatening illness like Alzheimer’s or cancer, says sleep scientist Matt Walker. It’s all about the deep sleep brain waves, Walker says: those tiny pulses of electrical activity that transfer memories from the brain’s short-term, vulnerable area into long-term storage. He shares some crazy stats about a global experiment performed on 1.6 billion people across 70 countries twice a year, known to us all as daylight saving time. In the spring, when we lose an hour of sleep, we see a 24 percent increase in heart attacks that following day, Walker says. In the autumn, when we gain an hour of sleep, we see a 21 percent reduction in heart attacks.

Video games are the most important technological change happening in the world right now. Just look at the scale: a full third of the world’s population (2.6 billion people) find the time to game, plugging into massive networks of interaction, says entrepreneur Herman Narula. These networks let people exercise a social muscle they might not otherwise exercise. While social media can amplify our differences, could games create a space for us to empathize? That’s what is happening on Twitch, says cofounder Emmett Shear. With 15 million daily active users, Twitch lets viewers watch and comment on livestreamed games, turning them into multiplayer entertainment. Video games are a modern version of communal storytelling, says Shear, with audiences both participating and viewing as they sit around their “virtual campfires.”

We’re heading for a nutrition crisis. Plants love to eat CO2, and we’re giving them a lot more of it lately. But as Kristie Ebi shows, there’s a hidden, terrifying consequence — the nutritional quality of plants is decreasing, reducing levels of protein, vitamins and nutrients that humans need. Bottom line: the rice, wheat and potatoes our grandparents ate might have contained more nutrition than our kids’ food will. Asmeret Asefaw Berhe studies the soil where our food grows — “it’s just a thin veil that covers the surface of land, but it has the power to shape our planet’s destiny,” she says. In a Q&A with Ebi, Berhe connects the dots between soil and nutrition: “There are 13 nutrients that plants get only from soil. They’re created from soil weathering, and that’s a very slow process.” CO2 is easier for plants to consume — it’s basically plant junk food.  

Tech that folds and moves. Controlling the slides in his talk with a swipe on the arm of his jean jacket, inventor Ivan Poupyrev shows how, with a bit of collaboration, we can design literally anything to be plugged into the internet — blending digital interactivity with everyday analog objects like clothing. “We are walking around with supercomputers in our pockets. But we’re stuck in the screens with our faces? That’s not the future I imagine.” Some news: Poupyrev announced from stage that his wearables platform will soon be made available freely to other creators, to make of it what they will. Meanwhile Jamie Paik shows folding origami robots — call them “robogami” — that morph and change to respond to what we’re asking them to do. “These robots will no longer look like the characters from the movies,” she says. “Instead, they will be whatever you want them to be.”

Inside the minds of creators. Actor Joseph Gordon-Levitt has gotten more than his fair share of attention in his acting career (in which, oddly, he’s played two TED speakers: tightrope walker Philippe Petit and whistleblower Edward Snowden). But as life has morphed on social media, he’s found that there’s a more powerful force than getting attention: giving it. Paying attention is the real essence of creativity, he says — and we should do more of it. Legendary artist and stage designer Es Devlin picks up on that theme of connection, taking us on a tour of the mind-blowing sets she’s created for Beyoncé, Adele, U2 and others; her work is aimed at fostering lasting connections and deep empathy in her audience. As she quotes E.M. Forster: “Only connect!”

We can map the universe — the whole universe. On our current trajectory, we’ll map every large galaxy in the observable universe by 2060, says astrophysicist Juna Kollmeier, head of the Sloan Digital Sky Survey (SDSS). “Think about it. We’ve gone from arranging clamshells to general relativity to SDSS in a few thousand years,” she says, tracing humanity’s rise in a sentence. “If we hang on 40 more, we can map all the galaxies.” It’s a truly epic proposition — and it’s also our destiny as a species whose calling card is to figure things out.

TEDWonder: Notes from Session 11 of TED2019

Multi-instrumental genius, Grammy winner and songwriter Richard Bona held the audience spellbound at TED2019: Bigger Than Us, April 18, 2019, Vancouver, BC, Canada. Photo: Bret Hartman / TED

Session 11 of TED2019 amazed, enriched, inspired and dazzled — diving deep into the creative process, exploring what it’s like to be a living artwork and soaring into deep space.

The event: Talks and performances from TED2019, Session 11: Wonder, hosted by TED’s Helen Walters and Kelly Stoetzel

When and where: Thursday, April 18, 2019, 5pm, at the Vancouver Convention Centre in Vancouver, BC

Speakers: Beau Lotto with performers from Cirque du Soleil, Joseph Gordon-Levitt, Jon Gray, Daniel Lismore, Richard Bona, Es Devlin and Juna Kollmeier

Music: Multi-instrumentalist and singer-songwriter Richard Bona, mesmerizing the audience with his “magic voodoo machine” — weaving beautiful vocal loops into a mesh of sound

Beau Lotto, neuroscientist, accompanied by performers and artists from Cirque du Soleil

  • Big idea: Awe is more than an experience; it’s a physiological state of mind, one that could positively influence how we approach conflict and uncertainty.
  • How? Humans possess a fundamental need for closure that, when unmet, often turns to conflict-heavy emotions like fear and anger. The antidote may be one of our most profound perceptual experiences: awe. Lotto and his team recorded the brain activity of 280 people before, during and after watching a Cirque du Soleil performance, discovering promising insights. In a state of awe, research shows that humans experience more connection to others and more comfort with uncertainty and risk-taking. These behaviors demonstrate that a significant shift in how we approach conflict is possible — with humility and courage, seeking to understand rather than convince. Read how this talk was co-created by Beau Lotto’s Lab of Misfits and the Cirque du Soleil.
  • Quote of the talk: “Awe is neither positive nor negative. What’s really important is the context in which you create awe.”

Joseph Gordon-Levitt, actor, filmmaker and founder of HITRECORD

  • Big idea: If your creativity is driven by a desire to get attention, you’re never going to be creatively fulfilled. What drives truly fulfilling creativity? Paying attention.
  • How? Social media platforms are fueled by getting attention, and more and more people are becoming experts at it — turning creativity from a joyous expression into a means to an end. But while Joseph Gordon-Levitt certainly knows what it feels like to get attention — he’s been in show business since he was 6, after all — he realized that the opposite feeling, paying attention, is the real essence of creativity. He describes the feeling of being locked in with another actor — thinking about and reacting only to what they’re doing, eliminating thoughts about himself. So get out there and collaborate, he says. Read more about Joseph Gordon-Levitt’s talk here.
  • Quote of the talk: “It’s like a pavlovian magic spell: ​rolling, speed, marker ​(clap)​, set and action​. Something happens to me, I can’t even help it. My attention narrows. And everything else in the world, anything else that might be bothering, or that might otherwise grab my attention, it all goes away.”
Jon Gray speaks at TED2019

“We decided the world needed some Bronx seasoning on it”: The founder of Ghetto Gastro, Jon Gray, speaks at TED2019: Bigger Than Us. April 18, 2019, Vancouver, BC, Canada. Photo: Ryan Lash / TED

Jon Gray, designer, food lover, entrepreneur and cofounder of Ghetto Gastro

  • Big idea: We can bring people together, connect cultures and break stereotypes through food.
  • How? Jon Gray is a founder of Ghetto Gastro, a collective based in the Bronx that works at the intersection of food, art and design. Their goal is to craft products and experiences that challenge perceptions. At first, Gray and his co-creators aimed to bring the Bronx to the wider world. Hosting an event in Tokyo, for example, they served a Caribbean patty made with Japanese Wagyu beef and shio kombu — taking a Bronx staple and adding international flair. Now Ghetto Gastro is bringing the world to the Bronx. The first step: their recently opened “idea kitchen” — a space where they can foster a concentration of cultural and financial capital in their neighborhood.
  • Quote of the talk: “Breaking bread has always allowed me to break the mold and connect with people.”
Daniel Lismore speaks at TED2019

“These artworks are me”: Daniel Lismore talks about his life as a work of art, created anew each morning. He speaks at TED2019: Bigger Than Us. April 15 – 19, 2019, Vancouver, BC, Canada. Photo: Ryan Lash / TED

Daniel Lismore, London-based artist who lives his life as art, styling elaborate ensembles that mix haute couture, vintage fabrics, found objects, ethnic jewelry, beadwork, embroidery and more

  • Big idea: We can all make ourselves into walking masterpieces. While it takes courage — and a lot of accessories — to do so, the reward is being able to express our true selves.
  • How? Drawing from a massive, 6,000-piece collection that occupies a 40-foot container, three storage units and 30 IKEA boxes, Lismore creates himself anew every day. His materials range from beer cans and plastic crystals to diamonds, royal silks and 2,000-year-old Roman rings. And he builds his outfits from instinct, piling pieces on until — like a fashion-forward Goldilocks — everything feels just right.
  • Quote of the talk: “I have come to realize that confidence is a concept you can choose. I have come to realize that authenticity is necessary and it’s powerful. I have spent time trying to be like other people; it didn’t work. It’s a lot of hard work not being yourself.”
Es Devlin speaks at TED2019

“So much of what I make is fake. It’s an illusion. And yet every artist works in pursuit of communicating something that’s true.” Artist and stage designer Es Devlin speaks at TED2019: Bigger Than Us, April 18, 2019, Vancouver, BC, Canada. Photo: Bret Hartman / TED

Es Devlin, artist and stage designer

  • Big idea: Art is about communication and expression, and designers have the power to foster lasting connections and deep empathy with their work.
  • How? Es Devlin weaves boundless thinking into her stunning stage designs, emphasizing empathy, intimacy and connection for the performers and the audience. As a set designer for some of the world’s most iconic performers and events — including Beyoncé’s Formation tour, Adele’s first live concert in five years, U2 and Kanye West, among many others — Devlin dives into the heart of each performer’s work. She sculpts visual masterpieces that reflect the shape and sound of each artist she works with. Audiences come to shows for connection and intimacy, Devlin says, and it’s the task of set designers, directors and artists to deliver it for the fans.
  • Quote of the talk: “Most of what I’ve made over the last 25 years doesn’t exist anymore — but our work endures in memories, in synaptic sculptures in the minds of those who were once present in the audience.”

Juna Kollmeier, astrophysicist

  • Big idea: Mapping the observable universe is … a pretty epic proposition. But it’s actually humanly achievable.
  • How? We’ve been mapping the stars for thousands of years, but the Sloan Digital Sky Survey is on a special mission: to create the most detailed three-dimensional maps of the universe ever made. Led by Kollmeier, the project divides the sky into three “mappers” that it documents: galaxies, black holes and stars. Our own Milky Way galaxy has 250 billion(ish) stars. “That is a number that doesn’t make practical sense to pretty much anybody,” says Kollmeier. We’re not going to map all of those anytime soon. But galaxies? We’re getting there. On our current trajectory, we’ll map every large galaxy in the observable universe by 2060, she says.
  • Quote of the talk: “Black holes are among the most perplexing objects in the universe. Why? Because they are literally just math incarnate in a physical form that we barely understand.”
Juna Kollmeier speaks at TED2019

“Stars are exploding all the time. Black holes are growing all the time. There is a new sky every night”: Astronomer Juna Kollmeier speaks at TED2019: Bigger Than Us, April 18, 2019, Vancouver, BC, Canada. Photo: Bret Hartman / TED

,

Cory DoctorowI’m teaching on this year’s Writing Excuses Cruise!

I’m one of the guest instructors on this year’s Writing Excuses Cruise, a nine-day intensive writing program on land and at sea, departing from Galveston and putting into port at Cozumel, Georgetown, and Falmouth, with a roster of instructors including Brandon Sanderson, Piper Drake, Kathy Chung, K Tempest Bradford, DongWon Song, Mary Robinette Kowal, Dan Wells, and Howard Tayler. The program starts with a two-day workshop at a Houston hotel and then sets sail, running Sept 22-30 altogether. I’ve taught many other workshops, but this is my first Writing Excuses Cruise and I’m really looking forward to it. I hope to see you there!

Cory DoctorowInternet Activist Cory Doctorow on How to Change the World

I spoke with Arik Korman from I Heart Radio about #Radicalized, hope, my theory of change, and a better technological future!

Cory Doctorow, blogger, journalist, science fiction author, and co-editor of the blog Boing Boing, talks about why he’s a great believer in the Internet, warts and all; how, as a white male, he became aware of the struggles of people furthest from opportunity; and how he keeps a positive outlook on life. Cory’s latest book is Radicalized: Four Tales of Our Present Moment.

Planet Linux AustraliaClinton Roy: Restricted Sleep Regime

Since moving down to Melbourne my poor sleep has started up again. It’s really hard to say what the main factor driving this is. My doctor down here has put me onto a drug-free way of trying to improve my sleep, and I think I kind of like it; while it’s no silver bullet, it is something I can go back to if I’m having trouble with my sleep, without having to get a prescription.

The basic idea is to maximise sleep efficiency. If you’re only getting n hours sleep a night, only spend n hours a night in bed. This forces you to stay up and go to bed rather late for a few nights. Hopefully, being tired will help you sleep through the night in one large segment. Once you’ve successfully slept through the night a few times, relax your bed time by say fifteen minutes, and get used to that. Slowly over time, you increase the amount of sleep you’re getting, while keeping your efficiency high.

CryptogramWhy Isn't GDPR Being Enforced?

Politico has a long article making the case that the lead GDPR regulator, Ireland, has too cozy a relationship with Silicon Valley tech companies to effectively regulate their privacy practices.

Despite its vows to beef up its threadbare regulatory apparatus, Ireland has a long history of catering to the very companies it is supposed to oversee, having wooed top Silicon Valley firms to the Emerald Isle with promises of low taxes, open access to top officials, and help securing funds to build glittering new headquarters.

Now, data-privacy experts and regulators in other countries alike are questioning Ireland's commitment to policing imminent privacy concerns like Facebook's reintroduction of facial recognition software and data sharing with its recently purchased subsidiary WhatsApp, and Google's sharing of information across its burgeoning number of platforms.

Worse Than FailureCodeSOD: Switching Daily

A not uncommon pattern is to use a dictionary or array as a lookup table in place of a switch or conditional. In some languages, like Python, there is no switch statement, and dictionaries are the main way to imitate that behavior.

In languages like JavaScript, where the line between objects and dictionaries is blurred to the point of non-existence, it’s a common approach. A lot of switch statements can be converted to an object literal with functions as its values, e.g.:

const myVal = getMyVal();
const lookup = {'foo': doFoo, 'bar': doBar};
lookup[myVal]();
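One wrinkle worth guarding against with this pattern: a plain object literal inherits from Object.prototype, so a lookup key like "toString" would match an inherited function rather than one of your handlers. A small sketch of the pattern with an own-property check and a default case (the handler names here are illustrative, not from the original):

```javascript
// Object literal standing in for a switch, with a default case.
// Guarding with hasOwnProperty avoids accidentally dispatching to
// inherited members such as Object.prototype.toString.
const handlers = {
  foo: () => 'handled foo',
  bar: () => 'handled bar',
};

function dispatch(action) {
  const handler = Object.prototype.hasOwnProperty.call(handlers, action)
    ? handlers[action]
    : () => 'unknown action: ' + action;
  return handler();
}
```

An alternative with the same effect is to build the table with Object.create(null), so there is no prototype to inherit from in the first place.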

Cassi had a co-worker who was at least peripherally aware of this technique. They might have heard about it, and maybe they saw it used once, from a distance, on a foggy day and their glasses were covered with rain and they also weren’t actually paying attention.

When they needed to convert a day, represented as a number from 1–7, into a day-as-a-string, they wrote some code which looks like this:

	/**
	 * Converts the given weekday number (Sunday = 1) to the weekday name
	 * @param {number} dayNum
	 * @return {string}
	 */
	$company.time.dayNumToDayName = function(dayNum) {
		return /** @type {string} */ ($company.misc.shortSwitch(dayNum, [
			[1, 'Sunday'],
			[2, 'Monday'],
			[3, 'Tuesday'],
			[4, 'Wednesday'],
			[5, 'Thursday'],
			[6, 'Friday'],
			[7, 'Saturday']
		]));
	};

We’ll get to shortSwitch in a moment, but let’s just look at how they designed their lookup table. I’d say that they’ve reinvented the dictionary, in that these are clearly key/value pairs, but since the keys are indexes, what they’ve really done is reinvented the array so that they can have arrays which start at one.

Yes, it would be simpler to have just done return listOfDayNames[dayNum - 1], plus some bounds checking. Maybe that’s what $company.misc.shortSwitch does, somehow?

	/**
	 * Simulates a switch statement (with fewer lines of code) by looping through
	 * the given cases and returning the value of the matching case.
	 * @template C
	 * @template V
	 * @param {C} switchVal - the value to switch on
	 * @param {Array<(C|V)>!} cases - an array of [case, value] pairs
	 * @return V
	 */
	$company.misc.shortSwitch = function (switchVal, cases) {
		for(var i = 0; i < cases.length; i++) {
			if(cases[i][0] === switchVal) return cases[i][1];
		}
	};

Simulates a switch statement (with fewer lines of code), they say. In reality, they’ve simulated a lookup table with more lines of code, and as a bonus, this needs to inspect every element (until it finds a match), versus a lookup table which only needs to directly access the match.
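For contrast, the plain-array version alluded to above might look like this (a sketch: the day names and 1-based numbering come from the original, while the bounds check is an added assumption about desired behavior):

```javascript
// Direct array lookup: day numbers are 1-based (Sunday = 1),
// so subtract 1 to index into the zero-based array.
const DAY_NAMES = ['Sunday', 'Monday', 'Tuesday', 'Wednesday',
                   'Thursday', 'Friday', 'Saturday'];

function dayNumToDayName(dayNum) {
  if (!Number.isInteger(dayNum) || dayNum < 1 || dayNum > DAY_NAMES.length) {
    throw new RangeError('dayNum must be an integer from 1 to 7, got ' + dayNum);
  }
  return DAY_NAMES[dayNum - 1];
}
```

Unlike shortSwitch, this is a single indexed access rather than a linear scan, and an out-of-range input fails loudly instead of silently returning undefined.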

Whether or not Cassi fixes the dayNumToDayName method, that’s a small thing. As you can imagine, when someone writes a hammer like shortSwitch, that’s gonna be used to turn everything into a nail. shortSwitch is called all over the codebase.

I can only assume/hope that Cassi will be switching jobs, shortly.


Planet Linux AustraliaLeon Brooks

Person T has had Person A design a one-page flyer and sent it to Person J... as a single image. Person T is two hours ahead, time-zone wise, and Person A is roughly 12 hours behind.

Person J also wishes to email out the flyer with hyperlinks on each of two names in the image.

Sent as a bare image, she will not fly.

Embedding the image in a PDF would allow only the entire image to possess a single hyperlink.

So... crank up GIMP, open image, select the Move tool, drag Guides from each Ruler to section up the image. Each Guide changes nothing, however its presence allows the Rectangle Select tool to be very precise and consistent.

Now File ⇒ Save the work-file in case you wish to adjust things for another round. Here, I have applied the Cubist tool from the Filters to most of the content, so the idea is conveyed without revealing details of said content.

The next step is to Rectangle Select the top area (in the screenshot above, the left-name area has been Rectangle Selected), then Copy it (Ctrl+C is the keyboard shortcut), then File ⇒ Create ⇒ From Clipboard (Ctrl+Shift+V is the shortcut) to make the copy into a new image, export that image (File ⇒ Export) as a PNG (lossless compression), repeat for the bottom area, then in the central section, for the left, left-name, centre, right-name, right areas.

Open LibreOffice Writer, Insert ⇒ Image the top-area image, right-click, choose Properties; under the Type tab make it “As character”; under the Crop tab set the Scale so it will all fit nicely (58% in this case, which can be tweaked later to suit); OK. Click to the right of the image, press Shift+Enter to insert a NewLine (rather than a paragraph).

Now Insert ⇒ Image the centre left area, then left-name, centre, right-name, right. With the name areas (in this case) I also chose the Hyperlink tab within the Properties dialogue, and pasted the link into the URL field, making that image section click-able. When done, Shift+Enter to make a place for the bottom area.

Finally, Insert ⇒ Image the bottom-area image (and if it does not all butt up squarely, check (Format ⇒ Paragraph) that the Line Spacing for the document’s sole paragraph is set to Single). Now save (for the sake of posterior) and click the “Export as PDF” button.

,

TEDMystery: Notes from Session 8 of TED2019

“Soil is just a thin veil that covers the surface of land, but it has the power to shape our planet’s destiny,” says Asmeret Asefaw Berhe at TED2019: Bigger Than Us, on April 18, 2019, at Vancouver, BC, Canada. Photo: (Bret Hartman / TED)

To kick off day 4 of TED2019, we give you (many more) reasons to get a good night’s sleep, plunge into the massive microbiome in the Earth’s crust — and much, much more.

The event: Talks from TED2019, Session 8: Mystery, hosted by head of TED Chris Anderson and TED’s science curator David Biello

When and where: Thursday, April 18, 2019, 8:45am, at the Vancouver Convention Centre in Vancouver, BC

Speakers: Andrew Marantz, Kristie Ebi, Asmeret Asefaw Berhe, Edward Tenner, Matt Walker and Karen Lloyd

The talks in brief:

Andrew Marantz, journalist, author who writes about the internet

  • Big idea: We have the power — and responsibility — to steer digital conversation away from noxious conspiracies and toward an open, equal world.
  • How? The internet isn’t inherently toxic or wholesome — after all, it’s shaped by us, every day. Andrew Marantz would know: he’s spent three years interviewing the loudest, cruelest people igniting conversation online. He discovered that people can be radicalized to hate through social media, messaging boards and other internet rabbit holes because these tools’ algorithms maximize engagement at all costs. And what drives engagement? Intense emotion, not facts or healthy debate. Marantz calls for social media companies to change their algorithms — and, in the meantime, offers three ways we can help build a better internet: Be a smart skeptic; know that “free speech” is only the start of the conversation; and emphasize human decency over empty outrage. The internet is vast and sometimes terrible, but we can take small actions to make it a safer, healthier and more open place. So, keep sharing cute cat memes!
  • Quote of the talk: “We’ve ended up in this bizarre dynamic online where some people see bigoted propaganda as being edgy, and see basic truth and human decency as pearl-clutching.”
TED2019

“Free speech is just a starting point,” says Andrew Marantz onstage at TED2019: Bigger Than Us, April 18, 2019, Vancouver, BC, Canada. Photo: Bret Hartman / TED

Kristie Ebi, public health researcher, director of the Center for Health and the Global Environment

  • Big idea: Climate change is affecting the foods we love — and not in a good way. The time to act is now.
  • How? As we continue to burn fossil fuels, the concentration of CO2 in the atmosphere rises. This much we know. But Ebi’s team is discovering a new wrinkle in our changing climate: all this CO2 is altering the nutritional quality of some key global staples, like rice, potatoes and wheat. Indeed, the very chemistry of these crops is being modified, reducing levels of protein, vitamins and nutrients — which could spell disaster for the more than two billion people who subsist on rice, for instance, as their primary food source. But we don’t have to sit by and watch this crisis unfold: Ebi calls for large-scale research projects that study the degradation of our food and put pressure on the world to quit fossil fuels.
  • Quote of the talk: “It’s been said that if you think education is expensive, try ignorance. Let’s not. Let’s invest in ourselves, in our children and in our planet.”

Asmeret Asefaw Berhe, scientist and “dirt detective” studying the impact of ecological change on our soils

  • Big idea: The earth’s soil is not only necessary for agriculture — it’s also an under-appreciated resource in the fight against climate change.
  • How? Human beings tend to treat soil like, well, dirt: half of the world’s soil has been degraded by human activity. But soil stores carbon — 3,000 billion metric tons of it, in fact, equivalent to 315 times the amount entering our atmosphere (and contributing to climate change) every year. Picture this: there’s more than twice as much carbon in soil as there is in all of the world’s vegetation — the lush tropical rainforests, giant sequoias, expansive grasslands, every kind of flora you can imagine on Earth — plus all the carbon currently up in the atmosphere, combined. If we treated soil with more respect, Berhe says, it could be a valuable tool to not only fight, but also eventually reverse, global warming.
  • Quote of the talk: “Soil is just a thin veil that covers the surface of land, but it has the power to shape our planet’s destiny… [it] represents the difference between life and lifelessness in the earth’s system.”

Host David Biello speaks with soil scientist Asmeret Asefaw Berhe and public health researcher Kristie Ebi during Session 8 of TED2019: Bigger Than Us. April 18, 2019, Vancouver, BC, Canada. Photo: Bret Hartman / TED

Asmeret Asefaw Berhe, Kristie Ebi and Joanne Chory in conversation with TED’s science curator David Biello

  • Big idea: CO2 is basically junk food for plants. As plants consume more and more CO2 from the air, they’re drawing up fewer of the trace nutrients from the soil that humans need to eat. What can we do to make sure plants stay nutritious?
  • How? Yes, we’re grateful to the plants that capture carbon dioxide from the air — but as Kristie Ebi notes, in the process, they’re taking up fewer nutrients from the soil that humans need. As Asmeret says: “There are 13 nutrients that plants get only from soil. They’re created from soil weathering, and that’s a very slow process.” To solve these interlocking problems — helping rebuild the soil, helping plants capture carbon, and helping us humans get our nutrients — we need all hands on deck, and many approaches to the problem. But as Joanne Chory, from the audience, reminds us, “I think we can get the plants to help us; they’ve done it before.”
  • Quote of the talk: Kristie Ebi: “Plants are growing for their own benefit. They’re not growing for ours. They don’t actually care if you don’t get the nutrition you need; it’s not on their agenda.”

Speaking from the audience, Joanne Chory joins the conversation with soil scientist Asmeret Asefaw Berhe and public health researcher Kristie Ebi at TED2019: Bigger Than Us, April 18, 2019, Vancouver, BC, Canada. Photo: Marla Aufmuth / TED

Edward Tenner, writer and historian

  • Big idea: An obsession with efficiency can actually make us less efficient. What we need is “inspired inefficiency.”
  • How? Our pursuit of more for less can cause us to get in our own way. Switching to electronic medical records made it easier to exchange information, for instance, but also left doctors filling out forms for hours — and feeling they have less time to spend with patients. Efficiency, Tenner says, is best served with a side of intuition, and a willingness to take the scenic route rather than cutting straight through to automation. Tenner’s advice: Allow for great things to happen by accident, embrace trying the hard way and seek security in diversity. “We have no way to tell who is going to be useful in the future,” he says. “We need to supplement whatever the algorithm tells us … by looking for people with various backgrounds and various outlooks.”
  • Quote of the talk: “Sometimes the best way to move forward is to follow a circle.”

Matt Walker, sleep scientist

  • Big idea: If you want to live a longer and healthier life, get more sleep. And beware, the opposite holds true: the less you sleep, the shorter your life expectancy and the higher chance you have of getting a life-threatening illness.
  • How? Walker has seen the results of a good night’s sleep on the brain – and the frightening results of a bad one. Consider one study: the brains of participants who slept a full night lit up with healthy learning-related activity in their hippocampi, the “informational inbox” of the brain. Those who were sleep-deprived, however, showed hippocampi that basically shut down. But why, exactly, is a good night’s sleep so good for the brain? It’s all about the deep sleep brain waves, Walker says: those tiny pulses of electrical activity that transfer memories from the brain’s short-term, vulnerable area into long-term storage. These findings have vast potential implications for aging and dementia, our education system and our immune systems. Feeling tired? Listen to your body! As Walker says: “Sleep is the Swiss Army knife of health.”
  • Quote of the talk: “Sleep, unfortunately, is not an optional lifestyle luxury. Sleep is a non-negotiable biological necessity. It is your life support system, and it is mother nature’s best effort yet at immortality.”

Karen Lloyd, microbiologist

  • Big idea: Deep in the Earth’s crust, carbon-sucking microbes have survived for hundreds of thousands of years. And we just might be able to use them to store excess CO2 — and slow down climate change.
  • How? Karen Lloyd studied microbes in hot springs and volcanoes in Costa Rica, and the results were astounding: as a side effect of their very slow survival, chemolithoautotrophs — microbes that eat by turning rocks into other kinds of rocks — lock carbon deep in the Earth, turning CO2 into carbonate mineral. And it gets better: there are more CO2-reducing microbes lying in wait elsewhere in the Earth’s biosphere, from the Arctic to the mud in the Marianas Trench. We’re not sure how they’ll react to a rush of new carbon from the atmosphere, so we’ll need more research to illuminate possible negative (or positive!) results.
  • Quote of the talk: “It may seem like life buried deep within the Earth’s crust is so far away from our daily experiences, but this weird, slow life may actually have the answers to some of the greatest mysteries to life on Earth.”

Before his talk, historian Edward Tenner reviews his notes one last time backstage at TED2019: Bigger Than Us, on April 18, 2019, Vancouver, BC, Canada. Photo: Lawrence Sumulong / TED