Planet Russell


Sam Varghese: David Warner must pay for his sins, as everyone else does

What does one make of the argument that David Warner, who was behind the ball-tampering scandal in South Africa in 2018, made a lesser mistake than Ben Stokes, who indulged in public fights? And of the argument that since Stokes has been made England captain for the series against the West Indies, Warner, having committed what is called a lesser sin, should also be in line for the role of Australian skipper?

The suggestion has been made by Peter Lalor, a senior cricket writer at The Australian, that Warner has paid a bigger price for past mistakes than Stokes. Does that argument really hold water?

Stokes was involved in a fracas outside a nightclub in Bristol a few years back; he got into a brawl and was lucky to escape both tragedy and a prison term.

But that had no connection to the game of cricket. And when we talk of someone bringing the game into disrepute, such incidents are not in the frame.

Had Stokes indulged in such immature behaviour on the field of play or insulted spectators who were at a game, then we would have to criticise the England board for handing him the mantle of leadership.

Warner brought the game into disrepute. He hatched a plot to use sandpaper to get the ball to swing, shamefully recruited the youngest player in the squad, rookie Cameron Bancroft, to carry out his plan, and now expects to be forgiven and given a chance to lead the national team.

Really? Lalor argues that the ball tampering did not hurt anyone and the umpires did not even have to change the ball. Such is the level of morality we have come to, where arguments that have little ballast are advanced because nationalistic sentiments come into the picture.

It is troubling that as senior a writer as Lalor would seek to advance such an argument, when someone has clearly violated the spirit of the game. Doubtless there will be cynics who poke fun at any suggestion that cricket is still a gentleman’s game, but without those myths that surround this pursuit, would it still have its appeal?

The short answer to that is a resounding no.

Lalor argues that Stokes’ fate would have been different had he been an Australian. I doubt that very much: given the licence extended to Australian sports stars to behave badly, his indulgences would have been overlooked. The word used to excuse him would have been “larrikinism”.

But Warner cheated. And the Australian public, no matter what their shortcomings, do not like cheats.

Unfortunately, at a pivotal moment during the cricket team’s South African tour, this senior member could only think of cheating to win. That is sad, unfortunate, and even tragic. It speaks of a big moral chasm somewhere.

But once one has done the crime, one must do the time. Arguing as Lalor does, that both Steve Smith, the captain at the time, and Bancroft got away with no leadership bans, does not carry any weight.

The man who planned the crime was nailed with the heaviest punishment. And it is doubtful whether anyone who has a sense of justice would argue against that.

Worse Than Failure: Error'd: Take a Risk on NaN

"Sure, I know how long the free Standard Shipping will take, but maybe, just maybe, if I choose Economy, my package will have already arrived! Or never," Philip G. writes.

 

"To be honest, I would love to hear how a course on guitar will help me become certified on AWS!" Kevin wrote.

 

Gergő writes, "Hooray! I'm going to be so productive for the next 0 days!"

 

"I guess that inbox count is what I get for using Yahoo mail?" writes Becky R.

 

Marc W. wrote, "Try all you want, PDF Creator, but you'll never sweet talk me with your 'great' offer!"

 

Mark W. wrote, "My neighborhood has a personality split, but at least they're both Pleasant."

 


Planet Debian: Reproducible Builds (diffoscope): diffoscope 150 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 150. This version includes the following changes:

[ Chris Lamb ]
* Don't crash when listing entries in archives if they don't have a listed
  size (such as hardlinks in .ISO files).
  (Closes: reproducible-builds/diffoscope#188)
* Dump PE32+ executables (including EFI applications) using objdump.
  (Closes: reproducible-builds/diffoscope#181)
* Tidy the detection of JSON files: a missing call to File.recognizes (which
  checks against the output of file(1)) was causing us to attempt to parse
  almost every file using json.loads. (Whoops.)
* Drop accidentally-duplicated copy of the new --diff-mask tests.
* Logging improvements:
  - Split out formatting of class names into a common method.
  - Clarify that we are generating presenter formats in the opening logs.

[ Jean-Romain Garnier ]
* Remove objdump(1) offsets before instructions to reduce diff noise.
  (Closes: reproducible-builds/diffoscope!57)

You can find out more by visiting the project homepage.


Planet Debian: Ben Hutchings: Debian LTS work, June 2020

I was assigned 20 hours of work by Freexian's Debian LTS initiative, and worked all 20 hours this month.

I sent a final request for testing for the next update to Linux 3.16 in jessie. I also prepared an update to Linux 4.9, included in both jessie and stretch. I completed backporting of kernel changes related to CVE-2020-0543, which was still under embargo, to Linux 3.16.

Finally I uploaded the updates for Linux 3.16 and 4.9, and issued DLA-2241 and DLA-2242.

The end of June marked the end of long-term support for Debian 8 "jessie" and for Linux 3.16. I am no longer maintaining any stable kernel branches, but will continue contributing to them as part of my work on Debian 9 "stretch" LTS and other Debian releases.

Cryptogram: The Security Value of Inefficiency

For decades, we have prized efficiency in our economy. We strive for it. We reward it. In normal times, that's a good thing. Running just at the margins is efficient. A single just-in-time global supply chain is efficient. Consolidation is efficient. And that's all profitable. Inefficiency, on the other hand, is waste. Extra inventory is inefficient. Overcapacity is inefficient. Using many small suppliers is inefficient. Inefficiency is unprofitable.

But inefficiency is essential security, as the COVID-19 pandemic is teaching us. All of the overcapacity that has been squeezed out of our healthcare system; we now wish we had it. All of the redundancy in our food production that has been consolidated away; we want that, too. We need our old, local supply chains -- not the single global ones that are so fragile in this crisis. And we want our local restaurants and businesses to survive, not just the national chains.

We have lost much inefficiency to the market in the past few decades. Investors have become very good at noticing any fat in every system and swooping down to monetize those redundant assets. The winner-take-all mentality that has permeated so many industries squeezes any inefficiencies out of the system.

This drive for efficiency leads to brittle systems that function properly when everything is normal but break under stress. And when they break, everyone suffers. The less fortunate suffer and die. The more fortunate are merely hurt, and perhaps lose their freedoms or their future. But even the extremely fortunate suffer -- maybe not in the short term, but in the long term from the constriction of the rest of society.

Efficient systems have limited ability to deal with system-wide economic shocks. Those shocks are coming with increased frequency. They're caused by global pandemics, yes, but also by climate change, by financial crises, by political crises. If we want to be secure against these crises and more, we need to add inefficiency back into our systems.

I don't simply mean that we need to make our food production, or healthcare system, or supply chains sloppy and wasteful. We need a certain kind of inefficiency, and it depends on the system in question. Sometimes we need redundancy. Sometimes we need diversity. Sometimes we need overcapacity.

The market isn't going to supply any of these things, least of all in a strategic capacity that will result in resilience. What's necessary to make any of this work is regulation.

First, we need to enforce antitrust laws. Our meat supply chain is brittle because there are limited numbers of massive meatpacking plants -- now disease factories -- rather than lots of smaller slaughterhouses. Our retail supply chain is brittle because a few national companies and websites dominate. We need multiple companies offering alternatives to a single product or service. We need more competition, more niche players. We need more local companies, more domestic corporate players, and diversity in our international suppliers. Competition provides all of that, while monopolies suck that out of the system.

The second thing we need is specific regulations that require certain inefficiencies. This isn't anything new. Every safety system we have is, to some extent, an inefficiency. This is true for fire escapes on buildings, lifeboats on cruise ships, and multiple ways to deploy the landing gear on aircraft. Not having any of those things would make the underlying systems more efficient, but also less safe. It's also true for the internet itself, originally designed with extensive redundancy as a Cold War security measure.

With those two things in place, the market can work its magic to provide for these strategic inefficiencies as cheaply and as effectively as possible. As long as there are competitors who are vying with each other, and there aren't competitors who can reduce the inefficiencies and undercut the competition, these inefficiencies just become part of the price of whatever we're buying.

The government is the entity that steps in and enforces a level playing field instead of a race to the bottom. Smart regulation addresses the long-term need for security, and ensures it's not continuously sacrificed to short-term considerations.

We have largely been content to ignore the long term and let Wall Street run our economy as efficiently as it can. That's no longer sustainable. We need inefficiency -- the right kind in the right way -- to ensure our security. No, it's not free. But it's worth the cost.

This essay previously appeared in Quartz.

Planet Debian: Mike Gabriel: My Work on Debian LTS (June 2020)

In June 2020, I have worked on the Debian LTS project for 8 hours (of 8 hours planned).

LTS Work

  • frontdesk: CVE bug triaging for Debian jessie LTS: mailman, alpine, python3.4, redis, pound, pcre3, ngircd, mutt, lynis, libvncserver, cinder, bison, batik.
  • upload to jessie-security: libvncserver (DLA-2264-1 [1], 9 CVEs)
  • upload to jessie-security: mailman (DLA-2265-1 [2], 1 CVE)
  • upload to jessie-security: mutt (DLA-2268-1 [3] and DLA-2268-2 [4]), 2 CVEs)

Other security related work for Debian

  • make sure all security fixes for php-horde-* are also in Debian unstable
  • upload freerdp2 2.1.2+dfsg-1 to unstable (9 CVEs)

References

Planet Debian: Russell Coker: Desklab Portable USB-C Monitor

I just got a 15.6″ 4K resolution Desklab portable touchscreen monitor [1]. It takes power via USB-C and video input via USB-C or mini HDMI, has touch screen input, and has speakers built in for USB or HDMI sound.

PC Use

I bought a mini-DisplayPort to HDMI adaptor and for my first test ran it from my laptop; it was seen as a 1920*1080 DisplayPort monitor. The adaptor is specified as supporting 4K, so I don’t know why I didn’t get 4K to work; my laptop has done 4K with other monitors.

The next thing I plan to get is a VGA to HDMI converter so I can use this on servers. It can be a real pain getting a monitor and power cable to a rack-mounted server, and this portable monitor can be powered by one of the USB ports in the server. A quick search indicates that such devices start at about $12US.

The Desklab monitor has no markings to indicate what resolution it supports, no part number, and no serial number. The only documentation I could find about how to recognise the difference between the FullHD and 4K versions is that the FullHD version supposedly draws 2A and the 4K version draws 4A. I connected my USB Ammeter and it reported that between 0.6 and 1.0A were drawn. If they meant to say 2W and 4W instead of 2A and 4A (I’ve seen worse errors in manuals) then the current drawn would indicate the 4K version: at 5V, 4W corresponds to 0.8A, which is right in the middle of the range I measured. Otherwise the stated current requirements don’t come close to matching what I’ve measured.

Power

The promise of USB-C was power from anywhere to anywhere. I think that such power can theoretically be done with USB 3 and maybe USB 2, but asymmetric cables make it more challenging.

I can power my Desklab monitor from a USB battery, from my Thinkpad’s USB port (even when the Thinkpad isn’t on mains power), and from my phone (although the phone battery runs down fast as expected). When I have a mains powered USB charger (for a laptop and rated at 60W) connected to one USB-C port and my phone on the other, the phone can be charged while giving a video signal to the display. This is how it’s supposed to work, but in my experience it’s rare to have new technology live up to its potential at the start!

One thing to note is that it doesn’t have a battery. I had imagined that it would have a battery (in spite of there being nothing on their web site to imply this) because I just couldn’t think of a touch screen device not having a battery. It would be nice if there was a version of this device with a big battery built in that could avoid needing separate cables for power and signal.

Phone Use

The first thing to note is that the Desklab monitor won’t work with all phones; whether a phone will take the option of an external display depends on its configuration, and some phones may support an external display but not touchscreen. The Huawei Mate devices are specifically listed in the printed documentation as being supported for touchscreen as well as display. Surprisingly the Desklab web site has no mention of this unless you download the PDF of the manual; they really should have a list of confirmed supported devices and a forum for users to report on how it works.

My phone is a Huawei Mate 10 Pro so I guess I got lucky here. My phone has a “desktop mode” that can be enabled when I connect it to a USB-C device (not sure what criteria it uses to determine if the device is suitable). The desktop mode has something like a regular desktop layout and you can move windows around etc. There is also the option of having a copy of the phone’s screen, but it displays the image of the phone screen vertically in the middle of the landscape layout monitor which is ridiculous.

When desktop mode is enabled it’s independent of the phone interface, so I had to find the icons for the programs I wanted to run in an unsorted list with no usable search (the search interface of the app list brings up the keyboard, which obscures the list of matching apps). The keyboard takes up more than half the screen and there doesn’t seem to be a way to make it smaller. I’d like to try a portrait layout, which would make the keyboard take something like 25% of the screen, but that’s not supported.

It’s quite easy to type on a keyboard that’s slightly larger than a regular PC keyboard (a 15″ display with no numeric keypad or cursor control keys). The Hacker’s Keyboard app might work well with this as it has cursor control keys. The GUI has an option for full screen mode for an app, which is really annoying to get out of (you have to use a drop down from the top of the screen); full screen doesn’t make sense for a display this large. Overall the GUI is a bit clunky; imagine Windows 3.1 with a start button and task bar. One interesting thing to note is that the desktop and phone GUIs can be run separately, so you can type on the Desklab (or any similar device) and look things up on the phone. Multiple monitors never really interested me for desktop PCs because switching between windows is fast and easy and it’s easy to resize windows to fit several on the desktop. Resizing windows on the Huawei GUI doesn’t seem easy (although I might be missing some things), and the keyboard takes up enough of the screen that having multiple windows open while typing isn’t viable.

I wrote the first draft of this post on my phone using the Desklab display. It’s not nearly as easy as writing on a laptop but much easier than writing on the phone screen.

Currently Desklab is offering 2 models for sale, 4K resolution for $399US and FullHD for $299US. I got the 4K version which is very expensive at the moment when converted to Australian dollars. There are significantly cheaper USB-C monitors available (such as this ASUS one from Kogan for $369AU), but I don’t think they have touch screens and therefore can’t be used with a phone unless you enable the phone screen as touch pad mode and have a mouse cursor on screen. I don’t know if all Android devices support that, it could be that a large part of the desktop experience I get is specific to Huawei devices.

One annoying feature is that if I use the phone power button to turn the screen off it shuts down the connection to the Desklab display, but the phone screen will turn off if I leave it alone for the screen timeout (which I have set to 10 minutes).

Caveats

When I ordered this I wanted the biggest screen possible. But now that I have it, the fact that it doesn’t fit in the pocket of my Scott e Vest jacket [2] will limit what I can do with it. Maybe I’ll be buying a 13″ monitor in the near future; I expect that Desklab will do well and start selling them in a wide range of sizes. A 15.6″ portable device is inconvenient even in the laptop format; a thin portable screen is inconvenient in many ways.

Netflix doesn’t display video on the Desklab screen, I suspect that Netflix is doing this deliberately as some misguided attempt at stopping piracy. It is really good for watching video as it has the speakers in good locations for stereo sound, it’s a pity that Netflix is difficult.

The functionality on phones from companies other than Huawei is unknown. It is likely to work on most Android phones, but if a particular phone is important to you then you want to Google for how it worked for others.

Planet Debian: Emmanuel Kasper: Test a webcam from the command line on Linux with VLC

Since this info was too well hidden on the internet, here is the information:
cvlc v4l2:///dev/video0
and there you go.
If you have multiple cameras connected, you can try /dev/video0 up to /dev/video5

Planet Debian: Evgeni Golov: Automatically renaming the default git branch to "devel"

It seems GitHub is planning to rename the default branch for newly created repositories from "master" to "main". It's incredible how much positive PR you can get with a one line configuration change, while still working together with the ICE.

However, this post is not about bashing GitHub.

Changing the default branch for newly created repositories is good. And you also should do that for the ones you create with git init locally. But what about all the repositories out there? GitHub surely won't force-rename those branches, but we can!

Ian will do this as he touches the individual repositories, but I tend to forget things unless I do them immediately…

Oh, so this is another "automate everything with an API" post? Yes, yes it is!

And yes, I am going to use GitHub here, but something similar should be implementable on any git hosting platform that has an API.

Of course, if you have SSH access to the repositories, you can also just edit HEAD in a for loop in bash, but that would be boring ;-)

I'm going with devel btw, as I'm already used to develop in the Foreman project and devel in Ansible.

acquire credentials

My GitHub account is 2FA enabled, so I can't just use my username and password in a basic HTTP API client. So the first step is to acquire a personal access token that can be used instead. Of course I could also have implemented OAuth2 in my lousy script, but ain't nobody got time for that.

The token will require the "repo" permission to be able to change repositories.

And we'll need some boilerplate code (I'm using Python3 and requests, but anything else will work too):

#!/usr/bin/env python3
import requests
BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcdef'
headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)
session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

This will store our username, token, and create a requests.Session so that we don't have to pass the same data all the time.

get a list of repositories to change

I want to change all my own repos that are not archived, not forks, and actually have the default branch set to master, YMMV.

As we're authenticated, we can just list the repositories of the currently authenticated user, and limit them to "owner" only.

GitHub uses pagination for their API, so we'll have to loop until we get to the end of the repository list.

repos_to_change = []
url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

create a new devel branch and mark it as default

Now that we know which repos to change, we need to fetch the SHA of the current master, create a new devel branch pointing at the same commit and then set that new branch as the default branch.

for repo in repos_to_change:
    # fetch the SHA that master currently points at
    master_data = session.get('{}/repos/{}/{}/git/ref/heads/master'.format(BASE, USER, repo)).json()
    # create a devel ref pointing at the same commit
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    # make devel the default branch
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    # and delete the old master
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

I've also opted in to actually delete the old master, as I think that's the safest way to let the users know that it's gone. Letting it rot in the repository would mean people can still pull and won't notice that there are no changes anymore as the default branch moved to devel.

So…

announcement

I've updated all my (those in the evgeni namespace) non-archived repositories to have devel instead of master as the default branch.

Have fun updating!

code

#!/usr/bin/env python3
import requests
BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcd'
headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)
session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True
repos_to_change = []
url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None
for repo in repos_to_change:
    master_data = session.get('{}/repos/{}/{}/git/ref/heads/master'.format(BASE, USER, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

Worse Than Failure: ABCD

As is fairly typical in our industry, Sebastian found himself working as a sub-contractor to a sub-contractor to a contractor to a big company. In this case, it was IniDrug, a pharmaceutical company.

Sebastian was building software that would be used at various steps in the process of manufacturing, which meant he needed to spend a fair bit of time in clean rooms, and on air-gapped networks, to prevent trade secrets from leaking out.

Like a lot of large companies, they had very formal document standards. Every document going out needed to have the company logo on it, somewhere. This meant all of the regular employees had the IniDrug logo in their email signatures, e.g.:

Bill Lumbergh
Senior Project Lead
  _____       _ _____                   
 |_   _|     (_|  __ \                  
   | |  _ __  _| |  | |_ __ _   _  __ _ 
   | | | '_ \| | |  | | '__| | | |/ _` |
  _| |_| | | | | |__| | |  | |_| | (_| |
 |_____|_| |_|_|_____/|_|   \__,_|\__, |
                                   __/ |
                                  |___/ 

At least, they did until Sebastian got an out of hours, emergency call. While they absolutely were not set up for remote work, Sebastian could get webmail access. And in the webmail client, he saw:

Bill Lumbergh
Senior Project Lead
ABCD

At first, Sebastian assumed Bill had screwed up his sigline. Or maybe the attachment broke? But as Sebastian hopped on an email chain, he noticed a lot of ABCDs. Then someone sent out a Word doc (because why wouldn’t you catalog your emergency response in a Word document?), and in the space where it usually had the IniDrug logo, it instead had “ABCD”.

The crisis resolved itself without any actual effort from Sebastian or his fellow contractors, but they had to reply to a few emails just to show that they were “pigs and not chickens”- they were committed to quality software. The next day, Sebastian mentioned the ABCD weirdness.

“I saw that too. I wonder what the deal was?” his co-worker Joanna said.

They pulled up the same document on his work computer, and the logo displayed correctly. He clicked on it, and saw the insertion point blinking back at him. Then he glanced at the formatting toolbar and saw “IniDrug Logo” as the active font.

Puzzled, he selected the logo and changed the font. “ABCD” appeared.

IniDrug had a custom font made, hacked so that if you typed ABCD the resulting output would look like the IniDrug logo. That was great, if you were using a computer with the font installed, or if you remembered to make sure your word processor was embedding all your weird custom fonts.

Which also meant a bunch of outside folks were interacting with IniDrug employees, wondering why on Earth they all had “ABCD” in their siglines. Sebastian and Joanna got a big laugh about it, and shared the joke with their fellow contractors. Helping the new contractors discover this became a rite of passage. When contractors left for other contracts, they’d tell their peers, “It was great working at ABCD, but it’s time that I moved on.”

There were a lot of contractors, all chuckling about this, and one day in a shared break room, a bunch of T-Shirts appeared: plain white shirts with “ABCD” written on them in Arial.

That, as it turned out, was a bridge too far, and it got the attention of someone who was a regular IniDrug employee.

To the Contracting Team:
In the interests of maintaining a professional environment, we will be updating the company dress code. Shirts decorated with the text “ABCD” are prohibited, and should not be worn to work. If you do so, you will be asked to change or conceal the offending content.

Bill Lumbergh
Senior Project Lead
ABCD


Planet Debian: Russell Coker: Isolating PHP Web Sites

If you have multiple PHP web sites on a server in the default configuration, they will all be able to read each other’s files. If those sites store data or database passwords in configuration files then there are significant problems if they aren’t all trusted. Even if the sites are all trusted (i.e. the same person configures them all), if there is a security problem in one site it’s ideal to prevent that being used to immediately attack all sites.

mpm_itk

The first thing I tried was mpm_itk [1]. This is a version of the traditional “prefork” module for Apache that has one process for each HTTP connection. When it’s installed you just put the directive “AssignUserID USER GROUP” in your VirtualHost section and that virtual host runs as the user:group in question. It will work with any Apache module that works with mpm_prefork. In my experiment with mpm_itk I first tried running with a different UID for each site, but that conflicted with the pagespeed module [2]. The pagespeed module optimises HTML and CSS files to improve performance and it has a directory tree where it stores cached versions of some of the files. It doesn’t like working with copies of itself under different UIDs writing to that tree. This isn’t a real problem, setting up the different PHP files with database passwords to be read by the desired group is easy enough. So I just ran each site with a different GID but used the same UID for all of them.
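As a sketch of the setup described above (the user and group names here are made up for illustration), a per-site mpm_itk virtual host with one shared UID and a different group per site might look like:

```apache
# Hypothetical mpm_itk vhost: shared UID for all sites, one group per site
<VirtualHost *:443>
    ServerName example1.com
    DocumentRoot /var/www/example1
    # AssignUserID USER GROUP
    AssignUserID www-run site1
</VirtualHost>
```

The database password files for each site would then be made readable by the site's group (site1 here) so only that virtual host can read them.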

The first problem with mpm_itk is that the mpm_prefork code it’s based on is the slowest mpm available, and is also incompatible with HTTP/2. A minor issue of mpm_itk is that it makes Apache take ages to stop or restart; I don’t know why, and can’t be certain it’s not a configuration error on my part. As an aside, here is a site for testing your server’s support for HTTP/2 [3]. To enable HTTP/2 you have to be running mpm_event and enable the “http2” module. Then for every virtual host that is to support it (generally all https virtual hosts) put the line “Protocols h2 h2c http/1.1” in the virtual host configuration.
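On a Debian system the switch to mpm_event with HTTP/2 might look like this (a sketch; the modules are enabled with `a2dismod mpm_prefork` and `a2enmod mpm_event http2`, and the ServerName is a placeholder):

```apache
# After "a2dismod mpm_prefork php7.3" and "a2enmod mpm_event http2",
# add the Protocols line to each https virtual host:
<VirtualHost *:443>
    ServerName example.com
    Protocols h2 h2c http/1.1
</VirtualHost>
```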

A good feature of mpm_itk is that it has everything for the site running under the same UID, all Apache modules and Apache itself. So there’s no issue of one thing getting access to a file and another not getting access.

After a trial I decided not to keep using mpm_itk because I want HTTP/2 support.

php-fpm Pools

The Apache PHP module depends on mpm_prefork so it also has the issues of not working with HTTP/2 and of causing the web server to be slow. The solution is php-fpm, a separate server for running PHP code that uses the fastcgi protocol to talk to Apache. Here’s a link to the upstream documentation for php-fpm [4]. In Debian this is in the php7.3-fpm package.

In Debian the directory /etc/php/7.3/fpm/pool.d has the configuration for “pools”. Below is an example of a configuration file for a pool:

# cat /etc/php/7.3/fpm/pool.d/example.com.conf
[example.com]
user = example.com
group = example.com
listen = /run/php/php7.3-example.com.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

Here is the upstream documentation for fpm configuration [5].

Then for the Apache configuration for the site in question you could have something like the following:

ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/run/php/php7.3-example.com.sock|fcgi://localhost/usr/share/wordpress/"

The “|fcgi://localhost” part is just part of the way of specifying a Unix domain socket. From the Apache Wiki it appears that the method for configuring the TCP connections is more obvious [6]. I chose Unix domain sockets because it allows putting the domain name in the socket address. Matching domains for the web server to port numbers is something that’s likely to be error prone while matching based on domain names is easier to check and also easier to put in Apache configuration macros.
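For comparison, the TCP variant would pair a pool listening on a loopback port with a matching ProxyPassMatch; this is a sketch, and the port number 9001 is an arbitrary choice:

```apache
# In the php-fpm pool config, replace the "listen" socket line with:
#     listen = 127.0.0.1:9001
# Then point Apache at that port instead of the Unix domain socket:
ProxyPassMatch "^/(.*\.php(/.*)?)$" "fcgi://127.0.0.1:9001/usr/share/wordpress/"
```

As noted above, the drawback is that you have to keep the domain-to-port mapping consistent by hand, which is easier to get wrong than putting the domain name in the socket path.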

There was some additional hassle with getting Apache to read the files created by PHP processes (the options include running PHP scripts with the www-data group, having SETGID directories for storing files, and having world-readable files). But this got things basically working.
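The SETGID-directory option mentioned above can be sketched as follows (the paths are illustrative, not from the article; in production the directory would live under the site's DocumentRoot and be chgrp'd to www-data, which needs root, so this just demonstrates the mode bits):

```shell
# Stand-in for /var/www/example.com
site_dir=$(mktemp -d)
# Leading 2 = SETGID bit: files created inside inherit the directory's group,
# so PHP processes running as the site UID produce files Apache's group can read
install -d -m 2775 "$site_dir/uploads"
stat -c '%a' "$site_dir/uploads"          # prints 2775
```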

Nginx

My Google searches for running multiple PHP sites under different UIDs didn’t turn up any good hits. It was only after I found the DigitalOcean page on doing this with Nginx [7] that I knew what to search for to find the way of doing it in Apache.

Krebs on SecurityRansomware Gangs Don’t Need PR Help

We’ve seen an ugly trend recently of tech news stories and cybersecurity firms trumpeting claims of ransomware attacks on companies large and small, apparently based on little more than the say-so of the ransomware gangs themselves. Such coverage is potentially quite harmful and plays deftly into the hands of organized crime.

Often the rationale behind couching these events as newsworthy is that the attacks involve publicly traded companies or recognizable brands, and that investors and the public have a right to know. But absent any additional information from the victim company or their partners who may be affected by the attack, these kinds of stories and blog posts look a great deal like ambulance chasing and sensationalism.

Currently, more than a dozen ransomware crime gangs have erected their own blogs to publish sensitive data from victims. A few of these blogs routinely issue self-serving press releases, some of which gallingly refer to victims as “clients” and cast themselves in a beneficent light. Usually, the blog posts that appear on ransom sites are little more than a teaser — screenshots of claimed access to computers, or a handful of documents that expose proprietary or financial information.

The goal behind the publication of these teasers is clear, and the ransomware gangs make no bones about it: To publicly pressure the victim company into paying up. Those that refuse to be extorted are told to expect that huge amounts of sensitive company data will be published online or sold on the dark web (or both).

Emboldened by their successes, several ransomware gangs recently have started demanding two ransoms: One payment to secure a digital key that can unlock files, folders and directories encrypted by their malware, and a second to avoid having any stolen information published or shared with others.

KrebsOnSecurity has sought to highlight ransomware incidents at companies whose core business involves providing technical services to others — particularly managed service providers that have done an exceptionally poor job communicating about the attack with their customers.

Overall, I’ve tried to use each story to call attention to key failures that frequently give rise to ransomware infections, and to offer information about how other companies can avoid a similar fate.

But simply parroting what professional extortionists have posted on their blog about victims of cybercrime smacks of providing aid and comfort to an enemy that needs and deserves neither.

Maybe you disagree, dear readers? Feel free to sound off in the comments below.


Rondam RamblingsI Will Remember Ricky Ray Rector

I've always been very proud of the fact that I came out in support of gay marriage before it was cool.   I have been correspondingly chagrined at my failure to speak out sooner and more vociferously about the shameful and systemic mistreatment of people of color, and black people in particular, in the U.S.  For what it's worth, I hereby confess my sins, acknowledge my white privilege, and

Planet DebianJoachim Breitner: Template Haskell recompilation

I was wondering: What happens if I have a Haskell module with Template Haskell that embeds some information from the environment (time, environment variables)? Will such a module be reliably recompiled? And what if it gets recompiled, but the source code produced by Template Haskell is actually unchanged (e.g., because the environment variable has not changed); will all depending modules be recompiled (which would be bad)?

Here is a quick experiment, using GHC-8.8:

/tmp/th-recom-test $ cat Foo.hs
{-# LANGUAGE TemplateHaskell #-}
{-# OPTIONS_GHC -fforce-recomp #-}
module Foo where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax
import System.Process

theMinute :: String
theMinute = $(runIO (readProcess "date" ["+%M"] "") >>= stringE)
[jojo@kirk:2] Mi, der 01.07.2020 um 17:18 Uhr ☺
/tmp/th-recom-test $ cat Main.hs
import Foo
main = putStrLn theMinute

Note that I had to set {-# OPTIONS_GHC -fforce-recomp #-} – by default, GHC will not recompile a module, even if it uses Template Haskell and runIO. If you are reading from a file you can use addDependentFile to tell the compiler about that dependency, but that does not help with reading from the environment.
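A minimal sketch of the addDependentFile approach for the file-reading case (module and file name are hypothetical):

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Config where

import Language.Haskell.TH (runIO, stringE)
import Language.Haskell.TH.Syntax (addDependentFile)

-- Embed config.txt at compile time; addDependentFile tells GHC to
-- recompile this module whenever config.txt changes.
configContents :: String
configContents = $(do
    addDependentFile "config.txt"
    s <- runIO (readFile "config.txt")
    stringE s)
```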

So here is the test, and we get the desired behaviour: the Foo module is recompiled every time, but unless the minute has changed (see my prompt), Main is not recompiled:

/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
[2 of 2] Compiling Main             ( Main.hs, Main.o )
Linking test ...
[jojo@kirk:2] Mi, der 01.07.2020 um 17:20 Uhr ☺
/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
Linking test ...
[jojo@kirk:2] Mi, der 01.07.2020 um 17:20 Uhr ☺
/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
[2 of 2] Compiling Main             ( Main.hs, Main.o ) [Foo changed]
Linking test ...

So all well!

Update: It seems that while this works with ghc --make, -fforce-recomp does not cause cabal build to rebuild the module. That’s unfortunate.

CryptogramSecuring the International IoT Supply Chain

Together with Nate Kim (former student) and Trey Herr (Atlantic Council Cyber Statecraft Initiative), I have written a paper on IoT supply chain security. The basic problem we try to solve is: how do you enforce IoT security regulations when most of the stuff is made in other countries? And our solution is: enforce the regulations on the domestic company that's selling the stuff to consumers. There's a lot of detail between here and there, though, and it's all in the paper.

We also wrote a Lawfare post:

...we propose to leverage these supply chains as part of the solution. Selling to U.S. consumers generally requires that IoT manufacturers sell through a U.S. subsidiary or, more commonly, a domestic distributor like Best Buy or Amazon. The Federal Trade Commission can apply regulatory pressure to this distributor to sell only products that meet the requirements of a security framework developed by U.S. cybersecurity agencies. That would put pressure on manufacturers to make sure their products are compliant with the standards set out in this security framework, including pressuring their component vendors and original device manufacturers to make sure they supply parts that meet the recognized security framework.

News article.

Planet DebianSylvain Beucler: Debian LTS and ELTS - June 2020

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In June, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 30h for LTS (out of 30 max; all done) and 5.25h for ELTS (out of 20 max; all done).

While LTS is part of the Debian project, fellow contributors sometimes surprise me: a suggestion to vote for sponsors-funded projects with Condorcet was only met with overhead concerns, and there were requests for executive / business owner decisions (we're currently heading towards a consultative vote); I heard concerns about discussing non-technical issues publicly (IRC team meetings are public though); the private mail infrastructure was moved from self-hosting straight to Google; when some people had an issue with Debian Social for our first video conference, there were immediate suggestions to move to Zoom...
Well, we do need some people to make those LTS firmware updates in non-free :)

Also this was the last month before shifting suites: goodbye to Jessie LTS and Wheezy ELTS, welcome Stretch LTS and Jessie ELTS.

ELTS - Wheezy

  • mysql-connector-java: improve testsuite setup; prepare wheezy/jessie/stretch triple builds; coordinate versioning scheme with security-team; security upload ELA 234-1
  • ntp: wheezy+jessie triage: 1 ignored (too intrusive to backport); 1 postponed (hard to exploit, no patch)
  • Clean-up (ditch) wheezy VMs :)

LTS - Jessie

  • mysql-connector-java: see common work in ELTS
  • mysql-connector-java: security uploads DLA 2245-1 (LTS) and DSA 4703 (oldstable)
  • ntp: wheezy+jessie triage (see ELTS)
  • rails: global triage, backport 2 patches, security upload DLA 2251-1
  • rails: global security: prepare stretch/oldstable update
  • rails: new important CVE on unmaintained 4.x, fixes introduce several regressions, propose new fix to upstream, update stretch proposed update [and jessie, but rails will turn out unsupported in ELTS]
  • python3.4: prepare update to fix all pending non-critical issues, 5/6 ready
  • private video^W^Wpublic IRC team meeting

Documentation/Scripts

Planet DebianJunichi Uekawa: Already July and still stuck at home.

Already July and still stuck at home.

Worse Than FailureCodeSOD: locurlicenseucesss

The past few weeks, I’ve been writing software for a recording device. This is good, because when I’m frustrated by the bugs I put in the code and I start cursing at it, it’s not venting, it’s testing.

There are all sorts of other little things we can do to vent. Imagine, if you will, you find yourself writing an if with an empty body, but an else clause that does work. You’d probably be upset at yourself. You might be stunned. You might be so tired it feels like a good idea at the time. You might be deep in the throes of “just. work. goddammit”. Regardless of the source of that strain, you need to let it out somewhere.

Emmanuelle found this is a legacy PHP codebase:

if(mynum($Q)){
    // Congratulations, you has locurlicenseucesss asdfghjk
} else {
    header("Location: feed.php");
}

I think being diagnosed with locurlicenseucesss should not be a cause for congratulations, but maybe I’m the one that’s confused.

Emmanuelle adds: “Honestly, I have no idea how this happened.”

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianUtkarsh Gupta: FOSS Activites in June 2020

Here’s my (ninth) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 16th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

This month was a little intense. I did a lot of different kinds of things in Debian this month. Whilst most of my time went on doing security stuff, I also sponsored a bunch of packages.

Here are the following things I did this month:

Uploads and bug fixes:

Other $things:

  • Hosted Ruby team meeting. Logs here.
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored ruby-ast for Abraham, libexif for Hugh, djangorestframework-gis and karlseguin-ccache for Nilesh, and twig-extensions, twig-i18n-extension, and mariadb-mysql-kbs for William.

GSoC Phase 1, Part 2!

Last month, I got selected as a Google Summer of Code student for Debian again! \o/
I am working on the Upstream-Downstream Cooperation in Ruby project.

The first half of the first month is blogged here, titled, GSoC Phase 1.
Also, I log daily updates at gsocwithutkarsh2102.tk.

Whilst the daily updates are available at the above site^, I’ll break down the important parts of the latter half of the first month here:

  • Documented the first cop, GemspecGit via PR #2.
  • Made an initial release, v0.1.0! 💖
  • Spread the word/usage about this tool/library via adding them in the official RuboCop docs.
  • We had our third weekly meeting where we discussed the next steps and the things that are supposed to be done for the next set of cops.
  • Wrote more tests so as to cover different aspects of the GemspecGit cop.
  • Opened PR #4 for the next Cop, RequireRelativeToLib.
  • Introduced rubocop-packaging to the outer world and requested other upstream projects to use it! It is being used by 6 other projects already 😭💖
  • Had our fourth weekly meeting where we pair-programmed (and I sucked :P) and figured out a way to make the second cop work.
  • Found a bug, reported at issue #5 and raised PR #6 to fix it.
  • And finally, people loved the library/tool (and it’s outcome):


    (for those who don’t know, @bbatsov is the author of RuboCop, @lienvdsteen is an amazing fullstack engineer at GitLab, and @pboling is the author of some awesome Ruby tools and libraries!)

I have already mentioned it multiple times, but it’s still not enough to stress how amazing Antonio Terceiro and David Rodríguez are! 💖
They’re more than just mentors to me!

Well, only they know how much I trouble them with different things, which are not only related to my GSoC project but also extend to the projects they maintain! :P
David maintains rubygems and bundler and Antonio maintains debci.

So on days when I decide to hack on rubygems or debci, only I know how kind and nice David and Antonio are to me!
They very patiently walk me through with whatever I am stuck on, no matter what and no matter when.

Thus, with them around, I contributed to these two projects and more, with regards to working on rubocop-packaging.
Following are a few things that I raised:


Debian LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

This was my ninth month as a Debian LTS paid contributor. I was assigned 30.00 hours and worked on the following things:

CVE Fixes and Announcements:

Other LTS Work:

  • Triaged sympa, apache2, qemu, and coturn.
  • Add fix for CVE-2020-0198/libexif.
  • Requested CVE for bug#60251 against apache2 and prodded further.
  • Raised issue #947 against sympa reporting an incomplete patch for CVE-2020-10936. More discussions internally.
  • Created the LTS Survey on the self-hosted LimeSurvey instance.
  • Attended the third LTS meeting. Logs here.
  • General discussion on LTS private and public mailing list.

Other(s)

Sometimes it gets hard to categorize work/things into a particular category.
That’s why I am writing all of those things inside this category.
This includes two sub-categories and they are as follows.

Personal:

This month I did the following things:

  • Wrote and published v0.1.0 of rubocop-packaging on RubyGems! 💯
    It’s open-sourced and the repository is here.
    Bug reports and pull requests are welcomed! 😉
  • Integrated a tiny (yet powerful) hack to align images in markdown for my blog.
    Commit here. 🚀
  • Released v0.4.0 of batalert on RubyGems! 🤗

Open Source:

Again, this contains all the things that I couldn’t categorize earlier.
Opened several issues and PRs:


Thank you for sticking along for so long :)

Until next time.
:wq for today.

Planet DebianPaul Wise: FLOSS Activities June 2020

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: usertags QA
  • Debian IRC channels: fixed a channel mode lock
  • Debian wiki: unblock IP addresses, approve accounts, ping folks with bouncing email

Communication

  • Respond to queries from Debian users and developers on the mailing lists and IRC

Sponsors

The ifenslave and apt-listchanges work was sponsored by my employer. All other work was done on a volunteer basis.


Planet DebianChris Lamb: Free software activities in June 2020

Here is my monthly update covering what I have been doing in the free software world during June 2020 (previous month):

  • Opened two pull requests against the Ghostwriter distraction-free Markdown editor to:

    • Persist whether "focus mode" is enabled across between sessions. (#522)
    • Correct the ordering of the MarkdownAST::toString() debugging output. (#520)
  • Will McGugan's "Rich" is a Python library to output formatted text, tables, syntax etc. to the terminal. I filed a pull request in order to allow for easy enabling and disabling of displaying the file path in Rich's logging handler. (#115)

  • As part of my duties of being on the board of directors of the Open Source Initiative and Software in the Public Interest I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions regarding logistics and policy etc.

  • Filed a pull request against the PyQtGraph Scientific Graphics and graphical user interface library to make the documentation build reproducibly. (#1265)

  • Reviewed and merged a large number of changes by Pavel Dolecek to my Strava Enhancement Suite, a Chrome extension to improve the user experience on the Strava athletic tracker.

For Lintian, the static analysis tool for Debian packages:

  • Don't emit breakout-link for architecture-independent .jar files under /usr/lib. (#963939)
  • Correct a reference to override_dh_ in the long description of the excessive-debhelper-overrides tag. [...]
  • Update data/fields/perl-provides for Perl 5.030003. [...]
  • Check for execute_after and execute_before spelling mistakes just like override_*. [...]

§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:


§


Elsewhere in our tooling, I made the following changes to diffoscope including preparing and uploading versions 147, 148 and 149 to Debian:

  • New features:

    • Add output from strings(1) to ELF binaries. (#148)
    • Allow user to mask/filter diff output via --diff-mask=REGEX. (!51)
    • Dump PE32+ executables (such as EFI applications) using objdump(1). (#181)
    • Add support for Zsh shell completion. (#158)
  • Bug fixes:

    • Prevent a traceback when comparing PDF documents that did not contain metadata (ie. a PDF /Info stanza). (#150)
    • Fix compatibility with jsondiff version 1.2.0. (#159)
    • Fix an issue in GnuPG keybox file handling that left filenames in the diff. [...]
    • Correct detection of JSON files due to missing call to File.recognizes that checks candidates against file(1). [...]
  • Output improvements:

    • Use the CSS word-break property over manually adding U+200B zero-width spaces as these were making copy-pasting cumbersome. (!53)
    • Downgrade the tlsh warning message to an "info" level warning. (#29)
  • Logging improvements:

  • Testsuite improvements:

    • Update tests for file(1) version 5.39. (#179)
    • Drop accidentally-duplicated copy of the --diff-mask tests. [...]
    • Don't mask an existing test. [...]
  • Codebase improvements:

    • Replace obscure references to "WF" with "Wagner-Fischer" for clarity. [...]
    • Use a semantic AbstractMissingType type instead of remembering to check for both types of "missing" files. [...]
    • Add a comment regarding potential security issue in the .changes, .dsc and .buildinfo comparators. [...]
    • Drop a large number of unused imports. [...][...][...][...][...]
    • Make many code sections more Pythonic. [...][...][...][...]
    • Prevent some variable aliasing issues. [...][...][...]
    • Use some tactical f-strings to tidy up code [...][...] and remove explicit u"unicode" strings [...].
    • Refactor a large number of routines for clarity. [...][...][...][...]

trydiffoscope is the web-based version of diffoscope. This month, I specified a location for the celerybeat scheduler to ensure that the clean/tidy tasks are actually called which had caused an accidental resource exhaustion. (#12)


§


Debian

I filed three bugs against:

  • cmark: Please update the homepage URI. (#962576)
  • petitboot: Please update Vcs-Git urls. (#963123)
  • python-pauvre: FTBFS if the DISPLAY environment variable is exported. (#962698)

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 5¼ hours on its sister Extended LTS project.

  • Investigated and triaged angular.js [...],icinga2 [...], intel-microcode [...], jquery [...], pdns-recursor [...], unbound [...] & wordpress [...].

  • Frontdesk duties, including responding to user/developer questions, reviewing others' packages, participating in mailing list discussions as well as attending our contributor meeting.

  • Issued DLA 2233-1 to fix two issues in the Django web development framework in order to fix a potential data leakage via malformed memcached keys (CVE-2020-13254) and to prevent a cross-site scripting attack in the Django administration system (CVE-2020-13596). This was followed by DLA 2233-2 to address a regression as well as uploads to Debian stretch (1.10.7-2+deb9u9) and buster (1.11.29-1~deb10u1). (More info)

  • Issued DLA 2235-1 to prevent a file descriptor leak in the D-Bus message bus (CVE-2020-12049).

  • Issued DLA 2239-1 for a security module for using the TACACS+ authentication service to prevent an issue where shared secrets such as private server keys were being added in plaintext to various logs.

  • Issued DLA 2244-1 to address an escaping issue in PHPMailer, an email generation utility class for the PHP programming language.

  • Issued DLA 2252-1 for the ngircd IRC server as it was discovered that there was an out-of-bounds access vulnerability in the server-to-server protocol.

  • Issued DLA 2253-1 to resolve a vulnerability in Lynis, a security auditing tool, because a shared secret could be obtained by simple observation of the process list when a data upload is being performed.

You can find out more about the project via the following video:

Uploads

TEDBeauty everywhere: The talks of Session 6 of TED2020

We’re six weeks into TED2020! For session 6: a celebration of beauty on every level, from planet-trekking feats of engineering to art that deeply examines our past, present, future — and much more.

Planetary scientist Elizabeth “Zibi” Turtle shows off the work behind Dragonfly: a rotorcraft being developed to explore Titan, Saturn’s largest moon, by air. She speaks at TED2020: Uncharted on June 25, 2020. (Photo courtesy of TED)

Elizabeth “Zibi” Turtle, planetary scientist 

Big idea: The Dragonfly Mission, set to launch in 2026, will study Titan, the largest moon orbiting Saturn. Through this mission, scientists may discover the secrets of the solar system’s origin, the history of life on Earth — and even the potential for life beyond our planet.

How? Launched in 1997, the Cassini-Huygens Mission provided scientists with incredible information about Titan, a water-based moon with remarkable similarities to Earth. We learned that Titan’s geography includes sand dunes, craters and mountains, and that vast oceans of water — perhaps 10 times as large as Earth’s total supply — lie deep underneath Titan’s surface. In many ways, Titan is the closest parallel to pre-life, early Earth, Elizabeth Turtle explains. The Cassini-Huygens Mission ended in 2017, and now hundreds of scientists across the world are working on the Dragonfly Mission, which will dramatically expand our knowledge of Titan. Unlike the Cassini-Huygens spacecraft, Dragonfly will live within Titan’s atmosphere, flying across the moon to gather samples and study its chemical makeup, weather and geography. The data Dragonfly sends back may bring us closer to thrilling discoveries on the makeup of the solar system, the habitability of other planets and the beginnings of life itself. “Dragonfly is a search for greater understanding — not just of Titan and the mysteries of our solar system, but of our own origins,” Turtle says.


“Do you think human creativity matters?” asks actor, writer and director Ethan Hawke. He gives us his compelling answer at TED2020: Uncharted on June 25, 2020. (Photo courtesy of TED)

Ethan Hawke, actor, writer, director

Big idea: Creativity isn’t a luxury; it’s vital to the human experience.

How? We often struggle to give ourselves permission to be creative because we’re all a little suspect of our own talent, says Ethan Hawke. Recounting his own journey of creative discovery over a 30-year career in acting — along with the beauty he sees in everyday moments with his family — Hawke encourages us to reframe this counterproductive definition of human creativity. Creative expression has nothing to do with talent, he says, but rather is a process of learning who you are and how you connect to other people. Instead of giving in to the pull of old habits and avoiding new experiences — maybe you’re hesitant to enroll in that poetry course or cook that complicated 20-step recipe — Hawke urges us to engage in a rich variety of creative outlets and, most importantly, embrace feeling foolish along the way. “I think most of us really want to offer the world something of quality, something that the world will consider good or important — and that’s really the enemy,” Hawke says. “Because it’s not up to us whether what we do is any good. And if history has taught us anything, the world is an extremely unreliable critic. So, you have to ask yourself, do you think human creativity matters?”


Singer-songwriter and multi-instrumentalist Bob Schneider performs for TED2020: Uncharted on June 25, 2020. (Photo courtesy of TED)

Keeping the beauty of the session flowing, singer-songwriter Bob Schneider performs “Joey’s Song,” “The Other Side” and “Lorena.”


“We have thousands of years of ancient knowledge that we just need to listen to and allow it to expand our thinking about designing symbiotically with nature,” says architect Julia Watson. “By listening, we’ll only become wiser and ready for those 21st-century challenges that we know will endanger our people and our planet.” She speaks at TED2020: Uncharted on June 25, 2020. (Photo courtesy of TED)

Julia Watson, architect, landscape designer, author

Big idea: Ancient Indigenous technology can teach us how to design with nature, instead of against it, when facing challenges. We just need to look and listen. 

How? In her global search for ancient design systems and solutions, Julia Watson has encountered wondrous innovations to counter climate challenges that we all can learn from. “High-tech solutions are definitely going to help us solve some of these problems, but in our rush towards the future, we tend to forget about the past in other parts of the world,” she says. Watson takes us to the villages of Khasi, India, where people have built living bridges woven from ancient roots that strengthen over time to enable travel when monsoon season hits. She introduces us to a water-based civilization in the Mesopotamian Marshlands, where for 6,000 years, the Maʻdān people have lived on manmade islands built from harvested reeds. And she shows us a floating African city in Benin, where buildings are stilted above flooded land. “I’m an architect, and I’ve been trained to seek solutions in permanence, concrete, steel, glass. These are all used to build a fortress against nature,” Watson says. “But my search for ancient systems and Indigenous technologies has been different. It’s been inspired by an idea that we can seed creativity in crisis.”


TED Fellow and theater artist Daniel Alexander Jones lights up the stage at TED2020: Uncharted on June 25, 2020. (Photo courtesy of TED)

TED Fellow and theater artist Daniel Alexander Jones lights up the (virtual) stage by channeling Jomama Jones, a mystical alter ego who shares some much-needed wisdom. “What if I told you, ‘You will surprise yourself’?” Jomama asks. “What if I told you, ‘You will be brave enough’?”


“It takes creativity to be able to imagine a future that is so different from the one before you,” says artist Titus Kaphar. He speaks at TED2020: Uncharted on June 25, 2020. (Photo courtesy of TED)

Titus Kaphar, artist

Big idea: Beauty can open our hearts to difficult conversations.

How? A painting’s color, form or composition pulls you in, functioning as a kind of Trojan horse out of which difficult conversations can emerge, says artist Titus Kaphar. (See for yourself in his unforgettable live workshop from TED2017.) Two weeks after George Floyd’s death and the Movement for Black Lives protests that followed, Kaphar reflects on his evolution as an artist and takes us on a tour of his work — from The Jerome Project, which examines the US criminal justice system through the lens of 18th- and 19th-century American portraiture, to his newest series, From a Tropical Space, a haunting body of work about Black mothers whose children have disappeared. In addition to painting, Kaphar shares the work and idea behind NXTHVN, an arts incubator and creative community for young people in his hometown of Dixwell, Connecticut. “It takes creativity to be able to imagine a future that is so different from the one before you,” he says.

CryptogramAndroid Apps Stealing Facebook Credentials

Google has removed 25 Android apps from its store because they steal Facebook credentials:

Before being taken down, the 25 apps were collectively downloaded more than 2.34 million times.

The malicious apps were developed by the same threat group and despite offering different features, under the hood, all the apps worked the same.

According to a report from French cyber-security firm Evina shared with ZDNet today, the apps posed as step counters, image editors, video editors, wallpaper apps, flashlight applications, file managers, and mobile games.

The apps offered a legitimate functionality, but they also contained malicious code. Evina researchers say the apps contained code that detected what app a user recently opened and had in the phone's foreground.


Krebs on SecurityCOVID-19 ‘Breach Bubble’ Waiting to Pop?

The COVID-19 pandemic has made it harder for banks to trace the source of payment card data stolen from smaller, hacked online merchants. On the plus side, months of quarantine have massively decreased demand for account information that thieves buy and use to create physical counterfeit credit cards. But fraud experts say recent developments suggest both trends are about to change — and likely for the worse.

The economic laws of supply and demand hold just as true in the business world as they do in the cybercrime space. Global lockdowns from COVID-19 have resulted in far fewer fraudsters willing or able to visit retail stores to use their counterfeit cards, and the decreased demand has severely depressed prices in the underground for purloined card data.

An ad for a site selling stolen payment card data, circa March 2020.

That’s according to Gemini Advisory, a New York-based cyber intelligence firm that closely tracks the inventories of dark web stores trafficking in stolen payment card data.

Stas Alforov, Gemini’s director of research and development, said that since the beginning of 2020 the company has seen a steep drop in demand for compromised “card present” data — digits stolen from hacked brick-and-mortar merchants with the help of malicious software surreptitiously installed on point-of-sale (POS) devices.

Alforov said the median price for card-present data has dropped precipitously over the past few months.

“Gemini Advisory has seen over 50 percent decrease in demand for compromised card present data since the mandated COVID-19 quarantines in the United States as well as the majority of the world,” he told KrebsOnSecurity.

Meanwhile, the supply of card-present data has remained relatively steady. Gemini’s latest find — a 10-month-long card breach at dozens of Chicken Express locations throughout Texas and other southern states that the fast-food chain first publicly acknowledged today after being contacted by this author — saw an estimated 165,000 cards stolen from eatery locations recently go on sale at one of the dark web’s largest cybercrime bazaars.

“Card present data supply hasn’t wavered much during the COVID-19 period,” Alforov said. “This is likely due to the fact that most of the sold data is still coming from breaches that occurred in 2019 and early 2020.”

A lack of demand for and steady supply of stolen card-present data in the underground has severely depressed prices since the beginning of the COVID-19 pandemic. Image: Gemini Advisory

Naturally, crooks who ply their trade in credit card thievery also have been working from home more throughout the COVID-19 pandemic. That means demand for stolen “card-not-present” data — customer payment information extracted from hacked online merchants and typically used to defraud other e-commerce vendors — remains high. And so have prices for card-not-present data: Gemini found prices for this commodity actually increased slightly over the past few months.

Andrew Barratt is an investigator with Coalfire, the cyber forensics firm hired by Chicken Express to remediate the breach and help the company improve security going forward. Barratt said there’s another curious COVID-19 dynamic going on with e-commerce fraud recently that is making it more difficult for banks and card issuers to trace patterns in stolen card-not-present data back to hacked web merchants — particularly smaller e-commerce shops.

“One of the concerns that has been expressed to me is that we’re getting [fewer] overlapping hotspots,” Barratt said. “For a lot of the smaller, more frequently compromised merchants there has been a large drop off in transactions. Whilst big e-commerce has generally done okay during the COVID-19 pandemic, a number of more modest sized or specialty online retailers have not had the same access to their supply chain and so have had to close or drastically reduce the lines they’re selling.”

Banks routinely take groups of customer cards that have experienced fraudulent activity and try to see if some or all of them were used at the same merchant during a similar timeframe, a basic anti-fraud process known as “common point of purchase” or CPP analysis. But ironically, this analysis can become more challenging when there are fewer overall transactions going through a compromised merchant’s site, Barratt said.
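As a toy sketch of the CPP idea (the card and merchant names here are entirely hypothetical), take one "card merchant" pair per purchase in each fraud-reporting card's history and count how many distinct compromised cards touched each merchant; the merchant shared by the most cards is the breach candidate:

```shell
# Toy common-point-of-purchase analysis. Each input line is "card merchant"
# for a card that later reported fraud (hypothetical data). sort -u drops
# duplicate card/merchant pairs; awk counts distinct cards per merchant.
sort -u <<'EOF' | awk '{cards[$2]++} END {for (m in cards) print cards[m], m}' | sort -rn
card1 shopA
card1 shopB
card2 shopA
card2 shopC
card3 shopA
EOF
```

Here shopA tops the list with three compromised cards. With fewer overall transactions flowing through each merchant, these counts flatten out and the common point becomes much harder to spot, which is the problem Barratt describes.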

“With a smaller transactional footprint means less Common Point of Purchase alerts and less data to work on to trigger a forensic investigation or fraud alert,” Barratt said. “It does also mean less fraud right now – which is a positive. But one of the big concerns that has been raised to us as investigators — literally asking if we have capacity for what’s coming — has been that merchants are getting compromised by ‘lie in wait’ type intruders.”

Barratt says there’s a suspicion that hackers may have established beachheads [breachheads?] in a number of these smaller online merchants and are simply biding their time. If and when transaction volumes for these merchants do pick up, the concern is then hackers may be in a better position to mix the sale of cards stolen from many hacked merchants and further confound CPP analysis efforts.

“These intruders may have a beachhead in a number of small and/or middle market e-commerce entities and they’re just waiting for the transaction volumes to go back up again and they’ve suddenly got the capability to have skimmers capturing lots of card data in the event of a sudden uptick in consumer spending,” he said. “They’d also have a diverse portfolio of compromise so could possibly even evade common point of purchase detection for a while too. Couple all of that with major shopping cart platforms going out of support (like Magento 1 this month) and furloughed IT and security staff, and there’s a potentially large COVID-19 breach bubble waiting to pop.”

With a majority of payment cards issued in the United States now equipped with a chip that makes the cards difficult and expensive for thieves to clone, cybercriminals have continued to focus on hacking smaller merchants that have not yet installed chip card readers and are still swiping the cards’ magnetic stripe at the register.

Barratt said his company has tied the source of the breach to malware known as “PwnPOS,” an ancient strain of point-of-sale malware that first surfaced more than seven years ago, if not earlier.

Chicken Express CEO Ricky Stuart told KrebsOnSecurity that apart from “a handful” of locations his family owns directly, most of his 250 stores are franchisees that decide on their own how to secure their payment operations. Nevertheless, the company is now forced to examine each store’s POS systems to remediate the breach.

Stuart blamed the major point-of-sale vendors for taking their time in supporting and validating chip-capable payment systems. But when asked how many of the company’s 250 stores had chip-capable readers installed, Stuart said he didn’t know. Ditto for the handful of stores he owns directly.

“I don’t know how many,” he said. “I would think it would be a majority. If not, I know they’re coming.”

Planet Debian: Emmanuel Kasper: Learning openshift: a good moment to revisit awk too

I can’t believe I spent all these years using only grep.

Most of us know how to use awk to print the nth column of a file:

$ awk '{print $1}' /etc/hosts

will print all IP addresses from /etc/hosts

But you can also do filtering before printing the chosen column:

$ awk '$5 >= 2 {print $2}' /path/to/file

will print the second column of every line where the fifth column is greater than or equal to 2. That would have been hard with grep.

Now I can use that to find all deployments on my openshift cluster where the number of current replicas is at least 2.

$ oc get deployments --all-namespaces | awk '$5 >= 2 {print $2}'
NAME
oauth-openshift
console
downloads
router-default
etcd-quorum-guard
prometheus-adapter
thanos-querier
packageserver

I know that openshift/kubernetes both have a powerful query selector syntax, but for the moment awk will do.
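Incidentally, the NAME header row in the listing above slips through the filter: for the header line $5 is the string AVAILABLE, so awk falls back to string comparison, and "AVAILABLE" sorts after "2". Adding NR > 1 to skip the first line fixes that. A sketch against canned output (the heredoc stands in for the oc pipeline above):

```shell
# Same filter as above, plus NR > 1 to skip the header row.
# The heredoc is canned sample output standing in for
# `oc get deployments --all-namespaces`.
awk 'NR > 1 && $5 >= 2 {print $2}' <<'EOF'
NAMESPACE           NAME                READY   UP-TO-DATE   AVAILABLE
openshift-console   console             2/2     2            2
openshift-console   downloads           1/1     1            1
openshift-etcd      etcd-quorum-guard   3/3     3            3
EOF
```

This prints only console and etcd-quorum-guard, and the NAME header no longer appears.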

Worse Than Failure: CodeSOD: The Data Class

There has been a glut of date-related code in the inbox lately, so it’s always a treat when TRWTF isn’t how they fail to handle dates but something else entirely. For example, imagine you’re browsing a PHP codebase and see something like:

$fmtedDate = data::now();

You’d instantly know that something was up, just by seeing a class named data. That’s what got Vania’s attention. She dug in, and found a few things.

First, clearly, data is a terrible name for a class. It’d be a terrible name even for a data access layer, but this class has a method called now, which tells us that it’s not just handling data.

But it’s not handling data at all. data is more precisely a utility class: the dumping ground for methods that the developer couldn’t come up with a way to organize. It contains 58 methods, 38 of which are 100% static methods, 7 of which should have been declared static but weren’t, and the remainder actually interact with $this. All in all, this class must be incredibly “fun”.

Let’s look at the now implementation:

class data
{
    // ...

    public static function now()
    {
        return date('Y', time())."-".date('m', time())."-".date('d')." ".date('H').":".	date('i').":".	date('s');
    }
}

Finally, we get to your traditional bad date handling code. Instead of just using a date format string to get the desired output, we manually construct the string by invoking date a bunch of times. There are some “interesting” choices here: you’ll note that PHP’s date function accepts an optional timestamp parameter (so you can format an arbitrary moment in time), and sometimes they pass in the result of calling time() and sometimes they don’t. This is mostly not a problem, since date will invoke time itself if you don’t hand it one, so that’s just unnecessary.

But Vania adds some detail:

Because of the multiple calls to time() this code contains a subtle race condition. If it is called at, say, 2019-12-31 23:59:59.999, the date('Y', time()) part will evaluate to “2019”. If the time now ticks over to 2020-01-01 00:00:00.000, the next date() call will return a month value of “01” (and so on for the rest of the expression). The result is a timestamp of “2019-01-01 00:00:00”, which is off by a year. A similar issue happens at the end of every month, day, hour, and minute; i.e. every minute there is an opportunity for the result to be off by a minute.

It’s easy to fix, of course: you could just return date('Y-m-d H:i:s');, which does exactly the same thing, but correctly. Vania, however, has this to add:

Unfortunately there is no budget for making this kind of change to the application. Also, its original authors seem to have been a fan of “code reuse” by copy/paste: There are four separate instances of this now() function in the codebase, all containing exactly the same code.
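The underlying fix generalizes to any language: read the clock once and derive every formatted field from that single value. A shell rendering of the same idea (this assumes GNU date for the -d option, and is an illustration, not the original code):

```shell
# Read the clock exactly once ...
t=$(date +%s)
# ... then format every field from that one instant, so a rollover at
# midnight or new year between calls cannot mix two different dates.
stamp=$(date -d "@$t" '+%Y-%m-%d %H:%M:%S')
echo "$stamp"
```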


Planet Debian: Russ Allbery: Review: The Fifth Risk

Review: The Fifth Risk, by Michael Lewis

Publisher: W.W. Norton
Copyright: 2018
Printing: 2019
ISBN: 0-393-35745-7
Format: Kindle
Pages: 254

The Fifth Risk starts with the presidential transition. Max Stier, the first person profiled by Lewis in this book, is the founder of the Partnership for Public Service. That foundation helped push through laws to provide more resources and structure for the transition of the United States executive branch from one president to the next. The goal was to fight wasted effort, unnecessary churn, and pointless disruption in the face of each administration's skepticism about everyone who worked for the previous administration.

"It's Groundhog Day," said Max. "The new people come in and think that the previous administration and the civil service are lazy or stupid. Then they actually get to know the place they are managing. And when they leave, they say, 'This was a really hard job, and those are the best people I've ever worked with.' This happens over and over and over."

By 2016, Stier saw vast improvements, despite his frustration with other actions of the Obama administration. He believed their transition briefings were one of the best courses ever produced on how the federal government works. Then that transition process ran into Donald Trump.

Or, to be more accurate, that transition did not run into Donald Trump, because neither he nor anyone who worked for him were there. We'll never know how good the transition information was because no one ever listened to or read it. Meetings were never scheduled. No one showed up.

This book is not truly about the presidential transition, though, despite its presence as a continuing theme. The Fifth Risk is, at its heart, an examination of government work, the people who do it, why it matters, and why you should care about it. It's a study of the surprising and misunderstood responsibilities of the departments of the United States federal government. And it's a series of profiles of the people who choose this work as a career, not in the upper offices of political appointees, but deep in the civil service, attempting to keep that system running.

I will warn now that I am far too happy that this book exists to be entirely objective about it. The United States desperately needs basic education about the government at all levels, but particularly the federal civil service. The public impression of government employees is skewed heavily towards the small number of public-facing positions and towards paperwork frustrations, over which the agency usually has no control because they have been sabotaged by Congress (mostly by Republicans, although the Democrats get involved occasionally). Mental images of who works for the government are weirdly selective. The Coast Guard could say "I'm from the government and I'm here to help" every day, to the immense gratitude of the people they rescue, but Reagan was still able to use that as a cheap applause line in his attack on government programs.

Other countries have more functional and realistic social attitudes towards their government workers. The United States is trapped in a politically-fueled cycle of contempt and ignorance. It has to stop. And one way to help stop it is someone with Michael Lewis's story-telling skills writing a different narrative.

The Fifth Risk is divided into a prologue about presidential transitions, three main parts, and an afterword (added in current editions) about a remarkable government worker whom you likely otherwise would never hear about. Each of the main parts talks about a different federal department: the Department of Energy, the Department of Agriculture, and the Department of Commerce. In keeping with the theme of the book, the people Lewis profiles do not do what you might expect from the names of those departments.

Lewis's title comes from his discussion with John MacWilliams, a former Goldman Sachs banker who quit the industry in search of more personally meaningful work and became the chief risk officer for the Department of Energy. Lewis asks him for the top five risks he sees, and if you know that the DOE is responsible for safeguarding nuclear weapons, you will be able to guess several of them: nuclear weapons accidents, North Korea, and Iran. If you work in computer security, you may share his worry about the safety of the electrical grid. But his fifth risk was project management. Can the government follow through on long-term hazardous waste safety and cleanup projects, despite constant political turnover? Can it attract new scientists to the work of nuclear non-proliferation before everyone with the needed skills retires? Can it continue to lay the groundwork with basic science for innovation that we'll need in twenty or fifty years? This is what the Department of Energy is trying to do.

Lewis's profiles of other departments are similarly illuminating. The Department of Agriculture is responsible for food stamps, the most effective anti-poverty program in the United States with the possible exception of Social Security. The section on the Department of Commerce is about weather forecasting, specifically about NOAA (the National Oceanic and Atmospheric Administration). If you didn't know that all of the raw data and many of the forecasts you get from weather apps and web sites are the work of government employees, and that AccuWeather has lobbied Congress persistently for years to prohibit the NOAA from making their weather forecasts public so that AccuWeather can charge you more for data your taxes already paid for, you should read this book. The story of American contempt for government work is partly about ignorance, but it's also partly about corporations who claim all of the credit while selling taxpayer-funded resources back to you at absurd markups.

The afterword I'll leave for you to read for yourself, but it's the story of Art Allen, a government employee you likely have never heard of but whose work for the Coast Guard has saved more lives than we are able to measure. I found it deeply moving.

If you, like I, are a regular reader of long-form journalism and watch for new Michael Lewis essays in particular, you've probably already read long sections of this book. By the time I sat down with it, I think I'd read about a third in other forms on-line. But the profiles that I had already read were so good that I was happy to read them again, and the additional stories and elaboration around previously published material was more than worth the cost and time investment in the full book.

It was never obvious to me that anyone would want to read what had interested me about the United States government. Doug Stumpf, my magazine editor for the past decade, persuaded me that, at this strange moment in American history, others might share my enthusiasm.

I'll join Michael Lewis in thanking Doug Stumpf.

The Fifth Risk is not a proposal for how to fix government, or politics, or polarization. It's not even truly a book about the Trump presidency or about the transition. Lewis's goal is more basic: The United States government is full of hard-working people who are doing good and important work. They have effectively no public relations department. Achievements that would result in internal and external press releases in corporations, not to mention bonuses and promotions, go unnoticed and uncelebrated. If you are a United States citizen, this is your government and it does important work that you should care about. It deserves the respect of understanding and thoughtful engagement, both from the citizenry and from the politicians we elect.

Rating: 10 out of 10

Planet Debian: Craig Sanders: Fuck Grey Text

fuck grey text on white backgrounds
fuck grey text on black backgrounds
fuck thin, spindly fonts
fuck 10px text
fuck any size of anything in px
fuck font-weight 300
fuck unreadable web pages
fuck themes that implement this unreadable idiocy
fuck sites that don’t work without javascript
fuck reactjs and everything like it

thank fuck for Stylus. and uBlock Origin. and uMatrix.

Fuck Grey Text is a post from: Errata

Planet Debian: Norbert Preining: TeX Live Debian update 20200629

More than a month has passed since the last update of TeX Live packages in Debian, so here is a new checkout!

All arch:all packages have been updated to the tlnet state as of 2020-06-29; see the detailed update list below.

Enjoy.

New packages

akshar, beamertheme-pure-minimalistic, biblatex-unified, biblatex-vancouver, bookshelf, commutative-diagrams, conditext, courierten, ektype-tanka, hvarabic, kpfonts-otf, marathi, menucard, namedef, pgf-pie, pwebmac, qrbill, semantex, shtthesis, tikz-lake-fig, tile-graphic, utf8add.

Updated packages

abnt, achemso, algolrevived, amiri, amscls, animate, antanilipsum, apa7, babel, bangtex, baskervillef, beamerappendixnote, beamerswitch, beamertheme-focus, bengali, bib2gls, biblatex-apa, biblatex-philosophy, biblatex-phys, biblatex-software, biblatex-swiss-legal, bibleref, bookshelf, bxjscls, caption, ccool, cellprops, changes, chemfig, circuitikz, cloze, cnltx, cochineal, commutative-diagrams, comprehensive, context, context-vim, cquthesis, crop, crossword, ctex, cweb, denisbdoc, dijkstra, doclicense, domitian, dps, draftwatermark, dvipdfmx, ebong, ellipsis, emoji, endofproofwd, eqexam, erewhon, erewhon-math, erw-l3, etbb, euflag, examplep, fancyvrb, fbb, fbox, fei, fira, fontools, fontsetup, fontsize, forest-quickstart, gbt7714, genealogytree, haranoaji, haranoaji-extra, hitszthesis, hvarabic, hyperxmp, icon-appr, kpfonts, kpfonts-otf, l3backend, l3build, l3experimental, l3kernel, latex-amsmath-dev, latexbangla, latex-base-dev, latexdemo, latexdiff, latex-graphics-dev, latexindent, latex-make, latexmp, latex-mr, latex-tools-dev, libertinus-fonts, libertinust1math, lion-msc, listings, logix, lshort-czech, lshort-german, lshort-polish, lshort-portuguese, lshort-russian, lshort-slovenian, lshort-thai, lshort-ukr, lshort-vietnamese, luamesh, lua-uca, luavlna, lwarp, marathi, memoir, mnras, moderntimeline, na-position, newcomputermodern, newpx, nicematrix, nodetree, ocgx2, oldstandard, optex, parskip, pdfcrop, pdfpc, pdftexcmds, pdfxup, pgf, pgfornament, pgf-pie, pgf-umlcd, pgf-umlsd, pict2e, plautopatch, poemscol, pst-circ, pst-eucl, pst-func, pstricks, pwebmac, pxjahyper, quran, rec-thy, reledmac, rest-api, sanskrit, sanskrit-t1, scholax, semantex, showexpl, shtthesis, suftesi, svg, tcolorbox, tex4ht, texinfo, thesis-ekf, thuthesis, tkz-doc, tlshell, toptesi, tuda-ci, tudscr, twemoji-colr, univie-ling, updmap-map, vancouver, velthuis, witharrows, wtref, xecjk, xepersian-hm, xetex-itrans, xfakebold, xindex, xindy, xltabular, yathesis, ydoc, yquant, zref.


Cory Doctorow: Someone Comes to Town, Someone Leaves Town (part 08)

Here’s part eight of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Cryptogram: iPhone Apps Stealing Clipboard Data

iOS apps are repeatedly reading clipboard data, which can include all sorts of sensitive information.

While Haj Bakry and Mysk published their research in March, the invasive apps made headlines again this week with the developer beta release of iOS 14. A novel feature Apple added provides a banner warning every time an app reads clipboard contents. As large numbers of people began testing the beta release, they quickly came to appreciate just how many apps engage in the practice and just how often they do it.

This YouTube video, which has racked up more than 87,000 views since it was posted on Tuesday, shows a small sample of the apps triggering the new warning.


Worse Than Failure: Another Immovable Spreadsheet


Steve had been working as a web developer, but his background was in mathematics. Therefore, when a job opened up for internal transfer to the "Statistics" team, he jumped on it and was given the job without too much contest. Once there, he was able to meet the other "statisticians": a group of well-meaning businessfolk with very little mathematical background who used The Spreadsheet to get their work done.

The Spreadsheet was Excel, of course. To enter data, you had to cut and paste columns from various tools into one of the many sheets, letting the complex array of formulas calculate the numbers needed for the quarterly report. Shortly before Steve's transfer, there had apparently been a push to automate some of the processes with SAS, a tool much more suited to this sort of work than a behemoth of an Excel spreadsheet.

A colleague named Stu showed Steve the ropes. Stu admitted there was indeed a SAS process that claimed to do the same functions as The Spreadsheet, but nobody was using it because nobody trusted the numbers that came out of it.

Never the biggest fan of Excel, Steve decided to throw his weight behind the SAS process. He ran the SAS algorithms multiple times, giving the outputs to Stu to compare against the Excel spreadsheet output. For the first three iterations, everything seemed to match exactly. On the fourth, however, Stu told him that one of the outputs was off by 0.2.

To some, this was vindication of The Spreadsheet; after all, why would they need some fancy-schmancy SAS process when Excel worked just fine? Steve wasn't so sure. An error in the code might lead to a big discrepancy, but this sounded more like a rounding error than anything else.

Steve tracked down the relevant documentation for Excel and SAS, and found that both used 64-bit floating point numbers on the 32-bit Windows machines that the calculations were run on. Since both tools used the same representation, and all the calculations were addition and multiplication with no exponents, identical inputs should have produced identical results; the mistake had to be in either the Excel code or the SAS code.

Steve stepped through the SAS process, ensuring that the intermediate outputs in SAS matched the accompanying cells in the Excel sheet. When he'd just about given up hope, he found the issue: a ROUND command, right at the end of the chain where it didn't belong.
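The effect is easy to reproduce outside SAS. In this awk sketch (with hypothetical numbers), rounding an intermediate value to one decimal place makes two otherwise-identical calculation chains diverge:

```shell
# Two chains of the same arithmetic; one rounds an intermediate value
# to a single decimal place, as the stray ROUND command did.
awk 'BEGIN {
  x = 12.3456789
  printf "exact:   %.7f\n", x * 3
  r = int(x * 10 + 0.5) / 10   # round to one decimal place
  printf "rounded: %.7f\n", r * 3
}'
```

The first line prints 37.0370367, the second 36.9000000. Comparing only one displayed decimal place would hide the difference entirely.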

All of the SAS code in the building had been written by a guy named Brian. Even after Steve had taken over writing SAS, people still sought out Brian for updates and queries, despite his having other work to do.

Steve had no choice but to do the same. He stopped by Brian's cube, knocking perfunctorily before asking, "Why is there a ROUND command at the end of the SAS?"

"There isn't. What?" replied Brian, clearly startled out of his thinking trance.

"No, look, there is," replied Steve, waving a printout. "Why is it there?"

"Oh. That." Brian shrugged. "Excel was displaying only one decimal place for some godforsaken reason, and they wanted the SAS output to be exactly the same."

"I should've known," said Steve, disgustedly. "Stu told me it matched, but it can't have been matching exactly this whole time, not with rounding in there."

"Sure, man. Whatever."

Sadly, Steve was transferred again before the next quarterly run—this time to a company doing proper statistical analysis, not just calculating a few figures for the quarterly presentation. He instructed Stu how to check to fifteen decimal places, but didn't hold out much hope that SAS would come to replace the Excel sheet.

Steve later ran into Stu at a coffee hour. He asked about how the replacement was going.

"I haven't had time to check the figures from SAS," Stu replied. "I'm too busy with The Spreadsheet as-is."


Planet Debian: Norbert Preining: Cinnamon 4.6 for Debian

After a few rounds of testing in experimental, I have uploaded Cinnamon 4.6 packages to Debian/unstable. Nothing spectacular new besides the usual stream of fixes. Enjoy the new Cinnamon!

Planet Debian: Holger Levsen: 20200629-developers-reference-challenge

developer-reference challenge: get the bug count down to 0

As of now, the BTS counts 52 bugs for src:developers-reference, and I'd like to get this down to zero, because src:developers-reference just documents best practices, and that should be easy to do well.

So I've been trying to keep up the habit of fixing at least one bug per month, and I also try to do one upload per month. I very much welcome your patches, commits, MRs and bug reports, because otherwise it will take more than five years. I'm hopeful, though: src:developers-reference is maintained by the debian group on salsa, which counts something like a thousand people.

Please help and join the fun and responsibility. Even if only once every three months (or less) ;-) And thank you, it's awesome to be in this together.


Cryptogram: COVID-19 Risks of Flying

I fly a lot. Over the past five years, my average speed has been 32 miles an hour. That all changed mid-March. It's been 105 days since I've been on an airplane -- longer than any other time in my adult life -- and I have no future flights scheduled. This is all a prelude to saying that I have been paying a lot of attention to the COVID-related risks of flying.

We know a lot more about how COVID-19 spreads than we did in March. The "less than six feet, more than ten minutes" model has given way to a much more sophisticated model involving airflow, the level of virus in the room, and the viral load in the person who might be infected.

Regarding airplanes specifically: on the whole, they seem safer than many other group activities. Of all the research about contact tracing results I have read, I have seen no stories of a sick person on an airplane infecting other passengers. There are no superspreader events involving airplanes. (That did happen with SARS.) It seems that the airflow inside the cabin really helps.

Airlines are trying to make things better: blocking middle seats, serving less food and drink, trying to get people to wear masks. (This video is worth watching.) I've started to see airlines requiring masks and banning those who won't, and not just strongly encouraging them. (If mask wearing is treated the same as the seat belt wearing, it will make a huge difference.) Finally, there are a lot of dumb things that airlines are doing.

This article interviewed 511 epidemiologists, and the general consensus was that flying is riskier than getting a haircut but less risky than eating in a restaurant. I think that most of the risk is pre-flight, in the airport: crowds at the security checkpoints, gates, and so on. And that those are manageable with mask wearing and situational awareness. So while I am not flying yet, I might be willing to soon. (It doesn't help that I get a -1 on my COVID saving throw for type A blood, and another -1 for male pattern baldness. On the other hand, I think I get a +3 Constitution bonus. Maybe, instead of sky marshals we can have high-level clerics on the planes.)

And everyone: wear a mask, and wash your hands.

EDITED TO ADD (6/27): Airlines are starting to crowd their flights again.

Planet Debian: Russ Allbery: PGP::Sign 1.00

This is the first new release of PGP::Sign in 13 years, so it's long-overdue. I have finally updated it in preparation for creating a new, more modern signing key for the Big Eight Usenet hierarchies and issuing control messages with both the old and new keys, using GnuPG v2 for the new key.

The biggest change in this release is that it drops support for all OpenPGP implementations other than GnuPG, and adds support for GnuPG v2. I think some of the other PGP implementations are still around, but I haven't seen them in years and have no way to test against them, so it didn't seem worthwhile to continue to support them. GnuPG v2 support is obviously long-overdue, given that we're getting close to the point where GnuPG v1 will start disappearing from distributions. The default backend is now GnuPG v2, although the module can be configured to use GnuPG v1 instead.

This release also adds a new object-oriented API. When I first wrote this module, it was common in the Perl community to have functional APIs configured with global variables. Subsequently we've learned this is a bad idea for a host of reasons, and I finally got around to redoing the API. It's still not perfect (in particular, the return value of the verify method is still a little silly), but it's much nicer. The old API is still supported, implemented as a shim in front of the new API.

A few other, more minor changes in this release:

  • Status output from GnuPG is now kept separate from human-readable log and error output for more reliable parsing.

  • Pass --allow-weak-digest-algos to GnuPG so it can use old keys and verify signatures from old keys, such as those created with PGP 2.6.2.

  • pgp_sign, when called in array context, now always returns "GnuPG" as the version string, and the version passed into pgp_verify is always ignored. Including the OpenPGP implementation version information in signatures is obsolete; GnuPG no longer does it by default and it serves no useful purpose.

  • When calling pgp_sign multiple times in the same process with whitespace munging enabled, trailing whitespace without a newline could have leaked into the next invocation of pgp_sign, resulting in an invalid signature. Clear any remembered whitespace between pgp_sign invocations.
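The whitespace-leak item is an instance of a general pattern: any streaming munger that holds back possibly-trailing whitespace carries hidden state that must be cleared between runs. Here is a minimal sketch in Python (not the module's Perl, and the class and method names are invented for illustration, not PGP::Sign's actual API):

```python
class WhitespaceMunger:
    """Streams text, stripping whitespace that turns out to be trailing
    (i.e., immediately before a newline). Illustrative sketch only."""

    def __init__(self) -> None:
        self._pending = ""  # whitespace seen, not yet known to be trailing

    def feed(self, chunk: str) -> str:
        out = []
        for ch in chunk:
            if ch == "\n":
                self._pending = ""         # it was trailing: drop it
                out.append(ch)
            elif ch in " \t":
                self._pending += ch        # maybe trailing: hold it back
            else:
                out.append(self._pending)  # it wasn't trailing: emit it
                self._pending = ""
                out.append(ch)
        return "".join(out)

    def reset(self) -> None:
        # The fix described above: forget held-back whitespace between
        # signing runs, so input ending in bare spaces cannot leak into
        # (and invalidate the signature of) the next run.
        self._pending = ""
```

Without the `reset()` between runs, input ending in `"text  "` (no newline) leaves two spaces buffered, and they are emitted at the start of the next run's output, silently changing the data that gets signed.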

It also now uses IPC::Run and File::Temp instead of hand-rolling equivalent functionality, and the module build system is now Module::Build.

You can get the latest version from CPAN or from the PGP::Sign distribution page.

,

Planet DebianSteinar H. Gunderson: Two Chinese video encoders

ENC1 and UHE265-1-Mini

I started a project recently (now on hold) that would involve streaming from mobile cameras over 4G. Since I wanted something with better picture quality than mobile phones on gimbals, the obvious question became: How can you take in HDMI (or SDI) and stream it out over SRT?

We looked at various SBCs coupled with video capture cards, but it didn't really work out; H.264 software encoding of 720p60/1080p60 (or even 1080i60 or 1080p30) is a tad too intensive for even the fastest ARM SoCs currently, USB3 is a pain on many of them, and the x86 SoCs are pretty expensive. Many of them have hardware encoding, but it's amazingly crappy quality-wise; borderline unusable even at 5 Mbit/sec. (Intel's Quick Sync is heaven compared to what the Raspberry Pi 4 has!)

So I started looking for all-in-one cheap encoder boxes; ideally with 4G or 802.11 out, but an Ethernet jack is OK to couple with a phone and a USB OTG adapter. I found three manufacturers that all make cheap gear, namely LinkPi, URayTech and Unisheen. Long story short, I ended up ordering one from each of the first two; namely their base HEVC models, the LinkPi ENC1 and URayTech's UHE265-1-Mini. (I wanted HEVC to offset some of the quality loss of going hardware encoding; my general sentiment is that the advantages of HEVC are somewhat more pronounced in cheap hardware than when realtime-encoding in software.)

The ENC1 is ostensibly 599 yuan (~$85, ~€75), but good luck actually getting it for that price. I spent an hour or so fiddling with their official store at Taobao (sort of like Aliexpress for Chinese domestic users), only to figure out in the very last step that they didn't ship from there outside of China. They have an international reseller, but they charge $130 plus shipping, on very shaky grounds (“we need to employ more people late at night to be able to respond to international support inquiries fast enough”). It's market segmentation, of course. There are supposedly companies that will help you buy from Taobao and bounce the goods on to you, but I didn't try them; I eventually went to eBay and got one for €120 (+VAT) with free (very slow!) shipping.

UHE265-1-Mini was pricier but easier; I bought one directly off of URayTech's Aliexpress store for $184.80, and it was shipped by FedEx in about five days.

Externally, they look fairly similar; the ENC1 (on top) is a bit thicker but generally smaller. The ENC1 also has a handy little OLED display that shows IP address, video input and a few other things (it's a few degrees uneven, though; true Chinese quality!). There are also two buttons, but they are fairly useless (you can't map them to anything except start/stop streaming and recording).

Both boxes can do up to 1080p60 input, but somehow, the ENC1 can only do 1080p30 or 720p60 encoding; it needs to either scale or decimate. (If you ask it to do 1080p60, it will make a valiant attempt even though it throws up a warning box, but it ends up lagging behind by a lot, so it seems to be useless.) The URayTech box, however, happily trots along with no issue. This is pretty bizarre when you consider that both are based on the same HiSilicon SoC (the HI3520DV400). On pure visual inspection on very limited test data, the HEVC output from the URayTech looks a bit better than the ENC1, too, but I might just be fooling myself here.

Both have web interfaces; LinkPi's looks a bit more polished, while URayTech's has a tiny bit more low-level control, such as the ability to upload an EDID. (In particular, after a bug fix, I can set SRT stream IDs on the URayTech, which is fairly important to me.) They let you change codecs, stream endpoints, protocols, bit rates, set up multiple independent encodes etc. easily enough.

Both of them also have Telnet interfaces (and the UHE265-1-Mini has SSH, too); neither included any information on what the username and password were, though. I asked URayTech's support what the password was, and their answer was that they didn't know (!) but I could try “unisheen” (!!). And lo and behold, “unisheen” didn't work, but “neworangetech”, Unisheen's default password, worked and gave me a HiLinux root shell with kernel 3.8.20. No matter the reason, I suppose this means I can assume the Unisheen devices are fairly similar…

LinkPi's support was less helpful; they refused to give me the password and suggested I talk to the seller. However, one of their update packages (which requires registration on Gitee, a Chinese GitHub clone—but at least in the open, unlike URayTech's, which you can only get from their support) contained an /etc/passwd with some DES password hashes. Three of them were simply the empty password, while the root password fell to some Hashcat brute-forcing after a few hours. It's “linkpi.c” (DES passwords are truncated after 8 characters, so I suppose they intended to have “linkpi.cn”.) It also presents a HiLinux root shell with kernel 3.8.20, but rather different userspace and 256 MB flash instead of 32 MB or so.
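The 8-character truncation mentioned above is a property of traditional DES-based crypt(3): only the low 7 bits of each of the first 8 password bytes go into the 56-bit DES key, so “linkpi.c” and “linkpi.cn” hash identically. A small Python sketch of just the key-derivation step (a hypothetical helper, not a real DES implementation):

```python
def des_crypt_key(password: str) -> bytes:
    """Derive the key material traditional DES crypt(3) actually uses:
    the low 7 bits of each of the first 8 characters. Anything past
    byte 8 is silently ignored, hence the truncation."""
    return bytes(b & 0x7F for b in password.encode("ascii")[:8])

# Any two passwords agreeing on their first 8 bytes get the same key...
assert des_crypt_key("linkpi.cn") == des_crypt_key("linkpi.c")
# ...while a difference within the first 8 bytes yields different keys.
assert des_crypt_key("linkpi.x") != des_crypt_key("linkpi.c")
```

This is also why the brute-force only ever had to search 8-character candidates: longer guesses collapse onto their 8-byte prefixes.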

GPL violations are rampant, of course. Both ship Linux with no license text (even though UHE265-1-Mini comes with a paper manual) and no source in sight. Both ship FFmpeg, although URayTech has configured it in LGPL mode, unlike LinkPi, which has a full-on GPL build and links to it from various proprietary libraries. (URayTech ships busybox; I don't think LinkPi does, but I could be mistaken.) LinkPi's FFmpeg build also includes FDK-AAC, which is GPL-incompatible, so the entire shebang is non-redistributable. So both are bad, but the ENC1 is definitely worse. I've sent a request to support for source code, but I have zero belief that I'll receive any.

Both manufacturers have fancier, more expensive boxes, but they seem to diverge a bit. LinkPi is more oriented towards rack gear and also development boards (their main business seems to be selling chips to camera manufacturers; the ENC1 and similar look most of all like demos). URayTech, on the other hand, sells a bewildering amount of upgrades; they have variants that are battery-powered (using standard Sony camera battery form factors) and coldshoe mount, variants with Wi-Fi, variants with LTE, dual-battery options… (They also have some rack options.) I believe Unisheen also goes in this direction. OTOH, the ENC1 has a USB port, which can seemingly be used fairly flexibly; you can attach a hard drive for recording (or video playback), a specific Wi-Fi dongle and even the ubiquitous Huawei E3276 LTE stick for 4G connectivity. I haven't tried any of this, but it's certainly a nice touch. It also has some IP input support, for transcoding, and HDMI out. I don't think there's much IPv6 support on either, sadly, although the kernel does support it on both, and perhaps some push support leaks in through FFmpeg (I haven't tested much).

Finally, both support various overlays, mixing of multiple sources and such. I haven't looked a lot at this.

So, which one is better? I guess that on paper, the ENC1 is smaller, cheaper (especially if you can get it at the Chinese price!), has the nice OLED display and more features. But if the price doesn't matter as much, the UHE265-1-Mini wins with the 1080p60 support, and I really ought to look more closely at what's going on with that quality difference (i.e., whether it exists at all). It somehow just gives me better vibes, and I'm not really sure why; perhaps it's just because I received it so much earlier. Both are very interesting devices, and I'd love to be able to actually take them into the field some day.

Krebs on SecurityRussian Cybercrime Boss Burkov Gets 9 Years

A well-connected Russian hacker once described as “an asset of supreme importance” to Moscow was sentenced on Friday to nine years in a U.S. prison after pleading guilty to running a site that sold stolen payment card data, and to administering a highly secretive crime forum that counted among its members some of the most elite Russian cybercrooks.

Alexei Burkov, seated second from right, attends a hearing in Jerusalem in 2015. Photo: Andrei Shirokov / Tass via Getty Images.

Aleksei Burkov of St. Petersburg, Russia, admitted to running CardPlanet, a site that sold more than 150,000 stolen credit card accounts, and to being a founder of DirectConnection — a closely guarded underground community that attracted some of the world’s most-wanted Russian hackers.

As KrebsOnSecurity noted in a November 2019 profile of Burkov’s hacker nickname ‘k0pa,’ “a deep dive into the various pseudonyms allegedly used by Burkov suggests this individual may be one of the most connected and skilled malicious hackers ever apprehended by U.S. authorities, and that the Russian government is probably concerned that he simply knows too much.”

Burkov was arrested in 2015 on an international warrant while visiting Israel, and over the ensuing four years the Russian government aggressively sought to keep him from being extradited to the United States.

When Israeli authorities turned down requests to send him back to Russia — supposedly to face separate hacking charges there — the Russians then imprisoned Israeli citizen Naama Issachar on trumped-up drug charges in a bid to trade prisoners. Nevertheless, Burkov was extradited to the United States in November 2019. Russian President Vladimir Putin pardoned Issachar in January 2020, just hours after Burkov pleaded guilty.

Arkady Bukh is a New York attorney who has represented a number of accused and convicted cybercriminals from Eastern Europe and Russia. Bukh said he suspects Burkov did not cooperate with Justice Department investigators apart from agreeing not to take the case to trial.

“Nine years is a huge sentence, and the government doesn’t give nine years to defendants who cooperate,” Bukh said. “Also, the time span [between Burkov’s guilty plea and sentencing] was very short.”

DirectConnection was something of a Who’s Who of major cybercriminals, and many of its most well-known members have likewise been extradited to and prosecuted by the United States. Those include Sergey “Fly” Vovnenko, who was sentenced to 41 months in prison for operating a botnet and stealing login and payment card data. Vovnenko also served as administrator of his own cybercrime forum, which he used in 2013 to carry out a plan to have Yours Truly framed for heroin possession.

As noted in last year’s profile of Burkov, an early and important member of DirectConnection was a hacker who went by the moniker “aqua” and ran the banking sub-forum on Burkov’s site. In December 2019, the FBI offered a $5 million bounty for information leading to the arrest and conviction of aqua, who’s been identified as Maksim Viktorovich Yakubets. The Justice Department says Yakubets/aqua ran a transnational cybercrime organization called “Evil Corp.” that stole roughly $100 million from victims.

In this 2011 screenshot of DirectConnection, we can see the nickname of “aqua,” who ran the “banking” sub-forum on DirectConnection. Aqua, a.k.a. Maksim V. Yakubets of Russia, now has a $5 million bounty on his head from the FBI.

According to a statement of facts in Burkov’s case, the author of the infamous SpyEye banking trojan — Aleksandr “Gribodemon” Panin — was personally vouched for by Burkov. Panin was sentenced in 2016 to more than nine years in prison.

Other top DirectConnection members include convicted credit card fraudsters Vladislav “Badb” Horohorin and Sergey “zo0mer” Kozerev, as well as the infamous spammer and botnet master Peter “Severa” Levashov.

Also on Friday, the Justice Department said it obtained a guilty plea from another top cybercrime forum boss — Sergey “Stells” Medvedev — who admitted to administering the Infraud forum. The government says Infraud, whose slogan was “In Fraud We Trust,” attracted more than 10,000 members and inflicted more than $568 million in actual losses from the sale of stolen identity information, payment card data and malware.

A copy of the 108-month judgment entered against Burkov is available here (PDF).

Planet DebianRussell Coker: Links June 2020

Bruce Schneier wrote an informative post about Zoom security problems [1]. He recommends Jitsi, which is free software and has a Debian package.

Axel Beckert wrote an interesting post about keyboards with small numbers of keys, as few as 28 [2]. It’s not something I’d ever want to use, but interesting to read from a computer science and design perspective.

The Guardian has a disturbing article explaining why we might never get a good Covid19 vaccine [3]. If that happens it will change our society for years if not decades to come.

Matt Palmer wrote an informative blog post about private key redaction [4]. I learned a lot from that. Probably the simplest summary is that you should never publish sensitive data unless you are certain that everything you are publishing is suitable; if you don’t understand it, then you don’t know whether it’s suitable to be published!

This article by Umair Haque on eand.co has some interesting points about how Freedom is interpreted in the US [5].

This article by Umair Haque on eand.co has some good points about how messed up the US is economically [6]. I think that his analysis is seriously let down by omitting the savings that could be made by amending the US healthcare system without serious changes (e.g., by controlling drug prices) and by reducing the scale of the US military (there will never be another war like WW2 because any large scale war will be nuclear). If the US government could significantly cut spending in a couple of major areas they could then put the money towards fixing some of the structural problems and bootstrapping a first-world economic system.

The American Conservative has an insightful article “Seven Reasons Police Brutality is Systemic, Not Anecdotal” [7].

Scientific American has an informative article about how genetic engineering could be used to make a Covid-19 vaccine [8].

Rike wrote an insightful post about How Language Changes Our Concepts [9]. They cover the differences between the French, German, and English languages based on gender and on how the language limits thoughts. They conclude with the need to remove terms like master/slave and blacklist/whitelist from our software, with a focus on Debian but it’s applicable to all software.

Gunnar Wolf also wrote an insightful post On Masters and Slaves, Whitelists and Blacklists [10], they started with why some people might not understand the importance of the issue and then explained some ways of addressing it. The list of suggested terms includes Primary-secondary, Leader-follower, and some other terms which have slightly different meanings and allow more precision in describing the computer science concepts used. We can be more precise when describing computer science while also not using terms that marginalise some groups of people, it’s a win-win!

Both Rike and Gunnar were responding to a LWN article about the plans to move away from Master/Slave and Blacklist/Whitelist in the Linux kernel [11]. One of the noteworthy points in the LWN article is that there are about 70,000 instances of words that need to be changed in the Linux kernel so this isn’t going to happen immediately. But it will happen eventually which is a good thing.

Planet Linux AustraliaDonna Benjamin: Vale Marcus de Rijk

Posted by kattekrab on Sat, 27/06/2020 - 10:16.

,

Planet DebianDirk Eddelbuettel: littler 0.3.11: docopt updates

max-heap image

The twelfth release of littler as a CRAN package is now available, following in the fourteen-ish year history as a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package which Rscript only started to do in recent years.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH.

A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release mostly responds to the recent docopt release 0.7.0 which brought a breaking change for quoted arguments. In short, it is for the better because an option --as-cran is now available parsed as opt$as_cran, which is easier than the earlier form where we needed to back-tick protect as-cran, which contains a dash. We also added a new portmanteau-ish option to roxy.r.

The NEWS file entry is below.

Changes in littler version 0.3.11 (2020-06-26)

  • Changes in examples

    • Scripts check.r and rcc.r updated to reflect updated docopt 0.7.0 behaviour of quoted arguments

    • The roxy.r script has a new ease-of-use option -f | --full regrouping two other options.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian, and soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianChris Lamb: On the pleasure of hating

People love to tell you that they "don't watch sports" but the story of Lance Armstrong provides a fascinating lens through which to observe our culture at large.

For example, even granting all that he did and all the context in which he did it, why do sports cheats act like a lightning rod for such an instinctive hatred? After all, the sheer level of distaste directed at people such as Lance eludes countless other criminals in our society, many of whom have taken a lot more with far fewer scruples. The question is not one of logic or rationality, but of proportionality.

In some ways it should be unsurprising. In all areas of life, we instinctively prefer binary judgements to moral ambiguities and the sports cheat is a cliché of moral bankruptcy — cheating at something so seemingly trivial as a sport actually makes it more, not less, offensive to us. But we then find ourselves strangely enthralled by them, drawn together in admiration of their outlaw-like tenacity, placing them strangely close to criminal folk heroes. Clearly, sport is not as unimportant as we like to claim it is. In Lance's case in particular though, there is undeniably a Shakespearean quality to the story and we are forced to let go of our strict ideas of right and wrong and appreciate all the nuance.


§


There is a lot of this nuance in Marina Zenovich's new documentary. In fact, there's a lot of everything. At just under four hours, ESPN's Lance combines the duration of a Tour de France stage with the depth of the peloton — an endurance event compared to the bite-sized hagiography of Michael Jordan's The Last Dance.

Even for those who follow Armstrong's story like a mini-sport in itself, Lance reveals new sides to this man for all seasons. For me, not only was this captured in his clumsy approximations at being a father figure but also in him being asked something I had not read in countless tell-all books: did his earlier experiments in drug-taking contribute to his cancer?

But even in 2020 there are questions that remain unanswered. By needlessly returning to the sport in 2009, did Lance subconsciously want to get caught? Why does he not admit he confessed to Betsy Andreu back in 1999 but will happily apologise to her today for slurring her publicly on this very point? And why does he remain so vindictive towards former teammate Floyd Landis? In all of Armstrong's evasions and masterful control of the narrative, there is the gnawing feeling that we don't even know what questions we should even be asking. As ever, the questions are more interesting than the answers.


§


Lance also reminded me of professional cycling's obsession with national identity. Although I was intuitively aware of it to some degree, I had not fully grasped how much this kind of stereotyping runs through the veins of the sport itself, just like the drugs themselves. Journalist Daniel Friebe first offers us the portrait of:

Spaniards tend to be modest, very humble. Very unpretentious. And the Italians are loud, vain and outrageous showmen.

Former directeur sportif Johan Bruyneel then asserts that "Belgians are hard workers... they are ambitious to a certain point, but not overly ambitious", and cyclist Jörg Jaksche concludes with:

The Germans are very organised and very structured. And then the French, now I have to be very careful because I am German, but the French are slightly superior.

This kind of lazy caricature is nothing new, especially for those brought up on a solid diet of Tintin and Asterix, but although all these examples are seemingly harmless, why does the underlying idea of ascribing moral, social or political significance to genetic lineage remain so durable in today's age of anti-racism? To be sure, culture is not quite the same thing as race, but being judged by the character of one's ancestors rather than the actions of an individual is, at its core, one of the many conflations at the heart of racism. There is certainly a large amount of cognitive dissonance at work, especially when Friebe elaborates:

East German athletes were like incredible robotic figures, fallen off a production line somewhere behind the Iron Curtain...

... but then Übermensch Jan Ullrich is immediately described as "emotional" and "struggled to live the life of a professional cyclist 365 days a year". We see the habit of stereotyping is so ingrained that even in the face of this obvious contradiction, Friebe unironically excuses Ullrich's failure to live up to his German roots due to him actually being "Mediterranean".


§


I mention all this as I am known within my circles for remarking on these national characters, even collecting stereotypical examples of Italians 'being Italian' and the French 'being French' at times. Contrary to evidence, I don't believe in this kind of innate quality but what I do suspect is that people generally behave how they think they ought to behave, perhaps out of sheer imitation or the simple pleasure of conformity. As the novelist Will Self put it:

It's quite a complicated collective imposture, people pretending to be British and people pretending to be French, and then they get really angry with each other over what they're pretending to be.

The really remarkable thing about this tendency is that even if we consciously notice it there is seemingly no escape — even I could not help smirking when I considered that a brash Texan winning the Tour de France actually combines two of America's cherished obsessions: winning... and annoying the French.

CryptogramFriday Squid Blogging: Fishing for Jumbo Squid

Interesting article on the rise of the jumbo squid industry as a result of climate change.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDConversations on capitalism and climate change: Week 6 of TED2020

For week 6 of TED2020, experts in the economy and climate put a future driven by sustainable transformation into focus. Below, a recap of insights shared throughout the week.

Economist Mariana Mazzucato talks about how to make sure the trillions we’re investing in COVID-19 recovery are actually put to good use — and explores how innovative public-private partnerships can drive change. She speaks with TED Global curator Bruno Giussani at TED2020: Uncharted on June 22, 2020. (Photo courtesy of TED)

Mariana Mazzucato, economist

Big idea: Government can (and should) play a bold, dynamic and proactive role in shaping markets and sparking innovation — working together with the private sector to drive deep structural change.

How? In the face of three simultaneous crises — health, finance and climate — we need to address underlying structural problems instead of hopping from one crisis to the next, says Mariana Mazzucato. She calls for us to rethink how government and financial systems work, shifting towards a system in which the public sector creates value and takes risks. (Learn more about value creation in Mazzucato’s talk from 2019.) “We need a different [economic] framing, one that’s much more about market cocreation and market-shaping, not market fixing,” she says. How do you shape a market? Actively invest in essential systems like health care and public education, instead of just responding once the system is already broken. Mazzucato calls for businesses and government to work together around a new social contract — one that brings purpose and stakeholder value to the center of the ecosystem. To motivate this, she makes the case for a mission-oriented approach, whereby public entities, corporations and small businesses focus their various efforts on a big problem like climate change or COVID-19. It starts with an inspirational challenge, Mazzucato says, leading to projects that galvanize innovation and catalyze bottom-up experimentation.


“When survival is at stake, and when our children and future generations are at stake, we’re capable of more than we sometimes allow ourselves to think we can do,” says climate advocate Al Gore. “This is such a time. I believe we will rise to the occasion and we will create a bright, clean, prosperous, just and fair future. I believe it with all my heart.” Al Gore speaks with head of TED Chris Anderson at TED2020: Uncharted on June 23, 2020. (Photo courtesy of TED)

Al Gore, climate advocate

Big idea: To continue lowering emissions, we must focus on transitioning manufacturing, transportation and agriculture to wind- and solar-powered electricity.

How? As coronavirus put much of the world on pause, carbon emissions dropped by five percent. But keeping those rates down to reach the Paris Climate Agreement goal of zero emissions by 2050 will require active change in our biggest industries, says climate advocate Al Gore. He discusses how the steadily declining cost of wind- and solar-generated electricity will transform transportation, manufacturing and agriculture, while creating millions of new jobs and offering a cleaner and cheaper alternative to fossil fuels and nuclear energy. He offers specific measures we can implement, such as retrofitting inefficient buildings, actively managing forests and oceans and adopting regenerative agriculture like sequestering carbon in topsoil. With serious national plans, a focused global effort and a new generation of young people putting pressure on their employers and political parties, Gore is optimistic about tackling climate change. “When survival is at stake, and when our children and future generations are at stake, we’re capable of more than we sometimes allow ourselves to think we can do,” he says. “This is such a time. I believe we will rise to the occasion and we will create a bright, clean, prosperous, just and fair future. I believe it with all my heart.” Watch the full conversation here.


“We collectively own the capital market, and we are all universal owners,” says financier Hiro Mizuno. “So let’s work together to make the whole capital market and business more sustainable and protect our own investment and our own planet.” He speaks with TED business curator Corey Hajim at TED2020: Uncharted on June 24, 2020. (Photo courtesy of TED)

Hiro Mizuno, financier and former chief investment officer of Japan’s Government Investment Pension Fund

Big idea: For investors embracing ESG principles (responsible investing guided by environmental, social and governance factors), it’s not enough to “break up” with the bad actors in our portfolios. If we really want zero-carbon markets, we must also tilt towards the good global business citizens and incentivize sustainability for the market as a whole.

How? Hiro Mizuno believes that fund managers have two main tools at their disposal to help build a more sustainable market. First, steer funds towards businesses that are transforming to become more sustainable — because if we just punish those that aren’t, we’re merely allowing irresponsible investors to reap their profits. Second, fund managers must take a more active role in the governance of companies via proxy voting in order to lead the fight against climate change. “We collectively own the capital market, and we are all universal owners,” Mizuno says. “So let’s work together to make the whole capital market and business more sustainable and protect our own investment and our own planet.”


What would happen if we shifted our stock-market mindset to encompass decades, lifetimes or even generations? Michelle Greene, president of the Long-Term Stock Exchange, explores that idea in conversation with Chris Anderson and Corey Hajim as part of TED2020: Uncharted on June 24, 2020. (Photo courtesy of TED)

Michelle Greene, president of the Long-Term Stock Exchange

Big idea: In today’s markets, investors tend to think in daily, quarterly and yearly numbers — and as a result, we have a system that rewards short-term decisions that harm the long-term health of our economy and the planet. What would happen if we shifted that mindset to encompass decades, lifetimes or even generations?

How? In order to change how companies “show up” in the world, we need to change the playing field entirely. And since the stock exchange makes the rules that govern trading, why not create a new one? By holding companies to binding rules, the Long-Term Stock Exchange does just that, with mandatory listing standards built around core principles like diversity and inclusion, investment in employees and environmental responsibility. “What we’re trying to do is create a place where companies can maintain their focus on their long-term mission and vision, and at the same time be accountable for their impact on the broader world,” Greene says.

Planet DebianNorbert Preining: KDE/Plasma 5.19.2 for Debian

After the long wait (or should I say The Long Dark) finally Qt 5.14 has arrived in Debian/unstable, and thus the doors for KDE/Plasma 5.19 are finally open.

I have been preparing this release for quite some time, but since Debian/unstable was still on Qt 5.12, I could only test it in a virtual machine using Debian/experimental. But now, finally, a full upgrade to Plasma 5.19(.2) has arrived.

Unfortunately, it turned out that the OBS build servers are either overloaded or broken, as they do not properly build the necessary packages. Thus, I am making the Plasma 5.19.2 (and Frameworks) packages available via my own server, for amd64. Please use the following apt line:

deb https://www.preining.info/debian/ unstable kde

(The source packages are also there, just in case you want to rebuild them for i386).

Note that this requires my GPG key to be imported; the best option is to put it into /etc/apt/trusted.gpg.d/preining-norbert.asc.

For KDE Applications please still use

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./

but I might move that to my server sooner or later, too. The OBS servers seem to be too unreliable when it comes to considerably big rebuild groups.
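Putting the two apt lines together, a sources fragment could look like this minimal sketch (the file name under /etc/apt/sources.list.d/ is my choice, not from the post):

```
# /etc/apt/sources.list.d/preining-kde.list -- hypothetical file name
# Plasma 5.19.2 and Frameworks packages (amd64 only), after importing the
# GPG key into /etc/apt/trusted.gpg.d/preining-norbert.asc as described above:
deb https://www.preining.info/debian/ unstable kde

# KDE Applications, still served from OBS for now:
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./
```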

So, enjoy your shiny new KDE/Plasma desktop on Debian, too!

CryptogramThe Unintended Harms of Cybersecurity

Interesting research: "Identifying Unintended Harms of Cybersecurity Countermeasures":

Abstract: Well-meaning cybersecurity risk owners will deploy countermeasures (technologies or procedures) to manage risks to their services or systems. In some cases, those countermeasures will produce unintended consequences, which must then be addressed. Unintended consequences can potentially induce harm, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including other services or countermeasures). Here we propose a framework for preemptively identifying unintended harms of risk countermeasures in cybersecurity. The framework identifies a series of unintended harms which go beyond technology alone, to consider the cyberphysical and sociotechnical space: displacement, insecure norms, additional costs, misuse, misclassification, amplification, and disruption. We demonstrate our framework through application to the complex, multi-stakeholder challenges associated with the prevention of cyberbullying as an applied example. Our framework aims to illuminate harmful consequences, not to paralyze decision-making, but so that potential unintended harms can be more thoroughly considered in risk management strategies. The framework can support identification and preemptive planning to identify vulnerable populations and preemptively insulate them from harm. There are opportunities to use the framework in coordinating risk management strategy across stakeholders in complex cyberphysical environments.

Security is always a trade-off. I appreciate work that examines the details of that trade-off.

Planet DebianJonathan Dowland: Our Study

This is a fairly self-indulgent post, sorry!

Encouraged by Evgeni, Michael and others, given I'm spending a lot more time at my desk in my home office, here's a picture of it:

Fisheye shot of my home office


Near enough everything in the study is a work in progress.

The KALLAX behind my desk is a recent addition (under duress) because we have nowhere else to put it. Sadly I can't see it going away any time soon. On the up-side, since Christmas I've had my record player and collection in and on top of it, so I can listen to records whilst working.

The arm chair is a recent move from another room. It's a nice place to take some work calls, serving as a change of scene, and I hope to add a reading light sometime. The desk chair is some Ikea model I can't recall, which is sufficient, and just fits into the desk cavity. (I'm fairly sure my Aeron, inaccessible elsewhere, would not fit.)

I've had this old mahogany, leather-topped desk since I was a teenager and it's a blessing and a curse. Mostly a blessing: It's a lovely big desk. The main drawback is it's very much not height adjustable. At the back is a custom made, full-width monitor stand/shelf, a recent gift produced to specification by my Dad, a retired carpenter.

On the top: my work Thinkpad T470s laptop, geared more towards portable than powerful (my normal preference), although for the foreseeable future it's going to remain in the exact same place; an Ikea desk lamp (I can't recall the model); a 27" 4K LG monitor, the cheapest such I could find when I bought it; an old LCD Analog TV, fantastic for vintage consoles and the like.

Underneath: an Alesis Micron 2½-octave analog-modelling synthesizer; various hubs and similar things; my Amiga 500.

Like Evgeni, my normal keyboard is a ThinkPad Compact USB Keyboard with TrackPoint. I've been using different generations of these styles of keyboards for a long time now, initially because I loved the trackpoint pointer. I'm very curious about trying out mechanical keyboards, as I have very fond memories of my first IBM Model M buckled-spring keyboard, but I haven't dipped my toes into that money-trap just yet. The Thinkpad keyboard is rubber-dome, but it's a good one.

Wedged between the right-hand bookcases are a stack of IT-related things: new printer; deprecated printer; old/spare/play laptops, docks and chargers; managed network switch; NAS.

Worse Than FailureError'd: The Exception

Alex A. wrote, "Vivaldi only has two words for you when you forget to switch back to your everyday browser for email after testing a website in Edge."

 

"So was my profile successfully created wrongly, wrongly created successfully?" writes Francesco A.

 

"I will personally wait for next year's show at undefined, undefined, given the pandemic and everything," writes Drew W.

 

Bill T. wrote, "I'm not sure if Adobe is confused about me being me, or if they think that I could be schizophrenic."

 

"FedEx? There's something a little bit off about your website for me. Are you guys okay in there?" wrote Martin P.

 

"Winn-Dixie's 'price per each' algorithm isn't exactly wrong but it definitely isn't write either," Mark S. writes.

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianMartin Michlmayr: ledger2beancount 2.3 released

I released version 2.3 of ledger2beancount, a ledger to beancount converter.
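To give a flavour of what such a conversion involves, here is a hand-written sketch of a ledger transaction and its beancount counterpart (illustrative only; the account names are my own, and the actual output of this release may differ in details):

```
; ledger input
2020/06/25 * Grocery Store
    Expenses:Food            $23.50
    Assets:Checking

; beancount output (sketch): ISO dates, quoted payee, currency codes
2020-06-25 * "Grocery Store"
  Expenses:Food    23.50 USD
  Assets:Checking
```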

There are three notable changes with this release:

  1. Performance has significantly improved. One large, real-world test case has gone from around 160 seconds to 33 seconds. A smaller test case has gone from 11 seconds to ~3.5 seconds.
  2. The documentation is available online now (via Read the Docs).
  3. The repository has moved to the beancount GitHub organization.

Here are the changes in 2.3:

  • Improve speed of ledger2beancount significantly
  • Improve parsing of postings for accuracy and speed
  • Improve support for inline math
  • Handle lots without cost
  • Fix parsing of lot notes followed by a virtual price
  • Add support for lot value expressions
  • Make parsing of numbers more strict
  • Fix behaviour of dates without year
  • Accept default ledger date formats without configuration
  • Fix implicit conversions with negative prices
  • Convert implicit conversions in a more idiomatic way
  • Avoid introducing trailing whitespace with hledger input
  • Fix loading of config file
  • Skip ledger directive import
  • Convert documentation to mkdocs

Thanks to Colin Dean for some feedback. Thanks to Stefano Zacchiroli for prompting me into investigating performance issues (and thanks to the developers of the Devel::NYTProf profiler).

You can get ledger2beancount from GitHub

Planet DebianReproducible Builds (diffoscope): diffoscope 149 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 149. This version includes the following changes:

[ Chris Lamb ]
* Update tests for file 5.39. (Closes: reproducible-builds/diffoscope#179)
* Downgrade the tlsh warning message to an "info" level warning.
  (Closes: #888237, reproducible-builds/diffoscope#29)
* Use the CSS "word-break" property over manually adding U+200B zero-width
  spaces that make copy-pasting cumbersome.
  (Closes: reproducible-builds/diffoscope!53)

* Codebase improvements:
  - Drop some unused imports from the previous commit.
  - Prevent an unnecessary .format() when rendering difference comments.
  - Use a semantic "AbstractMissingType" type instead of remembering to check
    for both "missing" files and missing containers.

[ Jean-Romain Garnier ]
* Allow user to mask/filter reader output via --diff-mask=REGEX.
  (MR: reproducible-builds/diffoscope!51)
* Make --html-dir child pages open in new window to accommodate new web
  browser content security policies.
* Fix the --new-file option when comparing directories by merging
  DirectoryContainer.compare and Container.compare.
  (Closes: reproducible-builds/diffoscope#180)
* Fix zsh completion for --max-page-diff-block-lines.

[ Mattia Rizzolo ]
* Do not warn about missing tlsh during tests.

You can find out more by visiting the project homepage.

,

Krebs on SecurityNew Charges, Sentencing in Satori IoT Botnet Conspiracy

The U.S. Justice Department today charged a Canadian and a Northern Ireland man for allegedly conspiring to build botnets that enslaved hundreds of thousands of routers and other Internet of Things (IoT) devices for use in large-scale distributed denial-of-service (DDoS) attacks. In addition, a defendant in the United States was sentenced today to drug treatment and 18 months community confinement for his admitted role in the botnet conspiracy.

Indictments unsealed by a federal court in Alaska today allege that 20-year-old Aaron Sterritt from Larne, Northern Ireland, and 21-year-old Logan Shwydiuk of Saskatoon, Canada, conspired to build, operate and improve their IoT crime machines over several years.

Prosecutors say Sterritt, using the hacker aliases “Vamp” and “Viktor,” was the brains behind the computer code that powered several potent and increasingly complex IoT botnet strains that became known by exotic names such as “Masuta,” “Satori,” “Okiru” and “Fbot.”

Shwydiuk, a.k.a. “Drake,” “Dingle,” and “Chickenmelon,” is alleged to have taken the lead in managing sales and customer support for people who leased access to the IoT botnets to conduct their own DDoS attacks.

A third member of the botnet conspiracy — 22-year-old Kenneth Currin Schuchman of Vancouver, Wash. — pleaded guilty in September 2019 to aiding and abetting computer intrusions. Schuchman, whose role was to acquire software exploits that could be used to infect new IoT devices, was sentenced today by a judge in Alaska to 18 months of community confinement and drug treatment, followed by three years of supervised release.

Kenneth “Nexus-Zeta” Schuchman, in an undated photo.

The government says the defendants built and maintained their IoT botnets by constantly scanning the Web for insecure devices. That scanning primarily targeted devices that were placed online with weak, factory default settings and/or passwords. But the group also seized upon a series of newly-discovered security vulnerabilities in these IoT systems — commandeering devices that hadn’t yet been updated with the latest software patches.

Some of the IoT botnets enslaved hundreds of thousands of hacked devices. For example, by November 2017, Masuta had infected an estimated 700,000 systems, allegedly allowing the defendants to launch crippling DDoS attacks capable of hurling 100 gigabits of junk data per second at targets — enough firepower to take down many large websites.

In 2015, then 15-year-old Sterritt was involved in the high-profile hack against U.K. telecommunications provider TalkTalk. Sterritt later pleaded guilty to his part in the intrusion, and at his sentencing in 2018 was ordered to complete 50 hours of community service.

The indictments against Sterritt and Shwydiuk (PDF) do not mention specific DDoS attacks thought to have been carried out with the IoT botnets. In an interview today with KrebsOnSecurity, prosecutors in Alaska declined to discuss any of their alleged offenses beyond building, maintaining and selling the above-mentioned IoT botnets.

But multiple sources tell KrebsOnSecurity Vamp was principally responsible for the 2016 massive denial-of-service attack that swamped Dyn — a company that provides core Internet services for a host of big-name Web sites. On October 21, 2016, an attack by a Mirai-based IoT botnet variant overwhelmed Dyn’s infrastructure, causing outages at a number of top Internet destinations, including Twitter, Spotify, Reddit and others.

In 2018, authorities with the U.K.’s National Crime Agency (NCA) interviewed a suspect in connection with the Dyn attack, but ultimately filed no charges against the youth because all of his digital devices had been encrypted.

“The principal suspect of this investigation is a UK national resident in Northern Ireland,” reads a June 2018 NCA brief on their investigation into the Dyn attack (PDF), dubbed Operation Midmonth. “In 2018 the subject returned for interview, however there was insufficient evidence against him to provide a realistic prospect of conviction.”

The login prompt for Nexus Zeta’s IoT botnet included the message “Masuta is powered and hosted on Brian Kreb’s [sic] 4head.” To be precise, it’s a 5head.

The unsealing of the indictments against Sterritt and Shwydiuk came just minutes after Schuchman was sentenced today. Schuchman has been confined to an Alaskan jail for the past 13 months, and Chief U.S. District Judge Timothy Burgess today ordered the sentence of 18 months community confinement to begin Aug. 1.

Community confinement in Schuchman’s case means he will spend most or all of that time in a drug treatment program. In a memo (PDF) released prior to Schuchman’s sentencing today, prosecutors detailed the defendant’s ongoing struggle with narcotics, noting that on multiple occasions he was discharged from treatment programs after testing positive for Suboxone — which is used to treat opiate addiction and is sometimes abused by addicts — and for possessing drug contraband.

The government’s sentencing memo also says Schuchman on multiple occasions absconded from pretrial supervision, and went right back to committing the botnet crimes for which he’d been arrested — even communicating with Sterritt about the details of the ongoing FBI investigation.

“Defendant’s performance on pretrial supervision has been spectacularly poor,” prosecutors explained. “Even after being interviewed by the FBI and put on restrictions, he continued to create and operate a DDoS botnet.”

Prosecutors told the judge that when he was ultimately re-arrested by U.S. Marshals, Schuchman was found at a computer in violation of the terms of his release. In that incident, Schuchman allegedly told his dad to trash his computer, before successfully encrypting his hard drive (which the Marshals service is still trying to decrypt). According to the memo, the defendant admitted to marshals that he had received and viewed videos of “juveniles engaged in sex acts with other juveniles.”

“The circumstances surrounding the defendant’s most recent re-arrest are troubling,” the memo recounts. “The management staff at the defendant’s father’s apartment complex, where the defendant was residing while on abscond status, reported numerous complaints against the defendant, including invitations to underage children to swim naked in the pool.”

Adam Alexander, assistant US attorney for the district of Alaska, declined to say whether the DOJ would seek extradition of Sterritt and Shwydiuk. Alexander said the success of these prosecutions is highly dependent on the assistance of domestic and international law enforcement partners, as well as a list of private and public entities named at the conclusion of the DOJ’s press release on the Schuchman sentencing (PDF).

However, a DOJ motion (PDF) to seal the case records filed back in September 2019 says the government is in fact seeking to extradite the defendants.

Chief Judge Burgess was the same magistrate who presided over the 2018 sentencing of the co-authors of Mirai, a highly disruptive IoT botnet strain whose source code was leaked online in 2016 and was built upon by the defendants in this case. Both Mirai co-authors were sentenced to community service and home confinement thanks to their considerable cooperation with the government’s ongoing IoT botnet investigations.

Asked whether he was satisfied with the sentence handed down against Schuchman, Alexander maintained it was more than just another slap on the wrist, noting that Schuchman has waived his right to appeal the conviction and faces additional confinement of two years if he absconds again or fails to complete his treatment.

“In every case the statutory factors have to do with the history of the defendants, who in these crimes tend to be extremely youthful offenders,” Alexander said. “In this case, we had a young man who struggles with mental health and really pronounced substance abuse issues. Contrary to what many people might think, the goal of the DOJ in cases like this is not to put people in jail for as long as possible but to try to achieve the best balance of safeguarding communities and affording the defendant the best possible chance of rehabilitation.”

William Walton, supervisory special agent for the FBI’s cybercrime investigation division in Anchorage, Alaska, said he hopes today’s indictments and sentencing send a clear message to what he described as a relatively insular and small group of individuals who are still building, running and leasing IoT-based botnets to further a range of cybercrimes.

“One of the things we hope in our efforts here and in our partnerships with our international partners is when we identify these people, we want very much to hold them to account in a just but appropriate way,” Walton said. “Hopefully, any associates who are aspiring to fill the vacuum once we take some players off the board realize that there are going to be real consequences for doing that.”

Planet DebianDirk Eddelbuettel: RcppSimdJson 0.0.6: New Upstream, New Features!

A very exciting RcppSimdJson release, with the updated upstream simdjson release 0.4.0 as well as a first set of new JSON parsing functions, just hit CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it manages to parse gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle used per byte parsed; see the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk). The very recent 0.4.0 release further improves the already impressive speed.

And this release brings a first set of actually user-facing functions thanks to Brendan, who put in a series of PRs! The full NEWS entry follows.

Changes in version 0.0.6 (2020-06-25)

  • Created C++ integer-handling utilities for safe downcasting and integer return (Brendan in #16 closing #13).

  • New JSON functions .deserialize_json and .load_json (Brendan in #16, #17, #20, #21).

  • Upgrade Travis CI to 'bionic', extract package and version from DESCRIPTION (Dirk in #23).

  • Upgraded to simdjson 0.4.0 (Dirk in #25 closing #24).

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

TEDThe Audacious Project expands its support of pandemic relief and recovery work

The Audacious Project, a funding initiative housed at TED, is continuing its support of solutions tailored to COVID-19 rapid response and long-term recovery. As the COVID-19 pandemic continues to develop, The Audacious Project is committed to targeting key systems in crisis and supporting them as they rebuild better and with greater resilience. In this phase, more than 55 million dollars have been catalyzed towards Fast Grants, an initiative to accelerate COVID-19 related scientific research; GiveDirectly, which distributes unconditional cash transfers to those most in need; and Harlem Children’s Zone, one of the pioneers in neighborhood-specific “cradle-to-career” programs in support of low-income children and communities in the United States.

“Accelerating our scientific understanding of COVID-19 and providing immediate relief to communities hardest hit by the virus are just two of the myriad challenges we must address in the face of this pandemic,” said Anna Verghese, Executive Director of The Audacious Project. “With our COVID-19 rapid response cohort, we are supporting organizations with real-world solutions that are actionable now. But that aid should extend beyond recovery, which is why we look forward to the work these organizations will continue to do to ensure better systems for the future.”

Funding directed toward these three new initiatives is in addition to the more than 30 million dollars that was dedicated to Partners In Health, Project ECHO and World Central Kitchen for their COVID-19 rapid response work earlier this year.


Announcing three new projects in the Audacious COVID-19 response cohort

Fast Grants aims to accelerate funding for the development of treatments and vaccines and to further scientific understanding of COVID-19. (Photo courtesy of Fast Grants)

Fast Grants

The big idea: Though significant inroads have been made to advance our understanding of how the novel coronavirus spreads, and encouraging progress is underway in the development of vaccines and treatments, there are still many unknowns. We need to speed up the pace of scientific discovery in this area now more than ever, but the current systems for funding research do not meet this urgent need. Fast Grants is looking to solve that problem. They have created and are deploying a highly credible model to accelerate funding for the development of treatments and vaccines and to further scientific understanding of COVID-19. By targeting projects that demand greater speed and flexibility than traditional funding methods can offer, they will accelerate scientific discovery that can have an immediate impact and provide follow-on funding for promising early-stage discoveries.

How they’ll do it: Since March, Fast Grants has distributed 22 million dollars to fund 127 projects, leveraging a diverse panel of 20 experts to vet and review projects across a broad range of scientific disciplines. With Audacious funding, they will catalyze 80 to 115 research projects, accelerate timelines by as much as six to nine months and, in many cases, support projects that would otherwise go unfunded. This will accelerate research in testing, treatments, vaccines and many other areas critically needed to save lives and safely reopen economies. They will also build a community to share results and track progress while also connecting scientists to other funding platforms and research teams that can further advance the work. 


GiveDirectly helps families living in extreme poverty by making unconditional cash transfers to them via mobile phones. (Photo courtesy of Shutterstock)

GiveDirectly

The big idea: The COVID-19 pandemic could push more than 140 million people globally into extreme poverty. As the pandemic hampers humanitarian systems that typically address crises, the ability to deliver aid in person has never been more complicated. Enter GiveDirectly. For nearly a decade, they have provided no-strings-attached cash transfers to the world’s poorest people. Now they are leveraging the growth in the adoption of mobile technologies across Sub-Saharan Africa to design and deploy a breakthrough, fully remote model of humanitarian relief to respond to the COVID-19 crisis.

How they’ll do it: Over the next 12 months, GiveDirectly will scale their current model to provide unconditional cash transfers to more than 300,000 people who need it most. GiveDirectly will enroll and identify recipients without in-person contact, first using the knowledge of community-based organizations to identify and target beneficiaries within their existing networks, and second by leveraging data from national telephone companies to target those most in need. GiveDirectly will also systematize the underlying processes and algorithms so that they can be deployed for future disasters, thereby demonstrating a new model for rapid humanitarian relief.


Harlem Children’s Zone, one of the leading evidence-based, Black-led organizations in the US, is supercharging efforts to address Black communities’ most urgent needs and support recovery from the COVID-19 crisis. (Photo courtesy of Harlem Children’s Zone)

Harlem Children’s Zone

The big idea: In the United States, the coronavirus pandemic is disproportionately affecting Black communities. Not only are Black people being infected by the virus and dying at greater rates, the effects of the economic crisis are also hitting hardest vulnerable communities that were already facing a shrinking social safety net. At the onset of the pandemic, Harlem Children’s Zone (HCZ), one of the leading evidence-based, Black-led organizations in the US, pioneered a comprehensive approach to emergency response and recovery. The model is focused on five urgent areas: bridging the digital divide, preventing learning loss, mitigating the mental health crisis, providing economic relief, and supporting recovery.

How they’ll do it: Based in Harlem, New York, HCZ is leveraging deeply rooted community trust and best-in-class partners to deploy vital emergency relief and wrap-around support, including: health care information and protective gear to keep communities safe from the virus; quality, developmentally appropriate distance-learning resources, while developing plans for safe school reentry; and the provision of cash relief. They are also equipping backbone organizations across the US with the capacity to execute on a community-driven vision of their model nationally. They will be working side-by-side with leading, anchor institutions in six cities — Minneapolis, Oakland, Newark, Detroit, Chicago and Atlanta — to supercharge efforts to address Black communities’ most urgent needs and support recovery.


To learn more about The Audacious Project, visit https://audaciousproject.org.

TEDThe bill has come due for the US’s legacy of racism: Week 3 of TED2020

In response to the historic moment of mourning and anger over the ongoing violence inflicted on Black communities by police in the United States, four leaders in the movement for civil rights — Dr. Phillip Atiba Goff, CEO of Center for Policing Equity; Rashad Robinson, president of Color Of Change; Dr. Bernice Albertine King, CEO of the King Center; and Anthony D. Romero, executive director of the American Civil Liberties Union — joined TED2020 to explore how we can dismantle the systems of oppression and racism. Watch the full discussion on TED.com, and read a recap below.

“The history that we have in this country is not just a history of vicious neglect and targeted abuse of Black communities. It’s also one where we lose our attention for it,” says Dr. Phillip Atiba Goff. He speaks at TED2020: Uncharted on June 3, 2020. (Photo courtesy of TED)

Dr. Phillip Atiba Goff, CEO of the Center for Policing Equity

Big idea: The bill has come due for the unpaid debts the United States owes to its Black residents. But we’re not going to get to where we need to go just by reforming police.

How? What we’re seeing now isn’t just the response to one gruesome, cruel, public execution — a lynching. And it’s not just the reaction to three of them: Ahmaud Arbery, Breonna Taylor and George Floyd. What we’re seeing is the bill come due for the unpaid debts that the US owes to its Black residents, says Dr. Phillip Atiba Goff, CEO of the Center for Policing Equity (CPE). In addition to the work that CPE is known for — working with police departments to use their own data to improve relationships with the communities they serve — Goff and his team are encouraging departments and cities to take money from police budgets and instead invest it directly in public resources for the community, so people don’t need the police for public safety in the first place. Learn more about how you can support the Center for Policing Equity »


“This is the time for white allies to stand up in new ways, to do the type of allyship that truly dismantles structures, not just provides charity,” says Rashad Robinson, president of Color of Change. He speaks at TED2020: Uncharted on June 3, 2020. (Photo courtesy of TED)

Rashad Robinson, president of Color Of Change

Big idea: In the wake of the murders of George Floyd, Breonna Taylor and Ahmaud Arbery, people are showing up day after day in support of the Movement for Black Lives and in protest of police brutality against Black communities. We need to channel that presence and energy into power and material change.

How? The presence and visibility of a movement can often lead us to believe that progress is inevitable. But building power and changing the system requires more than conversations and retweets. To create material change in the racist systems that enable and perpetuate violence against Black communities, we need to translate the energy of these global protests into specific demands and actions, says Robinson. We have to pass new laws and hold those in power — from our police chiefs to our city prosecutors to our representatives in Congress — accountable to them. If we want to disentangle these interlocking systems of violence and complicity, Robinson says, we need to get involved in local, tangible organizing and build the power necessary to change the rules. “You can’t sing our songs, use our hashtags and march in our marches if you are on the other end supporting the structures that put us in harm’s way, that literally kill us,” Robinson says. “This is the time for white allies to stand up in new ways, to do the type of allyship that truly dismantles structures, not just provides charity.”


“We can do this,” says Dr. Bernice Albertine King. “We can make the right choice to ultimately build the beloved community.” She speaks at TED2020: Uncharted on June 3, 2020. (Photo courtesy of TED)

Dr. Bernice Albertine King, CEO of The King Center

Big idea: To move towards a United States rooted in benevolent coexistence, equity and love, we must destroy and replace systems of oppression and violence towards Black communities. Nonviolence, accountability and love must pave the way.

How? The US needs a course correction that involves both hard work and “heart work” — and no one is exempt from it, says Dr. Bernice Albertine King. King continues to spread and build upon the wisdom of her father, Dr. Martin Luther King Jr., and she believes the US can work towards unity and collective healing. To do so, racism, systemic oppression, militarism and violence must end. She calls for a revolution of values, allies that listen and engage and a world where anger is given space to be rechanneled into creating social and economic change. In this moment, as people have reached a boiling point and are being asked to restructure the nature of freedom, King encourages us to follow her father’s words of nonviolent coexistence, and not continue on the path of violent coannihilation. “You as a person may want to exempt yourself, but every generation is called,” King says. “And so I encourage corporations in America to start doing anti-racism work within corporate America. I encourage every industry to start doing anti-racism work and pick up the banner of understanding nonviolent change personally and from a social change perspective. We can do this. We can make the right choice to ultimately build the beloved community.”


“Can we really become an equal people, equally bound by law?” asks Anthony D. Romero, executive director of the ACLU. He speaks at TED2020: Uncharted on June 3, 2020. (Photo courtesy of TED)

Anthony D. Romero, executive director of the American Civil Liberties Union (ACLU)

Big idea: No matter how frightened we are by the current turmoil, we must stay positive, listen to and engage with unheard or silenced voices, and help answer what’s become the central question of democracy in the United States: Can we really become an equal people, equally bound by law, when so many of us are beaten down by racist institutions and their enforcers?

How? This is no time for allies to disconnect — it’s time for them to take a long look in the mirror, ponder viewpoints they may not agree with or understand and engage in efforts to dismantle institutional white supremacy, Romero says. Reform is not enough anymore. Among many other changes, the most acute challenge the ACLU is now tackling is how to defund militarized police forces that look more like standing armies than civil servants — and bring them under civilian control. “For allies in this struggle, and those of us who don’t live this experience every day, it is time for us to lean in,” Romero says. “You can’t change the channel, you can’t tune out, you can’t say, ‘This is too hard.’ It is not that hard for us to listen and learn and heed.”

Sociological ImagesThe Hidden Cost of Your New Wardrobe

More than ever, people desire to buy more clothes. Pushed by advertising, we look for novelty and variety, amassing personal wardrobes unimaginable in times past. With the advent of fast fashion, brands like H&M and ZARA are able to provide low prices by cutting costs where consumers can’t see—textile workers are maltreated, and our environment suffers.

In this film, I wanted to play with the idea of fast fashion advertising and turn it on its head. I edited two different H&M look books to show what really goes into the garments they advertise and the fast fashion industry as a whole. I made this film for my Introduction to Sociology class after being inspired by a reading we did earlier in the semester.

Robert Reich’s (2007) book, Supercapitalism, discusses how we have “two minds,” one as a consumer/investor and another as a citizen. He explains that as a consumer, we want to spend as little money as possible while finding the best deals—shopping at stores like H&M. On the other hand, our “citizen mind” wants workers to be treated fairly and our environment to be protected. This film highlights fast fashion as an example of Reich’s premise of the conflict between our two minds—a conflict that is all too prevalent in our modern world with giant brands like Walmart and Amazon taking over consumer markets. I hope that by thinking about the fast fashion industry with a sociological mindset, we can see past the bargain prices and address what really goes on behind the scenes.

Graham Nielsen is a Swedish student studying an interdisciplinary concentration titled “Theory and Practice: Digital Media and Society” at Hamilton College. He’s interested in advertising, marketing, video editing, fashion, and film and television culture.

Read More:

Links to the original look books: Men Women

Kant, R. (2012) “Textile dyeing industry an environmental hazard.” Natural Science 4, 22-26. doi: 10.4236/ns.2012.41004.

Annamma J., J. F. Sherry Jr, A. Venkatesh, J. Wang and R. Chan (2012) “Fast Fashion, Sustainability, and the Ethical Appeal of Luxury Brands.” Fashion Theory 16:3, 273-295. doi: 10.2752/175174112X13340749707123.

Aspers, P., and F. Godart (2013) “Sociology of Fashion: Order and Change.” Annual Review of Sociology 39:1, 171-192.

(View original at https://thesocietypages.org/socimages)

CryptogramAnalyzing IoT Security Best Practices

New research: "Best Practices for IoT Security: What Does That Even Mean?" by Christopher Bellman and Paul C. van Oorschot:

Abstract: Best practices for Internet of Things (IoT) security have recently attracted considerable attention worldwide from industry and governments, while academic research has highlighted the failure of many IoT product manufacturers to follow accepted practices. We explore not the failure to follow best practices, but rather a surprising lack of understanding, and void in the literature, on what (generically) "best practice" means, independent of meaningfully identifying specific individual practices. Confusion is evident from guidelines that conflate desired outcomes with security practices to achieve those outcomes. How do best practices, good practices, and standard practices differ? Or guidelines, recommendations, and requirements? Can something be a best practice if it is not actionable? We consider categories of best practices, and how they apply over the lifecycle of IoT devices. For concreteness in our discussion, we analyze and categorize a set of 1014 IoT security best practices, recommendations, and guidelines from industrial, government, and academic sources. As one example result, we find that about 70% of these practices or guidelines relate to early IoT device lifecycle stages, highlighting the critical position of manufacturers in addressing the security issues in question. We hope that our work provides a basis for the community to build on in order to better understand best practices, identify and reach consensus on specific practices, and then find ways to motivate relevant stakeholders to follow them.

Back in 2017, I catalogued nineteen security and privacy guideline documents for the Internet of Things. Our problem right now isn't that we don't know how to secure these devices, it's that there is no economic or regulatory incentive to do so.

Worse Than FailureCodeSOD: Classic WTF: Pointless Revenge

As we enjoy some summer weather, we should take a moment to reflect on how we communicate with our peers. We should always do it with kindness, even when we really want revenge. Original -- Kind regards, Remy

We write a lot about unhealthy workplaces. We, and many of our readers, have worked in such places. We know what it means to lose our gruntle (becoming disgruntled). Some of us have even been tempted to do something vengeful or petty to “get back” at the hostile environment.

Milton from 'Office Space' does not receive any cake during a birthday celebration. He looks on, forlornly, while everyone else in the office enjoys cake.

But none of us actually have done it (I hope?). It’s self-defeating, it doesn’t actually make anything better, and even if the place we’re working isn’t professional, we are. While it’s a satisfying fantasy, the reality wouldn’t be good for anyone. We know better than that.

Well, most of us know better than that. Harris M’s company went through a round of layoffs while flirting with bankruptcy. It was a bad time to be at the company, no one knew if they’d have a job the next day. Management constantly issued new edicts, before just as quickly recanting them, in a panicked case of somebody-do-something-itis. “Bob” wasn’t too happy with the situation. He worked on a reporting system that displayed financial data. So he hid this line in one of the main include files:

#define double float
//Kind Regards, Bob

This created some subtle bugs. It was released, and it was months before anyone noticed that the reports weren’t exactly reconciling with the real data. Bob was long gone by that point, and Harris had to clean up the mess. For a company struggling to survive, it didn’t help or improve anything. But I’m sure Bob felt better.


Planet DebianRussell Coker: How Will the Pandemic Change Things?

The Bulwark has an interesting article on why they can’t “Reopen America” [1]. I wonder how many changes will be long term. According to the Wikipedia List of Epidemics [2], Covid-19 so far hasn’t had a high death toll when compared to other pandemics of the last 100 years. People’s reactions to this vary from doing nothing to significant isolation; the question is what changes in attitudes will be significant enough to change society.

Transport

One thing that has been happening recently is a transition in transport. It’s obvious that we need to reduce CO2, and while electric cars will address the transport part of the problem in the long term, changing to electric public transport is the cheaper and faster way to do it in the short term. Before Covid-19 the peak-hour public transport in my city was ridiculously overcrowded; people being unable to board trams due to overcrowding was really common. If the economy returns to its previous state then I predict fewer people on public transport, more traffic jams, and many more cars idling and polluting the atmosphere.

Can we have mass public transport that doesn’t give a significant disease risk? Maybe if we had significantly more trains and trams and better ventilation with more airflow designed to suck contaminated air out. But that would require significant engineering work to design new trams, trains, and buses as well as expense in refitting or replacing old ones.

Uber and similar companies have been taking over from taxi companies; one major feature of those companies is that the vehicles are not dedicated as taxis. Dedicated taxis could easily be designed to reduce the spread of disease: the famed Black Cab AKA Hackney Carriage [3] design in the UK has a separate compartment for passengers with little air flow to/from the driver compartment. It would be easy to design such taxis to have entirely separate airflow, and if they were set up to take only EFTPOS and credit card payments they could avoid all contact between the driver and passengers. I would prefer to have a Hackney Carriage design of vehicle instead of a regular taxi or Uber.

Autonomous cars have been shown to basically work. There are some concerns about safety issues as there are currently corner cases that car computers don’t handle as well as people, but of course there are also things computers do better than people. Having an autonomous taxi would be a benefit for anyone who wants to avoid other people. Maybe approval could be rushed through for autonomous cars that are limited to 40 km/h (the maximum collision speed at which a pedestrian is unlikely to die); in central city areas and inner suburbs you aren’t likely to drive much faster than that anyway.

Car share services have been becoming popular, for many people they are significantly cheaper than owning a car due to the costs of regular maintenance, insurance, and depreciation. As the full costs of car ownership aren’t obvious people may focus on the disease risk and keep buying cars.

Passenger jet travel is ridiculously cheap. But this relies on the airlines being able to consistently fill the planes. If they add measures to reduce cross-contamination between passengers, which slightly reduce the capacity of planes, then they need to increase ticket prices accordingly, which reduces demand. If passengers are simply scared of flying in close proximity and the airlines can’t fill planes, then they will have to increase prices, which again reduces demand and could lead to a death spiral. If in the long term there aren’t enough passengers to sustain the current number of planes in service, then airlines will have significant financial problems: planes are expensive assets that are expected to last for a long time, and if airlines can’t use them all and can’t sell them, they will go bankrupt.

It’s not reasonable to expect the same number of people to be travelling internationally for years (if ever). Because low prices rely on economies of scale, I don’t think it’s possible to keep prices the same no matter what airlines do. A new economic balance of flights costing 2-3 times more than we are used to, with significantly fewer passengers, seems likely. Governments need to spend significant amounts of money to improve trains to take over from flights that are cancelled or too expensive.

Entertainment

The article on The Bulwark mentions Las Vegas as a city that will be hurt a lot by reductions in travel and crowds, the same thing will happen to tourist regions all around the world. Australia has a significant tourist industry that will be hurt a lot. But the mention of Las Vegas makes me wonder what will happen to the gambling in general. Will people avoid casinos and play poker with friends and relatives at home? It seems that small stakes poker games among friends will be much less socially damaging than casinos, will this be good for society?

The article also mentions cinemas which have been on the way out since the video rental stores all closed down. There’s lots of prime real estate used for cinemas and little potential for them to make enough money to cover the rent. Should we just assume that most uses of cinemas will be replaced by Netflix and other streaming services? What about teenage dates, will kissing in the back rows of cinemas be replaced by “Netflix and chill”? What will happen to all the prime real estate used by cinemas?

Professional sporting matches have been played for a TV-only audience during the pandemic. There’s no reason that they couldn’t make a return to live stadium audiences when there is a vaccine for the disease or the disease has been extinguished by social distancing. But I wonder if some fans will start to appreciate the merits of small groups watching large TVs and not want to go back to stadiums, can this change the typical behaviour of groups?

Restaurants and cafes are going to do really badly. I previously wrote about my experience running an Internet Cafe and why reopening businesses soon is a bad idea [4]. The question is how long this will go for and whether social norms about personal space will change things. If in the long term people expect 25% more space in a cafe or restaurant that’s enough to make a significant impact on profitability for many small businesses.

When I was young the standard thing was for people to have dinner at friends’ homes. Meeting friends for dinner at a restaurant was uncommon. Recently it seemed to be the most common practice for people to meet friends at a restaurant. There are real benefits to meeting at a restaurant in terms of effort and location. Maybe meeting friends at their home for a delivered dinner will become a common compromise, avoiding the effort of cooking while avoiding the extra expense and disease risk of eating out. Food delivery services will do well in the long term; it’s one of the few industry segments which might do better after the pandemic than before.

Work

Many companies are discovering the benefits of teleworking, getting it going effectively has required investing in faster Internet connections and hardware for employees. When we have a vaccine the equipment needed for teleworking will still be there and we will have a discussion about whether it should be used on a more routine basis. When employees spend more than 2 hours per day travelling to and from work (which is very common for people who work in major cities) that will obviously limit the amount of time per day that they can spend working. For the more enthusiastic permanent employees there seems to be a benefit to the employer to allow working from home. It’s obvious that some portion of the companies that were forced to try teleworking will find it effective enough to continue in some degree.

One company that I work for has quit their coworking space in part because they were concerned that the coworking company might go bankrupt due to the pandemic. They seem to have become a 100% work from home company for the office part of the work (only on site installation and stock management is done at corporate locations). Companies running coworking spaces and other shared offices will suffer first as their clients have short term leases. But all companies renting out office space in major cities will suffer due to teleworking. I wonder how this will affect the companies providing services to the office workers, the cafes and restaurants etc. Will there end up being so much unused space in central city areas that it’s not worth converting the city cinemas into useful space?

There’s been a lot of news about Zoom and similar technologies. Lots of other companies are trying to get into that business. One thing that isn’t getting much notice is remote access technologies for desktop support. If the IT people can’t visit your desk because you are working from home then they need to be able to remotely access it to fix things. When people make working from home a large part of their work time the issue of who owns peripherals and how they are tracked will get interesting. In a previous blog post I suggested that keyboards and mice not be treated as assets [5]. But what about monitors, 4G/Wifi access points, etc?

Some people have suggested that there will be business sectors benefiting from the pandemic, such as telecoms and e-commerce. If you have a bunch of people forced to stay home who aren’t broke (i.e. a large portion of the middle class in Australia) they will probably order delivery of stuff for entertainment. But in the long term e-commerce seems unlikely to change much; people will spend less due to economic uncertainty, so while they may shift some purchasing to e-commerce, apart from home delivery of groceries e-commerce probably won’t go up overall. Generally telecoms won’t gain anything from teleworking: the Internet access you need for good Netflix viewing is generally greater than that needed for good video-conferencing.

Money

I previously wrote about a Basic Income for Australia [6]. One of the most cited reasons for a Basic Income is to deal with robots replacing people. Now we are at the start of what could be a long term economic contraction caused by the pandemic which could reduce the scale of the economy by a similar degree while also improving the economic case for a robotic workforce. We should implement a Universal Basic Income now.

I previously wrote about the make-work jobs and how we could optimise society to achieve the worthwhile things with less work [7]. My ideas about optimising public transport and using more car share services may not work so well after the pandemic, but the rest should work well.

Business

There are a number of big companies that are not aiming for profitability in the short term. WeWork and Uber are well documented examples. Some of those companies will hopefully go bankrupt and make room for more responsible companies.

The co-working thing was always a precarious business. The companies renting out office space usually did so on a monthly basis as flexibility was one of their selling points, but they presumably rented buildings on an annual basis. As the profit margins weren’t particularly high having to pay rent on mostly empty buildings for a few months will hurt them badly. The long term trend in co-working spaces might be some sort of collaborative arrangement between the people who run them and the landlords similar to the way some of the hotel chains have profit sharing agreements with land owners to avoid both the capital outlay for buying land and the risk involved in renting. Also city hotels are very well equipped to run office space, they have the staff and the procedures for running such a business, most hotels also make significant profits from conventions and conferences.

The way the economy has been working in first world countries has been about being as competitive as possible: just-in-time delivery to avoid using storage space, machines that package things in exactly the way customers need, and no more machines than are needed for regular capacity. This means that there’s no spare capacity when things go wrong. A few years ago a company making bolts for the car industry went bankrupt because the car companies forced the prices down, and then car manufacture stopped due to a lack of bolts – this could have been a wake-up call but was ignored. Now we have had problems with toilet paper shortages due to it being packaged in wholesale quantities for offices and schools, not retail quantities for home use. Food was destroyed because it was packaged for restaurants and couldn’t be repackaged for home use in a reasonable amount of time.

Farmer’s markets alleviate some of the problems with packaging food etc. But they aren’t a good option when there’s a pandemic as disease risk makes them less appealing to customers and therefore less profitable for vendors.

Religion

Many religious groups have supported social distancing. Could this be the start of more decentralised religion? Maybe have people read the holy book of their religion and pray at home instead of being programmed at church? We can always hope.

,

Planet DebianIan Jackson: Renaming the primary git branch to "trunk"

I have been convinced by the arguments that it's not nice to keep using the word master for the default git branch. Regardless of the etymology (which is unclear), some people say they have negative associations with this word. Changing this upstream in git is complicated on a technical level and, sadly, contested.

But git is flexible enough that I can make this change in my own repositories. Doing so is not even so difficult.

So:

Announcement

I intend to rename master to trunk in all repositories owned by my personal hat. To avoid making things very complicated for myself I will just delete refs/heads/master when I make this change. So there may be a little disruption to downstreams.

I intend to make this change everywhere eventually. But rather than front-loading the effort, I'm going to do this to repositories as I come across them anyway. That will allow me to update all the docs references, any automation, etc., at a point when I have those things in mind anyway. Also, doing it this way will allow me to focus my effort on the most active projects, and avoids committing me to a sudden large pile of fiddly clerical work. But: if you have an interest in any repository in particular that you want updated, please let me know so I can prioritise it.
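On a per-repository basis the mechanics are small. A sketch of the rename follows (a throwaway repository stands in for a real one here; the remote name "origin" and the push steps are assumptions about a typical setup, not part of the announcement above):

```shell
# Demonstrate the rename in a throwaway repository.
cd "$(mktemp -d)"
git init -q .
git symbolic-ref HEAD refs/heads/master   # start from "master" regardless of git's default
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m 'demo commit'

# The actual rename is one command:
git branch -m master trunk

git symbolic-ref --short HEAD   # → trunk

# On a real repository you would then publish the change, e.g.:
#   git push origin trunk     # publish the new branch
#   git push origin :master   # delete refs/heads/master, as described above
```

Downstream clones then need a one-time `git fetch` plus `git branch -m master trunk` and `git branch -u origin/trunk` of their own, which is the "little disruption" mentioned above.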

Bikeshed

Why "trunk"? "Main" has been suggested elsewhere, and it is often a good replacement for "master" (for example, we can talk very sensibly about a disk's Main Boot Record, MBR). But "main" isn't quite right for the VCS case; for example a "main" branch ought to have better quality than is typical for the primary development branch.

Conversely, there is much precedent for "trunk". "Trunk" was used to refer to this concept by at least SVN, CVS, RCS and CSSC (and therefore probably SCCS) - at least in the documentation, although in some of these cases the command line API didn't have a name for it.

So "trunk" it is.

Aside: two other words - passlist, blocklist

People are (finally!) starting to replace "blacklist" and "whitelist". Seriously, why has it taken everyone this long?

I have been using "blocklist" and "passlist" for these concepts for some time. They are drop-in replacements. I have also heard "allowlist" and "denylist" suggested, but they are cumbersome and cacophonous.

Also "allow" and "deny" seem to more strongly imply an access control function than merely "pass" and "block", and the usefulness of passlists and blocklists extends well beyond access control: protocol compatibility and ABI filtering are a couple of other use cases.




Planet DebianFrançois Marier: Automated MythTV-related maintenance tasks

Here is the daily/weekly cronjob I put together over the years to perform MythTV-related maintenance tasks on my backend server.

The first part performs a database backup:

5 1 * * *  mythtv  /usr/share/mythtv/mythconverg_backup.pl

which I previously configured by putting the following in /home/mythtv/.mythtv/backuprc:

DBBackupDirectory=/var/backups/mythtv

and creating a new directory for it:

mkdir /var/backups/mythtv
chown mythtv:mythtv /var/backups/mythtv

The second part of /etc/cron.d/mythtv-maintenance runs a contrib script to optimize the database tables:

10 1 * * *  mythtv  /usr/bin/chronic /usr/share/doc/mythtv-backend/contrib/maintenance/optimize_mythdb.pl

once a day. It requires the libmythtv-perl and libxml-simple-perl packages to be installed on Debian-based systems.

It is quickly followed by a check of the recordings and automatic repair of the seektable (when possible):

20 1 * * *  mythtv  /usr/bin/chronic /usr/bin/mythutil --checkrecordings --fixseektable

Next, I force a scan of the music and video databases to pick up anything new that may have been added externally via NFS mounts:

30 1 * * *  mythtv  /usr/bin/mythutil --quiet --scanvideos
31 1 * * *  mythtv  /usr/bin/mythutil --quiet --scanmusic

Finally, I defragment the XFS partition for two hours every day except Friday:

45 1 * * 1-4,6-7  root  /usr/sbin/xfs_fsr

and resync the RAID-1 arrays once a week to ensure that they stay consistent and error-free:

15 3 * * 2  root  /usr/local/sbin/raid_parity_check md0
15 3 * * 4  root  /usr/local/sbin/raid_parity_check md2

using a trivial script.
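The script itself isn't shown. A plausible reconstruction (an assumption, not the author's actual script) is just a wrapper around the md driver's own consistency check, written out to a file here so it can be inspected:

```shell
# Hypothetical reconstruction of /usr/local/sbin/raid_parity_check.
# Writing "check" to an md array's sync_action knob asks the kernel to
# read and compare all mirrors/parity; mismatches are counted in
# mismatch_cnt and reported via mdadm's monitor or the kernel log.
cat > /tmp/raid_parity_check <<'EOF'
#!/bin/sh
# Usage: raid_parity_check md0
set -e
array="$1"
echo check > "/sys/block/${array}/md/sync_action"
EOF
chmod +x /tmp/raid_parity_check
```

Using `check` rather than `repair` is the safer choice for a scheduled job, since `repair` would also rewrite blocks it considers mismatched.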

In addition to that cronjob, I also have smartmontools run daily short and weekly long SMART tests via this blurb in /etc/smartd.conf:

/dev/sda -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)
/dev/sdb -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)

If there are any other automated maintenance tasks you do on your MythTV server, please leave a comment!

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 198 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 18.0h (out of 14h assigned and 4h from April).
  • Anton Gladky gave back the assigned 10h and declared himself inactive.
  • Ben Hutchings did 19.75h (out of 17.25h assigned and 2.5h from April).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 17.25h (out of 17.25h assigned).
  • Dylan Aïssi gave back the assigned 6h and declared himself inactive.
  • Emilio Pozuelo Monfort did not manage to work on LTS in May, but reported 5h of work done in April (out of 17.25h assigned plus 46h from April), thus carrying over 58.25h to June.
  • Markus Koschany did 25.0h (out of 17.25h assigned and 56h from April), thus carrying over 48.25h to June.
  • Mike Gabriel did 14.50h (out of 8h assigned and 6.5h from April).
  • Ola Lundqvist did 11.5h (out of 12h assigned and 7h from April), thus carrying over 7.5h to June.
  • Roberto C. Sánchez did 17.25h (out of 17.25h assigned).
  • Sylvain Beucler did 17.25h (out of 17.25h assigned).
  • Thorsten Alteholz did 17.25h (out of 17.25h assigned).
  • Utkarsh Gupta did 17.25h (out of 17.25h assigned).

Evolution of the situation

In May 2020 we had our second (virtual) contributors meeting on IRC. Logs and minutes are available online. We also moved our ToDo from the Debian wiki to the issue tracker on salsa.debian.org.
Sadly three contributors went inactive in May: Adrian Bunk, Anton Gladky and Dylan Aïssi. While there are currently still enough active contributors to shoulder the existing work, we would like to use this opportunity to point out that we are always looking for new contributors. Please mail Holger if you are interested.
Finally, we would like to remind you one last time that the end of Jessie LTS is coming in less than a month!
In case you missed it (or failed to act), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 6 packages with a known CVE and the dla-needed.txt file has 30 packages needing an update.

Thanks to our sponsors

New sponsors are in bold. With the upcoming start of Jessie ELTS, we are welcoming a few new sponsors and others should join soon.


CryptogramCryptocurrency Pump and Dump Scams

Really interesting research: "An examination of the cryptocurrency pump and dump ecosystem":

Abstract: The surge of interest in cryptocurrencies has been accompanied by a proliferation of fraud. This paper examines pump and dump schemes. The recent explosion of nearly 2,000 cryptocurrencies in an unregulated environment has expanded the scope for abuse. We quantify the scope of cryptocurrency pump and dump schemes on Discord and Telegram, two popular group-messaging platforms. We joined all relevant Telegram and Discord groups/channels and identified thousands of different pumps. Our findings provide the first measure of the scope of such pumps and empirically document important properties of this ecosystem.

Worse Than FailureTales from the Interview: Classic WTF: Slightly More Sociable

As we continue our vacation, this classic comes from the ancient year of 2007, when "used to being the only woman in my engineering and computer science classes" was a much more common phrase. Getting a job can feel competitive, but there are certain ways you can guarantee you're gonna lose that competition. Original --Remy

Today’s Tale from the Interview comes from Shanna...

Fresh out of college, and used to being the only woman in my engineering and computer science classes, I wasn't quite sure what to expect in the real world. I happily ended up finding a development job in a company which was nowhere near as unbalanced as my college classes had been. The company was EXTREMELY small and the entire staff, except the CEO, was in one office. I ended up sitting at a desk next to the office admin, another woman who was hired a month or two after me.

A few months after I was hired, we decided we needed another developer. I found out then that there was another person who had been up for my position who almost got it instead of me. He was very technically skilled, but ended up not getting the position because I was more friendly and sociable. My boss decided to call him back in for another interview because it had been a very close decision, and they wanted to give him another chance. He was still looking for a full-time job, so he said he'd be happy to come in again. The day he showed up, he looked around the office, and then he and my boss went in to the CEO's office to discuss the position again.

The interview seemed to be going well, and he was probably on track for getting the position. Then he asked my boss, "So, did you ever end up hiring a developer last time you were looking?" My boss answered "Oh, yeah, we did." The interviewee stopped for a second and, because he'd noticed that the only different faces in the main office were myself and the office admin, said "You mean, you hired one of those GIRLS out there?"

Needless to say, the interview ended quickly after that, and he didn't get the position.


Planet DebianMatthew Garrett: Making my doorbell work

I recently moved house, and the new building has a Doorbird to act as a doorbell and open the entrance gate for people. There's a documented local control API (no cloud dependency!) and a Home Assistant integration, so this seemed pretty straightforward.

Unfortunately not. The Doorbird is on a separate network that's shared across the building, provided by Monkeybrains. We're also a Monkeybrains customer, so our network connection is plugged into the same router and antenna as the Doorbird one. And, as is common, there's port isolation between the networks in order to avoid leakage of information between customers. Rather perversely, we are the only people with an internet connection who are unable to ping my doorbell.

I spent most of the past few weeks digging myself out from under a pile of boxes, but we'd finally reached the point where spending some time figuring out a solution to this seemed reasonable. I spent a while playing with port forwarding, but that wasn't ideal - the only server I run is in the UK, and having packets round trip almost 11,000 miles so I could speak to something a few metres away seemed like a bad plan. Then I tried tethering an old Android device with a data-only SIM, which worked fine but only in one direction (I could see what the doorbell could see, but I couldn't get notifications that someone had pushed a button, which was kind of the point here).

So I went with the obvious solution: I added a wifi access point to the doorbell network, and my home automation machine now exists on two networks simultaneously (nmcli device modify wlan0 ipv4.never-default true is the magic for "ignore the gateway that the DHCP server gives you", which you want here so the doorbell network doesn't become your default route). I could then do link-local service discovery to find the doorbell even if it changed addresses after a power cut. And then, like magic, everything worked - I got notifications from the doorbell when someone hit our button.
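
Spelled out as commands, the setup looks roughly like this (the interface name, SSID and mDNS details are assumptions; only the nmcli modify incantation is the one from the post):

```shell
# Join the doorbell network on a second interface, but keep the
# default route on the main network:
nmcli device wifi connect "doorbell-net" ifname wlan0
nmcli device modify wlan0 ipv4.never-default true

# Link-local service discovery to find the Doorbird again if its
# address changes after a power cut (the grep target is an assumption):
avahi-browse --all --resolve --terminate | grep -i doorbird
```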

But knowing that an event occurred without actually doing something in response seems fairly unhelpful. I have a bunch of Chromecast targets around the house (a mixture of Google Home devices and Chromecast Audios), so just pushing a message to them seemed like the easiest approach. Home Assistant has a text to speech integration that can call out to various services to turn some text into a sample, and then push that to a media player on the local network. You can group multiple Chromecast audio sinks into a group that then presents as a separate device on the network, so I could then write an automation to push audio to the speaker group in response to the button being pressed.

That's nice, but it'd also be nice to do something in response. The Doorbird exposes API control of the gate latch, and Home Assistant exposes that as a switch. I'm using Home Assistant's Google Assistant integration to expose devices Home Assistant knows about to voice control. Which means when I get a house-wide notification that someone's at the door I can just ask Google to open the door for them.
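
The gate-latch control itself boils down to a call against the Doorbird's documented local HTTP API; a hedged sketch (the address and credentials are placeholders, and the endpoint path should be verified against the Doorbird LAN API documentation):

```shell
# Trigger the door/gate relay via the Doorbird LAN API (HTTP Basic auth)
curl -u "user0001:password" "http://192.168.1.50/bha-api/open-door.cgi"
```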

So. Someone pushes the doorbell. That sends a signal to a machine that's bridged onto that network via an access point. That machine then sends a protobuf command to speakers on a separate network, asking them to stream a sample it's providing. Those speakers call back to that machine, grab the sample and play it. At this point, multiple speakers in the house say "Someone is at the door". I then say "Hey Google, activate the front gate" - the device I'm closest to picks this up and sends it to Google, where something turns my speech back into text. It then looks at my home structure data and realises that the "Front Gate" device is associated with my Home Assistant integration. It then calls out to the home automation machine that received the notification in the first place, asking it to trigger the front gate relay. That device calls out to the Doorbird and asks it to open the gate. And now I have functionality equivalent to a doorbell that completes a circuit and rings a bell inside my home, and a button inside my home that completes a circuit and opens the gate, except it involves two networks inside my building, callouts to the cloud, at least 7 devices inside my home that are running Linux and I really don't want to know how many computational cycles.

The future is wonderful.

(I work for Google. I do not work on any of the products described in this post. Please god do not ask me how to integrate your IoT into any of this)


,

TEDHope, action, change: The talks of Session 5 of TED2020

Daring, bold, systems-disrupting change requires big dreams and an even bigger vision. For Session 5 of TED2020, the Audacious Project, a collaborative funding initiative housed at TED, highlighted bold plans for social change from Southern New Hampshire University, SIRUM, BRAC, Harlem Children’s Zone, Humanitarian OpenStreetMap Team (HOT), Project CETI and One Acre Fund. From aiding the ultra-poor to upending medicine pricing to ensuring all communities are visible on a map, these solutions are uniquely positioned to help us rebuild key systems and push the boundaries of what’s possible through breakthrough science and technology. Learn more about these thrilling projects and how you can help them change the world.

“We can create radical access to medications based on a fundamental belief that people who live in one of the wealthiest nations in the world can and should have access to medicine they need to survive and to thrive,” says Kiah Williams, cofounder of SIRUM. She speaks at TED2020: Uncharted on June 18, 2020. (Photo courtesy of TED)

Kiah Williams, cofounder of SIRUM

Big idea: No one should have to choose between paying bills or affording lifesaving medications. 

How? Every day in the US, people must make impossible health decisions at the intersection of life and livelihood. The result is upwards of ten thousand deaths annually — more than opioid overdoses and car accidents combined — due to the high prices of prescription drugs. Kiah Williams and her team at SIRUM are tapping into an alternative that circumvents the traditional medical supply chain while remaining budget-friendly to underserved communities: unused medication. Sourced from manufacturer surplus, health care facilities (like hospitals, pharmacies and nursing homes) and personal donations, Williams and her team partner with medical professionals to provide prescriptions for conditions, ranging from heart disease to mental health, at flat, transparent costs. They currently supply 150,000 people with access to medicine they need — and they’re ready to expand. In the next five years, SIRUM plans to reach one million people across 12 states with a billion dollars’ worth of unused medicine, with the hopes of driving down regional pricing in low-income communities. “We can create radical access to medications based on a fundamental belief that people who live in one of the wealthiest nations in the world can and should have access to medicine they need to survive and to thrive,” Williams says.


Shameran Abed, senior director of the Microfinance and Ultra-Poor Graduation Program at BRAC, shares his organization’s work lifting families out of ultra-poverty at TED2020: Uncharted on June 18, 2020. (Photo courtesy of TED)

Shameran Abed, senior director of the Microfinance and Ultra-Poor Graduation Program at BRAC

Big idea: Let’s stop imagining a world without ultra-poverty and start building it instead.

How? At the end of 2019, approximately 400 million people worldwide lived in ultra-poverty — a situation that goes beyond the familiar monetary definition, stripping individuals of their dignity, purpose, self-worth, community and ability to imagine a better future. When he founded BRAC in 1972, Shameran Abed’s father saw that for poverty reduction programs to work, a sense of hope and self-worth needs to be instilled alongside assets. He pioneered a graduation approach that, over the course of two years, addresses both the deficit of income and the deficit of hope in four steps: (1) supporting basic needs with food or cash, (2) guiding the individual towards a decent livelihood by providing an asset like livestock and training them to earn money from it, (3) training them to save, budget and invest the new wealth, and (4) integrating the individual socially. Since this program started in 2002, two million Bangladeshi women have lifted themselves and their families out of ultra-poverty. With BRAC at a proven and effective nationwide scale, the organization plans to aid other governments in adopting and scaling graduation programs themselves — helping another 21 million people lift themselves out of ultra-poverty across eight countries over the next six years, with BRAC teams onsite and embedded in each country to provide an attainable, foreseeable future for all. “Throughout his life, [my father] saw optimism triumph over despair; that when you light the spark of self-belief in people, even the poorest can transform their lives,” Abed says.

Pop-soul singer Emily King performs her songs “Distance” and “Sides” at TED2020: Uncharted on June 18, 2020. (Photo courtesy of TED)

Lending her extraordinary voice to keep the session lively, singer and songwriter Emily King performs her songs “Distance” and “Sides” from her home in New York City.


Chrystina Russell, executive director of SNHU’s Global Education Movement, is helping displaced people earn college degrees. She speaks at TED2020: Uncharted on June 18, 2020. (Photo courtesy of TED)

Chrystina Russell, executive director of SNHU’s Global Education Movement

Big idea: Expand access to accredited, college-level education to marginalized populations by reaching learners wherever they are in the world.

How? Education empowers — and perhaps nowhere more so than in the lives of displaced people, says executive director of SNHU’s Global Education Movement (GEM) Chrystina Russell. Harnessing the power of education to improve the world lies at the foundation of GEM, a program that offers accredited bachelor’s degrees and pathways to employment for refugees in Lebanon, Kenya, Malawi, Rwanda and South Africa. Today, the humanitarian community understands that global displacement will be a permanent problem, and that traditional education models remain woefully inaccessible to these vulnerable populations. The magic of GEM, Russell says, is that it addresses refugee lives as they currently exist. Degrees are competency-based, and without classes, lectures, due dates or final exams, students choose where and when to learn. GEM has served more than 1,000 learners to date, helping them obtain bachelor’s degrees and earn incomes at twice the average of their peers. Only three percent of refugees have access to higher education; GEM is now testing its ability to scale competency-based online learning in an effort to empower greater numbers of marginalized people through higher education. “This is a model that really stops putting time and university policies and procedures at the center — and instead puts the student at the center,” Russell says.


David Gruber shares his mind-blowing work using AI to understand and communicate with sperm whales. He speaks at TED2020: Uncharted on June 18, 2020. (Photo courtesy of TED)

David Gruber, marine biologist, explorer, professor

Big idea: Through the innovations of machine learning, we may be able to translate the astounding languages of sperm whales and crack the interspecies communication code. 

How? Sperm whales are some of the most intelligent animals on the planet; they live in complex matriarchal societies and communicate with each other through a series of regionally specific click sequences called codas. These codas may be the key to unlocking interspecies communication, says David Gruber. He shares a bold prediction: with the help of machine learning technology, we will soon be able to understand the languages of sperm whales — and talk back to them. Researchers have developed a number of noninvasive robots to record an enormous archive of codas, focusing on the intimate relationship between mother and calf. Using this data, carefully trained algorithms will be able to decode these codas and map the sounds and logic of sperm whale communication. Gruber believes that by deeply listening to sperm whales, we can create a language blueprint that will enable us to communicate with countless other species around the world. “By listening deeply to nature, we can change our perspective of ourselves and reshape our relationship with all life on this planet,” he says.


“Farmers stand at the center of the world,” says Andrew Youn, sharing One Acre Fund’s work helping small-scale farmers in sub-Saharan Africa. He speaks at TED2020: Uncharted on June 18, 2020. (Photo courtesy of TED)

Andrew Youn, social entrepreneur

Big idea: By equipping small-scale farmers in sub-Saharan Africa with the tools and resources they need to expand their work, they will be able to upend cycles of poverty and materialize their innovation, knowledge and drive into success for their local communities and the world.

How? Most small-scale farmers in sub-Saharan Africa are women who nourish their families and communities and fortify their local economies. But they’re often not able to access the technology, resources or capital they need to streamline their farms, which leads to small harvests and cycles of poverty. The One Acre Fund, a two-time Audacious Project recipient, seeks to upend that cycle by providing resources like seeds and fertilizer, mentorship in the form of local support guides and training in modern agricultural practices. The One Acre Fund intends to reach three milestones by 2026: to serve 2.5 million families (which include 10 million children) every year through their direct full-service program; to serve an additional 4.3 million families per year with the help of local government and private sector partners; and to shape a sustainable green revolution by reimagining our food systems and launching a campaign to plant one billion trees in the next decade. The One Acre Fund enables farmers to transform their work, which vitalizes their families, larger communities and countries. “Farmers stand at the center of the world,” Youn says.


Rebecca Firth is helping map the earth’s most vulnerable populations using a free, open-source software tool. She speaks at TED2020: Uncharted on June 18, 2020. (Photo courtesy of TED)

Rebecca Firth, director of partnerships and community at Humanitarian OpenStreetMap Team (HOT)

Big idea: A new tool to add one billion people to the map, so first responders and aid organizations can save lives. 

How? Today, more than one billion people are literally not on the map, says Rebecca Firth, director of partnerships and community at Humanitarian OpenStreetMap Team (HOT), an organization that helps map the earth’s most vulnerable populations using a free, open-source software tool. The tool works in two stages: first, anyone anywhere can map buildings and roads using satellite images, then local community members fill in the map by identifying structures and adding place names. HOT’s maps help organizations on the ground save lives; they’ve been used by first responders after Haiti’s 2010 earthquake and in Puerto Rico after Hurricane Maria, by health care workers distributing polio vaccines in Nigeria and by refugee aid organizations in South Sudan, Syria and Venezuela. Now, HOT’s goal is to map areas in 94 countries that are home to one billion of the world’s most vulnerable populations — in just five years. To do this, they’re recruiting more than one million mapping volunteers, updating their tech and, importantly, raising awareness about the availability of their maps to local and international humanitarian organizations. “It’s about creating a foundation on which so many organizations will thrive,” Firth says. “With open, free, up-to-date maps, those programs will have more impact than they would otherwise, leading to a meaningful difference in lives saved or improved.”


“Our answer to COVID-19 — the despair and inequities plaguing our communities — is targeting neighborhoods with comprehensive services. We have certainly not lost hope, and we invite you to join us on the front lines of this war,” says Kwame Owusu-Kesse, COO of Harlem Children’s Zone. He speaks at TED2020: Uncharted on June 18, 2020. (Photo courtesy of TED)

Kwame Owusu-Kesse, COO, Harlem Children’s Zone

Big idea: In the midst of a pandemic that’s disproportionately devastating the Black community, how do we ensure that at-risk children can continue their education in a safe and healthy environment?

How? Kwame Owusu-Kesse understands that in order to surpass America’s racist economic, educational, health care and judicial institutions, a child must have a secure home and neighborhood. In the face of the coronavirus pandemic, Harlem Children’s Zone has taken on a comprehensive mission to provide uninterrupted, high-quality remote education, as well as food and financial security, unfettered online access and mental health services. Through these programs, Owusu-Kesse hopes to rescue a generation that risks losing months (or years) of education to the impacts of quarantine. “Our answer to COVID-19 — the despair and inequities plaguing our communities — is targeting neighborhoods with comprehensive services,” he says. “We have certainly not lost hope, and we invite you to join us on the front lines of this war.”

Sociological ImagesRacism & Hate Crimes in a Pandemic

Despite calls to physically distance, we are still seeing reports of racially motivated hate crimes in the media. Across the United States, anti-Asian discrimination is rearing its ugly head as people point fingers to blame the global pandemic on a distinct group rather than come to terms with underfunded healthcare systems and poorly-prepared governments.

Governments play a role in creating this rhetoric. Blaming racialized others for major societal problems is a feature of populist governments. One example is Donald Trump’s use of the phrase “Chinese virus” to describe COVID-19. Stirring up racialized resentment is a political tactic used to divert responsibility from the state by blaming racialized members of the community for a global virus. Anti-hate groups are increasingly concerned that the deliberate dehumanization of Asian populations by the President may also fuel hate as extremist groups take this opportunity to create social division.

Unfortunately, this is not new, and it is not limited to the United States. During the SARS outbreak there were similar instances of institutional racism inherent in Canadian political and social structures, as Chinese, Southeast Asian and East Asian Canadians felt isolated at work, in hospitals, and even on public transportation. As of late May, Vancouver, British Columbia had seen 29 reported anti-Asian hate crimes.

In this crisis, it is easier for governments to use racialized people as scapegoats than to admit to decisions to defund and underfund vital social and healthcare resources. By stoking racist sentiments already entrenched in the Global North, the Trump administration shifts the focus from the harm its policies will inevitably cause, such as its willingness to cut funding for the CDC and NIH.

Sociological research shows how these tactics become more widespread as politicians use social media to communicate directly with citizens, creating instantaneous, polarizing content. Research has also shown an association between hate crimes in the United States and anti-Muslim rhetoric expressed by Trump as early as 2015. Racist sentiments expressed by politicians have a major impact on the attitudes of the general population, because they excuse and even promote blame toward racialized people, while absolving blame from governments that have failed to act on social issues.

Kayla Preston is an incoming PhD student in the department of sociology at the University of Toronto. Her research centers on right-wing extremism and deradicalization.


(View original at https://thesocietypages.org/socimages)

CryptogramNation-State Espionage Campaigns against Middle East Defense Contractors

Report on espionage attacks using LinkedIn as a vector for malware, with details and screenshots. They talk about "several hints suggesting a possible link" to the Lazarus group (aka North Korea), but that's by no means definite.

As part of the initial compromise phase, the Operation In(ter)ception attackers had created fake LinkedIn accounts posing as HR representatives of well-known companies in the aerospace and defense industries. In our investigation, we've seen profiles impersonating Collins Aerospace (formerly Rockwell Collins) and General Dynamics, both major US corporations in the field.

Detailed report.

Worse Than FailureClassic WTF: The Developmestuction Environment

We continue to enjoy a brief respite from mining horrible code and terrible workplaces. This classic includes this line: "It requires that… Adobe InDesign is installed on the web server." Original --Remy

Have you ever thought what it would take for you to leave a new job after only a few days? Here's a fun story from my colleague Jake Vinson, whose co-worker of three days would have strongly answered "this."

One of the nice things about externalizing connection strings is that it's easy to duplicate a database, duplicate the application's files, change the connection string to point to the new database, and bam, you've got a test environment.

From the programmer made famous by tblCalendar and the query string parameter admin=false comes what I think is the most creatively stupid implementation of a test environment ever.

We needed a way to show changes during development of an e-commerce web site to our client. Did our programmer follow a normal method like the one listed above? It might surprise you to find out that no, he didn't.

Instead, we get this weird mutant development/test/production environment (or "developmestuction," as I call it) for not only the database, but the ASP pages as well. My challenge to you, dear readers, is to identify a single step along the way that could've been done worse. I look forward to reading the comments on this post.

Take, for example, a page called productDetail.asp. Which is the test page? Why, productDetail2.asp, of course! Or perhaps test_productDetail.asp. Or perhaps if another change branched off of that, test_productDetail2.asp, or productDetail2_t.asp, or t_pD2.asp.

And the development page? productDetaildev.asp. When the changes were approved, instead of turning t_pD2.asp to productDetail.asp, the old filenames were kept, along with the old files. That means that all links to productDetail.asp became links to t_pD2.asp once they were approved. Except in places that were forgotten, or not included in the sitewide search-and-replace.

As you may've guessed, there are next to no comments, aside from the occasional ...

' Set strS2f to false
strS2f = false

Note: I've never seen more than one comment on any of the pages.

Additional note: I've never seen more than zero comments that are even remotely helpful on any of the pages

OK, so the file structure is next to impossible to understand, especially to the poor new developer we'd hired who just stopped showing up to work after 3 days on this particular project.

What was it that drove him over the edge? Well, if the arrangement of the pages wasn't enough, the database used the same conventions (as in, randomly using test_tblProducts, or tblTestProducts, tblTestProductsFinal, or all of them). Of course, what if we need a test column, rather than a whole table? Flip a coin, if it's heads, create another tblTestWhatever, or if it's tails, just add a column to the production table called test_ItemID. Oh, and there's another two copies of the complete test database, which may or may not be used.

You think I'm done, right? Wrong. All of these inconsistencies in naming are handled individually, in code on the ASP pages. For the table names that follow some remote semblance of a consistent naming convention, the table name is stored in an unencrypted cookie, which is read into dynamic SQL queries. I imagine it goes without saying that all SQL queries were dynamic with no attempts to validate the data or replace quotes.

Having personally seen this system, I want to share a fun little fact about it. It requires that, among a handful of other desktop applications, Adobe InDesign is installed on the web server. I'll leave it to your imagination what it could possibly need that for, and how frequently it opens and closes it.

Planet DebianRussell Coker: Squirrelmail vs Roundcube

For some years I’ve had SquirrelMail running on one of my servers for the people who like such things. It seems that upstream support for SquirrelMail has ended (according to the SquirrelMail Wikipedia page there will be no new releases, just Subversion updates to fix bugs). One problem with SquirrelMail that seems unlikely to get fixed is the lack of support for base64-encoded From and Subject fields, which are becoming increasingly popular as people whose names don’t fit in US-ASCII encode them in their preferred manner.

I’ve recently installed Roundcube to provide an alternative. Of course one of the few important users of webmail didn’t like it (apparently it doesn’t display well on a recent Samsung Galaxy Note), so now I have to support two webmail systems.

Below is a little Perl script to convert a SquirrelMail abook file into the CSV format used for importing a Roundcube contact list.

#!/usr/bin/perl
# Convert a SquirrelMail address book to Roundcube's import CSV.
# Usage: abook2csv.pl < user.abook > contacts.csv
# SquirrelMail .abook lines are pipe-separated:
#   nickname|first name|last name|e-mail|additional info

print "First Name,Last Name,Display Name,E-mail Address\n";
while(<STDIN>)
{
  chomp;
  my @fields = split(/\|/, $_);
  # The Display Name column is built from the nickname and info fields.
  printf("%s,%s,%s %s,%s\n", $fields[1], $fields[2], $fields[0], $fields[4], $fields[3]);
}

,

Planet DebianDirk Eddelbuettel: #27: R and CRAN Binaries for Ubuntu

Welcome to the 27th post in the rationally regularized R revelations series, or R4 for short. This is an edited/updated version of yesterday’s T^4 post #7, as it fits the R^4 series just as well as the T^4 series.

The new video in our T^4 series of video lightning talks with tips, tricks, tools, and toys doubles as a video in the R^4 series, as it revisits a topic previously covered there: how to (more easily) get (binary) packages onto your Ubuntu system. In fact, we show it in three different ways.

The slides are here.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

Thanks to Iñaki Ucar, who followed up on Twitter with a Fedora version. We exchanged some more messages and concluded that a complete setup (from an empty Ubuntu or Fedora container to a full system) takes about the same time on either distribution.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianAndrew Cater: And a nice quick plug - Apple WWDC keynote

Demonstrating virtualisation on the new Apple machines to be powered by Apple silicon and ARM. Parallels virtualisation: Linux gets mentioned - two clear shots, one of Debian 9 and one of Debian 10. That's good publicity, any which way you cut it, with a worldwide tech audience hanging on every word from the presenters.

Now - developer hardware is due out this week - an Apple A12Z chip inside a Mac mini enclosure. Can we run Debian natively there?


Planet DebianEvgeni Golov: mass-migrating modules inside an Ansible Collection

In the Foreman project, we've been maintaining a collection of Ansible modules to manage Foreman installations since 2017 - that is, two years before Ansible had the concept of collections at all.

For that you had to set library (and later module_utils and doc_fragment_plugins) in ansible.cfg and effectively inject our modules, their helpers and documentation fragments into the main Ansible namespace. Not the cleanest solution, but it worked quite well for us.

When Ansible started introducing Collections, we quickly joined, as the idea of namespaced, easily distributable and usable content units was great and exactly matched what we had in mind.

However, collections are only usable from Ansible 2.8 onwards - or really 2.9: 2.8 can consume them, but the tooling for building and installing them is lacking. Because of that we'd been keeping our modules usable outside of a collection.

Until recently, when we decided it was time to move on, drop that compatibility (which had cost us a few headaches over time) and release a shiny 1.0.0.

One of the changes we wanted for 1.0.0 was renaming a few modules. Historically we had the module names prefixed with foreman_ and katello_, depending on whether they were designed to work with Foreman (and plugins) or Katello (which is technically a Foreman plugin, but has a way more complicated deployment and currently can't be easily added to an existing Foreman setup). This made sense as long as we were injecting into the main Ansible namespace, but with collections the names became theforeman.foreman.foreman_<something>, and while we all love Foreman, that was a bit too much. So we wanted to drop that prefix. And while at it, also change some other names (like ptable, which became partition_table) to be more readable.

But how? There is no tooling that would rename all files accordingly, adjust examples and tests. Well, bash to the rescue! I'm usually not a big fan of bash scripts, but renaming files, searching and replacing strings? That perfectly fits!

First of all we need a way to map the old name to the new name. In most cases it's just "drop the prefix"; for the others you can have some if/elif/fi:

prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')
if [[ ${old_name} == 'foreman_environment' ]]; then
  new_name='puppet_environment'
elif [[ ${old_name} == 'katello_sync' ]]; then
  new_name='repository_sync'
elif [[ ${old_name} == 'katello_upload' ]]; then
  new_name='content_upload'
elif [[ ${old_name} == 'foreman_ptable' ]]; then
  new_name='partition_table'
elif [[ ${old_name} == 'foreman_search_facts' ]]; then
  new_name='resource_info'
elif [[ ${old_name} == 'katello_manifest' ]]; then
  new_name='subscription_manifest'
elif [[ ${old_name} == 'foreman_model' ]]; then
  new_name='hardware_model'
else
  new_name=${prefixless_name}
fi

That defined, we need to actually have a ${old_name}. Well, that's a for loop over the modules, right?

for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  …
done

While we're looping over files, let's rename them and all the files that are associated with the module:

# rename the module
git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py
# rename the tests and test fixtures
git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
  git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
done

Now comes the really tricky part: search and replace. Let's see where we need to replace first:

  1. in the module file
     a. the module key of the DOCUMENTATION stanza (e.g. module: foreman_example)
     b. all examples (e.g. foreman_example: …)
  2. in all test playbooks (e.g. foreman_example: …)
  3. in pytest's conftest.py and other files related to test execution
  4. in the documentation
Which translates to the following sed calls (note the escaped backticks in the last one - unescaped, they would be run as a command substitution inside the double quotes):

sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py
sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py
sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md
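
To sanity-check the first of those rules, here's a tiny self-contained demo on a fabricated module file (the file name and contents are made up for illustration):

```shell
old_name=foreman_ptable
new_name=partition_table

# A minimal fake module file containing the two patterns the rule targets:
cat > /tmp/sample.py <<'EOF'
DOCUMENTATION = '''
module: foreman_ptable
'''
EXAMPLES = '''
- name: create a partition table
  foreman_ptable:
    name: example
'''
EOF

# The first search-and-replace rule from above:
sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" /tmp/sample.py

grep -c "partition_table" /tmp/sample.py   # prints 2
```

Both the module: key and the example task header get rewritten, while other mentions of the old name on non-matching lines would be left alone.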

You've probably noticed I used ${BASE} and ${TESTS} and never defined them… Lazy me.

But here is the full script, defining the variables and looping over all the modules.

#!/bin/bash
BASE=plugins/modules
TESTS=tests/test_playbooks
RUNTIME=meta/runtime.yml
echo "plugin_routing:" > ${RUNTIME}
echo "  modules:" >> ${RUNTIME}
for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')
  if [[ ${old_name} == 'foreman_environment' ]]; then
    new_name='puppet_environment'
  elif [[ ${old_name} == 'katello_sync' ]]; then
    new_name='repository_sync'
  elif [[ ${old_name} == 'katello_upload' ]]; then
    new_name='content_upload'
  elif [[ ${old_name} == 'foreman_ptable' ]]; then
    new_name='partition_table'
  elif [[ ${old_name} == 'foreman_search_facts' ]]; then
    new_name='resource_info'
  elif [[ ${old_name} == 'katello_manifest' ]]; then
    new_name='subscription_manifest'
  elif [[ ${old_name} == 'foreman_model' ]]; then
    new_name='hardware_model'
  else
    new_name=${prefixless_name}
  fi
  echo "renaming ${old_name} to ${new_name}"
  git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py
  git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
  git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
  for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
    git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
  done
  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py
  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
  sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py
  sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md
  echo "    ${old_name}:" >> ${RUNTIME}
  echo "      redirect: ${new_name}" >> ${RUNTIME}
  git commit -m "rename ${old_name} to ${new_name}" ${BASE} tests/ README.md docs/ ${RUNTIME}
done

As a bonus, the script will also generate a meta/runtime.yml which can be used by Ansible 2.10+ to automatically use the new module names if the playbook contains the old ones.
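Judging from the echo statements in the script, the generated meta/runtime.yml should look roughly like this (showing two of the renames from the table above):

```yaml
plugin_routing:
  modules:
    foreman_environment:
      redirect: puppet_environment
    katello_sync:
      redirect: repository_sync
```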

Oh, and yes, this is probably not the nicest script you'll read this year. Maybe not even today. But it got the job done nicely and I don't intend to need it again anyway.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 07)

Here’s part seven of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother whom Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

CryptogramIdentifying a Person Based on a Photo, LinkedIn and Etsy Profiles, and Other Internet Bread Crumbs

Interesting story of how the police can identify someone by following the evidence chain from website to website.

According to filings in Blumenthal's case, FBI agents had little more to go on when they started their investigation than the news helicopter footage of the woman setting the police car ablaze as it was broadcast live May 30.

It showed the woman, in flame-retardant gloves, grabbing a burning piece of a police barricade that had already been used to set one squad car on fire and tossing it into the police SUV parked nearby. Within seconds, that car was also engulfed in flames.

Investigators discovered other images depicting the same scene on Instagram and the video sharing website Vimeo. Those allowed agents to zoom in and identify a stylized tattoo of a peace sign on the woman's right forearm.

Scouring other images -- including a cache of roughly 500 photos of the Philly protest shared by an amateur photographer -- agents found shots of a woman with the same tattoo that gave a clear depiction of the slogan on her T-shirt.

[...]

That shirt, agents said, was found to have been sold only in one location: a shop on Etsy, the online marketplace for crafters, purveyors of custom-made clothing and jewelry, and other collectibles....

The top review on her page, dated just six days before the protest, was from a user identifying herself as "Xx Mv," who listed her location as Philadelphia and her username as "alleycatlore."

A Google search of that handle led agents to an account on Poshmark, the mobile fashion marketplace, with a user handle "lore-elisabeth." And subsequent searches for that name turned up Blumenthal's LinkedIn profile, where she identifies herself as a graduate of William Penn Charter School and several yoga and massage therapy training centers.

From there, they located Blumenthal's Jenkintown massage studio and its website, which featured videos demonstrating her at work. On her forearm, agents discovered, was the same distinctive tattoo that investigators first identified on the arsonist in the original TV video.

The obvious moral isn't a new one: don't have a distinctive tattoo. But more interesting is how different pieces of evidence can be strung together in order to identify someone. This particular chain was put together manually, but expect machine learning techniques to be able to do this sort of thing automatically -- and for organizations like the NSA to implement them on a broad scale.

Another article did a more detailed analysis, and concludes that the Etsy review was the linchpin.

Note to commenters: political commentary on the protesters or protests will be deleted. There are many other forums on the Internet to discuss that.

Planet DebianJunichi Uekawa: The situation is getting better in some ways because we are opening up in Tokyo.

The situation is getting better in some ways because we are opening up in Tokyo. However the fear remains the same: the fear of going out versus business as usual. I'm spending more time on my home network than ever before and I am learning more about ipv6 than before. I can probably explain MAP-E or dslite to you like I never could before.

Worse Than FailureClassic WTF: A Gassed Pump

Wow, it's summer. Already? We're taking a short break this week at TDWTF, and reaching back through the archives for some classic stories. If you've cancelled your road trip this year, make a vicarious stop at a filthy gas station with this old story. Original --Remy

“Staff augmentation,” was a fancy way of saying, “hey, contractors get more per hour, but we don’t have to provide benefits so they are cheaper,” but Stuart T was happy to get more per hour, and even happier to know that he’d be on to his next gig within a few months. That was how he ended up working for a national chain of gas-station/convenience stores. His job was to build a “new mobile experience for customer loyalty” (aka, wrapping their website up as an app that can also interact with QR codes).

At least, that’s what he was working on before Miranda stormed into his cube. “Stuart, we need your help. ProdTrack is down, and I can’t fix it, because I’ve got to be at a mandatory meeting in ten minutes.”

A close-up of a gas pump

ProdTrack was their inventory system that powered their point-of-sale terminals. For it to be down company wide was a big problem, and essentially rendered most of their stores inoperable. “Geeze, what mandatory meeting is more important than that?”

“The annual team-building exercise,” Miranda said, using a string of profanity for punctuation. “They’ve got a ‘no excuses’ policy, so I have to go, ‘or else’, but we also need to get this fixed.”

Miranda knew exactly what was wrong. ProdTrack could only support 14 product categories. But one store, store number 924, had decided that they needed 15. So they added a 15th category to the database, threw a few products into the category, and crossed their fingers. Now, all the stores were crashing.

“You’ll need to look at the StoreSQLUpdates and the StoreSQLUpdateStatements tables,” Miranda said. “And probably dig into the ProductDataPump.exe app. Just do a quick fix. We’re releasing an update that supports any number of categories in three weeks or so; we just need to hold this together till then.”

With that starting point, Stuart started digging in. First, he puzzled over the tables Miranda had mentioned. StoreSQLUpdates looked like this:

ID      STATEMENT                                    STATEMENT_ORDER
145938  DELETE FROM SupplierInfo                     90
148939  INSERT INTO ProductInfo VALUES(12348, 3, 6)  112

Was this an audit table? What was StoreSQLUpdateStatements then?

ID      STATEMENT
168597  INSERT INTO StoreSQLUpdates(statement, statement_order) VALUES (‘DELETE FROM SupplierInfo’, 90)
168598  INSERT INTO StoreSQLUpdates(statement, statement_order) VALUES (‘INSERT INTO ProductInfo VALUES(12348, 3, 6)’, 112)

Stuart stared at his screen, and started asking questions. Not questions about what he was looking at, but questions about the life choices that had brought him to this point, questions about whether it was really that bad an idea to start drinking at work, and questions about the true nature of madness: if the world was mad, and he was the only sane person left, didn’t that make him the most insane person of all?

He hoped the mandatory team building exercise was the worst experience of Miranda’s life, as he sent her a quick, “WTF?” email message. She obviously still had her cellphone handy, as she replied minutes later:

Oh, yeah, that’s for data-sync. Retail locations have flaky internet, and keep a local copy of the data. That’s what’s blowing up. Check ProductDataPump.exe.

Stuart did. ProductDataPump.exe was a VB.Net program in a single file, with one beautifully named method, RunIt, that contained nearly 2,000 lines of code. Some saintly soul had done him the favor of including a page of documentation at the top of the method, and it started with an apology, then explained the data flow.

Here’s what actually happened: a central database at corporate powered ProdTrack. When any data changed there, those changes got logged into StoreSQLUpdateStatements. A program called ProductDataShift.exe scanned that table, and when new rows appeared, it executed the statements in StoreSQLUpdateStatements (which placed the actual DML commands into StoreSQLUpdates).

Once an hour, ProductDataPump.exe would run. It would attempt to connect to each retail location. If it could, it would read the contents of the central StoreSQLUpdates and the local StoreSQLUpdates, sorting by the order column, and through a bit of faith and luck, would hopefully synchronize the two databases.

Buried in the 2,000 line method, at about line 1,751, was a block that actually executed the statements:

If bolUseSQL Then
    For Each sTmp As String In sProductsTableSQL
        sTmp = sTmp.Trim()
        If sTmp <> "" Then
            SQLUpdatesSQL(lngIDSQL, sTmp, dbQR5)
        End If
    Next sTmp
End If

Once he was done screaming at the insanity of the entire process, Stuart looked at the way product categories worked. Store 924 didn’t carry anything in the ALCOHOL category, due to state Blue Laws, but had added a PRODUCE category. None of the other stores had a PRODUCE category (if they carried any produce, they just put it in PREPARED_FOODS). Fixing the glitch that caused the application to crash when it had too many categories would take weeks at least, and Miranda already told him a fix was coming. All he had to do was keep it from crashing until then.

Into the StoreSQLUpdates table, he added a DELETE statement that would delete every category that contained zero items. That would fix the immediate problem, but when the ProductDataPump.exe ran, it would just copy the broken categories back around. So Stuart patched the program with the worst fix he ever came up with.

If bolUseSQL Then
    For Each sTmp As String In sProductsTableSQL
        sTmp = sTmp.Trim()
        If sTmp <> "" Then
            If nStoreNumber = 924 And sTmp.Contains("ALCOHOL") Then
                Continue For
            ElseIf nStoreNumber <> 924 And sTmp.Contains("PRODUCE") Then
                Continue For
            Else
                SQLUpdatesSQL(lngIDSQL, sTmp, dbQR5)
            End If
        End If
    Next sTmp
End If

Krebs on Security‘BlueLeaks’ Exposes Files from Hundreds of Police Departments

Hundreds of thousands of potentially sensitive files from police departments across the United States were leaked online last week. The collection, dubbed “BlueLeaks” and made searchable online, stems from a security breach at a Texas web design and hosting company that maintains a number of state law enforcement data-sharing portals.

The collection — nearly 270 gigabytes in total — is the latest release from Distributed Denial of Secrets (DDoSecrets), an alternative to Wikileaks that publishes caches of previously secret data.

A partial screenshot of the BlueLeaks data cache.

In a post on Twitter, DDoSecrets said the BlueLeaks archive indexes “ten years of data from over 200 police departments, fusion centers and other law enforcement training and support resources,” and that “among the hundreds of thousands of documents are police and FBI reports, bulletins, guides and more.”

Fusion centers are state-owned and operated entities that gather and disseminate law enforcement and public safety information between state, local, tribal and territorial, federal and private sector partners.

KrebsOnSecurity obtained an internal June 20 analysis by the National Fusion Center Association (NFCA), which confirmed the validity of the leaked data. The NFCA alert noted that the dates of the files in the leak actually span nearly 24 years — from August 1996 through June 19, 2020 — and that the documents include names, email addresses, phone numbers, PDF documents, images, and a large number of text, video, CSV and ZIP files.

“Additionally, the data dump contains emails and associated attachments,” the alert reads. “Our initial analysis revealed that some of these files contain highly sensitive information such as ACH routing numbers, international bank account numbers (IBANs), and other financial data as well as personally identifiable information (PII) and images of suspects listed in Requests for Information (RFIs) and other law enforcement and government agency reports.”

The NFCA said it appears the data published by BlueLeaks was taken after a security breach at Netsential, a Houston-based web development firm.

“Preliminary analysis of the data contained in this leak suggests that Netsential, a web services company used by multiple fusion centers, law enforcement, and other government agencies across the United States, was the source of the compromise,” the NFCA wrote. “Netsential confirmed that this compromise was likely the result of a threat actor who leveraged a compromised Netsential customer user account and the web platform’s upload feature to introduce malicious content, allowing for the exfiltration of other Netsential customer data.”

Reached via phone Sunday evening, Netsential Director Stephen Gartrell declined to comment for this story.

The NFCA said a variety of cyber threat actors, including nation-states, hacktivists, and financially-motivated cybercriminals, might seek to exploit the data exposed in this breach to target fusion centers and associated agencies and their personnel in various cyber attacks and campaigns.

The BlueLeaks data set was released June 19, also known as “Juneteenth,” the oldest nationally celebrated commemoration of the ending of slavery in the United States. This year’s observance of the date has generated renewed public interest in the wake of widespread protests against police brutality and the filmed killing of George Floyd at the hands of Minneapolis police.

Stewart Baker, an attorney at the Washington, D.C. office of Steptoe & Johnson LLP and a former assistant secretary of policy at the U.S. Department of Homeland Security, said the BlueLeaks data is unlikely to shed much light on police misconduct, but could expose sensitive law enforcement investigations and even endanger lives.

“With this volume of material, there are bound to be compromises of sensitive operations and maybe even human sources or undercover police, so I fear it will put lives at risk,” Baker said. “Every organized crime operation in the country will likely have searched for their own names before law enforcement knows what’s in the files, so the damage could be done quickly. I’d also be surprised if the files produce much scandal or evidence of police misconduct. That’s not the kind of work the fusion centers do.”

,

Planet Linux AustraliaDavid Rowe: Open IP over VHF/UHF

I’ve recently done a lot of work on the Codec 2 FSK modem, here is the new README_fsk. It now works at lower SNRs, has been refactored, and is supported by a suite of automated tests.

There is some exciting work going on with Codec 2 modems and VHF/UHF IP links using TAP/TUN (thanks Tomas and Jeroen) – a Linux technology for building IP links from user space “data pumps” – like the Codec 2 modems.

My initial goal for this work is a “100 kbit/s IP link” for VHF/UHF using Codec 2 modems and SDR. One application is moving Ham Radio data past the 1995-era “9600 bits/s data port” paradigm to real time IP.

I’m also interested in IP over TV Whitespace (spare VHF/UHF spectrum) for emergency and developing world applications. I’m a judge for developing world IT grants and the “last 100km” problem comes up again and again. This solution requires just a Raspberry Pi and RTLSDR. At these frequencies antennas could be simply fabricated from wire (cut for the frequency of operation), and soldered directly to the Pi.

Results and Link Budget

As a first step, I’ve taken another look at using RpiTx for FSK, this time at VHF and starting at a modest 10 kbits/s. Over the weekend I performed some Minimum Detectable Signal (MDS) tests and confirmed the 2FSK modem built with RpiTx, a RTL-SDR, and the Codec 2 modem is right on theory at 10 kbits/s, with a MDS of -120dBm.

Putting this in context, a UHF signal has a path loss of 125dB over 100km. So if you have a line of sight path, a 10mW (10dBm) signal will be 10-125 = -115dBm at your receiver (assuming basic 0dBi antennas). As -115dBm is greater than the -120dBm MDS, this means your data will be received error free (especially when we add forward error correction). We have sufficient “link margin” and the “link” is closed.

Our 10 kbits/s starting point doesn’t sound like much, but even at that rate we get to send 10000*3600*24/8/140 = 771,000 140 byte text messages each day to another station on your horizon. That’s a lot of connectivity in an emergency or when the alternative where you live is nothing.
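These back-of-envelope figures can be checked with a few lines of shell arithmetic (all numbers are taken from the paragraphs above):

```shell
TX_DBM=10           # 10 mW transmit power
PATH_LOSS_DB=125    # UHF path loss over 100 km, line of sight
MDS_DBM=-120        # measured minimum detectable signal
RX_DBM=$((TX_DBM - PATH_LOSS_DB))                # received signal level
MARGIN_DB=$((RX_DBM - MDS_DBM))                  # link margin
MSGS_PER_DAY=$((10000 * 3600 * 24 / 8 / 140))    # 140-byte messages per day at 10 kbit/s
echo "Rx ${RX_DBM} dBm, margin ${MARGIN_DB} dB, ${MSGS_PER_DAY} messages/day"
```

This prints an Rx level of -115 dBm and a 5 dB margin, matching the link budget argument above.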

Method

I’m using the GitHub PR as a logbook for the work; I quite like GitHub and Markdown. This weekend’s MDS experiments start here.

I had the usual fun and games with attenuating the Rx signal from the Pi down to -120dBm. The transmit signal tries hard to leak around the attenuators via an RF path. I moved the unshielded Pi into another room, and built a “plastic bag and aluminium foil” Faraday cage which worked really well:


These are complex systems and many things can go wrong. Are your Tx/Rx sample clocks close enough? Is your rx signal clipping? Is the gain of your radio sufficient to reduce quantisation noise? Bug in your modem code? DC line in your RTLSDR signal? Loose SMA connector?

I’ve learnt the hard way to test very carefully at each step. First, I run off air samples through a non-real time modem Octave simulation to visualise what’s going on inside the modem. A software oscilloscope.

An Over the Cable (OTC) test is essential before trying Over the Air (OTA), as it gives you a controlled environment to spot issues. MDS tests that measure the Bit Error Rate (BER) are also excellent: they effectively absorb every factor in the system and give you an overall score you can compare to theory.

Spectral Purity

Here is the spectrum of the FSK signal for a …01010… sequence at 100 kbit/s, at two resolution bandwidths:

The Tx power is about 10dBm, this plot is after some attenuation. I haven’t carefully checked the spurious levels, but the above looks like around -40dBc (off a low 10mW EIRP) over this 1MHz span. If I am reading the Australian regulations correctly (Section 7A of the Amateur LCD) the requirement is 43+10log(P) = 43+10log10(0.01) = 23dBc, so we appear to pass.
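The quoted limit is easy to verify. Awk has no log10, so this sketch uses natural logs, with P = 0.01 W for the 10 dBm transmitter:

```shell
# 43 + 10*log10(P) with P in watts; log10(x) = log(x)/log(10)
awk 'BEGIN { printf "%.0f dBc\n", 43 + 10 * log(0.01) / log(10) }'
```

which prints 23 dBc, matching the figure above.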

Conclusion

This is “extreme open source”. The transmitter is software, the modem is software. All open source and free as in beer and speech. No chipsets or application specific radio hardware – just some CPU cycles and a down converter supplied by the Pi and RTLSDR. The only limits are those of physics – which we have reached with the MDS tests.

Reading Further

FSK modem support for TAP/TUN – GitHub PR for this work
Testing a RTL-SDR with FSK on HF
High Speed Balloon Data Links
Codec 2 FSK modem README – includes lots of links and sample applications.

Planet DebianEnrico Zini: Forced italianisation of Südtirol

Italianization (Italian: Italianizzazione; Croatian: talijanizacija; Slovene: poitaljančevanje; German: Italianisierung; Greek: Ιταλοποίηση) is the spread of Italian culture and language, either by integration or assimilation.[1][2]
In 1919, at the time of its annexation, the middle part of the County of Tyrol which is today called South Tyrol (in Italian Alto Adige) was inhabited by almost 90% German speakers.[1] Under the 1939 South Tyrol Option Agreement, Adolf Hitler and Benito Mussolini determined the status of the German and Ladin (Rhaeto-Romanic) ethnic groups living in the region. They could emigrate to Germany, or stay in Italy and accept their complete Italianization. As a consequence of this, the society of South Tyrol was deeply riven. Those who wanted to stay, the so-called Dableiber, were condemned as traitors while those who left (Optanten) were defamed as Nazis. Because of the outbreak of World War II, this agreement was never fully implemented. Illegal Katakombenschulen ("Catacomb schools") were set up to teach children the German language.
The Prontuario dei nomi locali dell'Alto Adige (Italian for Reference Work of Place Names of Alto Adige) is a list of Italianized toponyms for mostly German place names in South Tyrol (Alto Adige in Italian) which was published in 1916 by the Royal Italian Geographic Society (Reale Società Geografica Italiana). The list was called the Prontuario in short and later formed an important part of the Italianization campaign initiated by the fascist regime, as it became the basis for the official place and district names in the Italian-annexed southern part of the County of Tyrol.
Ettore Tolomei (16 August 1865, in Rovereto – 25 May 1952, in Rome) was an Italian nationalist and fascist. He was designated a Member of the Italian Senate in 1923, and ennobled as Conte della Vetta in 1937.
The South Tyrol Option Agreement (German: Option in Südtirol; Italian: Opzioni in Alto Adige) was an agreement in effect between 1939 and 1943, when the native German speaking people in South Tyrol and three communes in the province of Belluno were given the option of either emigrating to neighboring Nazi Germany (of which Austria was a part after the 1938 Anschluss) or remaining in Fascist Italy and being forcibly integrated into the mainstream Italian culture, losing their language and cultural heritage. Over 80% opted to move to Germany.

Planet DebianDirk Eddelbuettel: T^4 #7 and R^4 #5: R and CRAN Binaries for Ubuntu

A new video in both our T^4 series of video lightning talks with tips, tricks, tools, and toys is also a video in the R^4 series as it revisits a topic previously covered in the latter: how to (more easily) get (binary) packages onto your Ubuntu system. In fact, we show it in three different ways.

The slides are here.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDaniel Lange: Upgrading Limesurvey with (near) zero downtime

Limesurvey is an online survey tool. It is very powerful and commonly used in academic environments because it is Free Software (GPLv2+) and allows for local installations, protecting the data of participants and making it possible to comply with data protection regulations. This also means there are typically no load-balanced multi-server scenarios with HA databases, but simple VMs where Limesurvey runs and needs upgrading in place.

There's an LTS branch (currently 3.x) and a stable branch (currently 4.x). There's also a 2.06 LTS branch that is restricted to paying customers. The main developers behind Limesurvey offer many services from template design to custom development to support to hosting ("Cloud", "Limesurvey Pro"). Unfortunately they also charge for easy updates called "ComfortUpdate" (currently 39€ for three months) and the manual process is made a bit cumbersome to make the "ComfortUpdate" offer more attractive.

Due to Limesurvey being an old code base and UI elements not being clearly separated, most serious use cases will end up patching files and symlinking logos around template directories. That conflicts a bit with the opaque "ComfortUpdate" process where you push a button and then magic happens. Or you have downtime and a recovery case while surveys are running.

If you do not intend to use the "ComfortUpdate" offering, you can prevent Limesurvey from connecting to http://comfortupdate.limesurvey.org daily by adding the updatable stanza as in line 14 to limesurvey/application/config/config.php:

  1. return array(
  2.  [...]
  3.          // Use the following config variable to set modified optional settings copied from config-defaults.php
  4.         'config'=>array(
  5.         // debug: Set this to 1 if you are looking for errors. If you still get no errors after enabling this
  6.         // then please check your error-logs - either in your hosting provider admin panel or in some /logs directory
  7.         // on your webspace.
  8.         // LimeSurvey developers: Set this to 2 to additionally display STRICT PHP error messages and get full access to standard templates
  9.                 'debug'=>0,
  10.                 'debugsql'=>0, // Set this to 1 to enable sql logging, only active when debug = 2
  11.                 // Mysql database engine (INNODB|MYISAM):
  12.                  'mysqlEngine' => 'MYISAM'
  13. ,               // Update default LimeSurvey config here
  14.                 'updatable' => false,
  15.         )
  16. );

The comma on line 13 is placed like that in the current default limesurvey config.php, don't let yourself get confused. In PHP, every array item may be followed by a comma, even the last one, and that comma can sit on the next line.

The basic principle of low risk, near-zero downtime, in-place upgrades is:

  1. Create a diff between the current release and the target release
  2. Inspect the diff
  3. Make backups of the application webroot
  4. Patch a copy of the application in-place
  5. (optional) stop the web server
  6. Make a backup of the production database
  7. Move the patched application to the production webroot
  8. (if 5) Start the webserver
  9. Upgrade the database (if needed)
  10. Check the application
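As a minimal sketch of steps 1, 3 and 4, here is a toy run with two fake release trees; the directory names and version strings are invented stand-ins for two real unpacked Limesurvey release tarballs:

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"
mkdir -p old/app new/app copy
echo "version=3.22" > old/app/version.php   # stand-in for the current release
echo "version=3.23" > new/app/version.php   # stand-in for the target release
# step 1: create the diff (diff exits non-zero when the trees differ, hence || true)
diff -ruN old/ new/ > upgrade.patch || true
# step 3: work on a copy of the "production" webroot, keeping the original as backup
cp -a old/. copy/
# step 4: patch the copy in place
patch -d copy -p1 < upgrade.patch
cat copy/app/version.php
```

The same pattern scales to a real tree: diff the two unpacked releases, inspect the patch, and apply it to a copy of the webroot before swapping it into place.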

So, in detail:

Continue reading "Upgrading Limesurvey with (near) zero downtime"

Planet DebianMolly de Blanc: Fire

The world is on fire.

I know many of you are either my parents friends or here for the free software thoughts, but rather than listen to me, I want you to listed to Black voices in these fields.

If you’re not Black, it’s your job to educate yourself on issues that affect Black people all over the world and the way systems are designed to benefit White Supremacy. It is our job to acknowledge that racism is a problem — whether it appears as White Supremacy, Colonialism, or something as seemingly banal as pay gaps.

We must make space for Black voices. We must make space for Black Women. We must make space for Black trans lives. We must do this in technology. We must build equity. We must listen.

I know I have a platform. It’s one I value highly because I’ve worked so hard for the past twelve years to build it against sexist attitudes in my field (and the world). However, it is time for me to use that platform for the voices of others.

Please pay attention to Black voices in tech and FOSS. Do not just expect them to explain diversity and inclusion, but go to them for their expertise. Pay them respectful salaries. Mentor Black youth and Black people who want to be involved. Volunteer or donate to groups like Black Girls Code, The Last Mile, and Resilient Coders.

If you’re looking for mentorship, especially around things like writing, speaking, community organizing, or getting your career going in open source, don’t hesitate to reach out to me. Mentorship can be a lasting relationship, or a brief exchange around a specific issue of event. If I can’t help you, I’ll try to connect you to someone who can.

We cannot build the techno-utopia unless everyone is involved.

Planet DebianDirk Eddelbuettel: RcppGSL 0.3.8: More fixes and polish

Release 0.3.8 of RcppGSL is now getting onto CRAN. The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

Peter Carbonetto let us know in issue #25 that the included example now showed linker errors on (everybody’s favourite CRAN platform) Slowlaris. Kidding aside, the added compiler variety really has benefits because we were indeed missing a good handful or two of inline statements in the headers—which our good friends g++ and clang++ apparently let us get away with. This has been fixed, and a little bit of the usual package polish and cleanup has been added; see the list of detailed changes below.

Changes in version 0.3.8 (2020-06-21)

  • A few missing inline statements were added to the headers fixing a (genuine) error that was seen only on Solaris (Dirk).

  • The nice colNorm example is now in a file by itself, the previous versions are off in a new file colNorm_old.cpp (Dirk).

  • The README.md now sports two new badges (Dirk).

  • Travis CI was updated to 'bionic' and R 4.0 (Dirk).

Special thanks also to CRAN for a super-smooth and fully automated processing of a package with both compiled code and two handfuls of reverse dependencies.

Courtesy of CRANberries, a summary of changes to the most recent release is also available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

TEDConversations on the future of vaccines, tech, government and art: Week 5 of TED2020

Week 5 of TED2020 featured wide-ranging discussions on the quest for a coronavirus vaccine, the future of the art world, what it’s like to lead a country during a pandemic and much more. Below, a recap of insights shared.

Jerome Kim, Director General of the International Vaccine Institute, shares an update on the quest for a coronavirus vaccine in conversation with TED science curator David Biello at TED2020: Uncharted on June 15, 2020. (Photo courtesy of TED)

Jerome Kim, Director General of the International Vaccine Institute

Big idea: There’s a lot of work still to be done, but the world is making progress on developing a COVID-19 vaccine. 

How? A normal vaccine takes five to 10 years to develop and costs about a billion dollars, with a failure rate of 93 percent. Under the pressure of the coronavirus pandemic, however, we’re being asked to speed things up to within a window of 12 to 18 months, says Jerome Kim. How are things going? He updates us on the varied field of vaccine candidates and approaches, from Moderna’s mRNA vaccine to AstraZeneca’s vectored vaccine to whole inactivated vaccines, and how these companies are innovating to develop and manufacture their products in record time. In addition to the challenge of making a sufficient amount of a safe, effective vaccine (at the right price), Kim says we must think about how to distribute it for the whole world — not just rich nations. The question of equity and access is the toughest one of all, he says, but the answer will ultimately lead us out of this pandemic.


Bioethicist Nir Eyal discusses the mechanism and ethics of human challenge trials in vaccine development with head of TED Chris Anderson at TED2020: Uncharted on June 15, 2020. (Photo courtesy of TED)

Nir Eyal, Bioethicist

Big idea: Testing vaccine efficacy is normally a slow, years-long process, but we can ethically accelerate COVID-19 vaccine development through human challenge trials.

How? Thousands of people continue to die every day from COVID-19 across the globe, and we risk greater death and displacement if we rely on conventional vaccine trials, says bioethicist Nir Eyal. While typical trials observe experimental and control groups over time until they see meaningful differences between the two, Eyal proposes using human challenge trials in our search for a vaccine — an approach that deliberately exposes test groups to the virus in order to quickly determine efficacy. Human challenge trials might sound ethically ambiguous or even immoral, but Eyal suggests the opposite is true. Patients already take informed risks by participating in drug trials and live organ donations; if we look at statistical risk and use the right bioethical framework, we can potentially hasten vaccine development while maintaining tolerable risks. The key, says Eyal, is the selection criteria: by selecting young participants who are free from risk factors like hypertension, for example, the search for a timely solution to this pandemic is possible. “The dramatic number of people who could be aided by a faster method of testing vaccines matters,” he says. “It’s not the case that we are violating the rights of individuals to maximize utility. We are both maximizing utility and respecting rights, and this marriage is very compelling in defending the use of these accelerated [vaccine trial] designs.”


“What is characteristic of our people is the will to overcome the past and to move forward. Poverty is real. Inequality is real. But we also have a very determined population that embraces the notion of the Republic and the notion of citizenship,” says Ashraf Ghani, president of Afghanistan. He speaks with head of TED Chris Anderson at TED2020: Uncharted on June 16, 2020. (Photo courtesy of TED)

Ashraf Ghani, President of Afghanistan

Big Idea: Peacemaking is a discipline that must be practiced daily, both in life and politics. 

How? Having initiated sweeping economic, trade and social reforms, Afghanistan president Ashraf Ghani shares key facets of peacemaking that he relies on to navigate politically sensitive relationships and the ongoing health crisis: mutual respect, listening and humanity. Giving us a glimpse of Afghanistan that goes beyond the impoverished, war-torn image painted in the media, he describes the aspirations, entrepreneurship and industry that’s very much alive there, especially in its youth and across all genders. “What I hear from all walks of life, men and women, girls and boys, [is] a quest for normalcy. We’re striving to be normal. It’s not we who are abnormal; it’s the circumstances in which we’ve been caught. And we are attempting to carve a way forward to overcome the types of turbulence that, in interaction with each other, provide an environment of continuous uncertainty. Our goal is to overcome this, and I think with the will of the people, we will be able to,” he says. President Ghani also shares perspective on Afghanistan’s relationship to China, the Taliban and Pakistan — expressing a commitment to his people and long term peace that fuels every conversation. “The ultimate goal is a sovereign, democratic, united Afghanistan at peace with itself in the world,” he says. 


“How do we make it so that if you’re having a conversation with someone and you have to be separated by thousands of miles, it feels as close to face-to-face?” asks Will Cathcart, head of WhatsApp. He speaks with head of TED Chris Anderson at TED2020: Uncharted on June 16, 2020. (Photo courtesy of TED)

Will Cathcart, head of WhatsApp

Big idea: Tech platforms have a responsibility to provide privacy and security to users.

Why? On WhatsApp, two billion users around the world send more than 100 billion messages every day. All of them are protected by end-to-end encryption, which means that the conversations aren’t stored and no one can access them — not governments, companies or even WhatsApp itself. Due to the COVID-19 pandemic, more and more of our conversations with family, friends and coworkers have to occur through digital means. This level of privacy is a fundamental right that has never been more important, says Cathcart. To ensure their encryption services aren’t misused to promote misinformation or conduct crime, WhatsApp has developed tools and protocols that keep users safe without disrupting the privacy of all of its users. “It’s so important that we match the security and privacy you have in-person, and not say, ‘This digital world is totally different: we should change all the ways human beings communicate and completely upend the rules.’ No, we should try to match that as best we can, because there’s something magical about people talking to each other privately.”


“Museums are among the few truly public democratic spaces for people to come together. We’re places of inspiration and learning, and we help expand empathy and moral thinking. We are places for difficult and courageous conversations. I believe we can, and must be, places in real service of community,” says Anne Pasternak, director of the Brooklyn Museum. She speaks with TED design curator Chee Pearlman at TED2020: Uncharted on June 17, 2020. (Photo courtesy of TED)

Anne Pasternak, Director of the Brooklyn Museum

Big idea: We need the arts to be able to document and reflect on what we’re living through, express our pain and joy and imagine a better future.

How? Museums are vital community institutions that reflect the memories, knowledge and dreams of a society. Located in a borough of more than 2.5 million people, the Brooklyn Museum is one of the largest and most influential museums in the world, and it serves a community that has been devastated by the COVID-19 pandemic. Pasternak calls on museums to take a leading role in manifesting community visions of a better world. In a time defined by dramatic turmoil and global suffering, artists will help ignite the radical imagination that leads to cultural, political and social change, she says. Museums also have a responsibility to uplift a wide variety of narratives, taking special care to highlight communities who have historically been erased from societal remembrance and artmaking. The world has been irreversibly changed and devastated by the pandemic. It’s time to look to art as a medium of collective memorializing, mourning, healing and transformation.


“Art changes minds, shifts mentalities, changes the behavior of people and the way they think and how they feel,” says Honor Hager. She speaks with TED current affairs curator Whitney Pennington Rodgers at TED2020: Uncharted on June 17, 2020. (Photo courtesy of TED)

Honor Hager, Executive Director of the ArtScience Museum

Big Idea: Cultural institutions can care for their communities by listening to and amplifying marginalized voices.

How: The doors of Singapore’s famed ArtScience Museum building are closed — but online, the museum is engaging with its community more deeply than ever. Executive director Honor Hager shares how the museum has moved online with ArtScience at Home, a program offering online talks, streamed performances and family workshops addressing COVID-19 and our future. Reflecting on the original meaning of “curator” (from the Latin curare, or “to care”), Hager shares how ArtScience at Home aims to care for its community by listening to underrepresented groups. The program seeks out marginalized voices and provides a global platform for them to tell their own stories, unmediated and unedited, she says. Notably, the program included a screening of Salary Day by Ramasamy Madhavan, the first film made by a migrant worker in Singapore. The programming will have long-lasting effects on the museum’s curation in the future and on its international audience, Hager says. “Art changes minds, shifts mentalities, changes the behavior of people and the way they think and how they feel,” she says. “We are seeing the power of culture and art to both heal and facilitate dramatic change.”

,

Planet DebianDima Kogan: OpenCV C API transition. A rant.

I just went through a debugging exercise that was so ridiculous, I just had to write it up. Some of this probably should go into a bug report instead of a rant, but I'm tired. And clearly I don't care anymore.

OK, so I'm doing computer vision work. OpenCV has been providing basic functions in this area, so I have been using them for a while. Just for really, really basic stuff, like projection. The C API was kinda weird, and their error handling is a bit ridiculous (if you give it arguments it doesn't like, it asserts!), but it has been working fine for a while.

At some point (around OpenCV 3.0) somebody over there decided that they didn't like their C API, and that this was now a C++ library. Except the docs still documented the C API, and the website said it supported C, and the code wasn't actually removed. They just kinda stopped testing it and thinking about it. So it would mostly continue to work, except some poor saps would see weird failures, like this and this, for instance. OpenCV 3.2 was the last version where it was mostly possible to keep using the old C code, even when compiling without optimizations. So I was doing that for years.

So now, in 2020, Debian is finally shipping a version of OpenCV that definitively does not work with the old code, so I had to do something. Over time I stopped using everything about OpenCV, except a few cvProjectPoints2() calls. So I decided to just write a small C++ shim to call the new version of that function, expose that with =extern "C"= to the rest of my world, and I'd be done. And normally I would be, but this is OpenCV we're talking about. I wrote the shim, and it didn't work. The code built and ran, but the results were wrong. After some pointless debugging, I boiled the problem down to this test program:

#include <opencv2/calib3d.hpp>
#include <stdio.h>

int main(void)
{
    double fx = 1000.0;
    double fy = 1000.0;
    double cx = 1000.0;
    double cy = 1000.0;
    double _camera_matrix[] =
        { fx,  0, cx,
          0,  fy, cy,
          0,   0,  1 };
    cv::Mat camera_matrix(3,3, CV_64FC1, _camera_matrix);

    double pp[3] = {1., 2., 10.};
    double qq[2] = {444, 555};

    int N=1;
    cv::Mat object_points(N,3, CV_64FC1, pp);
    cv::Mat image_points (N,2, CV_64FC1, qq);

    // rvec,tvec
    double _zero3[3] = {};
    cv::Mat zero3(1,3,CV_64FC1, _zero3);

    cv::projectPoints( object_points,
                       zero3,zero3,
                       camera_matrix,
                       cv::noArray(),
                       image_points,
                       cv::noArray(), 0.0);

    fprintf(stderr, "manually-projected no-distortion: %f %f\n",
            pp[0]/pp[2] * fx + cx,
            pp[1]/pp[2] * fy + cy);
    fprintf(stderr, "opencv says: %f %f\n", qq[0], qq[1]);

    return 0;
}

This is as trivial as it gets. I project one point through a pinhole camera, and print out the right answer (that I can easily compute, since this is trivial), and what OpenCV reports:

$ g++ -I/usr/include/opencv4 -o tst tst.cc -lopencv_calib3d -lopencv_core && ./tst

manually-projected no-distortion: 1100.000000 1200.000000
opencv says: 444.000000 555.000000

Well that's no good. The answer is wrong, but it looks like it didn't even write anything into the output array. Since this is supposed to be a thin shim to C code, I want this thing to be filling in C arrays, which is what I'm doing here:

double qq[2] = {444, 555};
int N=1;
cv::Mat image_points (N,2, CV_64FC1, qq);

This is how the C API has worked forever, and their C++ API works the same way, I thought. Nothing barfed, not at build time, or run time. Fine. So I went to figure this out. In the true spirit of C++, the new API is inscrutable. I'm passing in cv::Mat, but the API wants cv::InputArray for some arguments and cv::OutputArray for others. Clearly cv::Mat can be coerced into either of those types (and that's what you're supposed to do), but the details are not meant to be understood. You can read the snazzy C++-style documentation. Clicking on "OutputArray" in the doxygen gets you here. Then I guess you're supposed to click on "_OutputArray", and you get here. Understand what's going on now? Me neither.

Stepping through the code revealed the problem. cv::projectPoints() looks like this:

void cv::projectPoints( InputArray _opoints,
                        InputArray _rvec,
                        InputArray _tvec,
                        InputArray _cameraMatrix,
                        InputArray _distCoeffs,
                        OutputArray _ipoints,
                        OutputArray _jacobian,
                        double aspectRatio )
{
    ....
    _ipoints.create(npoints, 1, CV_MAKETYPE(depth, 2), -1, true);
    ....

I.e. they're allocating a new data buffer for the output, and giving it back to me via the OutputArray object. This object already had a buffer, and that's where I was expecting the output to go. Instead it went to the brand-new buffer I didn't want. Issues:

  • The OutputArray object knows it already has a buffer, and they could just use it instead of allocating a new one
  • If for some reason my buffer smells bad, they could complain to tell me they're ignoring it to save me the trouble of debugging, and then bitching about it on the internet
  • I think dynamic memory allocation smells bad
  • Doing it this way means the new function isn't a drop-in replacement for the old function

Well that's just super. I can call the C++ function, copy the data into the place it's supposed to go to, and then deallocate the extra buffer. Or I can pull out the meat of the function I want into my project, and then I can drop the OpenCV dependency entirely. Clearly that's the way to go.

So I go poking back into their code to grab what I need, and here's what I see:

static void cvProjectPoints2Internal( const CvMat* objectPoints,
                  const CvMat* r_vec,
                  const CvMat* t_vec,
                  const CvMat* A,
                  const CvMat* distCoeffs,
                  CvMat* imagePoints, CvMat* dpdr CV_DEFAULT(NULL),
                  CvMat* dpdt CV_DEFAULT(NULL), CvMat* dpdf CV_DEFAULT(NULL),
                  CvMat* dpdc CV_DEFAULT(NULL), CvMat* dpdk CV_DEFAULT(NULL),
                  CvMat* dpdo CV_DEFAULT(NULL),
                  double aspectRatio CV_DEFAULT(0) )
{
...
}

Looks familiar? It should. Because this is the original C-API function they replaced. So in their quest to move to C++, they left the original code intact, C API and everything, un-exposed it so you couldn't call it anymore, and made a new, shitty C++ wrapper for people to call instead. CvMat is still there. I have no words.

Yes, this is a massive library, and maybe other parts of it indeed did make some sort of non-token transition, but this thing is ridiculous. In the end, here's the function I ended up with (licensed as OpenCV; see the comment)

// The implementation of project_opencv is based on opencv. The sources have
// been heavily modified, but the opencv logic remains. This function is a
// cut-down cvProjectPoints2Internal() to keep only the functionality I want and
// to use my interfaces. Putting this here allows me to drop the C dependency on
// opencv. Which is a good thing, since opencv dropped their C API
//
// from opencv-4.2.0+dfsg/modules/calib3d/src/calibration.cpp
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
typedef union
{
    struct
    {
        double x,y;
    };
    double xy[2];
} point2_t;
typedef union
{
    struct
    {
        double x,y,z;
    };
    double xyz[3];
} point3_t;
void project_opencv( // outputs
                     point2_t* q,
                     point3_t* dq_dp,               // may be NULL
                     double* dq_dintrinsics_nocore, // may be NULL

                     // inputs
                     const point3_t* p,
                     int N,
                     const double* intrinsics,
                     int Nintrinsics)
{
    const double fx = intrinsics[0];
    const double fy = intrinsics[1];
    const double cx = intrinsics[2];
    const double cy = intrinsics[3];

    double k[12] = {};
    for(int i=0; i<Nintrinsics-4; i++)
        k[i] = intrinsics[i+4];

    for( int i = 0; i < N; i++ )
    {
        double z_recip = 1./p[i].z;
        double x = p[i].x * z_recip;
        double y = p[i].y * z_recip;

        double r2      = x*x + y*y;
        double r4      = r2*r2;
        double r6      = r4*r2;
        double a1      = 2*x*y;
        double a2      = r2 + 2*x*x;
        double a3      = r2 + 2*y*y;
        double cdist   = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;
        double icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
        double xd      = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2+k[9]*r4;
        double yd      = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2+k[11]*r4;

        q[i].x = xd*fx + cx;
        q[i].y = yd*fy + cy;


        if( dq_dp )
        {
            double dx_dp[] = { z_recip, 0,       -x*z_recip };
            double dy_dp[] = { 0,       z_recip, -y*z_recip };
            for( int j = 0; j < 3; j++ )
            {
                double dr2_dp = 2*x*dx_dp[j] + 2*y*dy_dp[j];
                double dcdist_dp = k[0]*dr2_dp + 2*k[1]*r2*dr2_dp + 3*k[4]*r4*dr2_dp;
                double dicdist2_dp = -icdist2*icdist2*(k[5]*dr2_dp + 2*k[6]*r2*dr2_dp + 3*k[7]*r4*dr2_dp);
                double da1_dp = 2*(x*dy_dp[j] + y*dx_dp[j]);
                double dmx_dp = (dx_dp[j]*cdist*icdist2 + x*dcdist_dp*icdist2 + x*cdist*dicdist2_dp +
                                k[2]*da1_dp + k[3]*(dr2_dp + 4*x*dx_dp[j]) + k[8]*dr2_dp + 2*r2*k[9]*dr2_dp);
                double dmy_dp = (dy_dp[j]*cdist*icdist2 + y*dcdist_dp*icdist2 + y*cdist*dicdist2_dp +
                                k[2]*(dr2_dp + 4*y*dy_dp[j]) + k[3]*da1_dp + k[10]*dr2_dp + 2*r2*k[11]*dr2_dp);
                dq_dp[i*2 + 0].xyz[j] = fx*dmx_dp;
                dq_dp[i*2 + 1].xyz[j] = fy*dmy_dp;
            }
        }
        if( dq_dintrinsics_nocore )
        {
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 0] = fx*x*icdist2*r2;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 0] = fy*(y*icdist2*r2);

            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 1] = fx*x*icdist2*r4;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 1] = fy*y*icdist2*r4;

            if( Nintrinsics-4 > 2 )
            {
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 2] = fx*a1;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 2] = fy*a3;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 3] = fx*a2;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 3] = fy*a1;
                if( Nintrinsics-4 > 4 )
                {
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 4] = fx*x*icdist2*r6;
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 4] = fy*y*icdist2*r6;

                    if( Nintrinsics-4 > 5 )
                    {
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 5] = fx*x*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 5] = fy*y*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 6] = fx*x*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 6] = fy*y*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 7] = fx*x*cdist*(-icdist2)*icdist2*r6;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 7] = fy*y*cdist*(-icdist2)*icdist2*r6;
                        if( Nintrinsics-4 > 8 )
                        {
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 8] = fx*r2; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 8] = fy*0; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 9] = fx*r4; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 9] = fy*0; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 10] = fx*0;//s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 10] = fy*r2; //s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 11] = fx*0;//s4
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 11] = fy*r4; //s4
                        }
                    }
                }
            }
        }
    }
}

This does only the stuff I need: projection only (no geometric transformation), and gradients with respect to the point coordinates and distortions only. Gradients with respect to fxy and cxy are trivial, and I don't bother reporting them.

So now I don't compile or link against OpenCV, my code builds and runs on Debian/sid and (surprisingly) it runs much faster than before. Apparently there was a lot of pointless overhead happening.

Alright. Rant over.

,

CryptogramFriday Squid Blogging: Giant Squid Washes Up on South African Beach

Fourteen feet long and 450 pounds. It was dead before it washed up.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityTurn on MFA Before Crooks Do It For You

Hundreds of popular websites now offer some form of multi-factor authentication (MFA), which can help users safeguard access to accounts when their password is breached or stolen. But people who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control. Here’s the story of one such incident.

As a career chief privacy officer for different organizations, Dennis Dayman has tried to instill in his twin boys the importance of securing their online identities against account takeovers. Both are avid gamers on Microsoft’s Xbox platform, and for years their father managed their accounts via his own Microsoft account. But when the boys turned 18, they converted their child accounts to adult, effectively taking themselves out from under their dad’s control.

On a recent morning, one of Dayman’s sons found he could no longer access his Xbox account. The younger Dayman admitted to his dad that he’d reused his Xbox profile password elsewhere, and that he hadn’t enabled multi-factor authentication for the account.

When the two of them sat down to reset his password, the screen displayed a notice saying there was a new Gmail address tied to his Xbox account. When they went to turn on multi-factor authentication for his son’s Xbox profile — which was tied to a non-Microsoft email address — the Xbox service said it would send a notification of the change to the unauthorized Gmail account in his profile.

Wary of alerting the hackers that they were wise to their intrusion, Dennis tried contacting Microsoft Xbox support, but found he couldn’t open a support ticket from a non-Microsoft account. Using his other son’s Outlook account, he filed a ticket about the incident with Microsoft.

Dennis soon learned the unauthorized Gmail address added to his son’s hacked Xbox account also had enabled MFA. Meaning, his son would be unable to reset the account’s password without approval from the person in control of the Gmail account.

Luckily for Dayman’s son, he hadn’t re-used the same password for the email address tied to his Xbox profile. Nevertheless, the thieves began abusing their access to purchase games on Xbox and third-party sites.

“During this period, we started realizing that his bank account was being drawn down through purchases of games from Xbox and [Electronic Arts],” Dayman the elder recalled. “I pulled the recovery codes for his Xbox account out of the safe, but because the hacker came in and turned on multi-factor, those codes were useless to us.”

Microsoft support sent Dayman and his son a list of 20 questions to answer about their account, such as the serial number on the Xbox console originally tied to the account when it was created. But despite answering all of those questions successfully, Microsoft refused to let them reset the password, Dayman said.

“They said their policy was not to turn over accounts to someone who couldn’t provide the second factor,” he said.

Dayman’s case was eventually escalated to Tier 3 Support at Microsoft, which was able to walk him through creating a new Microsoft account, enabling MFA on it, and then migrating his son’s Xbox profile over to the new account.

Microsoft told KrebsOnSecurity that while users currently are not prompted to enable two-step verification upon sign-up, they always have the option to enable the feature.

“Users are also prompted shortly after account creation to add additional security information if they have not yet done so, which enables the customer to receive security alerts and security promotions when they login to their account,” the company said in a written statement. “When we notice an unusual sign-in attempt from a new location or device, we help protect the account by challenging the login and send the user a notification. If a customer’s account is ever compromised, we will take the necessary steps to help them recover the account.”

Certainly, not enabling MFA when it is offered is far more of a risk for people in the habit of reusing or recycling passwords across multiple sites. But any service to which you entrust sensitive information can get hacked, and enabling multi-factor authentication is a good hedge against having leaked or stolen credentials used to plunder your account.

What’s more, a great many online sites and services that do support multi-factor authentication are completely automated and extremely difficult to reach for help when account takeovers occur. This is doubly so if the attackers also can modify and/or remove the original email address associated with the account.

KrebsOnSecurity has long steered readers to the site twofactorauth.org, which details the various MFA options offered by popular websites. Currently, twofactorauth.org lists nearly 900 sites that have some form of MFA available. These range from authentication options like one-time codes sent via email, phone calls, SMS or mobile app, to more robust, true “2-factor authentication” or 2FA options (something you have and something you know), such as security keys or push-based 2FA such as Duo Security (an advertiser on this site and a service I have used for years).

Email, SMS and app-based one-time codes are considered less robust from a security perspective because they can be undermined by a variety of well-established attack scenarios, from SIM-swapping to mobile-based malware. So it makes sense to secure your accounts with the strongest form of MFA available. But please bear in mind that if the only added authentication options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password to secure your account.

CryptogramSecurity and Human Behavior (SHB) 2020

Today is the second day of the thirteenth Workshop on Security and Human Behavior. It's being hosted by the University of Cambridge, which in today's world means we're all meeting on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. We've done pretty well translating this format to video chat, including using the random breakout feature to put people into small groups.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year's schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, and twelfth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Planet Debian: Ingo Juergensmann: Jitsi Meet and ejabberd

Since the more or less global lockdown caused by Covid-19, there has been a lot of talk about video conferencing solutions that can be used by, for example, people trying to coordinate with coworkers while working from home. One of those solutions is Jitsi Meet, which is NOT packaged in Debian. But there are Debian packages provided by Jitsi itself.

Jitsi relies on an XMPP server. You can see the network overview in the docs. Jitsi itself uses Prosody as its XMPP server, and the official docs only cover that one, but it basically doesn't matter which XMPP server you use. The only catch is that you can't follow the official Jitsi documentation when you are not using Prosody but, for example, ejabberd instead. As always, it's sometimes difficult to find the correct/best non-official documentation or how-to, so I'll try to describe what helped me in configuring Jitsi Meet with ejabberd as the XMPP server and my own coturn STUN/TURN server...

This is not a step-by-step description, but anyway... here we go with some links:

One of the first issues I stumbled across was that my Java was too old, but this can be quickly solved with update-alternatives:

There are 3 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                             Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111       auto mode
  1            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111       manual mode
  2            /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java   1081       manual mode
  3            /usr/lib/jvm/jre-7-oracle-x64/bin/java           316        manual mode

It was set to jre-7, but I guess this was from years ago when I ran OpenFire as XMPP server.

If something is not working with Jitsi Meet, it helps not only to watch the log files but also to open the debug console in your web browser. That way I caught some XMPP failures and saw that it was trying to connect to components for which the DNS records were missing:

meet IN A yourIP
chat.meet IN A yourIP
focus.meet IN A yourIP
pubsub.meet IN A yourIP

Of course you also need to add some config to your ejabberd.yml:

host_config:
  domain.net:
    auth_password_format: scram
  meet.domain.net:
    ## Disable s2s to prevent spam
    s2s_access: none
    auth_method: anonymous
    allow_multiple_connections: true
    anonymous_protocol: both
    modules:
      mod_bosh: {}
      mod_caps: {}
      mod_carboncopy: {}
      #mod_disco: {}
      mod_stun_disco:
        secret: "YOURSECRETTURNCREDENTIALS"
        services:
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: udp
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: tcp
          -
            host: yourIP # Your coturn's public address.
            type: turn
            transport: udp
      mod_muc:
        access: all
        access_create: local
        access_persistent: local
        access_admin: admin
        host: "chat.@"
      mod_muc_admin: {}
      mod_ping: {}
      mod_pubsub:
        access_createnode: local
        db_type: sql
        host: "pubsub.@"
        ignore_pep_from_offline: false
        last_item_cache: true
        max_items_node: 5000 # For Jappix this must be set to 1000000
        plugins:
          - "flat"
          - "pep" # requires mod_caps
        force_node_config:
          "eu.siacs.conversations.axolotl.*":
            access_model: open
          ## Avoid buggy clients to make their bookmarks public
          "storage:bookmarks":
            access_model: whitelist

There is more config that needs to be done, but one of the XMPP failures I spotted in the Firefox debug console was that it tried to connect to conference.domain.net, while I prefer to use chat.domain.net for its brevity. If you prefer conference instead of chat, you shouldn't be affected by this. Just make sure that your config is consistent with what Jitsi Meet wants to connect to. Maybe not all of the above lines are necessary, but this works for me.

My Jitsi Meet config looks like this with the short domain names (/etc/jitsi/meet/meet.domain.net-config.js):

var config = {

    hosts: {
        domain: 'domain.net',
        anonymousdomain: 'meet.domain.net',
        authdomain: 'meet.domain.net',
        focus: 'focus.meet.domain.net',
        muc: 'chat.hookipa.net'
    },

    bosh: '//meet.domain.net:5280/http-bind',
    //websocket: 'wss://meet.domain.net/ws',
    clientNode: 'http://jitsi.org/jitsimeet',
    focusUserJid: 'focus@domain.net',

    useStunTurn: true,

    p2p: {
        // Enables peer to peer mode. When enabled the system will try to
        // establish a direct connection when there are exactly 2 participants
        // in the room. If that succeeds the conference will stop sending data
        // through the JVB and use the peer to peer connection instead. When a
        // 3rd participant joins the conference will be moved back to the JVB
        // connection.
        enabled: true,

        // Use XEP-0215 to fetch STUN and TURN servers.
        useStunTurn: true,

        // The STUN servers that will be used in the peer to peer connections
        stunServers: [
            //{ urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' },
            //{ urls: 'stun:stun.l.google.com:19302' },
            //{ urls: 'stun:stun1.l.google.com:19302' },
            //{ urls: 'stun:stun2.l.google.com:19302' },
            { urls: 'stun:turn.domain.net:5349' },
            { urls: 'stun:turn.domain.net:3478' }
        ],

....

In the above config snippet one of the issues solved was to add the port to the bosh setting. Of course you can also take care that your bosh requests get forwarded by your webserver as reverse proxy. Using the port in the config might be a quick way to test whether or not your config is working. It's easier to solve one issue after the other and make one config change at a time instead of needing to make changes in several places.

/etc/jitsi/jicofo/sip-communicator.properties:

org.jitsi.jicofo.auth.URL=XMPP:meet.domain.net
org.jitsi.jicofo.BRIDGE_MUC=jvbbrewery@chat.meet.domain.net

 /etc/jitsi/videobridge/sip-communicator.properties:

org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.STATISTICS_INTERVAL=5000

org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=domain.net
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=SECRET
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@chat.meet.domain.net
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=videobridge1

org.jitsi.videobridge.DISABLE_TCP_HARVESTER=true
org.jitsi.videobridge.TCP_HARVESTER_PORT=4443
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=yourIP
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=yourIP
org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=turn.domain.net:3478
org.ice4j.ice.harvest.ALLOWED_INTERFACES=eth0

Sometimes there might be stupid errors like (in my case) wrong hostnames like "chat.meet..domain.net" (a double dot in the domain), but you can spot those easily in the debug console of your browser.

There are tons of config options where you can easily make mistakes, but watching your logs and your debug console should really help you sort out these kinds of errors. The other URLs above are helpful as well and more elaborate than my few lines here. Especially Mike Kuketz has some advanced configuration tips, like disabling third party requests with "disableThirdPartyRequests: true" or limiting the number of video streams and such.

Hope this helps...


ME: Storage Trends

In considering storage trends for the consumer side I’m looking at the current prices from MSY (where I usually buy computer parts). I know that other stores will have slightly different prices but they should be very similar as they all have low margins and wholesale prices are the main factor.

Small Hard Drives Aren’t Viable

The cheapest hard drive that MSY sells is $68 for 500G of storage. The cheapest SSD is $49 for 120G and the second cheapest is $59 for 240G. SSD is cheaper at the low end and significantly faster. If someone needed about 500G of storage there’s a 480G SSD for $97 which costs $29 more than a hard drive. With a modern PC if you have no hard drives you will notice that it’s quieter. For anyone who’s buying a new PC spending an extra $29 is definitely worthwhile for the performance, low power use, and silence.

The cheapest 1TB disk is $69 and the cheapest 1TB SSD is $159. Saving $90 on the cost of a new PC probably isn't worthwhile.

For 2TB of storage the cheapest options are Samsung NVMe for $339, Crucial SSD for $335, or a hard drive for $95. Some people would choose to save $244 by getting a hard drive instead of NVMe, but if you are getting a whole system then allocating $244 to NVMe instead of a faster CPU would probably give more benefits overall.

Computer stores typically have small margins and computer parts tend to quickly either become cheaper or be obsoleted by better parts. So stores don’t want to stock parts unless they will sell quickly. Disks smaller than 2TB probably aren’t going to be profitable for stores for very long. The trend of SSD and NVMe becoming cheaper is going to make 2TB disks non-viable in the near future.

NVMe vs SSD

M.2 NVMe devices are at comparable prices to SATA SSDs. For some combinations of quality and capacity NVMe is about 50% more expensive, and for some it's slightly cheaper (e.g. an Intel 1TB NVMe device being cheaper than a Samsung EVO 1TB SSD). Last time I checked, about half the motherboards on sale had a single M.2 socket, so for a new workstation that doesn't need more than 2TB of storage (the largest NVMe device that MSY sells) it wouldn't make sense to use anything other than NVMe.

The benefit of NVMe is NOT throughput (even though NVMe devices can often sustain over 4GB/s), it's low latency. Workstations can't properly take advantage of this because RAM is so cheap ($198 for 32G of DDR4) that compiles etc mostly come from cache, and because most filesystem writes on workstations aren't synchronous. For servers a large portion of writes are synchronous; for example a mail server can't acknowledge receiving mail until it knows that it's really on disk, so there are a lot of small writes that block server processes, and the low latency of NVMe really improves performance. If you are doing a big compile on a workstation (the most common workstation task that uses a lot of disk IO) then the writes aren't synchronised to disk, and if the system crashes you will just do all the compilation again. While NVMe doesn't give a lot of benefit over SSD for workstation use (I've used laptops with SSD and NVMe and not noticed a great difference) of course I still want better performance. ;)
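The synchronous-write point can be demonstrated directly: an fsync() after each write forces the data to stable storage before the process continues, and that per-write round trip is exactly where low-latency NVMe pays off. A rough, illustrative Python sketch (the absolute numbers depend entirely on the underlying device and filesystem):

```python
import os
import tempfile
import time

def timed_writes(count: int, sync: bool) -> float:
    """Write `count` 512-byte records, optionally fsync()ing after each one,
    the way a mail server must before acknowledging receipt."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.monotonic()
        for _ in range(count):
            os.write(fd, b"x" * 512)
            if sync:
                os.fsync(fd)  # block until the data is really on stable storage
        return time.monotonic() - start
    finally:
        os.close(fd)
        os.unlink(path)

buffered = timed_writes(200, sync=False)
synced = timed_writes(200, sync=True)
print(f"buffered: {buffered:.4f}s, synced: {synced:.4f}s")
```

On spinning media the synced run is typically orders of magnitude slower than the buffered one; on NVMe the gap shrinks dramatically, which is the whole argument for NVMe on servers with synchronous workloads.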

Last time I checked I couldn’t easily buy a PCIe card that supported 2*NVMe cards, I’m sure they are available somewhere but it would take longer to get and probably cost significantly more than twice as much. That means a RAID-1 of NVMe takes 2 PCIe slots if you don’t have an M.2 socket on the motherboard. This was OK when I installed 2*NVMe devices on a server that had 18 disks and lots of spare PCIe slots. But for some systems PCIe slots are an issue.

My home server has all PCIe slots used by a video card and Ethernet cards and the BIOS probably won’t support booting from NVMe. It’s a Dell server so I can’t just replace the motherboard with one that has more PCIe slots and M.2 on the motherboard. As it’s running nicely and doesn’t need replacing any time soon I won’t be using NVMe for home server stuff.

Small Servers

Most servers that I am responsible for have less than 2TB of storage. For my clients I now only recommend SSD storage for small servers and am recommending SSD for replacing any failed disks.

My home server has 2*500G SSDs in a BTRFS RAID-1 for the root filesystem, and 3*4TB disks in a BTRFS RAID-1 for storing big files. I bought the SSDs when 500G SSDs were about $250 each and bought 2*4TB disks when they were about $350 each. Currently that server has about 3.3TB of space used and I could probably get it down to about 2.5TB if I deleted things I don’t really need. If I was getting storage for that server now I’d use 2*2TB SSDs and 3*1TB hard drives for the stuff that doesn’t fit on SSDs (I have some spare 1TB disks that came with servers). If I didn’t have spare hard drives I’d get 3*2TB SSDs for that sort of server which would give 3TB of BTRFS RAID-1 storage.

Last time I checked Dell servers had a card for supporting M.2 as an optional extra so Dells probably won’t boot from NVMe without extra expense.

Ars Technica has an informative article about WD selling SMR disks as “NAS” disks [1]. The Shingled Magnetic Recording technology allows greater storage density on a platter which leads to either larger capacity or cheaper disks but at the cost of lower write performance and apparently extremely bad latency in some situations. NAS disks are supposed to be low latency as the expectation is that they will be used in a RAID array and kicked out of the array if they have problems. There are reports of ZFS kicking SMR disks from RAID sets. I think this will end the use of hard drives for small servers. For a server you don’t want to deal with this sort of thing, by definition when a server goes down multiple people will stop work (small server implies no clustering). Spending extra to get SSDs just to avoid the risk of unexpected SMR would be a good plan.

Medium Servers

The largest SSD and NVMe devices that are readily available are 2TB, while 10TB hard disks are commodity items. There are reports of 20TB hard drives being available, but I can't find anyone in Australia selling them.

If you need to store dozens or hundreds of terabytes then hard drives have to be part of the mix at this time. There's no technical reason why SSDs larger than 10TB can't be made (the 2.5″ SATA form factor has more than 5* the volume of a 2TB M.2 card) and it's likely that someone sells them outside the channels I buy from, but probably at a price higher than what my clients are willing to pay. If you want 100TB of affordable storage then a mid-range server like the Dell PowerEdge T640, which can have up to 18*3.5″ disks, is good. One of my clients has a PowerEdge T630 with 18*3.5″ disks in the 8TB-10TB range (we replace failed disks with the largest new commodity disks available; it used to have 6TB disks). ZFS version 0.8 introduced a “Special VDEV Class” which stores metadata and possibly small data blocks on faster media. So you could have some RAID-Z groups on hard drives for bulk storage and the metadata on a RAID-1 of NVMe for fast performance. For medium size arrays on hard drives, having a “find /” operation take hours is not uncommon; for large arrays having it take days isn't that uncommon. So far it seems that ZFS is the only filesystem to have taken the obvious step of storing metadata on SSD/NVMe while bulk data is on cheap large disks.

One problem with large arrays is that the vibration of disks can affect the performance and reliability of nearby disks. The ZFS server I run with 18 disks was originally setup with disks from smaller servers that never had ZFS checksum errors, but when disks from 2 small servers were put in one medium size server they started getting checksum errors presumably due to vibration. This alone is a sufficient reason for paying a premium for SSD storage.

Currently the cost of 2TB of SSD or NVMe is between the prices of 6TB and 8TB hard drives, and the price/capacity ratio for SSD and NVMe is improving dramatically while the increase in hard drive capacity is slow. 4TB SSDs are available for $895 compared to a 10TB hard drive for $549, so SSD is about 4* more expensive per TB. This is probably acceptable for Windows systems, but for Linux systems where ZFS and “special VDEVs” are an option it's probably not worth considering. Most Linux use cases where 4TB SSDs would work well would be better served by smaller NVMe devices and 10TB disks running ZFS. I don't think that 4TB SSDs are at all popular at the moment (MSY doesn't stock them), but prices will come down and they will become common soon enough. Probably by the end of the year SSDs will halve in price and no hard drives smaller than 4TB will be viable.
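The price-per-capacity comparison above works out as follows (using the quoted MSY prices; a quick sanity check, not a market survey):

```python
def per_tb(price_aud: float, capacity_tb: float) -> float:
    """Cost in AUD per terabyte of storage."""
    return price_aud / capacity_tb

ssd_4tb = per_tb(895, 4)     # 223.75 AUD/TB
hdd_10tb = per_tb(549, 10)   # 54.90 AUD/TB
print(round(ssd_4tb / hdd_10tb, 1))  # -> 4.1, i.e. roughly 4* the price per TB
```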

For rack mounted servers 2.5″ disks have been popular for a long time. It’s common for vendors to offer 2 versions of a rack mount server for 2.5″ and 3.5″ disks where the 2.5″ version takes twice as many disks. If the issue is total storage in a server 4TB SSDs can give the same capacity as 8TB HDDs.

SMR vs Regular Hard Drives

Rumour has it that you can buy 20TB SMR disks, I haven’t been able to find a reference to anyone who’s selling them in Australia (please comment if you know who sells them and especially if you know the price). I expect that the ZFS developers will soon develop a work-around to solve the problems with SMR disks. Then arrays of 20TB SMR disks with NVMe for “special VDEVs” will be an interesting possibility for storage. I expect that SMR disks will be the majority of the hard drive market by 2023 – if hard drives are still on the market. SSDs will be large enough and cheap enough that only SMR disks will offer enough capacity to be worth using.

I think that it is a possibility that hard drives won’t be manufactured in a few years. The volume of a 3.5″ disk is significantly greater than that of 10 M.2 devices so current technology obviously allows 20TB of NVMe or SSD storage in the space of a 3.5″ disk. If the price of 16TB NVMe and SSD devices comes down enough (to perhaps 3* the price of a 20TB hard drive) almost no-one would want the hard drive and it wouldn’t be viable to manufacture them.
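The volume claim checks out against nominal form-factor dimensions (the figures below are my assumptions: roughly 147 × 101.6 × 26.1 mm for a 3.5″ drive, and 80 × 22 mm with about 4 mm of component height for an M.2 2280 card):

```python
def volume_mm3(length: float, width: float, height: float) -> float:
    """Volume of a rectangular form factor in cubic millimetres."""
    return length * width * height

hdd_35 = volume_mm3(147, 101.6, 26.1)  # ~390,000 mm^3 for a 3.5" drive
m2_2280 = volume_mm3(80, 22, 4)        # 7,040 mm^3 for an M.2 2280 card
print(hdd_35 > 10 * m2_2280)           # -> True, with plenty of room to spare
```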

It’s not impossible that in a few years time 3D XPoint and similar fast NVM technologies occupy the first level of storage (the ZFS “special VDEV”, OS swap device, log device for database servers, etc) and NVMe occupies the level for bulk storage with no space left in the market for spinning media.

Computer Cases

For servers I expect that models supporting 3.5″ storage devices will disappear. A 1RU server with 8*2.5″ storage devices or a 2RU server with 16*2.5″ storage devices will probably be of use to more people than a 1RU server with 4*3.5″ or a 2RU server with 8*3.5″.

My first IBM PC compatible system had a 5.25″ hard drive, a 5.25″ floppy drive, and a 3.5″ floppy drive in 1988. My current PC is almost the same size and has a DVD drive (that I almost never use), 5 other 5.25″ drive bays that have never been used, and 5*3.5″ drive bays that I have never used (I have only used 2.5″ SSDs). It would make more sense to have PC cases designed around 2.5″ and maybe 3.5″ drives, with no more than one 5.25″ drive bay.

The Intel NUC SFF PCs are going in the right direction. Many of them only have a single storage device but some of them have 2*M.2 sockets allowing RAID-1 of NVMe and some of them support ECC RAM so they could be used as small servers.

A USB DVD drive costs $36, it doesn’t make sense to have every PC designed around the size of an internal DVD drive that will probably only be used to install the OS when a $36 USB DVD drive can be used for every PC you own.

The only reason I don’t have a NUC for my personal workstation is that I get my workstations from e-waste. If I was going to pay for a PC then a NUC is the sort of thing I’d pay to have on my desk.

Cryptogram: New Hacking-for-Hire Company in India

Citizen Lab has a new report on Dark Basin, a large hacking-for-hire company in India.

Key Findings:

  • Dark Basin is a hack-for-hire group that has targeted thousands of individuals and hundreds of institutions on six continents. Targets include advocacy groups and journalists, elected and senior government officials, hedge funds, and multiple industries.

  • Dark Basin extensively targeted American nonprofits, including organisations working on a campaign called #ExxonKnew, which asserted that ExxonMobil hid information about climate change for decades.

  • We also identify Dark Basin as the group behind the phishing of organizations working on net neutrality advocacy, previously reported by the Electronic Frontier Foundation.

  • We link Dark Basin with high confidence to an Indian company, BellTroX InfoTech Services, and related entities.

  • Citizen Lab has notified hundreds of targeted individuals and institutions and, where possible, provided them with assistance in tracking and identifying the campaign. At the request of several targets, Citizen Lab shared information about their targeting with the US Department of Justice (DOJ). We are in the process of notifying additional targets.

BellTroX InfoTech Services has assisted clients in spying on over 10,000 email accounts around the world, including accounts of politicians, investors, journalists and activists.

News article. Boing Boing post.

Planet Linux Australia: Linux Users of Victoria (LUV) Announce: LUV June 2020 Workshop: Emergency Security Discussion

Jun 20 2020 12:30
Jun 20 2020 14:30
Location: 
Online event (TBA)

On Friday morning, our prime minister held an unprecedented press conference to warn Australia (Governments, Industry & Individuals) about a sophisticated cyber attack that is currently underway.
Linux Users of Victoria is a subcommittee of Linux Australia.


Cryptogram: Zoom Will Be End-to-End Encrypted for All Users

Zoom is doing the right thing: it's making end-to-end encryption available to all users, paid and unpaid. (This is a change; I wrote about the initial decision here.)

...we have identified a path forward that balances the legitimate right of all users to privacy and the safety of users on our platform. This will enable us to offer E2EE as an advanced add-on feature for all of our users around the globe -- free and paid -- while maintaining the ability to prevent and fight abuse on our platform.

To make this possible, Free/Basic users seeking access to E2EE will participate in a one-time process that will prompt the user for additional pieces of information, such as verifying a phone number via a text message. Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts. We are confident that by implementing risk-based authentication, in combination with our current mix of tools -- including our Report a User function -- we can continue to prevent and fight abuse.

Thank you, Zoom, for coming around to the right answer.

And thank you to everyone for commenting on this issue. We are learning -- in so many areas -- the power of continued public pressure to change corporate behavior.

EDITED TO ADD (6/18): Let's do Apple next.

Worse Than Failure: Error'd: Fast Hail and Round Wind

"He's not wrong. With wind and hail like this, an isolated tornado definitely ranks third in severity," Rob K. writes.

 

"Upon linking my Days of Wonder account with Steam, I was initially told that I had 7 days to verify my email before account deletion and then I was told something else..." Ian writes.

 

Harvey wrote, "Great. Thanks for the warm welcome to your site ${AUCTION_WEBSITE}"

 

Peter G. writes, "In this case, I imagine the art department did something like 'OK Google, find image of Pentagon, insert into document'."

 

"I'm happy with my efforts but I feel for Terri. 1,400km in 21 days, 200km in the lead and she's barely overcome by this 'NaN' individual," wrote Roger G.

 

Sam writes, "While I admire the honesty of this particular scammer, I do rather think they missed the point."

 


Planet Debian: Reproducible Builds (diffoscope): diffoscope 148 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 148. This version includes the following changes:

[ Daniel Fullmer ]
* Fix a regression in the CBFS comparator due to changes in our_check_output.

[ Chris Lamb ]
* Add a remark in the deb822 handling re. potential security issue in the
  .changes, .dsc, .buildinfo comparator.

You can find out more by visiting the project homepage.


Krebs on Security: FEMA IT Specialist Charged in ID Theft, Tax Refund Fraud Conspiracy

An information technology specialist at the Federal Emergency Management Agency (FEMA) was arrested this week on suspicion of hacking into the human resource databases of University of Pittsburgh Medical Center (UPMC) in 2014, stealing personal data on more than 65,000 UPMC employees, and selling the data on the dark web.

On June 16, authorities in Michigan arrested 29-year-old Justin Sean Johnson in connection with a 43-count indictment on charges of conspiracy, wire fraud and aggravated identity theft.

Federal prosecutors in Pittsburgh allege that in 2013 and 2014 Johnson hacked into the Oracle PeopleSoft databases for UPMC, a $21 billion nonprofit health enterprise that includes more than 40 hospitals.

According to the indictment, Johnson stole employee information on all 65,000 then current and former employees, including their names, dates of birth, Social Security numbers, and salaries.

The stolen data also included federal form W-2 data that contained income tax and withholding information, records that prosecutors say Johnson sold on dark web marketplaces to identity thieves engaged in tax refund fraud and other financial crimes. The fraudulent tax refund claims made in the names of UPMC identity theft victims caused the IRS to issue $1.7 million in phony refunds in 2014.

“The information was sold by Johnson on dark web forums for use by conspirators, who promptly filed hundreds of false form 1040 tax returns in 2014 using UPMC employee PII,” reads a statement from U.S. Attorney Scott Brady. “These false 1040 filings claimed hundreds of thousands of dollars of false tax refunds, which they converted into Amazon.com gift cards, which were then used to purchase Amazon merchandise which was shipped to Venezuela.”

Johnson could not be reached for comment. At a court hearing in Pittsburgh this week, a judge ordered the defendant to be detained pending trial. Johnson’s attorney declined to comment on the charges.

Prosecutors allege Johnson’s intrusion into UPMC was not an isolated occurrence, and that for several years after the UPMC hack he sold personally identifiable information (PII) to buyers on dark web forums.

The indictment says Johnson used the hacker aliases “DS” and “TDS” to market the stolen records to identity thieves on the Evolution and AlphaBay dark web marketplaces. However, archived copies of the now-defunct dark web forums indicate those aliases are merely abbreviations that stand for “DearthStar” and “TheDearthStar,” respectively.

“You can expect good things come tax time as I will have lots of profiles with verified prior year AGIs to make your refund filing 10x easier,” TheDearthStar advertised in an August 2015 message to AlphaBay members.

In some cases, it appears these DearthStar identities were actively involved in not just selling PII and tax refund fraud, but also stealing directly from corporate payrolls.

In an Aug. 2015 post to AlphaBay titled “I’d like to stage a heist but…,” TheDearthStar solicited people to help him cash out access he had to the payroll systems of several different companies:

“… I have nowhere to send the money. I’d like to leverage the access I have to payroll systems of a few companies and swipe a chunk of their payroll. Ideally, I’d like to find somebody who has a network of trusted individuals who can receive ACH deposits.”

When another AlphaBay member asks how much he can get, TheDearthStar responds, “Depends on how many people end up having their payroll records ‘adjusted.’ Could be $1,000 could be $100,000.”

2014 and 2015 were particularly bad years for tax refund fraud, a form of identity theft which cost taxpayers and the U.S. Treasury billions of dollars. In April 2014, KrebsOnSecurity wrote about a spike in tax refund fraud perpetrated against medical professionals that caused many to speculate that one or more major healthcare providers had been hacked.

A follow-up story that same month examined the work of a cybercrime gang that was hacking into HR departments at healthcare organizations across the country and filing fraudulent tax refund requests with the IRS on employees of those victim firms.

The Justice Department’s indictment quotes from Johnson’s online resume as stating that he is proficient at installing and administering Oracle PeopleSoft systems. A LinkedIn resume for a Justin Johnson from Detroit says the same, and that for the past five months he has served as an information technology specialist at FEMA. A Facebook profile with the same photo belongs to a Justin S. Johnson from Detroit.

Johnson’s resume also says he was self-employed for seven years as a “cyber security researcher / bug bounty hunter” who was ranked in the top 1,000 by reputation on Hacker One, a program that rewards security researchers who find and report vulnerabilities in software and web applications.

Planet Debian: Enrico Zini: Missing Qt5 designer library in cross-build development

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

The problem

While testing the cross-compiler, we noticed that the designer library was not being built.

The designer library is needed to build designer plugins, which allow loading, dynamically at runtime, .ui interface files that use custom widgets.

The error the customer got at runtime is: QFormBuilder was unable to create a custom widget of the class '…'; defaulting to base class 'QWidget'.

The library with the custom widget implementation was correctly linked, and indeed the same custom widget was used by the application in other parts of its interface not loaded via .ui files.

It turns out that this is not sufficient: to load custom widgets automatically, QUiLoader wants to read their metadata from plugin libraries containing objects that implement the QDesignerCustomWidgetInterface interface.

Sadly, building such a library requires using QT += designer, and the designer library, that was not being built by Qt5's build system. This looks very much like a Qt5 bug.

A work around would be to subclass QUiLoader extending createWidget to teach it how to create the custom widgets we need. Unfortunately, the customer has many custom widgets.

The investigation

To find out why designer was not being built, I added -d to the qmake invocation at the end of qtbase/configure, and trawled through the 3.1G build output.

The needle in the haystack seems to be here:

DEBUG 1: /home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/src.pro:18: SUBDIRS := uiplugin uitools lib components designer plugins
DEBUG 1: /home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/src.pro:23: calling qtNomakeTools(lib components designer plugins)

As far as I can understand, qtNomakeTools seems to be intended to disable building those components if QT_BUILD_PARTS doesn't contain tools. For cross-building, QT_BUILD_PARTS is libs examples, so designer does not get built.
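As far as I can tell, the effect can be modelled like this (a toy Python sketch of the decision, purely hypothetical and not Qt's actual qmake logic; the SUBDIRS and qtNomakeTools lists are taken from the src.pro lines above):

```python
def built_subdirs(build_parts):
    """Toy model: the SUBDIRS from src.pro, minus the ones that
    qtNomakeTools suppresses when 'tools' is not a build part."""
    subdirs = ["uiplugin", "uitools", "lib", "components", "designer", "plugins"]
    nomake_tools = {"lib", "components", "designer", "plugins"}
    if "tools" not in build_parts:
        return [s for s in subdirs if s not in nomake_tools]
    return subdirs

# Cross-building: QT_BUILD_PARTS is "libs examples", so designer is skipped.
print(built_subdirs({"libs", "examples"}))  # -> ['uiplugin', 'uitools']
```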

However, designer contains the library part needed for QDesignerCustomWidgetInterface and that really needs to be built. I assume that part should really be built as part of libs, not tools.

The fixes/workarounds

I tried removing designer from the qtNomakeTools invocation at qttools/src/designer/src/src.pro:23, to see if qttools/src/designer/src/designer/ would get built.

It did get built, but then the build failed, with designer/src/designer and designer/src/uitools both claiming the designer plugin.

I tried editing qttools/src/designer/src/uitools/uitools.pro not to claim the designer plugin when tools is not a build part.
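The shape of that tweak, as a sketch (the real uitools.pro contents differ):

```qmake
# uitools.pro tweak (sketch): only claim the designer plugin target
# when the 'tools' build part is enabled, so it doesn't collide with
# designer/src/designer in a libs-only cross build.
contains(QT_BUILD_PARTS, tools) {
    # ... original lines that claim the designer plugin ...
}
```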

I added the tweaks to the Qt5 build system as debian/patches.

2 hours of build time later...

make check is broken:

make[6]: Leaving directory '/home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/uitools'
make[5]: *** No rule to make target 'sub-components-check', needed by 'sub-designer-check'.  Stop.

But since make check doesn't do anything in this build, we can simply override dh_auto_test to skip that step.

Finally, this patch builds a new executable, of an architecture that makes dh_shlibdeps struggle:

dpkg-shlibdeps: error: cannot find library libQt5DesignerComponentssystem.so.5 needed by debian/qtbase5system-armhf-dev/opt/qt5system-armhf/bin/designer (ELF format: 'elf32-little' abi: '0101002800000000'; RPATH: '')
dpkg-shlibdeps: error: cannot find library libQt5Designersystem.so.5 needed by debian/qtbase5system-armhf-dev/opt/qt5system-armhf/bin/designer (ELF format: 'elf32-little' abi: '0101002800000000'; RPATH: '')

And we can just skip running dh_shlibdeps on the designer executable.
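Both workarounds fit naturally in debian/rules; this is a sketch assuming debhelper's override mechanism, with the -X path taken from the dpkg-shlibdeps error above:

```makefile
# debian/rules (sketch)
override_dh_auto_test:
	# 'make check' does nothing useful in this build, and the designer
	# patch breaks the sub-components-check target; skip it entirely.

override_dh_shlibdeps:
	# Exclude the designer executable from shlibdeps processing.
	dh_shlibdeps -X/opt/qt5system-armhf/bin/designer
```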

The result is in the qt5custom git repository.

CryptogramTheft of CIA's "Vault Seven" Hacking Tools Due to Its Own Lousy Security

The Washington Post is reporting on an internal CIA report about its "Vault 7" security breach:

The breach -- allegedly committed by a CIA employee -- was discovered a year after it happened, when the information was published by WikiLeaks, in March 2017. The anti-secrecy group dubbed the release "Vault 7," and U.S. officials have said it was the biggest unauthorized disclosure of classified information in the CIA's history, causing the agency to shut down some intelligence operations and alerting foreign adversaries to the spy agency's techniques.

The October 2017 report by the CIA's WikiLeaks Task Force, several pages of which were missing or redacted, portrays an agency more concerned with bulking up its cyber arsenal than keeping those tools secure. Security procedures were "woefully lax" within the special unit that designed and built the tools, the report said.

Without the WikiLeaks disclosure, the CIA might never have known the tools had been stolen, according to the report. "Had the data been stolen for the benefit of a state adversary and not published, we might still be unaware of the loss," the task force concluded.

The task force report was provided to The Washington Post by the office of Sen. Ron Wyden (D-Ore.), a member of the Senate Intelligence Committee, who has pressed for stronger cybersecurity in the intelligence community. He obtained the redacted, incomplete copy from the Justice Department.

It's all still up on WikiLeaks.

Planet DebianIan Jackson: BountySource have turned evil - alternatives ?

I need an alternative to BountySource, who have done an evil thing. Please post recommendations in the comments.

From: Ian Jackson <*****>
To: support@bountysource.com
Subject: Re: Update to our Terms of Service
Date: Wed, 17 Jun 2020 16:26:46 +0100

Bountysource writes ("Update to our Terms of Service"):
> You are receiving this email because we are updating the Bountysource Terms of
> Service, effective 1st July 2020.
>
> What's changing?
> We have added a Time-Out clause to the Bounties section of the agreement:
>
> 2.13 Bounty Time-Out.
> If no Solution is accepted within two years after a Bounty is posted, then the
> Bounty will be withdrawn and the amount posted for the Bounty will be retained
> by Bountysource. For Bounties posted before June 30, 2018, the Backer may
> redeploy their Bounty to a new Issue by contacting support@bountysource.com
> before July 1, 2020. If the Backer does not redeploy their Bounty by the
> deadline, the Bounty will be withdrawn and the amount posted for the Bounty
> will be retained by Bountysource.
>
> You can read the full Terms of Service here
>
> What do I need to do?
> If you agree to the new terms, you don't have to do anything.
>
> If you have a bounty posted prior to June 30, 2018 that is not currently being
> solved, email us at support@bountysource.com to redeploy your bounty.  Or, if
> you do not agree with the new terms, please discontinue using Bountysource.

I do not agree to this change to the Terms and Conditions.
Accordingly, I will not post any more bounties on BountySource.

I currently have one outstanding bounty of $200 on
   https://www.bountysource.com/issues/86138921-rfe-add-a-frontend-for-the-rust-programming-language

That was posted in December 2019.  It is not clear now whether that
bounty will be claimed within your 2-year timeout period.

Since I have not accepted the T&C change, please can you confirm that

(i) My bounty will not be retained by BountySource even if no solution
    is accepted by December 2021.

(ii) As a backer, you will permit me to vote on acceptance of that
    bounty should a solution be proposed before then.

I suspect that you intend to rely on the term in the previous T&C
giving you unlimited ability to modify the terms and conditions.  Of
course such a term is an unfair contract term, because if it were
effective it would give you the power to do whatever you like.  So it
is not binding on me.

I look forward to hearing from you by the 8th of July.  If I do not
hear from you I will take the matter up with my credit card company.

Thank you for your attention.

Ian.

They will try to say "oh it's all governed by US law" but of course section 75 of the Consumer Credit Act makes the card company jointly liable for Bountysource's breach of contract and a UK court will apply UK consumer protection law even to a contract which says it is to be governed by US law - because you can't contract out of consumer protection. So the card company are on the hook and I can use them as a lever.

Update - BountySource have changed their mind

From: Bountysource <support@bountysource.com>
To: *****
Subject: Re: Update to our Terms of Service
Date: Wed, 17 Jun 2020 18:51:11 -0700

Hi Ian

The new terms of service has with withdrawn.

This is not the end of the matter, I'm sure. They will want to get long-unclaimed bounties off their books (and having the cash sat forever at BountySource is not ideal for backers either). Hopefully they can engage in a dialogue and find a way that is fair, and that doesn't incentivise BountySource to sabotage bounty claims(!) I think that means that whatever it is, BountySource mustn't keep the money. There are established ways of dealing with similar problems (eg ancient charitable trusts; unclaimed bank accounts).

I remain wary. That BountySource is now owned by a cryptocurrency company is not encouraging. That they would even try what they just did is a really bad sign.

Edited 2020-06-17 16:28 for a typo in Bountysource's email address
Update added 2020-06-18 11:40 for BountySource's change of mind.




Planet DebianGunnar Wolf: On masters and slaves, whitelists and blacklists...

LWN published today yet another great piece of writing, Loaded terms in free software. I am sorry, the content will not be immediately available to anybody following at home, as LWN is based on a subscription model — But a week from now, the article will be open for anybody to read. Or you can ask me (you most likely can find my contact addresses, as they are basically everywhere) for a subscriber link, I will happily provide it.

In consonance with the current mood that started with the killing of George Floyd and sparked worldwide revolts against police brutality, racism (mostly related to police and law enforcement forces, but social as well) and the like, the debate that already started some months ago in technical communities has re-sparked:

We have many terms that come with long histories attached to them, and we are usually oblivious to their obvious meaning. We? Yes, we, the main users and creators of technology. I never felt using master and slave to refer to different points of a protocol, bus, clock or whatever (do refer to the Wikipedia article for a fuller explanation) had any negative connotations — but then again, those terms have never tainted my personal family. That is, I understand I speak from a position of privilege.

A similar –although less heated– issue goes around the blacklist and whitelist terms, or other uses that use white to refer to good, law-abiding citizens, and black to refer to somewhat antisocial uses (i.e. the white hat and black hat hackers).

For several years, this debate has been sparking and dying off. Some important changes have been made — Particularly, in 2017 the Internet Systems Consortium (ISC) started recommending Primary and Secondary, Python dropped master/slave pairs after a quite thorough and deep review throughout 2018, GitHub changed the default branch from master to main earlier this week. The Internet Engineering Task Force has a draft (that lapsed and thus sadly didn’t become an RFC, but still, is archived), Terminology, Power and Oppressive Language that lists suggested alternatives:

There are also many other relationships that can be used as metaphors, Eglash’s research calls into question the accuracy of the master-slave metaphor. Fortunately, there are ample alternatives for the master-slave relationship. Several options are suggested here and should be chosen based on the pairing that is most clear in context:

  • Primary-secondary
  • Leader-follower
  • Active-standby
  • Primary-replica
  • Writer-reader
  • Coordinator-worker
  • Parent-helper

I’ll add that I think we Spanish-speakers are not fully aware of the issue’s importance, because the most common translation I have seen for master/slave is maestro/esclavo: Maestro is the word for teacher (although we do keep our slaves in place). But think whether it sounds any worse if you refer to device pairs, or members of a database high-availability cluster, or whatever as Amo and Esclavo. It does sound much worse…

I cannot add much of value to this debate. I am just happy issues like this are being recognized and dealt with. If the topic interests you, do refer to the LWN article! Some excerpts:

I consider the following to be the core of Jonathan Corbet’s writeup:

Recent events, though, have made it clear — even to those of us who were happy to not question this view — that the story of slavery and the wider racist systems around it is not yet finished. There are many people who are still living in the middle of it, and it is not a nice place to be. We are not so enlightened as we like to think we are.

If there is no other lesson from the events of the last few weeks, we should certainly take to heart the point that we need to be listening to the people who have been saying, for many years, that they are still suffering. If there are people who are telling us that terms like “slave” or “blacklist” are a hurtful reminder of the inequities that persist in our society, we need to accept that as the truth and act upon it. Etymological discussions on what, say, “master” really means may be interesting, but they miss the point and are irrelevant to this discussion.

Part of a comment by user yokem_55:

Often, it seems to me that the replacement words are much more descriptive and precise than the old language. ‘Allowlist’ is far more obviously a list of explicitly authorized entities than ‘whitelist’. ‘Mainline’ has a more obvious meaning of a core stream of development than ‘master’.

The benefit of moving past this language is more than just changing cultural norms, it’s better, more precise communication across the board.

Another spot-on comment, by user alan:

From my perspective as a Black American male, I think that it’s nice to see people willing to see and address racism in various spheres. I am concerned that some of these steps will be more performative than substantial. Terminology changes in software so as to be more welcoming is a nice thing. Ensuring that oppressed minorities have access to the tools and resources to help reduce inequity and ensuring equal protection under the laws is better. We’ll get there one day I’m sure. The current ask is much simpler, its just to stop randomly killing and terrorizing us. Please and thank you.

So… Maybe the protests of this year caught special notoriety because the society is reacting after (or during, for many of us) the lockdown. In any case, I hope for their success in changing the planet’s culture of oppression.

Comments

Tomas Janousek 2020-06-19 10:04:32 +0200

In the blog post “On masters and slaves, whitelists and blacklists…” you claim that “GitHub changed the default branch from master to main earlier this week” but I don’t think that change is in effect yet. When you create a repo, the default branch is still named “master”.

Gunnar Wolf 2020-06-19 11:52:30 -0500

Umh, seems you are right. Well, what can I say? I’m reporting only what I have been able to find / read…

Now, given that said master branch does not carry any Git-specific meaning and is just a commonly used configuration… I hope people start picking it up.

No, I have not renamed master branches in any of my repos… but intend to do so soonish.

Tomas Janousek 2020-06-19 20:01:52 +0200

Yeah, don’t worry. I just find it sad that so much inaccurate news is spreading from a single CEO tweet, and I wanted to help stop that. I’m sure some change will happen eventually, but until it does, we shouldn’t speak about it in the past tense. :-)

Worse Than FailureCodeSOD: Rings False

There are times when a code block needs a lot of setup, and there are some where it mostly speaks for itself. Today’s anonymous submitter found this JavaScript in a React application, coded by one of the senior team-members.

if (false === false){
    startSingleBasedApp();
} else {
    startTabNavigation();
}

Look, I know how this code got there. At some point, they planned to check a configuration or a feature flag, but during development, it was just faster to do it this way. Then they forgot, and then it got released to production.

Had our submitter not gone poking, it would have sat there in production until someone tried to flip the flag and nothing happened.

This is why you do code reviews.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

CryptogramSecurity Analysis of the Democracy Live Online Voting System

New research: "Security Analysis of the Democracy Live Online Voting System":

Abstract: Democracy Live's OmniBallot platform is a web-based system for blank ballot delivery, ballot marking, and (optionally) online voting. Three states -- Delaware, West Virginia, and New Jersey -- recently announced that they will allow certain voters to cast votes online using OmniBallot, but, despite the well established risks of Internet voting, the system has never been the subject of a public, independent security review.

We reverse engineered the client-side portion of OmniBallot, as used in Delaware, in order to detail the system's operation and analyze its security. We find that OmniBallot uses a simplistic approach to Internet voting that is vulnerable to vote manipulation by malware on the voter's device and by insiders or other attackers who can compromise Democracy Live, Amazon, Google, or Cloudflare. In addition, Democracy Live, which appears to have no privacy policy, receives sensitive personally identifiable information -- including the voter's identity, ballot selections, and browser fingerprint -- that could be used to target political ads or disinformation campaigns. Even when OmniBallot is used to mark ballots that will be printed and returned in the mail, the software sends the voter's identity and ballot choices to Democracy Live, an unnecessary security risk that jeopardizes the secret ballot. We recommend changes to make the platform safer for ballot delivery and marking. However, we conclude that using OmniBallot for electronic ballot return represents a severe risk to election security and could allow attackers to alter election results without detection.

News story.

EDITED TO ADD: This post has been translated into Portuguese.

CryptogramFacebook Helped Develop a Tails Exploit

This is a weird story:

Hernandez was able to evade capture for so long because he used Tails, a version of Linux designed for users at high risk of surveillance and which routes all inbound and outbound connections through the open-source Tor network to anonymize it. According to Vice, the FBI had tried to hack into Hernandez's computer but failed, as the approach they used "was not tailored for Tails." Hernandez then proceeded to mock the FBI in subsequent messages, two Facebook employees told Vice.

Facebook had tasked a dedicated employee to unmasking Hernandez, developed an automated system to flag recently created accounts that messaged minors, and made catching Hernandez a priority for its security teams, according to Vice. They also paid a third party contractor "six figures" to help develop a zero-day exploit in Tails: a bug in its video player that enabled them to retrieve the real I.P. address of a person viewing a clip. Three sources told Vice that an intermediary passed the tool onto the FBI, who then obtained a search warrant to have one of the victims send a modified video file to Hernandez (a tactic the agency has used before).

[...]

Facebook also never notified the Tails team of the flaw -- breaking with a long industry tradition of disclosure in which the relevant developers are notified of vulnerabilities in advance of them becoming public so they have a chance at implementing a fix. Sources told Vice that since an upcoming Tails update was slated to strip the vulnerable code, Facebook didn't bother to do so, though the social media company had no reason to believe Tails developers had ever discovered the bug.

[...]

"The only acceptable outcome to us was Buster Hernandez facing accountability for his abuse of young girls," a Facebook spokesperson told Vice. "This was a unique case, because he was using such sophisticated methods to hide his identity, that we took the extraordinary steps of working with security experts to help the FBI bring him to justice."

I agree with that last paragraph. I'm fine with the FBI using vulnerabilities: lawful hacking, it's called. I'm less okay with Facebook paying for a Tails exploit, giving it to the FBI, and then keeping its existence secret.

Another article.

EDITED TO ADD: This post has been translated into Portuguese.


Krebs on SecurityWhen Security Takes a Backseat to Productivity

“We must care as much about securing our systems as we care about running them if we are to make the necessary revolutionary change.” -CIA’s Wikileaks Task Force.

So ends a key section of a report the U.S. Central Intelligence Agency produced in the wake of a mammoth data breach in 2016 that led to Wikileaks publishing thousands of classified documents stolen from the agency’s offensive cyber operations division. The analysis highlights a shocking series of security failures at one of the world’s most secretive entities, but the underlying weaknesses that gave rise to the breach also unfortunately are all too common in many organizations today.

The CIA produced the report in October 2017, roughly seven months after Wikileaks began publishing Vault 7 — reams of classified data detailing the CIA’s capabilities to perform electronic surveillance and cyber warfare. But the report’s contents remained shrouded from public view until earlier this week, when heavily redacted portions of it were included in a letter by Sen. Ron Wyden (D-Ore.) to the Director of National Intelligence.

The CIA acknowledged its security processes were so “woefully lax” that the agency probably would never have known about the data theft had Wikileaks not published the stolen documents online. What kind of security failures created an environment that allegedly allowed a former CIA employee to exfiltrate so much sensitive data? Here are a few, in no particular order:

  • Failing to rapidly detect security incidents.
  • Failing to act on warning signs about potentially risky employees.
  • Moving too slowly to enact key security safeguards.
  • A lack of user activity monitoring or robust server audit capability.
  • No effective removable media controls.
  • No single person empowered to ensure IT systems are built and maintained securely throughout their lifecycle.
  • Historical data available to all users indefinitely.

Substitute the phrase “cyber weapons” with “productivity” or just “IT systems” in the CIA’s report and you might be reading the post-mortem produced by a security firm hired to help a company recover from a highly damaging data breach.

A redacted portion of the CIA’s report on the Wikileaks breach.

DIVIDED WE STAND, UNITED WE FALL

A key phrase in the CIA’s report references deficiencies in “compartmentalizing” cybersecurity risk. At a high level (not necessarily specific to the CIA), compartmentalizing IT environments involves important concepts such as:

  • Segmenting one’s network so that malware infections or breaches in one part of the network can’t spill over into other areas.
  • Not allowing multiple users to share administrative-level passwords.
  • Developing baselines for user and network activity so that deviations from the norm stand out more prominently.
  • Continuously inventorying, auditing, logging and monitoring all devices and user accounts connected to the organization’s IT network.

“The Agency for years has developed and operated IT mission systems outside the purview and governance of enterprise IT, citing the need for mission functionality and speed,” the CIA observed. “While often fulfilling a valid purpose, this ‘shadow IT’ exemplifies a broader cultural issue that separates enterprise IT from mission IT, has allowed mission system owners to determine how or if they will police themselves.”

All organizations experience intrusions, security failures and oversights of key weaknesses. In large enough enterprises, these failures likely happen multiple times each day. But by far the biggest factor that allows small intrusions to morph into a full-on data breach is a lack of ability to quickly detect and respond to security incidents.

Also, because employees tend to be the most abundant security weakness in any organization, instituting some kind of continuing security awareness training for all employees is a good idea. Some security experts I know and respect dismiss security awareness programs as a waste of time and money, observing that no matter how much training a company does, there will always be some percentage of users who will click on anything.

That may or may not be accurate, but even if it is, at least the organization then has a much better idea which employees probably need more granular security controls (i.e. more compartmentalizing) to keep them from becoming a serious security liability.

Sen. Wyden’s letter (PDF), first reported on by The Washington Post, is worth reading because it points to a series of continuing security weaknesses at the CIA, many of which have already been addressed by other federal agencies, including multi-factor authentication for domain names and access to classified/sensitive systems, and anti-spam protections like DMARC.

Sociological ImagesWhat’s Trending? The Happiness Drop

One important lesson from political science and sociology is that public opinion often holds steady. This is because it is difficult to get individual people to change their minds. Instead, people tend to keep consistent views as “settled dispositions” over time, and mass opinion changes slowly as new people age into taking surveys and older people age out.

Sometimes public opinion does change quickly, though, and these rapid changes are worth our attention precisely because they are rare. For example, one of the most notable recent changes is the swing toward majority support for same-sex marriage in the United States in just the last decade.

That’s why a new finding is so interesting and so troubling: NORC is reporting a pretty big swing in self-reported happiness since the pandemic broke out using a new 2020 survey conducted in late May. Compared to earlier trends from the General Social Survey, fewer people are reporting they are “very happy,” optimism about the future is down, and feelings of isolation and loneliness are up. The Associated Press has dynamic charts here, and I made an open-access, creative commons version of one visualization using GSS data and NORC’s estimates:

As with any survey trend, we will need more data to get the true shape of the change and see whether it will persist over time. Despite this, one important point here is the consistency before the new 2020 data. Think about all the times aggregated happiness reports didn’t really change: we don’t see major shifts around September 11th, 2001, and there are only small changes around the Gulf War in 1990 or the 2008 financial crisis.

There is something reassuring about such a dramatic drop now, given this past resilience. If you’re feeling bad, you’re not alone. We have to remember that emotions are social. People have a remarkable ability to persist through all kinds of trying times, but that is often because they can connect with others for support. The unprecedented isolation of physical distancing and quarantine has a unique impact on our social relationships and, in turn, it could have a dramatic impact on our collective wellbeing. The first step to fixing this problem is facing it honestly.

Inspired by demographic facts you should know cold, “What’s Trending?” is a post series at Sociological Images featuring quick looks at what’s up, what’s down, and what sociologists have to say about it.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramBank Card "Master Key" Stolen

South Africa's Postbank experienced a catastrophic security failure. The bank's master PIN key was stolen, forcing it to cancel and replace 12 million bank cards.

The breach resulted from the printing of the bank's encrypted master key in plain, unencrypted digital language at the Postbank's old data centre in the Pretoria city centre.

According to a number of internal Postbank reports, which the Sunday Times obtained, the master key was then stolen by employees.

One of the reports said that the cards would cost about R1bn to replace. The master key, a 36-digit code, allows anyone who has it to gain unfettered access to the bank's systems, and allows them to read and rewrite account balances, and change information and data on any of the bank's 12-million cards.

The bank lost $3.2 million in fraudulent transactions before the theft was discovered. Replacing all the cards will cost an estimated $58 million.

Worse Than FailureCodeSOD: Going on an Exceptional Date

Here’s a puzzler for you: someone has written bad date handling code, but honestly, the bad date handling isn’t the real WTF. I mean, it’s bad, but it highlights something worse.

Cid inherited this method, along with a few others which we’ll probably look at in the future. It’s Java, so let’s just start with the method signature.

public static void checkTimestamp(String timestamp, String name)
  throws IOException

Honestly, that pretty much covers it. What, you don’t see it? Well, let’s break out the whole method:

public static void checkTimestamp(String timestamp, String name)
  throws IOException {
    if (timestamp == null) {
      return;
    }
    String msg = new String(
        "Wrong date or time. (" + name + "=\"" + timestamp + "\")");
    int len = timestamp.length();
    if (len != 15) {
      throw new IOException(msg);
    }
    for (int i = 0; i < (len - 1); i++) {
      if (! Character.isDigit(timestamp.charAt(i))) {
        throw new IOException(msg);
      }
    }
    if (timestamp.charAt(len - 1) != 'Z') {
      throw new IOException(msg);
    }
    int year = Integer.parseInt(timestamp.substring(0,4));
    int month = Integer.parseInt(timestamp.substring(4,6));
    int day = Integer.parseInt(timestamp.substring(6,8));
    int hour = Integer.parseInt(timestamp.substring(8,10));
    int minute = Integer.parseInt(timestamp.substring(10,12));
    int second = Integer.parseInt(timestamp.substring(12,14));
    if (day < 1) {
      throw new IOException(msg);
    }
    if ((month < 1) || (month > 12)) {
      throw new IOException(msg);
    }
    if (month == 2) {
      if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
        if (day > 29) {
          throw new IOException(msg);
        }
      }
      else {
        if (day > 28) {
          throw new IOException(msg);
    	}
      }
    }
    if (month == 1 || month == 3 || month == 5 || month == 7
    || month == 8 || month == 10 || month == 12) {
      if (day > 31) {
        throw new IOException(msg);
      }
    }
    if (month == 4 || month == 6 || month == 9 || month == 11) {
      if (day > 30) {
        throw new IOException(msg);
      }
    }
    if ((hour < 0) || (hour > 24)) {
      throw new IOException(msg);
    }
    if ((minute < 0) || (minute > 59)) {
      throw new IOException(msg);
    }
    if ((second < 0) || (second > 59)) {
      throw new IOException(msg);
    }
  }

Now, one of Java’s “interesting” ideas was adding checked exceptions to the language. If a method could throw an exception, it needs to announce what that exception is. This lets the compiler check and make sure that any exception which might be thrown is caught.

It’s also a pain for developers.

This developer felt that pain, and spent about three seconds thinking about it. “Well, something gave this method a timestamp as input, and if that input is wrong… we should throw an IOException.”

Which is a choice, I guess. Better than Exception.

To “help” the calling code decide what to do, the exception helpfully sets the same exact message regardless of what went wrong: “Wrong date or time”.

But the real icing on this particular soggy pie is really the method name: checkTimestamp. From that name, we know we can’t assume the timestamp is valid; we need to check. So “an incorrectly formatted timestamp” isn’t an exceptional condition, it’s expected behavior. This method should return a boolean value.

Oh, and also, it should just use built-in date handling, but that really seems secondary to this abuse of exceptions.
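As a sketch of what that could look like with built-in date handling (the class and method names here are invented; unlike the original, this treats null as invalid and rejects hour 24):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

public class TimestampCheck {
    // Strict resolving rejects impossible dates (Feb 30, month 13, hour 24, ...)
    // without any hand-rolled leap-year arithmetic.
    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("uuuuMMddHHmmss")
                             .withResolverStyle(ResolverStyle.STRICT);

    /** Returns true iff the value is a well-formed yyyyMMddHHmmss'Z' timestamp. */
    public static boolean isValidTimestamp(String timestamp) {
        if (timestamp == null || timestamp.length() != 15
                || timestamp.charAt(14) != 'Z') {
            return false;
        }
        try {
            LocalDateTime.parse(timestamp.substring(0, 14), FORMAT);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }
}
```

Callers can then decide for themselves whether a bad timestamp deserves an exception in their context, instead of having an IOException forced on them.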

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


TEDConversations on rebuilding society: Week 4 of TED2020

For week 4 of TED2020, leaders in international development, history, architecture and public policy explored how we might rebuild during the COVID-19 pandemic and the ongoing protests against racial injustice in the United States. Below, a recap of their insights.

Achim Steiner, head of the UNDP, discusses how the COVID-19 pandemic is leading people to reexamine the future of society. He speaks at TED2020: Uncharted on June 8, 2020. (Photo courtesy of TED)

Achim Steiner, head of the United Nations Development Programme

Big idea: The public and private sectors must work together to rebuild communities and economies from the COVID-19 pandemic.

Why? When the coronavirus hit, many governments and organizations were unprepared and ill-equipped to respond effectively, says Achim Steiner. He details the ways the UNDP is partnering with both private companies and state governments to help developing countries rebuild, including delivering medicine and supplies, setting up Zoom accounts for governing bodies and building virus tracking systems. Now that countries are beginning to think broadly about life after COVID-19, Steiner says that widespread disenchantment with the state is leading people to question the future of society. They’re rethinking the relationship between the state and its citizens, the role of the private sector and the definition of a public good. He believes that CEOs and business leaders need to step forward and forge alliances with the public sector in order to address societal inequalities and shape the future of economies. “It is not that the state regulates all the problems and the private sector is essentially best off if it can just focus on its own shareholders or entrepreneurial success,” he says. “We need both.”


“The heartbeat of antiracism is confession,” says author and historian Ibram X. Kendi. He speaks at TED2020: Uncharted on June 9, 2020. (Photo courtesy of TED)

Ibram X. Kendi, Author and historian

Big idea: To create a more just society, we need to make antiracism part of our everyday lives.

How? There is no such thing as being “not racist,” says Ibram X. Kendi. He explains that an idea, behavior or policy is either racist (suggesting that any racial group is superior or inferior in any way) or antiracist (suggesting that the racial groups are equals in all their apparent differences). In this sense, “racist” isn’t a fixed identity — a bad, evil person — but rather a descriptive term, highlighting what someone is doing in a particular moment. Anyone can be racist or antiracist; the difference is found in how we choose to see ourselves and others. Antiracism is vulnerable work, Kendi says, and it requires persistent self-awareness, self-examination and self-criticism, grounded in a willingness to concede your privileges and admit when you’re wrong. As we learn to more clearly recognize, take responsibility for and reject prejudices in our public policies, workplaces and personal beliefs, we can actively use this awareness to uproot injustice and inequality in the world — and replace it with love. “The heartbeat of racism itself has always been denial,” he says. “The heartbeat of antiracism is confession.” Watch the full discussion on TED.com.


What’s the connection between poetry and policy? Aaron Maniam explains at TED2020: Uncharted on June 10, 2020. (Photo courtesy of TED)

Aaron Maniam, Poet and policymaker

Big idea: By crafting a range of imaginative, interlocking metaphors, we can better understand COVID-19, its real-time impacts and how the pandemic continues to change our world.

How? As a poet and a policymaker in Singapore, Maniam knows the importance of language to capture and evoke the state of the world — and to envision our future. As people across the world share their stories of the pandemic’s impact, a number of leading metaphors have emerged. In one lens, humanity has “declared war” on COVID-19 — but that angle erases any positive effects of the pandemic, like how many have been able to spend more time with loved ones. In another lens, COVID-19 has been a global “journey” — but that perspective can simplify the way class, race and location severely impact how people move through this time. Maniam offers another lens: that the pandemic has introduced a new, constantly evolving “ecology” to the world, irrevocably changing how we live on local, national and global levels. But even the ecology metaphor doesn’t quite encompass the entirety of this era, he admits. Maniam instead encourages us to examine and reflect on the pandemic across a number of angles, noting that none of these lenses, or any others, are mutually exclusive. Our individual and collective experiences of this unprecedented time deserve to be told and remembered in expansive, robust and inclusive ways. “Each of us is never going to have a monopoly on truth,” he says. “We have to value the diversity that others bring by recognizing their identity diversity … and their competent diversity — the importance of people coming from disciplines like engineering, history, public health, etc. — all contributing to a much richer understanding and totality of the situation we’re in.”


Vishaan Chakrabarti explores how the coronavirus pandemic might reshape life in cities. He speaks at TED2020: Uncharted on June 10, 2020. (Photo courtesy of TED)

Vishaan Chakrabarti, Architect

Big idea: Cities are facing a crisis of inequity and a crisis in health. To recover and heal, we need to plan our urban areas around inclusion and equality. 

How? In order to implement a new urban agenda rooted in equity, Vishaan Chakrabarti says that we need to consider three components: affordable housing and accessible health care; sustainable urban mobility; and attainable social and cultural resources. Chakrabarti shatters the false narrative of having to choose between an impoverished city or a prosperous one, instead envisioning one whose urban fabric is diverse with reformed housing policies and budgets. “Housing is health,” he says. “You cannot have a healthy society if people are under housing stress or have homelessness.” With a third of public space dedicated to private cars in many cities, Chakrabarti points to the massive opportunity we have to dedicate more space to socially distanced ways to commute and ecologically conscious modes of transportation, like walking or biking. We will need to go directly to communities and ask what their needs are to build inclusive, eco-friendly and scalable solutions. “We need a new narrative of generosity, not austerity,” he says.

CryptogramEavesdropping on Sound Using Variations in Light Bulbs

New research is able to recover sound waves in a room by observing minute changes in the room's light bulbs. This technique works from a distance, even from a building across the street through a window.

Details:

In an experiment using three different telescopes with different lens diameters, from a distance of 25 meters (a little over 82 feet), the researchers were successfully able to capture sound being played in a remote room, including The Beatles' Let It Be, which was distinguishable enough for Shazam to recognize it, and a speech from President Trump that Google's speech recognition API could successfully transcribe. With more powerful telescopes and a more sensitive analog-to-digital converter, the researchers believe the eavesdropping distances could be even greater.

It's not expensive: less than $1,000 worth of equipment is required. And unlike other techniques like bouncing a laser off the window and measuring the vibrations, it's completely passive.

News articles.

Worse Than FailureCodeSOD: Dates Float

In a lot of legacy code, I've come across "integer dates". It's a pretty common way to store dates in a compact format: an integer in the form "YYYYMMDD", e.g., 20200616. It's relatively compact, and it remains human readable (unlike a Unix epoch). It's not too difficult to use the modulus and division operators to split it back into date parts if you need to, though mostly we'd use something like this as an ID-like value, or for sorting.
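Splitting such an integer date back into its parts is just division and modulus; a minimal sketch (the struct and function names here are my own, not from any codebase mentioned in the article):

```cpp
// Illustrative sketch: split a YYYYMMDD integer date with division/modulus.
struct Ymd { int year, month, day; };

Ymd splitIntDate(int yyyymmdd) {
    Ymd d;
    d.year  = yyyymmdd / 10000;        // 20200616 -> 2020
    d.month = (yyyymmdd / 100) % 100;  // 20200616 -> 6
    d.day   = yyyymmdd % 100;          // 20200616 -> 16
    return d;
}
```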

Thanks to Katie E I've learned about a new format: decimal years. The integer portion is the year, and the decimal portion is how far through that year you are, e.g. 2020.4547. This is frequently used in statistical modeling to manage time-series data. Once again, it's not meant to be parsed back into an actual date, but if you're careful, you can do it.
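Done carefully, recovering a date from a decimal year means multiplying the fraction by the length of that particular year, leap years included; a hedged sketch (function names are mine, not from Katie's code):

```cpp
// A decimal year only round-trips to a date if you multiply the fraction
// by the actual number of days in that specific year.
bool isLeap(int y) { return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0; }

int dayOfYear(double decimalYear) {
    int year = (int)decimalYear;           // integer portion: the year
    double frac = decimalYear - year;      // decimal portion: progress through it
    return (int)(frac * (isLeap(year) ? 366 : 365)) + 1;  // 1-based day of year
}
```

For example, dayOfYear(2020.4547) lands on day 167 of the leap year 2020, i.e., mid-June.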

Unless you're whoever wrote this C++ code, which Katie found.


*unYear = (unsigned short)m_fEpoch;
*unMonth = (unsigned short)(((m_fEpoch - (float)*unYear) * 365.0) / 12.0) + 1;

Right off the bat, we can see that they're using pointers to these values: *unYear tells us that unYear must be a pointer. That isn't wrong, but it's a code smell, and I've got to wonder why they're doing it. It just makes me suspicious.

The goal, as you can see from the variable names, is to figure out which month we're in. So the first step is to remove the year portion- (unsigned short)m_fEpoch will truncate the value to just the year, which means the next expression gets us the progress through the year, the decimal portion: (m_fEpoch - (float)*unYear).

So far, so good. Then we make our first mistake: we multiply by 365. So, on leap years, you'll sometimes be a day off. Still, that gives us the day of the year, give or take a bit. And then we make our second mistake: we divide by 12. That'd be great if every month were the same length, but they're not.

Except wait, no, that wouldn't be great, because we've just gotten our divisors backwards. 365/12 gives us 30.416667. We're not dividing the year into twelve equally sized months, we're dividing it into thirty equally sized months.

I've seen a lot of bad date handling code, and it's rare to encounter something genuinely new: an interesting, novel way to mess up dates. This block manages to fail at its job in a surprising number of ways.
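For contrast, here's a hedged reconstruction of what the snippet presumably intended (my own sketch, not the original author's fix): scale the fractional year by the actual number of days in that year, then walk the cumulative month lengths instead of dividing by a constant.

```cpp
// Hedged sketch: convert a decimal-year epoch into a year and a month,
// honoring leap years and the real month lengths.
void epochToYearMonth(float m_fEpoch, unsigned short* unYear, unsigned short* unMonth) {
    static const int len[12] = {31,28,31,30,31,30,31,31,30,31,30,31};
    *unYear = (unsigned short)m_fEpoch;    // truncate to the year
    bool leap = (*unYear % 4 == 0 && *unYear % 100 != 0) || *unYear % 400 == 0;
    int days = leap ? 366 : 365;
    int doy = (int)((m_fEpoch - (float)*unYear) * days);  // 0-based day of year
    unsigned short month = 1;
    for (int i = 0; i < 12; i++) {
        int d = len[i] + (i == 1 && leap ? 1 : 0);  // February gets its leap day
        if (doy < d) break;
        doy -= d;
        month++;
    }
    *unMonth = month;
}
```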

In any case, summer approaches, so I hope everyone enjoys being nearly through the month of Tredecimber. Only 17 more months to go before 2020 is finally over.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet Linux AustraliaJames Morris: Linux Security Summit North America 2020: Online Schedule

Just a quick update on the Linux Security Summit North America (LSS-NA) for 2020.

The event will take place over two days as an online event, due to COVID-19.  The dates are now July 1-2, and the full schedule details may be found here.

The main talks are:

There are also short (30 minute) topics:

This year we will also have a Q&A panel at the end of each day, moderated by Elena Reshetova. The panel speakers are:

  • Nayna Jain
  • Andrew Lutomirski
  • Dmitry Vyukov
  • Emily Ratliff
  • Alexander Popov
  • Christian Brauner
  • Allison Marie Naaktgeboren
  • Kees Cook
  • Mimi Zohar

LSS-NA this year is included with OSS+ELC registration, which is USD $50 all up.  Register here.

Hope to see you soon!

,

TEDWHAAAAAT?: The talks of TED2020 Session 4

For Session 4 of TED2020, experts in biohacking, synthetic biology, psychology and beyond explored topics ranging from discovering the relationship between the spinal cord and asparagus to using tools of science to answer critical questions about racial bias. Below, a recap of the night’s talks and performances.

“Every scientist can tell you about the time they ignored their doubts and did the experiment that would ‘never’ work,” says biomedical researcher Andrew Pelling. “And the thing is, every now and then, one of those experiments works out.” He speaks at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Andrew Pelling, biomedical researcher

Big idea: Could we use asparagus to repair spinal cords?

How? Andrew Pelling researches how we might use fruits, vegetables and plants to reconstruct damaged or diseased human tissues. (Check out his 2016 talk about making ears out of apples.) His lab strips these organisms of their DNA and cells, leaving just the fibers behind, which are then used as “scaffolds” to reconstruct tissue. Now, they’re busy working with asparagus, experimenting to see if the vegetable’s microchannels can guide the regeneration of cells after a spinal cord injury. There’s evidence in rats that it’s working, the first data of its kind to show that plant tissues might be capable of repairing such a complex injury. Pelling is also the cofounder of Spiderwort, a startup that’s translating these innovative discoveries into real-world applications. “Every scientist can tell you about the time they ignored their doubts and did the experiment that would ‘never’ work,” he says. “And the thing is, every now and then, one of those experiments works out.”


Synthetic designer Christina Agapakis shares projects that blur the line between art and science at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Christina Agapakis, synthetic designer

Big idea: Synthetic biology isn’t an oxymoron; it investigates the boundary between nature and technology — and it could shape the future.

How? From teaching bacteria how to play sudoku to self-healing concrete, Christina Agapakis introduces us to the wonders of synthetic biology: a multidisciplinary science that seeks to create and sometimes redesign systems found in nature. “We have been promised a future of chrome, but what if the future is fleshy?” asks Agapakis. She delves into the ways biology could expand technology and alter the way we understand ourselves, exposing the surprisingly blurred lines between art, science and society. “It starts by recognizing that we as synthetic biologists are also shaped by a culture that values ‘real’ engineering more than any of the squishy stuff. We get so caught up in circuits and what happens inside of computers that we sometimes lose sight of the magic that’s happening inside of us,” says Agapakis.

Jess Wolfe and Holly Laessig of Lucius perform “White Lies” and “Turn It Around” at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED.)

Jess Wolfe and Holly Laessig of indie pop band Lucius provide an enchanting musical break between talks, performing their songs “White Lies” and “Turn It Around.”


“[The] association with blackness and crime … makes its way into all of our children, into all of us. Our minds are shaped by the racial disparities we see out in the world, and the narratives that help us to make sense of the disparities we see,” says psychologist Jennifer L. Eberhardt. She speaks at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Jennifer L. Eberhardt, psychologist

Big idea: We can use science to break down the societal and personal biases that unfairly target Black people.

How? When Jennifer Eberhardt flew with her five-year-old son one day, he turned to her after looking at the only other Black man on the plane and said, "I hope he doesn't rob the plane" — showing Eberhardt undeniable evidence that racial bias seeps into every crack of society. For Eberhardt, a MacArthur-winning psychologist specializing in implicit bias, this surfaced a key question at the core of our society: How do we break down the societal and personal biases that target blackness? Just because we're vulnerable to bias doesn't mean we need to act on it, Eberhardt says. We can create "friction" points that eliminate impulsive social media posts based on implicit bias, as Nextdoor did to fight its "racial profiling problem": the site now requires users to answer a few simple questions before allowing them to raise the alarm on "suspicious" visitors to their neighborhoods. Friction isn't just a matter of online interaction, either. With the help of similar questions, the Oakland Police Department instituted protocols that reduced traffic stops of African-Americans by 43 percent. "Categorization and the bias that it seeds allow our brains to make judgments more quickly and efficiently," Eberhardt says. "Just as the categories we create allow us to make quick decisions, they also reinforce bias — so the very things that help us to see the world also can blind us to it. They render our choices effortless, friction-free, yet they exact a heavy toll."


 

Biological programmer Michael Levin (right) speaks with head of TED Chris Anderson about the wild frontiers of cellular memory at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Michael Levin, biological programmer

Big idea: DNA isn’t the only builder in the biological world — there’s also an invisible electrical matrix directing cells to change into organs, telling tadpoles to become frogs, and instructing flatworms to regenerate new bodies once sliced in half. If Michael Levin and his colleagues can learn this cellular “machine language,” human beings may be one step closer to curing birth defects, eliminating cancer and evading aging.

How? As cells become organs, systems and bodies, they communicate via an electrical system dictating where the finished parts will go. Guided by this cellular network, organisms grow, transform and even build new limbs (or bodies) after trauma. At Michael Levin's lab, scientists are cracking this code — and have even succeeded in creating autonomous organisms out of skin cells by altering the cells electrically, without genetic manipulation. Mastering this code could not only allow humans to create microscopic biological "xenobots" to rebuild and medicate our bodies from the inside, but also let us grow new organs — and perhaps rejuvenate ourselves as we age. "We are now beginning to crack this morphogenetic code to ask: How is it that these tissues store a map of what to do?" Levin asks. "[How can we] go in and rewrite that map to new outcomes?"


“My vision for the future is that when things come to life, they do so with joy,” says Ali Kashani. He speaks at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Ali Kashani, VP of special projects at Postmates

Big idea: Robots are becoming a part of everyday life in urban centers, which means we’ll have to design them to be accessible, communicative and human-friendly.

How? On the streets of San Francisco and Los Angeles, delivery robots bustle along neighborhood sidewalks to drop off packages and food. With potential benefits ranging from environmental responsibility to community-building, these robots offer us an incredible glimpse into the future. The challenge now is ensuring that robots can move out of the lab and fit into our world, and among us, says Kashani. At Postmates, Kashani designs robots with human reaction in mind. Instead of frightening, dystopian imagery, he wants people to understand robots as familiar and friendly. This is why Postmates's robots are reminiscent of beloved characters like the Minions and Wall-E; they can use their eyes to communicate with humans and acknowledge obstacles like traffic stops in real time. There are so many ways robots can help us and our communities: picking up extra food from restaurants for shelters, delivering emergency medication to those in need and more. By designing robots to integrate into our physical and social infrastructures, we can welcome them to the world seamlessly and create a better future for all. "My vision for the future is that when things come to life, they do so with joy," Kashani says.

CryptogramExamining the US Cyber Budget

Jason Healey takes a detailed look at the US federal cybersecurity budget and reaches an important conclusion: the US keeps saying that we need to prioritize defense, but in fact we prioritize attack.

To its credit, this budget does reveal an overall growth in cybersecurity funding of about 5 percent above the fiscal 2019 estimate. However, federal cybersecurity spending on civilian departments like the departments of Homeland Security, State, Treasury and Justice is overshadowed by that going toward the military:

  • The Defense Department's cyber-related budget is nearly 25 percent higher than the total going to all civilian departments, including the departments of Homeland Security, Treasury and Energy, which not only have to defend their own critical systems but also partner with critical infrastructure to help secure the energy, finance, transportation and health sectors ($9.6 billion compared to $7.8 billion).

  • The funds to support just the headquarters element -- that is, not even the operational teams in facilities outside of headquarters -- of U.S. Cyber Command are 33 percent higher than all the cyber-related funding to the State Department ($532 million compared to $400 million).

  • Just the increased funding to Defense was 30 percent higher than the total Homeland Security budget to improve the security of federal networks ($909 million compared to $694.1 million).

  • The Defense Department is budgeted two and a half times as much just for cyber operations as the Cybersecurity and Infrastructure Security Agency (CISA), which is nominally in charge of cybersecurity ($3.7 billion compared to $1.47 billion). In fact, the cyber operations budget is higher than the budgets for the CISA, the FBI and the Department of Justice's National Security Division combined ($3.7 billion compared to $2.21 billion).

  • The Defense Department's cyber operations have nearly 10 times the funding as the relevant Homeland Security defensive operational element, the National Cybersecurity and Communications Integration Center (NCCIC) ($3.7 billion compared to $371.4 million).

  • The U.S. government budgeted as much on military construction for cyber units as it did for the entirety of Homeland Security ($1.9 billion for each).

We cannot ignore what the money is telling us. The White House and National Cyber Strategy emphasize the need to protect the American people and our way of life, yet the budget does not reflect those values. Rather, the budget clearly shows that the Defense Department is the government's main priority. Of course, the exact Defense numbers for how much is spent on offense are classified.


Worse Than FailureFaking the Grade


Our friend and frequent submitter Argle once taught evening classes in programming at his local community college. These classes tended to be small, around 20-30 students. Most of them were already programmers and were looking to expand their knowledge. Argle enjoyed helping them in that respect.

The first night of each new semester, Argle had everyone introduce themselves and share their goals for the class. One of his most notable students was a confident, charismatic young man named Emmanuel. "Manny," as he preferred to be called, told everyone that he was a contract programmer who'd been working with a local company for over a year.

"I don't really need to be here," he said. "My employer thought it would be nice if I brushed up on the basics."

Argle's first assignment for the class was a basic "Hello, world" program to demonstrate knowledge of the development environment. Manny handed it in with an eye-roll—then failed to turn in any more homework for the rest of the semester. He skipped lectures and showed up only for exams, each time smirking like he had the crib sheet to the universe in his back pocket. And yet he bombed every test in spectacular fashion, even managing to score below 50% on the true/false midterm. A layperson off the street could've outperformed him with random guessing.

Argle made attempts to offer help during office hours, all of which Manny ignored. This being college and not grade school, there wasn't much else Argle could do. Manny was an adult who'd turned in an F performance, so that was the grade he ended up with.

A few days after final grades had been submitted, Argle received a phone call from Manny. "I don't understand why you failed me," he began with full sincerity.

Baffled, Argle was speechless at first. Is this a joke? he wondered. "You didn't turn in any assignments," he explained, trying to keep emotion out of his voice. "Assignments were worth two-thirds of the grade. It's in the syllabus, and I discussed it on the first day of class."

"I thought my test grades would carry me," Manny replied.

Argle's bafflement only grew. "Even if you'd gotten perfect scores on every test, you still would've failed the class. And you had nowhere near perfect scores on the tests."

Manny broke down crying. He kept talking, almost incomprehensible through his sobs. "My employer paid for the class! They're going to see the grade! I'm not losing my job over this. I'm contesting the F!" He abruptly hung up.

Argle made a quick preemptive phone call to his department head to explain the situation, and was assured that everything would be taken care of. Upon ending the call, he shook his head in astonishment. Had Manny's employer suspected that their contractor wasn't as skilled with programming as he pretended to be? Or would his F come as a total shock to them?

A programmer who doesn't know how to program, he mused to himself, and management who can't tell the difference. Sounds like a match made in heaven.

Argle never heard about the issue again, so he never learned Manny's fate. But once he discovered our website, he came to understand that Manny was far from the only "brillant" programmer out there, and far from the only one whose incompetence went undetected for so long.


Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 06)

Here’s part six of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

Planet Linux AustraliaDavid Rowe: Codec 2 HF Data Modes 1

Since “attending” MHDC last month I’ve taken an interest in open source HF data modems. So I’ve been busy refactoring the Codec 2 OFDM modem for use with HF data.

The major change is from streaming small (28 bit) voice frames to longer (few hundred byte) packets of data. In some ways data is easier than PTT voice: latency is no longer an issue, and I can use nice long FEC codewords that ride over fades. On the flip side, with data we really care about bit errors; for voice it's acceptable to pass frames with errors to the speech decoder and let the human ear work it out.

As a first step I’ve been working with GNU Octave simulations, and have developed 3 candidate data modes that I have been testing against simulated HF channels. In simulation they work well with up to 4ms of delay and 2.5Hz of Doppler.

Here are the simulation results for 10% Packet Error Rate (PER). The multipath channel has 2ms delay spread and 2Hz Doppler (CCITT Multipath Poor channel).

Mode     Est Bytes/min   AWGN SNR (dB)   Multipath Poor SNR (dB)
datac1   6000            3               12
datac2   3000            1               7
datac3   1200            -3              0

The bytes/minute metric is commonly used by Winlink (divide by 7.5 for bits/s). I’ve assumed a 20% overhead for ARQ and other overheads. HF data isn’t fast – it’s a tough, narrow channel to push data through. But for certain applications (e.g. if you’re off the grid, or when the lights go out) it may be all you have. Even these low rates can be quite useful, 1200 bytes/minute is 8.5 tweets or SMS texts/minute.
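The divide-by-7.5 rule of thumb mentioned above is just (bytes/min × 8 bits/byte) ÷ 60 s/min; a trivial sketch (my own helper, not Codec 2 API code):

```cpp
// bytes/minute -> bits/second: multiply by 8, divide by 60 (i.e., divide by 7.5).
double bitsPerSecond(double bytesPerMinute) {
    return bytesPerMinute * 8.0 / 60.0;  // equivalent to bytesPerMinute / 7.5
}
```

So datac1's 6000 bytes/minute is 800 bit/s, and datac3's 1200 bytes/minute is 160 bit/s.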

The modem waveforms are pilot assisted coherent PSK using LDPC FEC codes. Coherent PSK can have gains of up to 6dB over differential PSK (DPSK) modems commonly used on HF.

Before I get too far along I wanted to try them over a real HF channels, to make sure I was on the right track. So much can go wrong with DSP in the real world!

So today I sent the new data waveforms over the air for the first time, using an 800km path on the 40m band from my home in Adelaide South Australia to a KiwiSDR about 800km away in Melbourne, Victoria.

Mode     Est Bytes/min   Power (Wrms)   Est SNR (dB)   Packets Tx/Rx
datac1   6000            10             10-15          15/15
datac2   3000            10             5-15           8/8
datac3   1200            0.5            -2             20/25

The Tx power is the RMS measured on my spec-an; for the 10W RMS samples it was 75W PEP. The SNR is measured in a 3000Hz noise bandwidth. I have a simple dipole at my end; not sure what the KiwiSDR was using.

I’m quite happy with these results. To give the c3 waveform a decent work out I dropped the power down to just 0.5W (listen), and I could still get 30% of the packets through at 100mW. A few of the tests had significant fading, however it was not very fast. My simulations are far tougher. Maybe I’ll try a NVIS path to give the modem a decent test on fast fading channels.

Here is the spectrogram (think waterfall on its side) for the -2dB datac3 sample:

Here are the uncoded (raw) errors, and the errors after FEC. Most of the frames made it. This mode employs a rate 1/3 LDPC code that was developed by Bill, VK5DSP. It can work at up to 16% raw BER! The errors at the end are due to the Tx signal ending; at this stage of development I just have a simple state machine with no "squelch".

We have also been busy developing an API for the Codec 2 modems, see README_data.md. The idea is to allow developers of HF data protocols and applications to use the Codec 2 modems. As well as the “raw” HF data API, there is a very nice Ethernet style framer for VHF packet developed by Jeroen Vreeken.

If anyone would like to try running the modem Octave code take a look at the GitHub PR.

Reading Further

QAM and Packet Data for OFDM Pull Request for this work. Includes lots of notes. The waveform designs are described in this spreadsheet.
README for the Codec 2 OFDM modem, includes examples and more links.
Test Report of Various Winlink Modems
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2

Krebs on SecurityPrivnotes.com Is Phishing Bitcoin from Users of Private Messaging Service Privnote.com

For the past year, a site called Privnotes.com has been impersonating Privnote.com, a legitimate, free service that offers private, encrypted messages which self-destruct automatically after they are read. Until recently, I couldn’t quite work out what Privnotes was up to, but today it became crystal clear: Any messages containing bitcoin addresses will be automatically altered to include a different bitcoin address, as long as the Internet addresses of the sender and receiver of the message are not the same.

Earlier this year, KrebsOnSecurity heard from the owners of Privnote.com, who complained that someone had set up a fake clone of their site that was fooling quite a few regular users of the service.

And it’s not hard to see why: Privnotes.com is confusingly similar in name and appearance to the real thing, and comes up second in Google search results for the term “privnote.” Also, anyone who mistakenly types “privnotes” into Google search may see at the top of the results a misleading paid ad for “Privnote” that actually leads to privnotes.com.

A Google search for the term “privnotes” brings up a misleading paid ad for the phishing site privnotes.com, which is listed above the legitimate site — privnote.com.

Privnote.com (the legit service) employs technology that encrypts all messages so that even Privnote itself cannot read the contents of the message. And it doesn’t send and receive messages. Creating a message merely generates a link. When that link is clicked or visited, the service warns that the message will be gone forever after it is read.

But according to the owners of Privnote.com, the phishing site Privnotes.com does not fully implement encryption, and can read and/or modify all messages sent by users.

“It is very simple to check that the note in privnoteS is sent unencrypted in plain text,” Privnote.com explained in a February 2020 message, responding to inquiries from KrebsOnSecurity. “Moreover, it doesn’t enforce any kind of decryption key when opening a note and the key after # in the URL can be replaced by arbitrary characters and the note will still open.”

But that's not the half of it. KrebsOnSecurity has learned that the phishing site Privnotes.com uses some kind of automated script that scours messages for bitcoin addresses, and replaces any bitcoin addresses found with its own bitcoin address. The script apparently only modifies messages if the note is opened from a different Internet address than the one that composed the note.
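To illustrate the kind of pattern-matching such a script relies on (and that a defender checking a note for tampering could use too), here's a hedged sketch that pulls the first bitcoin-address-shaped string out of a message; the regex and function are my own, not code from the phishing site:

```cpp
// Illustrative sketch: find the first bitcoin-address-shaped token in a message.
#include <regex>
#include <string>

std::string firstBitcoinAddress(const std::string& text) {
    // Matches bech32 (bc1...) and legacy (1.../3...) address shapes.
    static const std::regex re("(bc1[a-z0-9]{25,59}|[13][a-zA-Z0-9]{25,34})");
    std::smatch m;
    return std::regex_search(text, m, re) ? m[1].str() : "";
}
```

Comparing the address extracted from a note you sent against the one the recipient received is exactly the check that exposed this scam.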

Here’s an example, using the bitcoin wallet address from bitcoin’s Wikipedia page. The following message was composed at Privnotes.com from a computer with an Internet address in New York, with the message, “please send money to bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq thanks”:

A test message composed on privnotes.com, which is phishing users of the legitimate encrypted message service privnote.com. Pay special attention to the bitcoin address in this message.

When I visited the Privnotes.com link generated by clicking the “create note” button on the above page from a different computer with an Internet address in California, this was the result. As you can see, it lists a different bitcoin address, albeit one with the same first four characters.

The altered message. Notice the bitcoin address has been modified and is not the same address that was sent in the original note.

Several other tests confirmed that the bitcoin-modifying script does not seem to change message contents if the sender and receiver’s IP addresses are the same, or if one composes multiple notes containing the same bitcoin address.

Allison Nixon, the security expert who helped me with this testing, said the script also only seems to replace the first instance of a bitcoin address if it’s repeated within a message, and the site stops replacing a wallet address if it is sent repeatedly over multiple messages.

“And because of the design of the site, the sender won’t be able to view the message because it self destructs after one open, and the type of people using privnote aren’t the type of people who are going to send that bitcoin wallet any other way for verification purposes,” said Nixon, who is chief research officer at Unit 221B. “It’s a pretty smart scam.”
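To appreciate why victims are unlikely to catch the swap, it helps to see how little code such a replacement needs. Below is an illustrative Ruby sketch of the first-match-only behavior described above; the regex, method name, and attacker wallet are placeholders, not the actual Privnotes.com script:

```ruby
# Loose pattern covering legacy (base58) and bech32 bitcoin addresses.
BTC_ADDRESS = /\b(?:bc1[a-z0-9]{11,71}|[13][a-zA-Z0-9]{25,34})\b/

# Placeholder wallet standing in for the attacker's address.
ATTACKER_WALLET = "bc1qattackerwalletplaceholder00000000000000"

# String#sub (unlike #gsub) rewrites only the first match, mirroring
# the first-instance-only behavior Nixon observed in testing.
def swap_first_address(note)
  note.sub(BTC_ADDRESS, ATTACKER_WALLET)
end
```

A real scam would presumably also pick a replacement wallet sharing the victim address’s first few characters, as seen in the test above, to survive a casual glance.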

Given that Privnotes.com is phishing bitcoin users, it’s a fair bet the phony service also is siphoning other sensitive data from people who use its site.

“So if there are password dumps in the message, they would be able to read that, too,” Nixon said. “At first, I thought that was their whole angle, just to siphon data. But the bitcoin wallet replacement is probably much closer to the main motivation for running the fake site.”

Even if you never use or plan to use the legitimate encrypted message service Privnote.com, this scam is a great reminder why it pays to be extra careful about using search engines to find sites that you plan to entrust with sensitive data. A far better approach is to bookmark such sites, and rely exclusively on those instead.


CryptogramFriday Squid Blogging: Human Cells with Squid-Like Transparency

I think we need more human organs with squid-like features.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramPhishing Attacks against Trump and Biden Campaigns

Google's threat analysts have identified state-level attacks from China.

I hope both campaigns are working under the assumption that everything they say and do will be dumped on the Internet before the election. That feels like the most likely outcome.

Worse Than FailureError'd: People also ask ...WTF?!

"Exactly which people are asking this question?" Jamie M. wrote.

 

"More like a friendly reminder that for one solid day, 2000 years ago, I was insured," Kyle B. writes.

 

Jordan R. wrote, "Ok, I'll bite, how can NULL solve pain points of private clouds?"

 

"Tell me Jira, is the 'undefined' key anywhere near the 'any' key?" writes Gary A.

 

Quentin G. wrote, "92% sales tax seems a bit high, but shipping? I don't live in Outer Mongolia!"

 

Mike S. writes, "Only $1M a year? Cheap &&%$#$!!"

 



CryptogramAnother Intel Speculative Execution Vulnerability

Remember Spectre and Meltdown? Back in early 2018, I wrote:

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

That has turned out to be true. Here's a new vulnerability:

On Tuesday, two separate academic teams disclosed two new and distinctive exploits that pierce Intel's Software Guard eXtension, by far the most sensitive region of the company's processors.

[...]

The new SGX attacks are known as SGAxe and CrossTalk. Both break into the fortified CPU region using separate side-channel attacks, a class of hack that infers sensitive data by measuring timing differences, power consumption, electromagnetic radiation, sound, or other information from the systems that store it. The assumptions for both attacks are roughly the same. An attacker has already broken the security of the target machine through a software exploit or a malicious virtual machine that compromises the integrity of the system. While that's a tall bar, it's precisely the scenario that SGX is supposed to defend against.

Another news article.

Worse Than FailureThe Time-Delay Footgun

A few years back, Mike worked at Initech. Initech has two major products: the Initech Creator and the Initech Analyzer. The Creator, as the name implied, let you create things. The Analyzer could take what you made with the Creator and test them.

For business reasons, these were two separate products, and it was common for one customer to have many more Creator licenses than Analyzer licenses, or upgrade them each on a different cadence. But the Analyzer depended on the Creator, so someone might have two wildly different versions of both tools installed.

Initech wasn’t just incrementing the version number and charging for a new seat every year. Both products were under active development, with a steady stream of new features. The Analyzer needed to be smart enough to check what version of Creator was installed, and enable/disable the appropriate set of features. Which meant the Analyzer needed to check the version string.

From a user’s perspective, the version numbers were simple: a new version was released every year, numbered for the year. So the 2009 release was version 9, the 2012 release was version 12, and so on. Internally, however, they needed to track finer-grained versions, patch levels, and whether the build was intended as an alpha, beta, or release version. This meant that internal version strings looked more like “12.3g31”.

Mike was tasked with prepping Initech Analyzer 2013 for release. Since the company used an unusual version numbering schema, they had also written a suite of custom version parsing functions, in the form: isCreatorVersion9_0OrLater, isCreatorVersion11_0OrLater, etc. He needed to add isCreatorVersion12_0OrLater.

“Hey,” Mike suggested to his boss, “I notice that all of these functions are unique, we could make a general version that uses a regex.”

“No, don’t do that,” his boss said. “You know what they say, ‘I had a problem, so I used regexes, now I have two problems.’ Just copy-paste the version 11 version, and use that. It uses string slicing, which performs way better than regex anyway.”

“Well, I think there are going to be some problems-”

“It’s what we’ve done every year,” his boss said. “Just do it. It’s the version check, don’t put any thought into it.”

“Like, I mean, really problems- the way it-”

His boss cut him off and spoke very slowly. “It is just the version check. It doesn’t need to be complicated. And we know it can’t be wrong, because all the tests are green.”

Mike did not just copy the version 11 check. He also didn’t use regexes, but patterned his logic off the version 11 check, with some minor corrections. But he did leave the version 11 check alone, because he wasn’t given permission to change that block of code, and all of the tests were green.

So how did isCreatorVersion11_0OrLater work? Well, given a version string like 9.0g57, 10.0a12, or 11.0b9, it would start by checking the second character. If it was a ., clearly we had a single-digit version number, which must be less than 11. If the second character was a 0, then it must be 10, which clearly is also less than 11, and there couldn't possibly be any number larger than 11 with a "0" as its second character. Any other character meant the version must be greater than or equal to 11.
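In Ruby (the article doesn't give the original language, so treat these names as a hypothetical reconstruction), the flawed second-character check and the numeric comparison that would have defused it look something like this:

```ruby
# Reconstruction of the flawed logic: only the second character of the
# version string is inspected, so "20.0a1" is misread as version 10.
def creator_version_11_or_later_broken?(version)
  case version[1]
  when "." then false # single-digit major, e.g. "9.0g57"
  when "0" then false # assumed to be 10.x; wrong once 20.x ships
  else true
  end
end

# The fix: parse the leading digits as an integer and compare.
def creator_version_11_or_later?(version)
  version[/\A\d+/].to_i >= 11
end
```

The broken predicate agrees with the fixed one for every major version from 9 through 19, which is exactly why the bug could sit green in the test suite for a decade before 2020 arrived.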

Mike describes this as a “time-delayed footgun”. Because it was “right” for about a decade. Unfortunately, Initech Analyzer 2020 might be having some troubles right now…

Mike adds:

Now, I no longer work at Initech, so unfortunately I can’t tell you the fallout of what happened when that foot-gun finally went off this year.



Sociological ImagesViral Votes & Activism in the New Public Sphere

It is a strange sight to watch politicians working to go viral. Check out this video from the political nonprofit ACRONYM, where Alexis Magnan-Callaway — the Digital Mobilization Director of Kirsten Gillibrand’s presidential campaign — talks us through some key moments on social media. 

Social media content has changed the rules of the game for getting attention in the political world. An entire industry has sprung up around going viral professionally, and politicians are putting these new rules to use for everything from promoting the Affordable Care Act to breaking Twitter’s use policy.

In a new paper out at Sociological Theory with Doug Hartmann, I (Evan) argue that part of the reason this is happening is due to new structural transformations in the public sphere. Recent changes in communication technology have created a situation where the social fields for media, politics, academia, and the economy are now much closer together. It is much easier for people who are skilled in any one of these fields to get more public attention by mixing up norms and behaviors from the other three. Thomas Medvetz called people who do this in the policy world “jugglers,” and we argue that many more people have started juggling as well. 

Arm-wrestling a constituent is a long way from the Nixon-Kennedy debates, but there are institutional reasons why this shouldn’t surprise us. Juggling social capital from many fields means that social changes start to accelerate, as people can suddenly be much more successful by breaking the norms in their home fields. Politicians can get electoral gains by going viral, podcasts take off by talking to academics, and ex-policy wonks suddenly land coveted academic positions.


Another good example of this new structural transformation in action is Ziad Ahmed, a Yale undergraduate, business leader, and activist. At the core of his public persona is an interesting mix of both norm-breaking behavior and carefully curated status markers for many different social fields. 

In 2017, Ahmed was accepted to Yale after writing “#BlackLivesMatter” 100 times; this was contemporaneously reported by outlets such as NBC News, CNN, Time, The Washington Post, Business Insider, HuffPost, and Mashable.

A screenshot excerpt of Ahmed’s bio statement from his personal website

Since then, Ahmed has cultivated a long biography featuring many different meaningful status markers: his educational institution; work as the CEO of a consulting firm; founding of a diversity and inclusion organization; a Forbes “30 Under 30” recognition; Ted Talks; and more. The combination of these symbols paints a complex picture of an elite student, activist, business leader, and everyday person on social media. 

Critics have called this mixture “a super-engineered avatar of corporate progressivism that would make even Mayor Pete blush.” We would say that, for better or worse, this is a new way of doing activism and advocacy that comes out of different institutional conditions in the public sphere. As different media, political, and academic fields move closer together, activists like Ahmed and viral moments like those in the Gillibrand campaign show how a much more complicated set of social institutions and practices are shaping the way we wield public influence today.

Bob Rice is a PhD student in sociology at UMass Boston. They’re interested in perceptions of authority, social movements, culture, stratification, mental health, and digital methods. 

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramAvailability Attacks against Neural Networks

New research on using specially crafted inputs to slow down machine-learning neural network systems:

Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief -- from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.

So far, our most spectacular results are against NLP systems. By feeding them confusing inputs we can slow them down over 100 times. There are already examples in the real world where people pause or stumble when asked hard questions but we now have a dependable method for generating such examples automatically and at scale. We can also neutralize the performance improvements of accelerators for computer vision tasks, and make them operate on their worst case performance.

The paper.

Worse Than FailureCodeSOD: Sort Yourself Out

Object-Relational-Mappers (ORMs) are a subject of pain, pleasure, and flamewars. On one hand, they make it trivially easy to write basic persistence logic, as long as it stays basic. But they do this by concealing the broader powers of relational databases, which means that an ORM is a leaky abstraction. Used incautiously or inappropriately, they stop making your life easy and start making it much, much harder.

That’s bad, unless you’re Tenesha’s co-worker, because you apparently want to suffer.

In addition to new products, Tenesha’s team works on a legacy Ruby-on-Rails application. It’s an ecommerce tool, and thus it handles everything from inventory to order management to billing to taxes.

Taxes can get tricky. Each country may have national tax rates. Individual cities may have different ones. In their database, they have a TaxesRate table which tracks the country name, the city name, the region code name, and the tax rate.

You’ll note that the actual database is storing names, which is problematic when you need to handle localizations. Is it Spain or España? The ugly hack to fix this is to have a lookup file in YAML, like so:

i18n_keys:
  es-ca: "Canary Islands"
  es: "Spain"
  eu: "Inside European Union"
  us: "United States"
  ar: "Argentina"
  la: "Latin America"
  as-oc: "Asia & Oceania"
  row: "Rest of the world"

Those are just the countries, but there were similar structures for regions, cities, and so on.

The YAML file took on more importance when management decided that the sort order of the tax codes within a region needed to be a specific order. They didn’t want it to be sorted alphabetically, or by date added, or by number of orders, or anything: they had a specific order they wanted.

So Tenesha’s co-worker had a bright idea: they could store the lookup keys in the YAML file in the order specified. It meant they didn’t have to add or manage a sort_order field in the database, which sounded easier to them, and would be easier to implement, right?

Well, no. There’s no easy way to tell an SQL order-by clause to sort in an arbitrary order. But our intrepid programmer was using an ORM, so they didn’t need to think about little details like “connecting to a database” or “worrying about round trips” or “is this at all efficient”.

So they implemented it this way:

  # Order locations by their I18n registers to make it easier to reorder
  def self.order_by_location(regions)
    codes = I18n.t("quotation.selectable_taxes_rate_locations").keys.map{ |k| k.to_s }
    regions_ordered = []

    codes.each do |code|
      regions_ordered.push(regions.where(region_code: code))
    end

    # Insert the codes that are not listed at the end
    regions_ordered.push(regions.where("region_code NOT IN (?)", codes)).flatten
  end

This is called like so:

# NOTE: TaxesRate.all_rates returns all records with unique region codes,
#   ignoring cities; something like `TaxesRate.distinct(:region_code)`.
regions = order_by_location(TaxesRate.all_rates)

We should be thankful that they didn’t find a way to make this execute N² queries, but as it is, it needs to execute N+1 queries.

First, we pull the rate locations from our internationalization YAML file. Then, for each region code, we run a query to fetch the tax rate for that one region code. This is one query for each code. Based on the internationalization file, it’s just the codes for one country, but that can still be a large number. Finally, we run one final query to fetch all the other regions that aren’t in our list.

This fetches the tax code for all regions, sorted based on the sort order in the localization file (which does mean each locale could have a different sort order, a feature no one requested).

Tenesha summarizes it:

So many things done wrong; in summary:
* Country names stored with different localization on the same database, instead of storing country codes.
* Using redundant data for storing region codes for different cities.
* Hard-coding a new front-end feature using localization keys order.
* Performing N+1 queries to retrieve well known data.

Now, this was a legacy application, so when Tenesha and her team went to management suggesting that they fix this terrible approach, the answer was “Nope!” It was a legacy product, and was only going to get new features and critical bug fixes.

Tenesha scored a minor victory: she did convince them to let her rewrite the method so that it fetched the data from the database and then sorted using the Array#index method, which still wasn’t great, but was far better than hundreds of database round trips.
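Her rewrite probably looked something like this minimal, Rails-free sketch (the method name and code list here are illustrative): one fetch, then an in-memory sort keyed on each region code’s position in the YAML key order, with unlisted codes falling to the end:

```ruby
# Key order from the i18n YAML file, which doubles as the sort order.
CODES = %w[es-ca es eu us ar la as-oc row].freeze

# Sort already-fetched rows in memory: one database query total,
# instead of one query per region code plus a final NOT IN query.
def order_by_location(regions)
  regions.sort_by { |r| CODES.index(r[:region_code]) || CODES.length }
end
```

Array#index is a linear scan per lookup, so for a long code list a precomputed Hash of code-to-position would be better still; either way, the database round trips drop from N+1 to one.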


Krebs on SecurityMicrosoft Patch Tuesday, June 2020 Edition

Microsoft today released software patches to plug at least 129 security holes in its Windows operating systems and supported software, by some accounts a record number of fixes in one go for the software giant. None of the bugs addressed this month are known to have been exploited or detailed prior to today, but there are a few vulnerabilities that deserve special attention — particularly for enterprises and employees working remotely.

June marks the fourth month in a row that Microsoft has issued fixes to address more than 100 security flaws in its products. Eleven of the updates address problems Microsoft deems “critical,” meaning they could be exploited by malware or malcontents to seize complete, remote control over vulnerable systems without any help from users.

A chief concern among the panoply of patches is a trio of vulnerabilities in the Windows file-sharing technology (a.k.a. Microsoft Server Message Block or “SMB” service). Perhaps most troubling of these (CVE-2020-1301) is a remote code execution bug in SMB capabilities built into Windows 7 and Windows Server 2008 systems — both operating systems that Microsoft stopped supporting with security updates in January 2020. One mitigating factor with this flaw is that an attacker would need to be already authenticated on the network to exploit it, according to security experts at Tenable.

The SMB fixes follow closely on news that proof-of-concept code was published this week that would allow anyone to exploit a critical SMB flaw Microsoft patched for Windows 10 systems in March (CVE-2020-0796). Unlike this month’s critical SMB bugs, CVE-2020-0796 does not require the attacker to be authenticated to the target’s network. And with countless company employees now working remotely, Windows 10 users who have not yet applied updates from March or later could be dangerously exposed right now.

Microsoft Office and Excel get several updates this month. Two different flaws in Excel (CVE-2020-1225 and CVE-2020-1226) could be used to remotely commandeer a computer running Office just by getting a user to open a booby-trapped document. Another weakness (CVE-2020-1229) in most versions of Office may be exploited to bypass security features in Office simply by previewing a malicious document in the preview pane. This flaw also impacts Office for Mac, although updates are not yet available for that platform.

After months of giving us a welcome break from patching, Adobe has issued an update for its Flash Player program that fixes a single, albeit critical security problem. Adobe says it is not aware of any active exploits against the Flash flaw. Mercifully, Chrome and Firefox both now disable Flash by default, and Chrome and IE/Edge auto-update the program when new security updates are available. Adobe is slated to retire Flash Player later this year. Adobe also released security updates for its Experience Manager and Framemaker products.

Windows 7 users should be aware by now that while a fair number of flaws addressed this month by Microsoft affect Windows 7 systems, this operating system is no longer being supported with security updates (unless you’re an enterprise taking advantage of Microsoft’s paid extended security updates program, which is available to Windows 7 Professional and Windows 7 enterprise users).

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for a wonky Windows update to hose one’s system or prevent it from booting properly, and some updates have even been known to erase or corrupt files. So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Further reading:

AskWoody and Martin Brinkmann on Patch Tuesday fixes and potential pitfalls

Trend Micro’s Zero Day Initiative June 2020 patch lowdown

US-CERT on Active Exploitation of CVE-2020-0796


Krebs on SecurityFlorence, Ala. Hit By Ransomware 12 Days After Being Alerted by KrebsOnSecurity

In late May, KrebsOnSecurity alerted numerous officials in Florence, Ala. that their information technology systems had been infiltrated by hackers who specialize in deploying ransomware. Nevertheless, on Friday, June 5, the intruders sprang their attack, deploying ransomware and demanding nearly $300,000 worth of bitcoin. City officials now say they plan to pay the ransom demand, in hopes of keeping the personal data of their citizens off of the Internet.

Nestled in the northwest corner of Alabama, Florence is home to roughly 40,000 residents. It is part of a quad-city metropolitan area perhaps best known for the Muscle Shoals Sound Studio that recorded the dulcet tones of many big-name music acts in the 1960s and 70s.

Image: Florenceal.org

On May 26, acting on a tip from Milwaukee, Wisc.-based cybersecurity firm Hold Security, KrebsOnSecurity contacted the office of Florence’s mayor to alert them that a Windows 10 system in their IT environment had been commandeered by a ransomware gang.

Comparing the information shared by Hold Security dark web specialist Yuliana Bellini with the employee directory on the Florence website indicated that the username for the computer the attackers had used to gain a foothold in the network on May 6 belonged to the city’s manager of information systems.

My call was transferred to no fewer than three different people, none of whom seemed eager to act on the information. Eventually, I was routed to the non-emergency line for the Florence police department. When that call went straight to voicemail, I left a message and called the city’s emergency response team.

That last effort prompted a gracious return call the following day from a system administrator for the city, who thanked me for the heads up and said he and his colleagues had isolated the computer and Windows network account Hold Security flagged as hacked.

“I can’t tell you how grateful we are that you helped us dodge this bullet,” the technician said in a voicemail message for this author. “We got everything taken care of now, and some different protocols are in place. Hopefully we won’t have another near scare like we did, and hopefully we won’t have to talk to each other again.”

But on Friday, Florence Mayor Steve Holt confirmed that a cyberattack had shut down the city’s email system. Holt told local news outlets at the time there wasn’t any indication that ransomware was involved.

However, in an interview with KrebsOnSecurity Tuesday, Holt acknowledged the city was being extorted by DoppelPaymer, a ransomware gang with a reputation for negotiating some of the highest extortion payments across dozens of known ransomware families.

The average ransomware payment by ransomware strain. Source: Chainalysis.

Holt said the same gang appears to have simultaneously compromised networks belonging to four other victims within an hour of Florence, including another municipality that he declined to name. Holt said the extortionists initially demanded 39 bitcoin (~USD $378,000), but that an outside security firm hired by the city had negotiated the price down to 30 bitcoin (~USD $291,000).

Like many other cybercrime gangs operating these days, DoppelPaymer will steal reams of data from victims prior to launching the ransomware, and then threaten to publish or sell the data unless a ransom demand is paid.

Holt told KrebsOnSecurity the city can’t afford to see its citizens’ personal and financial data jeopardized by not paying.

“Do they have our stuff? We don’t know, but that’s the roll of the dice,” Holt said.

Steve Price, the Florence IT manager whose Microsoft Windows credentials were stolen on May 6 by a DHL-themed phishing attack and used to further compromise the city’s network, explained that following my notification on May 26 the city immediately took a number of preventative measures to stave off a potential ransomware incident. Price said that when the ransomware hit, they were in the middle of trying to get city leaders to approve funds for a more thorough investigation and remediation.

“We were trying to get another [cybersecurity] response company involved, and that’s what we were trying to get through the city council on Friday when we got hit,” Price said. “We feel like we can build our network back, but we can’t undo things if peoples’ personal information is released.”

A DoppelPaymer ransom note. Image: Crowdstrike.

Fabian Wosar, chief technology officer at Emsisoft, said organizations need to understand that the only step which guarantees a malware infestation won’t turn into a full-on ransomware attack is completely rebuilding the compromised network — including email systems.

“There is a misguided belief that if you were compromised you can get away with anything but a complete rebuild of the affected networks and infrastructure,” Wosar said, noting that it’s not uncommon for threat actors to maintain control even as a ransomware victim organization is restoring their systems from backups.

“They often even demonstrate that they still ‘own’ the network by publishing screenshots of messages talking about the incident,” Wosar said.

Hold Security founder Alex Holden said Florence’s situation is all too common, and that very often ransomware purveyors are inside a victim’s network for weeks or months before launching their malware.

“We often get glimpses of the bad guys beginning their assaults against computer networks and we do our best to let the victims know about the attack,” Holden said. “Since we can’t see every aspect of the attack we advise victims to conduct a full investigation of the events, based on the evidence collected. But when we deal with sensitive situations like ransomware, timing and precision are critical. If the victim will listen and seek out expert opinions, they have a great chance of successfully stopping the breach before it turns into ransom.”

TEDWays of seeing: The talks of TED2020 Session 3

TED’s head of curation Helen Walters (left) and writer, activist and comedian Baratunde Thurston host Session 3 of TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Session 3 of TED2020, hosted by TED’s head of curation Helen Walters and writer, activist and comedian Baratunde Thurston, was a night of something different — a night of camaraderie, cleverness and, as Baratunde put it, “a night of just some dope content.” Below, a recap of the night’s talks and performances.

Actor and performer Cynthia Erivo recites Maya Angelou’s iconic 2006 poem, “A Pledge to Rescue Our Youth.” She speaks at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

In a heartfelt and candid moment to start the session, Tony- and Emmy-winner Cynthia Erivo performs “A Pledge to Rescue Our Youth,” an iconic 2006 poem by Maya Angelou. “You are the best we have. You are all we have. You are what we have become. We pledge you our whole hearts from this day forward,” Angelou writes.

“Drawing has taught me to create my own rules. It has taught me to open my eyes and see not only what is, but what can be. Where there are broken systems … we can create new ones that actually function and benefit all, instead of just a select few,” says Shantell Martin. She speaks at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Shantell Martin, Artist

Big idea: Drawing is more than just a graphic art — it’s a medium of self-discovery that enables anyone to let their hands spin out freestyle lines independent of rules and preconceptions. If we let our minds follow our hands, we can reach mental spaces where new worlds are tangible and art is the property of all – regardless of ethnicity or class.

How? A half-Nigerian, half-English artist growing up in a council estate in southeast London, Martin has firsthand knowledge of the race and class barriers within England’s institutions. Drawing afforded her a way out, taking her first to Tokyo and then to New York, where her large-scale, freestyle black and white drawings (often created live in front of an audience) taught her the power of lines to build new worlds. By using our hands to draw lines that our hearts can follow, she says, we not only find solace, but also can imagine and build worlds where every voice is valued equally. “Drawing has taught me to create my own rules,” Martin says. “It has taught me to open my eyes and see not only what is, but what can be. Where there are broken systems … we can create new ones that actually function and benefit all, instead of just a select few.”


“If we’re not protecting the arts, we’re not protecting our future, we’re not protecting this world,” says Swizz Beatz. He speaks at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Swizz Beatz, Music producer, entrepreneur, art enthusiast

Big idea: Art is for everyone. Let’s make it that way.

Why? Creativity heals us — and everybody who harbors love for the arts deserves access to them, says Swizz Beatz. Interweaving a history of his path as a creative in the music industry, Beatz recounts his many successful pursuits in the art of giving back. In creating these spaces at the intersection of education, celebration, inclusion and support — such as The Dean Collection, No Commissions, The Dean’s Choice and Verzuz — he plans to outsmart lopsided industries that exploit creatives and give the power of art back to the people. “If we’re not protecting the arts, we’re not protecting our future, we’re not protecting this world,” he says.


“In this confusing world, we need to be the bridge between differences. You interrogate those differences, you hold them for as long as you can until something happens, something reveals itself,” says Jad Abumrad. He speaks at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Jad Abumrad, host of RadioLab and Dolly Parton’s America

Big Idea: Storytellers and journalists are the bridge that spans conflict and difference to reveal a new meaning. 

How: When journalist Jad Abumrad began storytelling in 2002, he crafted each story to culminate the same way: mind-blowing science discoveries, paired with ear-tickling auditory creations, resolved into “moments of wonder.” But after 10 years, he began to wonder himself: Is this the only way to tell a story? Seeking an answer, Abumrad turned to more complex, convoluted stories and used science to sniff out the facts. But these stories often ended without an answer or resolution, instead leading listeners to “moments of struggle,” where truth collided with truth. It wasn’t until Abumrad returned to his home state of Tennessee that he met an unlikely teacher in the art of storytelling: Dolly Parton. In listening to the incredible insights she had into her own life, he realized that the best stories can’t be summarized neatly and instead should find revelation — or what he calls “the third.” A term rooted in psychotherapy, the third is the new entity created when two opposing forces meet and reconcile their differences. For Abumrad, Dolly had found resolution in her life, fostered it in her fanbase and showcased it in her music — and revealed to him his new purpose in telling stories. “In this confusing world, we need to be the bridge between differences,” Abumrad says. “You interrogate those differences, you hold them for as long as you can until something happens, something reveals itself.”


Aloe Blacc performs “Amazing Grace” at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Backed by piano from Greg Phillinganes, singer, songwriter and producer Aloe Blacc provides balm for the soul with a gorgeous rendition of “Amazing Grace.”


Congressman John Lewis, politician and civil rights leader, interviewed by Bryan Stevenson, public interest lawyer and founder of the Equal Justice Initiative — an excerpt from the upcoming TED Legacy Project

Big idea: As a new generation of protesters takes to the streets to fight racial injustice, many have looked to the elders of the Civil Rights Movement — like John Lewis — to study how previous generations have struggled not just to change the world but also to maintain morale in the face of overwhelming opposition.

How? In order to truly effect change and move people into a better world, contemporary protesters must learn tactics that many have forgotten — especially nonviolent engagement and persistence. Fortunately, John Lewis sees an emerging generation of new leaders of conscience, and he urges them to have hope, to be loving and optimistic and, most of all, to keep going tirelessly even in the face of setbacks. As interviewer Bryan Stevenson puts it, “We cannot rest until justice comes.”

Worse Than Failure: Representative Line: The Truest Comment

Usually, when posting a representative line, it’s a line of code. Rarely, it’s been a “representative comment”.

Today’s submitter supplied some code, but honestly, the code doesn’t matter; it’s just some method signatures. The comment, however, is representative. Not just representative of this code, but honestly, all code, everywhere.

        // i should have commented this stupid code

As our submitter writes:

I wrote this code. I also wrote that comment. That comment is true.
Why do I do this to myself?

I sometimes ask myself the same question.



LongNow: Racial Injustice & Long-term Thinking

Long Now Community,

Since 01996, Long Now has endeavored to foster long-term thinking in the world by sharing perspectives on our deep past and future. We have too often failed, however, to include and listen to Black perspectives.

Racism is a long-term civilizational problem with deep roots in the past, profound effects in the present, and so many uncertain futures. Solving the multigenerational challenge of racial inequality requires many things, but long-term thinking is undoubtedly one of them. As an institution dedicated to the long view, we have not addressed this issue enough. We can and will do better.

We are committed to surfacing these perspectives on both the long history of racial inequality and possible futures of racial justice going forward, both through our speaker series and in the resources we share online. And if you have any suggestions for future resources or speakers, we are actively looking.

Alexander Rose

Executive Director, Long Now

Learn More

  • A recent episode of This American Life explored Afrofuturism: “It’s more than sci-fi. It’s a way of looking at black culture that’s fantastic, creative, and oddly hopeful—which feels especially urgent during a time without a lot of optimism.”
  • The 1619 Project from The New York Times, winner of the 02020 Pulitzer Prize for Commentary, re-examines the 400-year legacy of slavery.
  • A paper from political scientists at Stanford and Harvard analyzes the long-term effects of slavery on Southern attitudes toward race and politics.
  • Ava DuVernay’s 13th is a Netflix documentary about the links between slavery and the US penal system. It is available to watch on YouTube for free here
  • “The Case for Reparations” is a landmark essay by Ta-Nehisi Coates about the institutional racism of housing discrimination. 
  • I Am Not Your Negro is a documentary exploring the history of racism in the United States through the lens of the life of writer and activist James Baldwin. The documentary is viewable on PBS for free here.
  • Science Fiction author N.K. Jemisin defies convention and creates worlds informed by the structural forces that cause inequality.

Sociological Images: Party Affiliation in a Pandemic

Since mid-March 2020, Gallup has been polling Americans about their degree of self-isolation during the pandemic. The percent who said they had “avoided small gatherings” rose from 50% in early March to 81% in early April, dropping slightly to 75% in late April as pressure began rising to loosen stay-at-home orders.

What makes this curve sociologically interesting is that our leaders made the restrictions largely voluntary, hoping for social norms to do the job of control. Only a few state and local governments have issued citations for holding social gatherings. Mostly, social norms have been doing the job. But the partisan divide on self-isolation is widening and undermining pandemic precautions. The chart, which appeared in a Gallup report on May 11, 2020, vividly shows the partisan divide on beliefs in distancing as protection from the pandemic. The striking finding is the huge partisan gap, with independents leaning slightly toward Democrats.

Not only did the partisan divide remain wide, but the number of adults practicing “social distancing” dropped from 75% in early April to 58%. This drop in self-reported “social distancing” occurred in states both with and without stay-at-home orders. Elsewhere I argue that “social distancing” is a most unfortunate label for physical distancing.

Republicans have been advocating for opening up businesses early, but it is not a mere intellectual debate. Some held large protests while brandishing firearms; others appeared in public without masks and without observing six-foot distances. Some businesses that re-opened in early May reported customers acting disrespectfully toward others, ignoring the stores’ distancing rules. In another incident, an armed militia stood outside a barbershop to keep authorities from closing down the newly reopened shop.

Retail operations in particular are concerned about compliance with social norms because without adequate compliance, other customers will not return. Social norms rely on social trust. If retail operations cannot depend upon customers to be respectful, they will lose not only additional customers but employees as well.

The Sad Impact of Pandemic Partisanship    

American society was highly partisan before the pandemic, so it is not surprising that partisan signs remain. For a few weeks in March and April, partisanship took a back seat and signs of cooperation suggested societal solidarity.

We are only months away from the presidential election, so we cannot expect either side to let us forget the contest. We can only hope, however, that partisans will not forget that politics alone cannot resolve the pandemic. Unless we rely heavily on scientists and health-system experts, our society can only fail.

Unfortunately, lives hang in the balance if there is a partisan failure to reach consensus on distancing and related precautions. Economists at Stanford and Harvard, using distancing data from smartphones as well as local data on COVID cases and deaths, built a sophisticated model of the first few months of the pandemic. Their report, “Polarization and Public Health: Partisan Differences in Social Distancing during the Coronavirus Pandemic,” found that (1) Republicans engage in less social distancing, and (2) if this partisan difference continues, the US will end up with more COVID-19 transmission at a higher economic cost. Assuming the researchers’ analytical model is accurate, the Republican ridicule of social distancing is a tragic irony: not only will lives be lost, but what is done under the banner of promoting economic benefit will actually produce greater economic hardship.

Ron Anderson, Professor Emeritus, University of Minnesota, taught sociology from 1968 to 2005. His early work centered around the social diffusion of technology. Since 2005, his work has focused on compassion and the social dimensions of suffering.

(View original at https://thesocietypages.org/socimages)

Worse Than Failure: Editor's Soapbox: On Systemic Debt

I recently caught up with an old co-worker from my first "enterprise" job. In 2007, I was hired to support an application which was "on its way out," as "eventually" we'd be replacing it with a new ERP. September 2019 was when it finally got retired.

Interest on the federal debt

The application was called "Total Inventory Process" and it is my WTF origin story. Elements of that application and the organization have filtered into many of my articles on this site. Boy, did it have more than its share of WTFs.

"Total Inventory Process". Off the bat, you know that this is a case of an overambitious in-house team trying to make a "do everything" application that's gonna solve every problem. Its core was inventory management and invoicing, but it had its own data-driven screen rendering/templating engine (which barely worked), its own internationalization engine (for an application which never got localized), and a blend of web, COM+, and client-side VB6 code with hooks into both an Oracle backend and a mainframe.

TIP was also a lesson in the power of technical debt, how much it really costs, and why technical solutions are almost never enough to address technical debt.

TIP was used to track the consumption of inventory at several customers' factories to figure out how to invoice them. We sold them the paint that goes on their widgets, but it wasn't as simple as "You used this much paint, we bill you this much." We handled the inventory of everything in their paint line, from the paint to the toilet paper in the bathrooms. The customer didn't want to pay for consumption, they wanted to pay for widgets. The application needed to be able to say, "If the customer makes 300 widgets, that's $10 worth of toilet paper."
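The arithmetic behind that billing rule is simple once every supply's period cost is restated as a cost per widget. A minimal sketch of the idea (all names and figures here are invented for illustration, not TIP's actual logic):

```python
# Hypothetical per-widget billing: instead of billing raw consumption,
# divide each supply's period cost by the widgets produced, then bill
# the customer's widget count at those per-widget rates.

def per_widget_rates(consumption_cost, widgets_produced):
    """Return each supply's cost per widget for the billing period."""
    return {item: cost / widgets_produced
            for item, cost in consumption_cost.items()}

def invoice(rates, widgets):
    """Bill a widget count at the given per-widget rates."""
    return sum(rate * widgets for rate in rates.values())

# Example period: $3,000 of paint and $10 of toilet paper consumed
# while producing 300 widgets.
rates = per_widget_rates({"paint": 3000.0, "toilet_paper": 10.0}, 300)
print(round(invoice(rates, 300), 2))  # the full $3,010 for 300 widgets
```

The real system was, of course, vastly more complicated than this, which is rather the point of the story.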

The application handled millions of dollars of invoices each year, for dozens of customer facilities. I was the second developer hired to work full time on supporting TIP. I wasn't hired to fix bugs. I wasn't hired to add new features. I was hired because it took two developers, working 40 hours a week, just to keep the application from falling over in production.

My key job responsibility was to log into the production database and manually tweak records, because the application was so bug-ridden, so difficult to use, and so locked down that users couldn't correct their own mistakes. With whatever time I had left, I'd respond to user requests for new functionality or bug-fixes.

It's usually hard to quantify exactly what technical debt costs. We talk about it a lot, but it very often takes the form of, "I know it when I see it," or "technical debt is other people's code." Here, we have a very concrete number: technical debt was two full-time salaries to just maintain basic operations.

My first development task was to add a text-box to one screen. Because other developers had tried to do this in the past, the estimate I was given was two weeks of developer time. 80 hours of effort for a new text box.

Once, the users wanted to be able to sort a screen in descending order. The application had a complex sorting/paging system designed to prevent fetching more than one page of data at a time for performance reasons. While it did that, it had no ability to sort in descending order, and adding that functionality would have entailed a full rewrite. My “fix” was to disable paging and sorting entirely on the backend, then re-implement them on the client side in JavaScript for that screen. The users loved it, because suddenly one screen in the application was actually fast.

Oh, and as it was, "sorting" on the backend was literally putting the order-by clause in the URL and then SQL injecting it.
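For what it's worth, the standard fix for that kind of order-by injection is to never splice user input into the identifier position at all, but to map the requested sort key onto an allowlist of known columns. A minimal sketch, not TIP's actual code (the table and column names are hypothetical):

```python
# Allowlist-based sorting: only vetted column names ever reach the SQL
# string. Row values would still go through bind parameters; this only
# handles the identifier position, which bind parameters can't cover.

ALLOWED_SORT_COLUMNS = {"invoice_date", "customer", "amount"}

def build_query(sort_column, descending=False):
    if sort_column not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"unsupported sort column: {sort_column!r}")
    direction = "DESC" if descending else "ASC"
    return f"SELECT * FROM invoices ORDER BY {sort_column} {direction}"

print(build_query("amount", descending=True))
# Attacker input like "amount; DROP TABLE invoices" raises ValueError
# instead of reaching the database.
```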

There were side effects to this quantity of technical debt. Since we were manually changing data in production, we were working closely with a handful of stakeholder users. Three, in specific, who were the "triumvirate" of TIP. Those three people were the direct point of contact with the developers. They were the direct point of contact to management. They set priorities. They entered bug reports. They made data change requests. They ruled the system.

They had a lot of power over this system, and this was a system handling millions of dollars. I think they liked that power. To this day, one of them holds that the "concept" of TIP was "unbeatable", and its replacement system is "cheap" and "crap". Now, from a technical perspective, you can't really get crappier than TIP, but from the perspective of a super-user, I can understand how great TIP felt.

I was a young developer, and didn't really like this working environment. Oh, I liked the triumvirate just fine, they were lovely people to work with, but I didn't like spending my days as nothing more than an extension of them. They'd send me emails containing UPDATE statements, and just ask that I execute them in production. There's not a lot of job satisfaction in being nothing more than "the person with permission to change data."

Frustrated, I immediately started looking for ways to pay down that technical debt. There was only so much I could do, for a number of reasons. The obvious one was the software. If "adding a textbox" could reasonably take two weeks, "restructuring data access so that you can page data sorted in descending order" felt impossible. Even "easy" changes could unleash chaos as they triggered some edge case no one knew about.

Honestly, though, the technical obstacles were the small ones. The big challenges were the cultural and political ones.

First, the company knew that much of the code running in production was bad. So they created policies which erected checkpoints to confirm code quality, making it harder to deploy new code. Much harder, and much longer: you couldn't release to production until someone with authority signed off on it, and that might take days. Instead of resulting in better code, it instead meant the old, bad code stayed bad, and new, bad code got rushed through the process. "We have a hard go-live date, we'll improve the code after that." Or people found other workarounds: your web code had to go through those checkpoints, but stored procedures didn't, so if you had the authority to modify things in production, like I did, you could change stored procedures to your heart's content.

Second, the organization as a whole was risk-averse. The application was handling millions of dollars of invoices, and while it required a lot of manual babysitting, by the time the invoices went out, they were accurate. The company got paid. No one wanted to make sweeping changes which could impact that.

Third, the business rules were complicated. "How many rolls of toilet paper do you bill per widget?" is a hard question to answer. It's fair to say that no one fully understood the system. On the business side, the triumvirate probably had the best handle on it, but even they could be blindsided by its behavior. It was a complex system that needed to stay functioning, because it was business critical, but no one knew exactly what all of those functions were.

Fourth, the organization viewed "support" and "enhancement" the way other companies might view "operational" and "capital" budgets. They were different pools of money, and you weren't supposed to use the support money to pay for new features, nor could enhancement money be used to fix bugs.

Most frustrating was that I would sometimes get push-back from the triumvirate. Oh, they loved it when I could give them a button to self-service some feature which needed to use manual intervention, so long as the interface was just cumbersome enough that only they could use it. They hated it when a workflow got so streamlined that any user could do it. Worse, for them, was that as the technical debt got paid down, we started transitioning more and more of the "just change production data" to low-level contractors. Now the triumvirate no longer had a developer capable of changing code at their beck and call. They had to surrender power.

Fortunately, management was sensitive to the support costs around TIP. Once we started to build a track-record of reduced costs, management started to remove some of the obstacles. It was a lot of politics. I suspect some management were wary of how much power the triumvirate had, and were happy when that could get reduced. After a few years of work on TIP, I mostly rolled off of it onto other projects. Usually, I'd just pop in for month-end billing support, or being the expert who understood some of the bizarre technical intricacies. Obviously, the application continued working just fine for years without me, and I can't say that I miss it.

TIP accrued technical debt far more quickly than most systems would. Some of that comes from ambition: it tried to be everything, including reinventing functions which had nothing to do with its core business. This led to a tortured development process, complete with death marches, team restructurings, "throw developers at this until it gets back on track," and several ultimatums like "If we don't get something in production by the end of the month, everyone is fired." It was born in severe debt, within an organization which didn't have good mechanisms to manage that debt. And the main obstacles to paying down that debt weren't technical: they were social and political.

I was lucky. I was given the freedom to tackle that debt (or, if we're being fully honest, I also took some freedom under the "it's easier to seek forgiveness than to ask permission" principle). In a lot of systems, the technical debt accrues until the costs of servicing the debt are untenable. You end up paying so much in "interest" that you stop being able to actually do anything. This is a failure mode for a lot of projects, and that's usually when a "ground up" rewrite happens. Rewrites have a mixed record, though: TIP itself was actually a rewrite of an older system and promised to "do it right" this time.

Technical debt has been on my mind because I've been thinking a lot lately about broader, systemic debt. Any systems humans build—software systems, mechanical systems, or even social systems—are going to accrue debt. Decisions made in the history of the system are going to create problems in the future, whether those decisions were wrong from the get-go, or had unforeseen consequences, or just the world changed and they're no longer workable.

The United States, right now, is experiencing a swell of protests unlike anything I've seen in my lifetime. I think it's fair to say that these protests are rooted in historical inequities and injustices which constitute a form of social debt. Our social systems, especially around the operation of police, but broadly in the realm of race, are loaded with debt from the past.

I see similarities in the obstacles to paying down that debt. Attempts to make changes by policy end up creating roadblocks to change, or simply fail to accomplish their goals. Resistance to change because change entails risk, and managing or mitigating those risks feels more important than actually fixing the problems. The systems we're talking about are complicated, and it's difficult to even build consensus on what better versions look like, because it's difficult to even understand what they currently are. And finally, there are groups that have power, and in paying down the social debt, they would have to give up some of that power.

That's a big, hard-to-grapple-with quantity of debt. It's connected to a set of interlocking systems which are difficult to pick apart. And it's frighteningly important to get right.

All systems accrue systemic debt. Software systems accrue debt. Social systems accrue debt. Even your personal mental systems, your psychological health, accrue debt. Without action, the debt will grow, the interest on that debt grows, and more energy ends up servicing that debt. One of the ways in which systems fail is when the accumulated debt gets too high. Bankruptcy occurs. Of all of these kinds of systems, the software debt tends to be the trivial one. Software products become unsupportable and get replaced. Psychological debt can lead towards things like depression or other mental health problems. Social debt can lead to unrest or perpetuations of injustice.

Systemic debt may be a technical problem at its root, but its solution is always going to require politics. TIP couldn't be improved without overcoming political challenges. US social debt can't be resolved without significant political changes.

What I hope people take away from this story is an awareness of systemic debt, and an understanding of some of the long-term costs of that debt. I encourage you to look around you and at the systems you interact with. What sources of debt are there? What is that debt costing the system and the people who depend on it? What are the obstacles to paying down that debt? What are the challenges of making the systems we use better?

Systemic debt doesn't go away by itself, and left unmanaged, it will only grow. If you don't pay attention to it, broken systems end up staying in production for way too long. TIP was used for nearly 18 years. Don't use broken things for that long, please.


Cory Doctorow: Someone Comes to Town, Someone Leaves Town (part 05)

Here’s part five of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

There’s more of Kurt in this week’s episode; as I mentioned in last week’s intro, Kurt is loosely based on my old friend Darren Atkinson, who pulled down a six-figure income by recovering, repairing and reselling high-tech waste from Toronto’s industrial suburbs. Darren was the subject of the first feature I ever sold to Wired, Dumpster Diving.

But Kurt was also based loosely on Igor Kenk, a friend of mine who turned out to be one of Toronto’s most prolific bike thieves (I knew him as a bike repair guy). Igor was a strange and amazing guy, and Richard Poplak and Nick Marinkovich’s 2010 graphic novel biography of him is a fantastic read.

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3


Krebs on Security: Owners of DDoS-for-Hire Service vDOS Get 6 Months Community Service

The co-owners of vDOS, a now-defunct service that for four years helped paying customers launch more than two million distributed denial-of-service (DDoS) attacks that knocked countless Internet users and websites offline, each have been sentenced to six months of community service by an Israeli court.

vDOS as it existed on Sept. 8, 2016.

A judge in Israel handed down the sentences plus fines and probation against Yarden Bidani and Itay Huri, both Israeli citizens arrested in 2016 at age 18 in connection with an FBI investigation into vDOS.

Until it was shuttered in 2016, vDOS was by far the most reliable and powerful DDoS-for-hire or “booter” service on the market, allowing even completely unskilled Internet users to launch crippling assaults capable of knocking most websites offline.

vDOS advertised the ability to launch attacks at up to 50 gigabits of data per second (Gbps) — well more than enough to take out any site that isn’t fortified with expensive anti-DDoS protection services.

The Hebrew-language sentencing memorandum (PDF) has redacted the names of the defendants, but there are more than enough clues in the document to ascertain the identities of the accused. For example, it says the two men earned a little more than $600,000 running vDOS, a fact first reported by this site in September 2016 just prior to their arrest, when vDOS was hacked and KrebsOnSecurity obtained a copy of its user database.

In addition, the document says the defendants were initially apprehended on September 8, 2016, arrests which were documented here two days later.

Also, the sentencing mentions the supporting role of a U.S. resident named only as “Jesse.” This likely refers to 23-year-old Jesse Wu, who KrebsOnSecurity noted in October 2016 pseudonymously registered the U.K. shell company used by vDOS, and ran a tiny domain name registrar called NameCentral that vDOS and many other booter services employed.

Israeli prosecutors say Wu also set up their payment infrastructure, and received 15 percent of vDOS’s total revenue for his trouble. NameCentral no longer appears to be in business, and Wu could not be reached for comment.

Although it is clear Bidani and Huri are defendants in this case, it is less clear which is referenced as Defendant #1 or Defendant #2. Both were convicted of “corrupting/disturbing a computer or computer material,” charges that the judge said had little precedent in Israeli courts, noting that “cases of this kind have not been discussed in court so far.” Defendant #1 also was convicted of sharing nude pictures of a 14-year-old girl.

vDOS also sold API access to its backend attack infrastructure to other booter services to further monetize its excess firepower, including Vstress, Ustress, PoodleStresser and LizardStresser.

Yarden Bidani. Image: Facebook.

Both defendants received the lowest possible sentence (the maximum was two years in prison) — six months of community service under the watch of the Israeli prison service — mainly because the accused were minors during the bulk of their offenses. The judge also imposed small fines on each, noting that more than $175,000 worth of profits had already been seized from their booter business.

The judge observed that while Defendant #2 had shown remorse for his crimes and an understanding of how his actions affected others — even sobbing throughout one court proceeding — Defendant #1 failed to participate in the therapy sessions previously ordered by the court, and that he has “a clear and daunting boundary for recurrence of further offenses in the future.”

Boaz Dolev, CEO of ClearSky Cyber Security, said he’s disappointed in the lightness of the sentences given how much damage the young men caused.

“I think that such an operation that caused big damage to so many companies should have been dealt differently by the Israeli justice system,” Dolev said. “The fact that they were under 18 when committing their crimes saved them from much harder sentences.”

While DDoS attacks typically target a single website or Internet host, they often result in widespread collateral Internet disruption. Less than two weeks after the 2016 arrest of Bidani and Huri, KrebsOnSecurity.com suffered a three-day outage as a result of a record 620 Gbps attack that was alleged to have been purchased in retribution for my reporting on vDOS. That attack caused stability issues for other companies using the same DDoS protection firm my site enjoyed at the time, so much so that the provider terminated my service with them shortly thereafter.

To say that vDOS was responsible for a majority of the DDoS attacks clogging up the Internet between 2012 and 2016 would be an understatement. The various subscription packages for the service were sold based in part on how many seconds the denial-of-service attack would last. And in just four months between April and July 2016, vDOS was responsible for launching more than 277 million seconds of attack time, or approximately 8.81 years worth of attack traffic.

It seems likely vDOS was responsible for several decades worth of DDoS years, but it’s impossible to say for sure because vDOS’s owners routinely wiped attack data from their servers.

Prosecutors in the United States and United Kingdom have in recent years sought tough sentences for those convicted of running booter services. While a number of current charges against alleged offenders have not yet been fully adjudicated, only a handful of defendants in these cases have seen real jail time.

The two men responsible for creating and unleashing the Mirai botnet (the same duo responsible for building the massive crime machine that knocked my site offline in 2016) each avoided jail time thanks to their considerable cooperation with the FBI.

Likewise, Pennsylvania resident David Bukoski recently got five years probation and six months of “community confinement” after pleading guilty to running the Quantum Stresser booter service. Lizard Squad member and PoodleStresser operator Zachary Buchta was sentenced to three months in prison and ordered to pay $350,000 in restitution for his role in running various booter services.

On the other end of the spectrum, last November 21-year-old Illinois resident Sergiy Usatyuk was sentenced to 13 months in jail for running multiple booter services that launched millions of attacks over several years. And a 20-year-old U.K. resident in 2017 got two years in prison for operating the Titanium Stresser service.

For their part, authorities in the U.K. have sought to discourage would-be customers of these booter services by purchasing Google ads warning that such services are illegal. The goal is to steer customers away from committing further offenses that could land them in jail, and toward more productive uses of their skills and/or curiosity about cybersecurity.

,

MEComparing Compression

I just did a quick test of different compression options in Debian. The source file is a 1.1G MySQL dump file. The time is user CPU time on an i7-930 running under KVM; the compression programs may have different levels of optimisation for other CPU families.

Facebook people designed the zstd compression system (here’s a page giving an overview of it [1]). It has some interesting new features that can provide real differences at scale (like unusually large windows and pre-defined dictionaries), but I just tested the default mode and the -9 option for more compression. For the SQL file “zstd -9” provides significantly better compression than gzip while taking slightly less CPU time than “gzip -9”, and zstd with the default option (equivalent to “zstd -3”) compresses much faster than “gzip -9” while still producing slightly smaller output. For this use case bzip2 is too slow for inline compression of a MySQL dump, as the dump process locks tables and a slow compressor can hang clients. The lzma and xz compression algorithms provide significant benefits in size but the time taken is grossly disproportionate.
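The inline-compression point can be sketched as a simple pipeline. This is only a sketch: in a real backup script the producer would be mysqldump, but a generated stream stands in here so the commands run anywhere zstd is installed.

```shell
# Stand-in producer for mysqldump; in practice this would be something like
#   mysqldump --single-transaction mydb | zstd -q -9 > mydb.sql.zst
seq 1 100000 > dump.sql

# Compress inline through a pipe, then verify a lossless round trip.
zstd -q -9 < dump.sql > dump.sql.zst
zstd -dq < dump.sql.zst > restored.sql
cmp -s dump.sql restored.sql && echo "lossless round trip"
```

Because zstd keeps up with the dump as it is produced, tables stay locked only for as long as the dump itself takes.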

In a quick check of my collection of files compressed with gzip I was only able to find one file that got less compression with zstd with default options, and that file got better compression with “zstd -9”. So zstd seems to beat gzip everywhere by every measure.

The bzip2 compression seems to be obsolete: “zstd -9” is much faster and produces slightly smaller output.

Both xz and lzma offer a combination of compression and time taken that zstd can’t match (for this file type at least). The ultra compression mode 22 gives 2% smaller output files, but almost 28 minutes of CPU time for compression is a bit ridiculous. There is a threaded mode for zstd that could potentially allow a shorter wall clock time for “zstd --ultra -22” than lzma/xz while also giving better compression.

Compression        Time    Size
zstd               5.2s    130MB
zstd -9            28.4s   114MB
gzip -9            33.4s   141MB
bzip2 -9           3m51s   119MB
lzma               6m20s   97MB
xz                 6m36s   97MB
zstd -19           9m57s   99MB
zstd --ultra -22   27m46s  95MB
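A minimal way to reproduce the size part of this comparison on your own machine (assuming gzip and zstd are installed; the generated sample.sql merely stands in for the real 1.1G dump, so the absolute numbers will differ):

```shell
# Generate a small fake SQL dump, then compare compressed sizes.
seq 1 200000 | sed 's/^/INSERT INTO t VALUES (/; s/$/);/' > sample.sql

for cmd in "gzip -9" "zstd -3" "zstd -9" "zstd -19"; do
    $cmd -q -c sample.sql > sample.out   # -c writes to stdout, -q silences notices
    printf '%-8s %8d bytes\n' "$cmd" "$(wc -c < sample.out)"
done
```

Prefix each compressor with time(1) to also capture the CPU-time column.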

Conclusion

For distributions like Debian, which have large archives of files that are compressed once and transferred many times, the “zstd --ultra -22” compression might be useful with multi-threaded compression. But given that Debian already has xz in use it might not be worth changing until faster CPUs with lots of cores become more commonly available. One could argue that for Debian it doesn’t make sense to change from xz, as hard drives seem to be gaining capacity (while also shrinking in physical size) faster than the Debian archive is growing. One possible reason for adopting zstd in a distribution like Debian is that it has more tuning options for things like memory use: for an architecture like ARM, which tends to have less RAM, packages could be compressed in a way that decreases memory use on decompression.

For general compression such as compressing log files and making backups, zstd seems the clear winner. Even bzip2 is far too slow, and in my tests zstd clearly beats gzip for every combination of compression and time taken. There may be some corner cases where gzip can compete on compression time due to CPU features, optimisation for particular CPUs, etc, but I expect that in almost all cases zstd will win on both compression size and time. As an aside, I once noticed the 32-bit build of gzip compressing faster than the 64-bit build on an Opteron system; at that time the 32-bit version had assembly optimisation and the 64-bit version didn’t.

To create a tar archive you can run “tar czf” or “tar cJf” for gzip or xz compression respectively. To create an archive with zstd compression you have to use “tar --zstd -cf”, which is 7 extra characters to type. It’s likely that for most casual archive creation (e.g. for copying files around on a LAN or USB stick) saving 7 characters of typing is more of a benefit than saving a small amount of CPU time and storage space. It would be really good if tar got a single-character option for zstd compression.
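The three invocations side by side, using a hypothetical directory mydir/ (assumes GNU tar with the relevant compressors installed):

```shell
mkdir -p mydir && echo "example" > mydir/file.txt

tar czf backup.tar.gz  mydir/         # gzip: single-letter z
tar cJf backup.tar.xz  mydir/         # xz:   single-letter J
tar --zstd -cf backup.tar.zst mydir/  # zstd: long option only

# Listing the zstd archive confirms it round-trips through tar.
tar --zstd -tf backup.tar.zst
```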

The external dictionary support in zstd would work really well with rsync for backups. Currently rsync only supports zlib, adding zstd support would be a good project for someone (unfortunately I don’t have enough spare time).

Now I will change my database backup scripts to use zstd.

Update:

The command “tar acvf a.zst filenames” will create a zstd-compressed tar archive; the “a” option to GNU tar makes it autodetect the compression type from the file name. Thanks Enrico!

,

CryptogramFriday Squid Blogging: Shark vs. Squid

National Geographic has a photo of a 7-foot long shark that fought a giant squid and lived to tell the tale. Or, at least, lived to show off the suction marks on his skin.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramZoom's Commitment to User Security Depends on Whether you Pay It or Not

Zoom was doing so well.... And now we have this:

Corporate clients will get access to Zoom's end-to-end encryption service now being developed, but Yuan said free users won't enjoy that level of privacy, which makes it impossible for third parties to decipher communications.

"Free users for sure we don't want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose," Yuan said on the call.

This is just dumb. Imagine the scene in the terrorist/drug kingpin/money launderer hideout: "I'm sorry, boss. We could have had strong encryption to secure our bad intentions from the FBI, but we can't afford the $20." This decision will only affect protesters and dissidents and human rights workers and journalists.

Here's advisor Alex Stamos doing damage control:

Nico, it's incorrect to say that free calls won't be encrypted and this turns out to be a really difficult balancing act between different kinds of harms. More details here:

Some facts on Zoom's current plans for E2E encryption, which are complicated by the product requirements for an enterprise conferencing product and some legitimate safety issues. The E2E design is available here: https://github.com/zoom/zoom-e2e-whitepaper/blob/master/zoom_e2e.pdf

I read that document, and it doesn't explain why end-to-end encryption is only available to paying customers. And note that Stamos said "encrypted" and not "end-to-end encrypted." He knows the difference.

Anyway, people were rightly incensed by his remarks. And yesterday, Yuan tried to clarify:

Yuan sought to assuage users' concerns Wednesday in his weekly webinar, saying the company was striving to "do the right thing" for vulnerable groups, including children and hate-crime victims, whose abuse is sometimes broadcast through Zoom's platform.

"We plan to provide end-to-end encryption to users for whom we can verify identity, thereby limiting harm to vulnerable groups," he said. "I wanted to clarify that Zoom does not monitor meeting content. We do not have backdoors where participants, including Zoom employees or law enforcement, can enter meetings without being visible to others. None of this will change."

Notice that he specifically did not say that he was offering end-to-end encryption to users of the free platform. Only to "users we can verify identity," which I'm guessing means users that give him a credit card number.

The Twitter feed was similarly sloppily evasive:

We are seeing some misunderstandings on Twitter today around our encryption. We want to provide these facts.

Zoom does not provide information to law enforcement except in circumstances such as child sexual abuse.

Zoom does not proactively monitor meeting content.

Zoom does not have backdoors where Zoom or others can enter meetings without being visible to participants.

AES 256 GCM encryption is turned on for all Zoom users -- free and paid.

Those facts have nothing to do with any "misunderstanding." That was about end-to-end encryption, which the statement very specifically left out of that last sentence. The corporate communications have been clear and consistent.

Come on, Zoom. You were doing so well. Of course you should offer premium features to paying customers, but please don't include security and privacy in those premium features. They should be available to everyone.

And, hey, this is kind of a dumb time to side with the police over protesters.

I have emailed the CEO, and will report back if I hear back. But for now, assume that the free version of Zoom will not support end-to-end encryption.

EDITED TO ADD (6/4): Another article.

EDITED TO ADD (6/4): I understand that this is complicated, both technically and politically. (Note, though, Jitsi is doing it.) And, yes, lots of people confused end-to-end encryption with link encryption. (My readers tend to be more sophisticated than that.) My worry is that the "we'll offer end-to-end encryption only to paying customers we can verify, even though there's plenty of evidence that 'bad purpose' people will just get paid accounts" story plays into the dangerous narrative that encryption itself is dangerous when widely available. And I disagree with the notion that the possibility of child exploitation is a valid reason to deny security to large groups of people.

Matthew Green on this issue. An excerpt:

Once the precedent is set that E2E encryption is too "dangerous" to hand to the masses, the genie is out of the bottle. And once corporate America accepts that private communications are too politically risky to deploy, it's going to be hard to put it back.

From Signal:

Want to help us work on end-to-end encrypted group video calling functionality that will be free for everyone? Zoom on over to our careers page....

TEDConversations on social progress: Week 3 of TED2020

For week 3 of TED2020, global leaders in technology, vulnerability research and activism gathered for urgent conversations on how to foster connection, channel energy into concrete social action and work to end systemic racism in the United States. Below, a recap of their insights.

“When we see the internet of things, let’s make an internet of beings. When we see virtual reality, let’s make it a shared reality,” says Audrey Tang, Taiwan’s digital minister for social innovation. She speaks with TED science curator David Biello at TED2020: Uncharted on June 1, 2020. (Photo courtesy of TED)

Audrey Tang, Taiwan’s digital minister for social innovation

Big idea: Digital innovation rooted in communal trust can create a stronger, more transparent democracy that is fast, fair — and even fun.

How? Taiwan has built a “digital democracy” where digital innovation drives active, inclusive participation from all its citizens. Sharing how she’s helped transform her government, Audrey Tang illustrates the many creative and proven ways technology can be used to foster community. In responding to the coronavirus pandemic, Taiwan created a collective intelligence system that crowdsources information and ideas, which allowed the government to act quickly and avoid a nationwide shutdown. They also generated a publicly accessible map that shows the availability of masks in local pharmacies to help people get supplies, along with a “humor over rumor” campaign that combats harmful disinformation with comedy. In reading her job description, Tang elegantly lays out the ideals of digital citizenship that form the bedrock of this kind of democracy: “When we see the internet of things, let’s make an internet of beings. When we see virtual reality, let’s make it a shared reality. When we see machine learning, let’s make it collaborative learning. When we see user experience, let’s make it about human experience. And whenever we hear the singularity is near, let us always remember the plurality is here.”


Brené Brown explores how we can harness vulnerability for social progress and work together to nurture an era of moral imagination. She speaks with TED’s head of curation Helen Walters at TED2020: Uncharted on June 2, 2020. (Photo courtesy of TED)

Brené Brown, Vulnerability researcher, storyteller

Big question: The United States is at its most vulnerable right now. Where do we go from here?

Some ideas: As the country reels from the COVID-19 pandemic and the murder of George Floyd, along with the protests that have followed, Brené Brown offers insights into how we might find a path forward. Like the rest of us, she’s in the midst of processing this moment, but believes we can harness vulnerability for progress and work together to nurture an era of moral imagination. Accountability must come first, she says: people have to be held responsible for their racist behaviors and violence, and we have to build safe communities where power is shared. Self-awareness will be key to this work: the ability to understand your emotions, behaviors and actions lies at the center of personal and social change and is the basis of empathy. This is hard work, she admits, but our ability to experience love, belonging, joy, intimacy and trust — and to build a society rooted in empathy — depend on it. “In the absence of love and belonging, there’s nothing left,” she says.


Dr. Phillip Atiba Goff, Rashad Robinson, Dr. Bernice King and Anthony D. Romero share urgent insights into this historic moment. Watch the discussion on TED.com.

In a time of mourning and anger over the ongoing violence inflicted on Black communities by police in the US and the lack of accountability from national leadership, what is the path forward? In a wide-ranging conversation, Dr. Phillip Atiba Goff, the CEO of Center for Policing Equity; Rashad Robinson, the president of Color of Change; Dr. Bernice Albertine King, the CEO of the King Center; and Anthony D. Romero, the executive director of the American Civil Liberties Union, share urgent insights into how we can dismantle the systems of oppression and racism responsible for tragedies like the murders of Ahmaud Arbery, Breonna Taylor, George Floyd and far too many others — and explore how the US can start to live up to its ideals. Watch the discussion on TED.com.

CryptogramNew Research: "Privacy Threats in Intimate Relationships"

I just published a new paper with Karen Levy of Cornell: "Privacy Threats in Intimate Relationships."

Abstract: This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships. Many common assumptions about privacy are upended in the context of these relationships, and many otherwise effective protective measures fail when applied to intimate threats. Those closest to us know the answers to our secret questions, have access to our devices, and can exercise coercive power over us. We survey a range of intimate relationships and describe their common features. Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.

This is an important issue that has gotten much too little attention in the cybersecurity community.

Worse Than FailureError'd: Just a Big Mixup

Daniel M. writes, "How'd they make this mistake? Simple. You add the prices into the bowl and turn the mixer on."

 

"I'm really glad to see a retailer making sure that I get the most accurate discount possible," Kelly K. wrote.

 

"I sure hope they're not also receiving invalid maintenance," Steven S. wrote.

 

Ernie writes, "Recently, I was looking for some hints on traditional bread making and found some interesting sources. Some of them go back to the middle ages."

 

"Tried to get a refund travel voucher through KLM, and well, obviously they know more than everyone else," Matthias wrote.

 

Roger G. writes, "I'm planning my ride in Mapometer and apparently I'm descending 100,000ft into the earths core. I'll let you know what I find..."

 

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

TEDConversations on rebuilding a healthy economy: Week 1 of TED2020

To kick off TED2020, leaders in business, finance and public health joined the TED community for lean-forward conversations to answer the question: “What now?” Below, a recap of the fascinating insights they shared.

“If you don’t like the pandemic, you are not going to like the climate crisis,” says Kristalina Georgieva, Managing Director of the International Monetary Fund. She speaks with head of TED Chris Anderson at TED2020: Uncharted on May 18, 2020. (Photo courtesy of TED)

Kristalina Georgieva, Managing Director of the International Monetary Fund (IMF)

Big idea: The coronavirus pandemic shattered the global economy. To put the pieces back together, we need to make sure money is going to countries that need it the most — and that we rebuild financial systems that are resilient to shocks.

How? Kristalina Georgieva is encouraging an attitude of determined optimism to lead the world toward recovery and renewal amid the economic fallout of COVID-19. The IMF has one trillion dollars to lend — it’s now deploying these funds to areas hardest hit by the pandemic, particularly in developing countries, and it’s also put a debt moratorium into effect for the poorest countries. Georgieva admits recovery is not going to be quick, but she thinks that countries can emerge from this “great transformation” stronger than before if they build resilient, disciplined financial systems. Within the next ten years, she hopes to see positive shifts towards digital transformation, more equitable social safety nets and green recovery. And as the environment recovers while the world grinds to a halt, she urges leaders to maintain low carbon footprints — particularly since the pandemic foreshadows the devastation of global warming. “If you don’t like the pandemic, you are not going to like the climate crisis,” Georgieva says. Watch the interview on TED.com »


“I’m a big believer in capitalism. I think it’s in many ways the best economic system that I know of, but like everything, it needs an upgrade. It needs tuning,” says Dan Schulman, president and CEO of PayPal. He speaks with TED business curators Corey Hajim at TED2020: Uncharted on May 19, 2020. (Photo courtesy of TED)

Dan Schulman, President and CEO of PayPal

Big idea: Employee satisfaction and consumer trust are key to building the economy back better.

How? A company’s biggest competitive advantage is its workforce, says Dan Schulman, explaining how PayPal instituted a massive reorientation of compensation to meet the needs of its employees during the pandemic. The ripple of benefits of this shift have included increased productivity, financial health and more trust. Building further on the concept of trust, Schulman traces how the pandemic has transformed the managing and moving of money — and how it will require consumers to renew their focus on privacy and security. And he shares thoughts on the new roles of corporations and CEOs, the cashless economy and the future of capitalism. “I’m a big believer in capitalism. I think it’s in many ways the best economic system that I know of, but like everything, it needs an upgrade. It needs tuning,” Schulman says. “For vulnerable populations, just because you pay at the market [rate] doesn’t mean that they have financial health or financial wellness. And I think everyone should know whether or not their employees have the wherewithal to be able to save, to withstand financial shocks and then really understand what you can do about it.”


Biologist Uri Alon shares a thought-provoking idea on how we could get back to work: a two-week cycle of four days at work followed by 10 days of lockdown, which would cut the virus’s reproductive rate. He speaks with head of TED Chris Anderson at TED2020: Uncharted on May 20, 2020. (Photo courtesy of TED)

Uri Alon, Biologist

Big idea: We might be able to get back to work by exploiting one of the coronavirus’s key weaknesses. 

How? By adopting a two-week cycle of four days at work followed by 10 days of lockdown, bringing the virus’s reproductive rate (R₀ or R naught) below one. The approach is built around the virus’s latent period: the three-day delay (on average) between when a person gets infected and when they start spreading the virus to others. So even if a person got sick at work, they’d reach their peak infectious period while in lockdown, limiting the virus’s spread — and helping us avoid another surge. What would this approach mean for productivity? Alon says that by staggering shifts, with groups alternating their four-day work weeks, some industries could maintain (or even exceed) their current output. And having a predictable schedule would give people the ability to maximize the effectiveness of their in-office work days, using the days in lockdown for more focused, individual work. The approach can be adopted at the company, city or regional level, and it’s already catching on, notably in schools in Austria.


“The secret sauce here is good, solid public health practice … this one was a bad one, but it’s not the last one,” says Georges C. Benjamin, Executive Director of the American Public Health Association. He speaks with TED science curator David Biello at TED2020: Uncharted on May 20, 2020. (Photo courtesy of TED)

Georges C. Benjamin, Executive Director of the American Public Health Association

Big Idea: We need to invest in a robust public health care system to lead us out of the coronavirus pandemic and prevent the next outbreak.

How: The coronavirus pandemic has tested the public health systems of every country around the world — and, for many, exposed shortcomings. Georges C. Benjamin details how citizens, businesses and leaders can put public health first and build a better health structure to prevent the next crisis. He envisions a well-staffed and equipped governmental public health entity that runs on up-to-date technology to track and relay information in real time, helping to identify, contain, mitigate and eliminate new diseases. Looking to countries that have successfully lowered infection rates, such as South Korea, he emphasizes the importance of early and rapid testing, contact tracing, self-isolation and quarantining. Our priority, he says, should be testing essential workers and preparing now for a spike of cases during the summer hurricane and fall flu seasons. “The secret sauce here is good, solid public health practice,” Benjamin says. “We should not be looking for any mysticism or anyone to come save us with a special pill … because this one was a bad one, but it’s not the last one.”

TEDConversations on climate action and contact tracing: Week 2 of TED2020

For week 2 of TED2020, global leaders in climate, health and technology joined the TED community for insightful discussions around the theme “build back better.” Below, a recap of the week’s fascinating and enlightening conversations about how we can move forward, together.

“We need to change our relationship to the environment,” says Chile’s former environment minister Marcelo Mena. He speaks with TED current affairs curator Whitney Pennington Rodgers at TED2020: Uncharted on May 26, 2020. (Photo courtesy of TED)

Marcelo Mena, environmentalist and former environment minister of Chile

Big idea: People power is the antidote to climate catastrophe.

How? With a commitment to transition to zero emissions by 2050, Chile is at the forefront of resilient and inclusive climate action. Mena shares the economic benefits instilling green solutions can have on a country: things like job creation and reduced cost of mobility, all the result of sustainability-minded actions (including phasing out coal-fired power plants and creating fleets of energy-efficient buses). Speaking to the air of social unrest across South America, Mena traces how climate change fuels citizen action, sharing how protests have led to green policies being enacted. There will always be those who do not see climate change as an imminent threat, he says, and economic goals need to align with climate goals for unified and effective action. “We need to change our relationship to the environment,” Mena says. “We need to protect and conserve our ecosystems so they provide the services that they do today.”


“We need to insist on the future being the one that we want, so that we unlock the creative juices of experts and engineers around the world,” says Nigel Topping, UK High Level Climate Action Champion, COP26. He speaks with TED Global curator Bruno Giussani at TED2020: Uncharted on May 26, 2020. (Photo courtesy of TED)

Nigel Topping, UK High Level Climate Action Champion, COP26

Big idea: The COVID-19 pandemic presents a unique opportunity to break from business as usual and institute foundational changes that will speed the world’s transition to a greener economy. 

How? Although postponed, the importance of COP26 — the UN’s international climate change conference — has not diminished. Instead it’s become nothing less than a forum on whether a post-COVID world should return to old, unsustainable business models, or instead “clean the economy” before restarting it. In Topping’s view, economies that rely on old ways of doing business jeopardize the future of our planet and risk becoming non-competitive as old, dirty jobs are replaced by new, cleaner ones. By examining the benefits of green economics, Topping illuminates the positive transformations happening now and leverages them to inspire businesses, local governments and other economic players to make radical changes to business as usual. “From the bad news alone, no solutions come. You have to turn that into a motivation to act. You have to go from despair to hope, you have to choose to act on the belief that we can avoid the worst of climate change… when you start looking, there is evidence that we’re waking up.”


“Good health is something that gives us all so much return on our investment,” says Joia Mukherjee. She speaks with head of TED Chris Anderson at TED2020: Uncharted on May 27, 2020. (Photo courtesy of TED)

Joia Mukherjee, Chief Medical Officer, Partners in Health (PIH)

Big idea: We need to massively scale up contact tracing in order to slow the spread of COVID-19 and safely reopen communities and countries.

How? Contact tracing is the process of identifying people who come into contact with someone who has an infection, so that they can be quarantined, tested and supported until transmission stops. The earlier you start, the better, says Mukherjee — but, since flattening the curve and easing lockdown measures depend on understanding the spread of the disease, it’s never too late to begin. Mukherjee and her team at PIH are currently supporting the state of Massachusetts to scale up contact tracing for the most vulnerable communities. They’re employing 1,700 full-time contact tracers to investigate outbreaks in real-time and, in partnership with resource care coordinators, ensuring infected people receive critical resources like health care, food and unemployment benefits. With support from The Audacious Project, a collaborative funding initiative housed at TED, PIH plans to disseminate its contact tracing expertise across the US and support public health departments in slowing the spread of COVID-19. “Good health is something that gives us all so much return on our investment,” Mukherjee says. See what you can do for this idea »


Google’s Chief Health Officer Karen DeSalvo shares the latest on the tech giant’s critical work on contact tracing. She speaks with head of TED Chris Anderson at TED2020: Uncharted on May 27, 2020. (Photo courtesy of TED)

Karen DeSalvo, Chief Health Officer, Google

Big idea: We can harness the power of tech to combat the pandemic — and reshape the future of public health.

How? Google and Apple recently announced an unprecedented partnership on the COVID-19 Exposure Notifications API, a Bluetooth-powered technology that would tell people they may have been exposed to the virus. The technology is designed with privacy at its core, DeSalvo says: it doesn’t use GPS or location tracking and isn’t an app but rather an API that public health agencies can incorporate into their own apps, which users could opt in to — or not. Since smartphones are so ubiquitous, the API promises to augment contact tracing and help governments and health agencies reduce the spread of the coronavirus. Overall, the partnership between tech and public health is a natural one, DeSalvo says; communication and data are pillars of public health, and a tech giant like Google has the resources to distribute those at a global scale. By helping with the critical work of contact tracing, DeSalvo hopes to ease the burden on health workers and give scientists time to create a vaccine. “Having the right information at the right time can make all the difference,” DeSalvo says. “It can literally save lives.”

After the conversation, Karen DeSalvo was joined by Joia Mukherjee to further discuss how public health entities can partner with tech companies. Both DeSalvo and Mukherjee emphasize the importance of knitting together the various aspects of public health systems — from social services to housing — to create a healthier and more just society. They also both emphasize the importance of celebrating community health workers, who provide on-the-ground information and critical connection with people across the world.

,

Cory DoctorowRave for “Poesy the Monster Slayer”

No matter how many books I write (20+ now!), the first review for a new one is always scary. That goes double when the book is a first as well – like Poesy the Monster Slayer, my first-ever picture book, which comes out from First Second on Jul 14.

https://us.macmillan.com/books/9781626723627

So it was with delight and relief that I read Publishers Weekly’s (rave) review of Poesy:

“Some children fear monsters at bedtime, but Poesy welcomes them. Her pink ‘monster lair’ features gothic art and stuffed animals, and she makes her father read The Book of Monsters from cover to cover before lights out. ‘PLEASE stay in bed tonight,’ he pleads as he leaves, but there’s no chance: the werewolf who soon enters her window is the size of a grizzly. ‘Werewolves HATED silver,’ Poesy knows, ‘and they feared the light.’ Armed with a Princess Frillypants silver tiara and a light-up wand, she vanquishes the beast. And that’s just the beginning of her tear through monsterdom. ‘Poesy Emmeline Russell Schnegg,’ her mother growls from the doorway (in a funny turn, the girl gains a middle name every time a parent appears). Assured panels by Rockefeller (Pop!) combine frilly with threatening, illuminated by eerie light sources. Doctorow, making his picture book debut, strikes a gently edgy tone (‘He was so tired,’ Poesy sees, ‘that he stepped on a Harry the Hare block and said some swears. Poor Daddy!’), and his blow-by-blow account races to its closing spread: of two tired parents who resemble yet another monster. Ages 4-6.”

Whew!

I had planned to do a launch party at Dark Delicacies, my neighborhood horror bookstore, on Jul 11, but that’s off (obviously).

So we’re doing the next-best thing: preorder from the store and you’ll get a signature and dedication from me AND my daughter, Poesy (the book’s namesake).

https://www.darkdel.com/store/p1562/_July%3A_Poesy_the_Monster_Slayer.html

Sociological ImagesConflict Brings Us Together

For a long time, political talk at the “moderate middle” has focused on a common theme that goes something like this: 

There is too much political polarization and conflict. It’s tearing us apart. People aren’t treating each other with compassion. We need to come together, set aside our differences, and really listen to each other.

I have heard countless versions of this argument in my personal life and in public forums. It is hard to disagree with them at first. Who can be against seeking common ground?

But as a political sociologist, I am also skeptical of this argument because we have good research showing how it keeps people and organizations from working through important disagreements. When we try to avoid conflict above all, we often end up avoiding politics altogether. It is easy to confuse common ground with occupied territory — social spaces where legitimate problems and grievances are ignored in the name of some kind of pleasant consensus. 

A really powerful sociological image popped up in my Twitter feed that makes the point beautifully. We actually did find some common ground this week through a trend that united the country across red states and blue states.

It is tempting to focus on protests as a story about conflict alone, and conflict certainly is there. But it is also important to realize that this week’s protests represent a historic level of social consensus. The science of cooperation and social movements reminds us that getting collective action started is hard. And yet, across the country, we see people not only stepping up, but self-organizing groups to handle everything from communication to community safety and cleanup. In this way, the protests also represent a remarkable amount of agreement that the current state of policing in this country is simply neither just nor tenable. 

I was struck by this image because I don’t think nationwide protests are the kind of thing people have in mind when they call for everyone to come together, but right now protesting itself seems like one of the most unifying trends we’ve got. That’s the funny thing about social cohesion and cultural consensus. It is very easy to call for setting aside our differences and working together when you assume everyone will be rallying around your particular way of life. But social cohesion is a group process, one that emerges out of many different interactions, and so none of us ever have that much control over when and where it actually happens.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramThermal Imaging as Security Theater

Seems like thermal imaging is the security theater technology of today.

These features are so tempting that thermal cameras are being installed at an increasing pace. They're used in airports and other public transportation centers to screen travelers, increasingly used by companies to screen employees and by businesses to screen customers, and even used in health care facilities to screen patients. Despite their prevalence, thermal cameras have many fatal limitations when used to screen for the coronavirus.

  • They are not intended for medical purposes.
  • Their accuracy can be reduced by their distance from the people being inspected.
  • They are "an imprecise method for scanning crowds" now put into a context where precision is critical.
  • They will create false positives, leaving people stigmatized, harassed, unfairly quarantined, and denied rightful opportunities to work, travel, shop, or seek medical help.
  • They will create false negatives, which, perhaps most significantly for public health purposes, "could miss many of the up to one-quarter or more people infected with the virus who do not exhibit symptoms," as the New York Times recently put it. Thus they will abjectly fail at the core task of slowing or preventing the further spread of the virus.

Worse Than FailureCodeSOD: Scheduling your Terns

Mike has a co-worker who’s better at Code Golf than I am. They needed to generate a table with 24 column headings, one for each hour of the day, formatted as the hour plus AM/PM. As someone bad at code golf, my first instinct is honestly to use two for loops, but in practice I’d probably do a 24-iteration loop with a branch to decide whether it’s AM or PM and handle it appropriately, as well as a branch to handle the fact that hour 0 should be printed as 12.
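That straightforward version might look something like this (sketched in Python rather than PHP, purely to illustrate the logic):

```python
def hour_labels():
    """Build the 24 column labels the naive way: one loop, two branches."""
    labels = []
    for h in range(24):
        suffix = "am" if h < 12 else "pm"  # branch 1: AM vs. PM
        hour = h % 12                      # branch 2: hour 0 prints as 12
        if hour == 0:
            hour = 12
        labels.append(f"{hour}{suffix}")
    return labels

print(hour_labels())  # ['12am', '1am', ..., '11am', '12pm', '1pm', ..., '11pm']
```

Boring, branchy, and obvious at a glance, which is rather the point.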

Which is, technically, more or less what Mike’s co-worker did, but they did it in golf style, using PHP.

<tr>
<?php for ($i = 0; $i < 24; $i++) {
echo '<th><div>'.($i%12?$i%12:12).($i/12>=1?'pm':'am').'</div></th><th></th>';
}
?>
</tr>

This is code written by someone who just recently discovered ternaries. It’s not wrong. It’s not even a complete and utter disaster. It’s just annoying. Maybe I’m jealous of their code golf skills, but this is the kind of code that makes me grind my teeth when I see it.

It’s mildly… clever? In $i%12?$i%12:12, i%12 will be zero when i is 0 or 12, which is falsy, so the false branch outputs 12; otherwise the true branch outputs i%12. So that’s sorted, handles all 24 hours of the day.

Then, for AM/PM, they use ($i/12>=1?'pm':'am'), which also works. Values less than 12 fail the condition, so the false path gives 'am'; values of 12 or greater pass it and get 'pm'.

But wait a second. We don’t need the >= or the division in there. This could just be ($i>11?'pm':'am').

Well, maybe I am good at Code Golf.
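For the skeptical, a quick equivalence check (again sketched in Python, not the original PHP) confirms the simpler condition produces identical labels for every hour:

```python
def label_original(i):
    # Mirrors ($i%12 ? $i%12 : 12) . ($i/12 >= 1 ? 'pm' : 'am')
    # PHP's / on these ints yields a float, same as Python's true division.
    return f"{i % 12 if i % 12 else 12}{'pm' if i / 12 >= 1 else 'am'}"

def label_simplified(i):
    # Mirrors ($i%12 ? $i%12 : 12) . ($i > 11 ? 'pm' : 'am')
    return f"{i % 12 if i % 12 else 12}{'pm' if i > 11 else 'am'}"

# The two expressions agree for all 24 hours.
assert [label_original(i) for i in range(24)] == \
       [label_simplified(i) for i in range(24)]
```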

I still hate it.


,

Krebs on SecurityRomanian Skimmer Gang in Mexico Outed by KrebsOnSecurity Stole $1.2 Billion

An exhaustive inquiry published today by a consortium of investigative journalists says a three-part series KrebsOnSecurity published in 2015 on a Romanian ATM skimming gang operating in Mexico’s top tourist destinations disrupted their highly profitable business, which raked in an estimated $1.2 billion and enjoyed the protection of top Mexican authorities.

The multimedia investigation by the Organized Crime and Corruption Reporting Project (OCCRP) and several international journalism partners detailed the activities of the so-called Riviera Maya crime gang, allegedly a mafia-like group of Romanians who until very recently ran their own ATM company in Mexico called “Intacash” and installed sophisticated electronic card skimming devices inside at least 100 cash machines throughout Mexico.

According to the OCCRP, Riviera Maya’s skimming devices allowed thieves to clone the cards, which were used to withdraw funds from ATMs in other countries — often halfway around the world in places like India, Indonesia, and Taiwan.

Investigators say each skimmer captured on average 1,000 cards per month, siphoning about $200 from individual victim accounts. This allowed the crime gang to steal approximately $20 million monthly.
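Those figures hang together with the “at least 100 cash machines” cited earlier; a quick back-of-the-envelope check (the 100-machine multiplier is my inference from the article, not the OCCRP’s stated math):

```python
# Back-of-the-envelope check of the OCCRP figures quoted above.
cards_per_skimmer_per_month = 1_000  # cards captured per skimmer per month
loss_per_card = 200                  # dollars siphoned per victim account
compromised_machines = 100           # "at least 100 cash machines"

monthly_take_per_skimmer = cards_per_skimmer_per_month * loss_per_card
total_monthly_take = monthly_take_per_skimmer * compromised_machines
print(f"${total_monthly_take:,} per month")  # $20,000,000 per month
```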

“The gang had little tricks,” OCCRP reporters recounted in their video documentary (above). “They would use the cards in different cities all over the globe and wait three months so banks would struggle to trace where the card had originally been cloned.”

In September 2015, I traveled to Mexico’s Yucatan Peninsula to find and document almost two dozen ATMs in the region that were compromised with Bluetooth-based skimming devices. Unlike most skimmers — which can be detected by looking for out-of-place components attached to the exterior of a compromised cash machine — these skimmers were hooked to the internal electronics of ATMs operated by Intacash’s competitors by authorized personnel who’d reportedly been bribed or coerced by the gang.

But because the skimmers were Bluetooth-based, allowing thieves periodically to collect stolen data just by strolling up to a compromised machine with a mobile device, I was able to detect which ATMs had been hacked using nothing more than a cheap smart phone.

One of the Bluetooth-enabled PIN pads pulled from a compromised ATM in Mexico. The two components on the left are legitimate parts of the machine. The fake PIN pad, made to be slipped under the legit PIN pad on the machine, is the orange bit, top right. The Bluetooth and data storage chips are in the middle.

Several days of wandering around Mexico’s top tourist areas uncovered these sophisticated skimmers inside ATMs in Cancun, Cozumel, Playa del Carmen and Tulum, including a compromised ATM in the lobby of my hotel in Cancun. OCCRP investigators said the gang also had installed the same skimmers in ATMs at tourist hotspots on the western coast of Mexico, in Puerto Vallarta, Sayulita and Tijuana.

Part III of my 2015 investigation concluded that Intacash was likely behind the scheme. An ATM industry source told KrebsOnSecurity at the time that his technicians had been approached by ATM installers affiliated with Intacash, offering those technicians many times their monthly salaries if they would provide periodic access to the machines they maintained.

The alleged leader of the Riviera Maya organization and principal owner of Intacash, 43-year-old Florian “The Shark” Tudor, is a Romanian with permanent residence in Mexico. Tudor claims he’s an innocent, legitimate businessman who’s been harassed and robbed by Mexican authorities.

Last year, police in Mexico arrested Tudor for illegal weapons possession, and raided his various properties there in connection with an investigation into the 2018 murder of his former bodyguard, Constantin Sorinel Marcu.

According to prosecution documents, Marcu and The Shark spotted my reporting shortly after it was published in 2015, and discussed what to do next on a messaging app:

The Shark: Krebsonsecurity.com See this. See the video and everything. There are two episodes. They made a telenovela.

Marcu: I see. It’s bad.

The Shark: They destroyed us. That’s it. Fuck his mother. Close everything.

The intercepted communications indicate The Shark also wanted revenge on whoever was responsible for leaking information about their operations.

The Shark: Tell them that I am going to kill them.

Marcu: Okay, I can kill them. Any time, any hour.

The Shark: They are checking all the machines. Even at banks. They found over 20.

Marcu: Whaaaat?!? They found? Already??

Throughout my investigation, I couldn’t be sure whether Intacash’s shiny new ATMs — which positively blanketed tourist areas in and around Cancun — also were used to siphon customer card data. I did write about my suspicions that Intacash’s ATMs were up to no good when I found they frequently canceled transactions just after a PIN was entered, and typically failed to provide paper receipts for withdrawals made in U.S. dollars.

But citing some of the thousands of official documents obtained in their investigation, the OCCRP says investigators now believe Intacash installed the same or similar skimming devices in its own ATMs prior to deploying them — despite advertising them as equipped with the latest security features and fraudulent device inhibitors.

Tudor’s organization “had the access that gave The Shark’s crew huge opportunities for fraud,” the OCCRP reports. “And on the Internet, the number of complaints grew. Foreign tourists in Mexico fleeced” by Intacash’s ATMs.

Many of the compromised ATMs I located in my travels throughout Mexico were at hotels, and while Intacash’s ATMs could be found on many street locations in the region, it was rare to find them installed at hotels.

The confidential source with whom I drove from place to place at the time said Intacash avoided installing their machines at hotels — despite such locations being generally far more profitable — for one simple reason: If one’s card is cloned from a hotel ATM, the customer can easily complain to the hotel staff. With a street ATM, not so much.

The investigation by the OCCRP and its partners paints a vivid picture of a highly insular, often violent transnational organized crime ring that controlled at least 10 percent of the $2 billion annual global market for skimmed cards.

It also details how the group laundered their ill-gotten gains, and is alleged to have built a human smuggling ring that helped members of the crime gang cross into the U.S. and ply their skimming trade against ATMs in the United States. Finally, the series highlights how the Riviera Maya gang operated with impunity for several years by exploiting relationships with powerful anti-corruption officials in Mexico.

Tudor and many of his associates maintain their innocence and are still living as free men in Mexico, although Tudor is facing charges in Romania for his alleged involvement with organized crime, attempted murder and blackmail. Intacash is no longer operating in Mexico. In 2019, Intacash’s sponsoring bank in Mexico suspended the company’s contract to process ATM transactions.

For much more on this investigation, check out OCCRP’s multi-part series, How a Crew of Romanian Criminals Conquered the World of ATM Skimming.

TEDIgnite: The talks of TED@WellsFargo

TED curator Cyndi Stivers opens TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

World-changing ideas that unearth solutions and ignite progress can come from anywhere. With that spirit in mind at TED@WellsFargo, thirteen speakers showcased how human empathy and problem-solving can combine with technology to transform lives (and banking) for the better.

The event: TED@WellsFargo, a day of thought-provoking talks on topics including how to handle challenging situations at work, the value of giving back and why differences can be strengths. It’s the first time TED and Wells Fargo have partnered to create inspiring talks from Wells Fargo Team Members.

When and where: Wednesday, February 5, 2020, at the Knight Theater in Charlotte, North Carolina

Opening and closing remarks: David Galloreese, Wells Fargo Head of Human Resources, and Jamie Moldafsky, Wells Fargo Chief Marketing Officer

Performances by: Dancer Simone Cooper and singer/songwriter Jason Jet and his band

The talks in brief:

“What airlines don’t tell you is that putting your oxygen mask on first, while seeing those around you struggle, it takes a lot of courage. But being able to have that self-control is sometimes the only way that we are able to help those around us,” says sales and trading analyst Elizabeth Camarillo Gutierrez. She speaks at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

Elizabeth Camarillo Gutierrez, sales and trading analyst

Big idea: As an immigrant, learning to thrive in America while watching other immigrants struggle oddly echoes what flight attendants instruct us to do when the oxygen masks drop in an emergency landing: if you want to help others put on their masks, you must put on your own mask first.

How? At age 15, Elizabeth Camarillo Gutierrez found herself alone in the US when her parents were forced to return to Mexico, taking her eight-year-old brother with them. For eight years, she diligently completed her education — and grappled with guilt, believing she wasn’t doing enough to aid fellow immigrants. Now working as a sales and trading analyst while guiding her brother through school in New York, she’s learned a valuable truth: in an emergency, you can’t save others until you save yourself.

Quote of the talk: “Immigrants [can’t] and will never be able to fit into any one narrative, because most of us are actually just traveling along a spectrum, trying to survive.”


Matt Trombley, customer remediation supervisor

Big idea: Agonism — “taking a warlike stance in contexts that are not literally war” — plagues many aspects of modern-day life, from the way we look at our neighbors to the way we talk about politics. Can we work our way out of this divisive mindset?

How: Often we think that those we disagree with are our enemies, or that we must approve of everything our loved ones say or believe. Not surprisingly, this is disastrous for relationships. Matt Trombley shows us how to fight agonism by cultivating common ground (working to find just a single shared thread with someone) and by forgiving others for the slights that we believe their values cause us. If we do this, our relationships will truly come to life.

Quote of the talk: “When you can find even the smallest bit of common ground with somebody, it allows you to understand just the beautiful wonder and complexity and majesty of the other person.”


Dorothy Walker, project manager

Big idea: Anybody can help resolve a conflict — between friends, coworkers, strangers, your children — with three simple steps.

How? Step one: prepare. Whenever possible, set a future date and time to work through a conflict, when emotions aren’t running as high. Step two: defuse and move forward. When you do begin mediating the conflict, start off by observing, listening and asking neutral questions; this will cause both parties to stop and think, and give you a chance to shift positive energy into the conversation. Finally, step three: make an agreement. Once the energy of the conflict has settled, it’s time to get an agreement (either written or verbal) so everybody can walk away with a peaceful resolution.

Quote of the talk: “There is a resolution to all conflicts. It just takes your willingness to try.”


Charles Smith, branch manager

Big idea: The high rate of veteran suicide is intolerable — and potentially avoidable. By prioritizing the mental health of military service members both during and after active duty, we can save lives.

How? There are actionable solutions to end the devastating epidemic of military suicide, says Charles Smith. First, by implementing a standard mental health evaluation to military applicants, we can better gauge the preliminary markers of post traumatic stress disorder (PTSD) or depression. Data is a vital part of the solution: if we keep better track of mental health data on service members, we can also predict where support is most needed and create those structures proactively. By identifying those with a higher risk early on in their military careers, we can ensure they have appropriate care during their service and connect them to the resources they need once they are discharged, enabling veterans to securely and safely rejoin civilian life.

Quote of the talk: “If we put our minds and resources together, and we openly talk and try to find solutions for this epidemic, hopefully, we can save a life.”

“We all know retirement is all about saving more now, for later. What if we treated our mental health and overall well-being in the same capacity? Develop and save more of you now, for later in life,” says premier banker Rob Cooke. He speaks at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

Rob Cooke, premier banker

Big idea: Work-related stress costs us a lot, in our lives and the economy. We need to reframe the way we manage stress — both in our workplaces and in our minds.

How? “We tend to think of [stress] as a consequence, but I see it as a culture,” says Rob Cooke. Despite massive global investments in the wellness industry, we are still losing trillions of dollars due to a stress-related decrease in employee productivity and illness. Cooke shares a multifaceted approach to shifting the way stress is managed, internally and culturally. It starts with corporations prioritizing the well-being of employees, governments incentivizing high standards for workplace wellness and individually nurturing our relationship with our own mental health.

Quote of the talk: “We all know retirement is all about saving more now, for later. What if we treated our mental health and overall well-being in the same capacity? Develop and save more of you now, for later in life.”


Aeris Nguyen, learning and development facilitator

Big idea: What would our world be like if we could use DNA to verify our identity?

Why? Every year, millions of people have their identities stolen or misused. This fact got Aeris Nguyen thinking about how to safeguard our information for good. She shares an ambitious thought experiment, asking: Can we use our own bodies to verify our selves? While biometric data such as facial or palm print recognition have their own pitfalls (they can be easily fooled by, say, wearing a specially lighted hat or using a wax hand), what if we could use our DNA — our blood, hair or earwax? Nguyen acknowledges the ethical dilemmas and logistical nightmares that would come with collecting and storing more than seven billion files of DNA, but she can’t help but wonder if someday, in the far future, this will become the norm.

Quote of the talk: “Don’t you find it strange that we carry around these arbitrary, government assigned numbers or pieces of paper with our picture on it and some made-up passwords to prove we are who we say we are?  When, in fact, the most rock-solid proof of our identity is something we carry around in our cells — our DNA.”

“To anyone reeling from forces trying to knock you down and cram you into these neat little boxes people have decided for you — don’t break. I see you. My ancestors see you. Their blood runs through me as they run through so many of us. You are valid. And you deserve rights and recognition. Just like everyone else,” says France Villarta. He speaks at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

France Villarta, communications consultant

Big idea: Modern ideas of gender are much older than we may think.

How? In many cultures around the world, the social construct of gender is binary — man or woman, assigned certain characteristics and traits, all designated by biological sex. But that’s not the case for every culture. France Villarta details the gender-fluid history of his native Philippines and how the influence of colonial rule forced narrow-minded beliefs onto its people. In a talk that’s part cultural love letter, part history lesson, Villarta emphasizes the beauty and need in reclaiming gender identities. “Oftentimes, we think of something as strange only because we’re not familiar with it or haven’t taken enough time to try and understand,” he says. “The good thing about social constructs is that they can be reconstructed — to fit a time and age.”

Quote of the talk: “To anyone reeling from forces trying to knock you down and cram you into these neat little boxes people have decided for you — don’t break. I see you. My ancestors see you. Their blood runs through me as they run through so many of us. You are valid. And you deserve rights and recognition. Just like everyone else.”

Dancer Simone Cooper performs a self-choreographed dance onstage at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

Dean Furness, analytic consultant

Big idea: You can overcome personal challenges by focusing on yourself, instead of making comparisons to others.

How? After a farming accident paralyzed Dean Furness below the waist, he began the process of adjusting to life in a wheelchair. He realized he’d have to nurture and focus on this new version of himself, rather than fixate on his former height, strength and mobility. With several years of rehabilitation and encouragement from his physical therapist, Furness began competing in the Chicago and Boston marathons as a wheelchair athlete. By learning how to own each day, he says, we can all work to get better, little by little.

Quote of the talk: “Take some time and focus on you, instead of others. I bet you can win those challenges and really start accomplishing great things.”


John Puthenveetil, financial advisor

Big idea: Because of the uncertain world we live in, many seek solace from “certainty merchants” — like physicians, priests and financial advisors. Given the complex, chaotic mechanisms of our economy, we’re better off discarding “certainty” for better planning.

How? We must embrace adaptable plans that address all probable contingencies, not just the most obvious ones. This is a crucial component of “scenario-based planning,” says John Puthenveetil. We should always aim for being approximately right rather than precisely wrong. But this only works if we pay attention, heed portents of possible change and act decisively — even when that’s uncomfortable.

Quote of the talk: “It is up to us to use [scenario-based planning] wisely: Not out of a sense of weakness or fear, but out of the strength and conviction that comes from knowing that we are prepared to play the hand that is dealt.”


Johanna Figueira, digital marketing consultant

Big idea: The world is more connected than ever, but some communities are still being cut off from vital resources. The solution? Digitally matching professional expertise with locals who know what their communities really need.

How? Johanna Figueira is one of millions who has left Venezuela due to economic crisis, crumbling infrastructure and decline in health care — but she hasn’t left these issues behind. With the help of those still living in the country, Figueira helped organize Code for Venezuela — a platform that matches experts with communities in need to create simple, effective tools to improve quality of life. She shares two of their most successful projects: MediTweet, an intelligent Twitter bot that helps Venezuelans find medicinal supplies, and Blackout Tracker, a tool that helps pinpoint power cuts in Venezuela that the government won’t report. Her organization shows the massive difference made when locals participate in their own solutions.

Quote of the talk: “Some people in Silicon Valley may look at these projects and say that they’re not major technological innovations. But that’s the point. These projects are not insanely advanced — but it’s what the people of Venezuela need, and they can have a tremendous impact.”


Jeanne Goldie, branch sales manager

Big idea: We’re looking for dynamic hotbeds of innovation in all the wrong places.

How? Often, society looks to the young for the next big thing, leaving older generations to languish in their shadow until being shuffled out altogether, taking their brain power and productivity with them. Instead of discarding today’s senior workforce, Jeanne Goldie suggests we tap into their years of experience and retrain them, just as space flight has moved from the disposable rockets of NASA’s moon launches to today’s reusable SpaceX models.

Quote of the talk: “If we look at data and technology as the tools they are … but not as the answer, we can come up with better solutions to our most challenging problems.”


Rebecca Knill, business systems consultant

Big idea: By shifting our cultural understanding of ability and using technology to connect, we can build a more inclusive and human world.

How? The medical advances of modern technology have improved accessibility for disabled communities. Rebecca Knill, a self-described cyborg who has a cochlear implant, believes the next step to a more connected world is changing our perspectives. For example, being deaf isn’t shameful or pitiful, says Knill — it’s just a different way of navigating the world. To take full advantage of the fantastic opportunities new technology offers us, we must drop our assumptions and meet differences with empathy.

Quote of the talk: “Technology has come so far. Our mindset just needs to catch up.”

“We have to learn to accept where people are and adjust ourselves to handle those situations … to recognize when it is time to professionally walk away from someone,” says business consultant Anastasia Penright. She speaks at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

Anastasia Penright, business consultant

Big idea: No workplace is immune to drama, but there are steps we can follow to remove ourselves from the chatter and focus on what’s really important.

How? No matter your industry, chances are you’ve experienced workplace drama. In a funny and relatable talk, Anastasia Penright shares a better way to coexist with our coworkers using five simple steps she’s taken to leave drama behind and excel in her career. First, we must honestly evaluate our own role in creating and perpetuating conflicts; then evaluate our thoughts and stop thinking about every possible scenario. Next, it’s important to release our negative energy to a trusted confidant (a “venting buddy”) while trying to understand and accept the unique communication styles and work languages of our colleagues. Finally, she says, we need to recognize when we’re about to step into drama and protect our energy by simply walking away.

Quote of the talk: “We have to learn to accept where people are and adjust ourselves to handle those situations … to recognize when it is time to professionally walk away from someone.”

Jason Jet performs the toe-tapping, electro-soul song “Time Machine” at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)