Planet Russell


Planet Debian - Dima Kogan: OpenCV C API transition. A rant.

I just went through a debugging exercise that was so ridiculous, I just had to write it up. Some of this probably should go into a bug report instead of a rant, but I'm tired. And clearly I don't care anymore.

OK, so I'm doing computer vision work. OpenCV has been providing basic functions in this area, so I have been using them for a while. Just for really, really basic stuff, like projection. The C API was kinda weird, and their error handling is a bit ridiculous (if you give it arguments it doesn't like, it asserts!), but it has been working fine for a while.

At some point (around OpenCV 3.0) somebody over there decided that they didn't like their C API, and that this was now a C++ library. Except the docs still documented the C API, and the website said it supported C, and the code wasn't actually removed. They just kinda stopped testing it and thinking about it. So it would mostly continue to work, except some poor saps would see weird failures, like this and this, for instance. OpenCV 3.2 was the last version where it was mostly possible to keep using the old C code, even when compiling without optimizations. So I was doing that for years.

So now, in 2020, Debian is finally shipping a version of OpenCV that definitively does not work with the old code, so I had to do something. Over time I stopped using everything about OpenCV, except a few cvProjectPoints2() calls. So I decided to just write a small C++ shim to call the new version of that function, expose that with extern "C" to the rest of my world, and I'd be done. And normally I would be, but this is OpenCV we're talking about. I wrote the shim, and it didn't work. The code built and ran, but the results were wrong. After some pointless debugging, I boiled the problem down to this test program:

#include <opencv2/calib3d.hpp>
#include <stdio.h>

int main(void)
{
    double fx = 1000.0;
    double fy = 1000.0;
    double cx = 1000.0;
    double cy = 1000.0;
    double _camera_matrix[] =
        { fx,  0, cx,
          0,  fy, cy,
          0,   0,  1 };
    cv::Mat camera_matrix(3,3, CV_64FC1, _camera_matrix);

    double pp[3] = {1., 2., 10.};
    double qq[2] = {444, 555};

    int N=1;
    cv::Mat object_points(N,3, CV_64FC1, pp);
    cv::Mat image_points (N,2, CV_64FC1, qq);

    // rvec,tvec
    double _zero3[3] = {};
    cv::Mat zero3(1,3,CV_64FC1, _zero3);

    cv::projectPoints( object_points,
                       zero3,zero3,
                       camera_matrix,
                       cv::noArray(),
                       image_points,
                       cv::noArray(), 0.0);

    fprintf(stderr, "manually-projected no-distortion: %f %f\n",
            pp[0]/pp[2] * fx + cx,
            pp[1]/pp[2] * fy + cy);
    fprintf(stderr, "opencv says: %f %f\n", qq[0], qq[1]);

    return 0;
}

This is as trivial as it gets. I project one point through a pinhole camera, and print out the right answer (that I can easily compute, since this is trivial), and what OpenCV reports:

$ g++ -I/usr/include/opencv4 -o tst tst.cc -lopencv_calib3d -lopencv_core && ./tst

manually-projected no-distortion: 1100.000000 1200.000000
opencv says: 444.000000 555.000000

Well that's no good. The answer is wrong, but it looks like it didn't even write anything into the output array. Since this is supposed to be a thin shim to C code, I want this thing to be filling in C arrays, which is what I'm doing here:

double qq[2] = {444, 555};
int N=1;
cv::Mat image_points (N,2, CV_64FC1, qq);

This is how the C API has worked forever, and their C++ API works the same way, I thought. Nothing barfed, neither at build time nor at run time. Fine. So I went to figure this out. In the true spirit of C++, the new API is inscrutable. I'm passing in cv::Mat, but the API wants cv::InputArray for some arguments and cv::OutputArray for others. Clearly cv::Mat can be coerced into either of those types (and that's what you're supposed to do), but the details are not meant to be understood. You can read the snazzy C++-style documentation. Clicking on "OutputArray" in the doxygen gets you here. Then I guess you're supposed to click on "_OutputArray", and you get here. Understand what's going on now? Me neither.

Stepping through the code revealed the problem. cv::projectPoints() looks like this:

void cv::projectPoints( InputArray _opoints,
                        InputArray _rvec,
                        InputArray _tvec,
                        InputArray _cameraMatrix,
                        InputArray _distCoeffs,
                        OutputArray _ipoints,
                        OutputArray _jacobian,
                        double aspectRatio )
{
    ....
    _ipoints.create(npoints, 1, CV_MAKETYPE(depth, 2), -1, true);
    ....

I.e. they're allocating a new data buffer for the output, and giving it back to me via the OutputArray object. This object already had a buffer, and that's where I was expecting the output to go. Instead it went to the brand-new buffer I didn't want. Issues:

  • The OutputArray object knows it already has a buffer, and they could just use it instead of allocating a new one
  • If for some reason my buffer smells bad, they could complain to tell me they're ignoring it to save me the trouble of debugging, and then bitching about it on the internet
  • I think dynamic memory allocation smells bad
  • Doing it this way means the new function isn't a drop-in replacement for the old function

Well that's just super. I can call the C++ function, copy the data into the place it's supposed to go to, and then deallocate the extra buffer. Or I can pull out the meat of the function I want into my project, and then I can drop the OpenCV dependency entirely. Clearly that's the way to go.

So I go poking back into their code to grab what I need, and here's what I see:

static void cvProjectPoints2Internal( const CvMat* objectPoints,
                  const CvMat* r_vec,
                  const CvMat* t_vec,
                  const CvMat* A,
                  const CvMat* distCoeffs,
                  CvMat* imagePoints, CvMat* dpdr CV_DEFAULT(NULL),
                  CvMat* dpdt CV_DEFAULT(NULL), CvMat* dpdf CV_DEFAULT(NULL),
                  CvMat* dpdc CV_DEFAULT(NULL), CvMat* dpdk CV_DEFAULT(NULL),
                  CvMat* dpdo CV_DEFAULT(NULL),
                  double aspectRatio CV_DEFAULT(0) )
{
...
}

Looks familiar? It should. Because this is the original C-API function they replaced. So in their quest to move to C++, they left the original code intact, C API and everything, un-exposed it so you couldn't call it anymore, and made a new, shitty C++ wrapper for people to call instead. CvMat is still there. I have no words.

Yes, this is a massive library, and maybe other parts of it indeed did make some sort of non-token transition, but this thing is ridiculous. In the end, here's the function I ended up with (licensed as OpenCV; see the comment):

// The implementation of project_opencv is based on opencv. The sources have
// been heavily modified, but the opencv logic remains. This function is a
// cut-down cvProjectPoints2Internal() to keep only the functionality I want and
// to use my interfaces. Putting this here allows me to drop the C dependency on
// opencv. Which is a good thing, since opencv dropped their C API
//
// from opencv-4.2.0+dfsg/modules/calib3d/src/calibration.cpp
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
typedef union
{
    struct
    {
        double x,y;
    };
    double xy[2];
} point2_t;
typedef union
{
    struct
    {
        double x,y,z;
    };
    double xyz[3];
} point3_t;
void project_opencv( // outputs
                     point2_t* q,
                     point3_t* dq_dp,               // may be NULL
                     double* dq_dintrinsics_nocore, // may be NULL

                     // inputs
                     const point3_t* p,
                     int N,
                     const double* intrinsics,
                     int Nintrinsics)
{
    const double fx = intrinsics[0];
    const double fy = intrinsics[1];
    const double cx = intrinsics[2];
    const double cy = intrinsics[3];

    double k[12] = {0};
    for(int i=0; i<Nintrinsics-4; i++)
        k[i] = intrinsics[i+4];

    for( int i = 0; i < N; i++ )
    {
        double z_recip = 1./p[i].z;
        double x = p[i].x * z_recip;
        double y = p[i].y * z_recip;

        double r2      = x*x + y*y;
        double r4      = r2*r2;
        double r6      = r4*r2;
        double a1      = 2*x*y;
        double a2      = r2 + 2*x*x;
        double a3      = r2 + 2*y*y;
        double cdist   = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;
        double icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
        double xd      = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2+k[9]*r4;
        double yd      = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2+k[11]*r4;

        q[i].x = xd*fx + cx;
        q[i].y = yd*fy + cy;


        if( dq_dp )
        {
            double dx_dp[] = { z_recip, 0,       -x*z_recip };
            double dy_dp[] = { 0,       z_recip, -y*z_recip };
            for( int j = 0; j < 3; j++ )
            {
                double dr2_dp = 2*x*dx_dp[j] + 2*y*dy_dp[j];
                double dcdist_dp = k[0]*dr2_dp + 2*k[1]*r2*dr2_dp + 3*k[4]*r4*dr2_dp;
                double dicdist2_dp = -icdist2*icdist2*(k[5]*dr2_dp + 2*k[6]*r2*dr2_dp + 3*k[7]*r4*dr2_dp);
                double da1_dp = 2*(x*dy_dp[j] + y*dx_dp[j]);
                double dmx_dp = (dx_dp[j]*cdist*icdist2 + x*dcdist_dp*icdist2 + x*cdist*dicdist2_dp +
                                k[2]*da1_dp + k[3]*(dr2_dp + 4*x*dx_dp[j]) + k[8]*dr2_dp + 2*r2*k[9]*dr2_dp);
                double dmy_dp = (dy_dp[j]*cdist*icdist2 + y*dcdist_dp*icdist2 + y*cdist*dicdist2_dp +
                                k[2]*(dr2_dp + 4*y*dy_dp[j]) + k[3]*da1_dp + k[10]*dr2_dp + 2*r2*k[11]*dr2_dp);
                dq_dp[i*2 + 0].xyz[j] = fx*dmx_dp;
                dq_dp[i*2 + 1].xyz[j] = fy*dmy_dp;
            }
        }
        if( dq_dintrinsics_nocore )
        {
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 0] = fx*x*icdist2*r2;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 0] = fy*(y*icdist2*r2);

            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 1] = fx*x*icdist2*r4;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 1] = fy*y*icdist2*r4;

            if( Nintrinsics-4 > 2 )
            {
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 2] = fx*a1;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 2] = fy*a3;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 3] = fx*a2;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 3] = fy*a1;
                if( Nintrinsics-4 > 4 )
                {
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 4] = fx*x*icdist2*r6;
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 4] = fy*y*icdist2*r6;

                    if( Nintrinsics-4 > 5 )
                    {
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 5] = fx*x*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 5] = fy*y*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 6] = fx*x*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 6] = fy*y*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 7] = fx*x*cdist*(-icdist2)*icdist2*r6;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 7] = fy*y*cdist*(-icdist2)*icdist2*r6;
                        if( Nintrinsics-4 > 8 )
                        {
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 8] = fx*r2; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 8] = fy*0; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 9] = fx*r4; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 9] = fy*0; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 10] = fx*0;//s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 10] = fy*r2; //s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 11] = fx*0;//s4
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 11] = fy*r4; //s4
                        }
                    }
                }
            }
        }
    }
}

This does only the stuff I need: projection only (no geometric transformation), and gradients with respect to the point coordinates and distortions only. Gradients with respect to fxy and cxy are trivial, and I don't bother reporting them.

So now I don't compile or link against OpenCV; my code builds and runs on Debian/sid, and (surprisingly) it runs much faster than before. Apparently there was a lot of pointless overhead happening.

Alright. Rant over.


Cryptogram - Friday Squid Blogging: Giant Squid Washes Up on South African Beach

Fourteen feet long and 450 pounds. It was dead before it washed up.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on Security - Turn on MFA Before Crooks Do It For You

Hundreds of popular websites now offer some form of multi-factor authentication (MFA), which can help users safeguard access to accounts when their password is breached or stolen. But people who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control. Here’s the story of one such incident.

As a career chief privacy officer for different organizations, Dennis Dayman has tried to instill in his twin boys the importance of securing their online identities against account takeovers. Both are avid gamers on Microsoft’s Xbox platform, and for years their father managed their accounts via his own Microsoft account. But when the boys turned 18, they converted their child accounts to adult, effectively taking themselves out from under their dad’s control.

On a recent morning, one of Dayman’s sons found he could no longer access his Xbox account. The younger Dayman admitted to his dad that he’d reused his Xbox profile password elsewhere, and that he hadn’t enabled multi-factor authentication for the account.

When the two of them sat down to reset his password, the screen displayed a notice saying there was a new Gmail address tied to his Xbox account. When they went to turn on multi-factor authentication for his son’s Xbox profile — which was tied to a non-Microsoft email address — the Xbox service said it would send a notification of the change to the unauthorized Gmail account in his profile.

Wary of alerting the hackers that they were wise to their intrusion, Dennis tried contacting Microsoft Xbox support, but found he couldn’t open a support ticket from a non-Microsoft account. Using his other son’s Outlook account, he filed a ticket about the incident with Microsoft.

Dennis soon learned the unauthorized Gmail address added to his son’s hacked Xbox account also had enabled MFA. Meaning, his son would be unable to reset the account’s password without approval from the person in control of the Gmail account.

Luckily for Dayman’s son, he hadn’t re-used the same password for the email address tied to his Xbox profile. Nevertheless, the thieves began abusing their access to purchase games on Xbox and third-party sites.

“During this period, we started realizing that his bank account was being drawn down through purchases of games from Xbox and [Electronic Arts],” Dayman the elder recalled. “I pulled the recovery codes for his Xbox account out of the safe, but because the hacker came in and turned on multi-factor, those codes were useless to us.”

Microsoft support sent Dayman and his son a list of 20 questions to answer about their account, such as the serial number on the Xbox console originally tied to the account when it was created. But despite answering all of those questions successfully, Microsoft refused to let them reset the password, Dayman said.

“They said their policy was not to turn over accounts to someone who couldn’t provide the second factor,” he said.

Dayman’s case was eventually escalated to Tier 3 Support at Microsoft, which was able to walk him through creating a new Microsoft account, enabling MFA on it, and then migrating his son’s Xbox profile over to the new account.

Microsoft told KrebsOnSecurity that while users currently are not prompted to enable two-step verification upon sign-up, they always have the option to enable the feature.

“Users are also prompted shortly after account creation to add additional security information if they have not yet done so, which enables the customer to receive security alerts and security promotions when they login to their account,” the company said in a written statement. “When we notice an unusual sign-in attempt from a new location or device, we help protect the account by challenging the login and send the user a notification. If a customer’s account is ever compromised, we will take the necessary steps to help them recover the account.”

Certainly, not enabling MFA when it is offered is far more of a risk for people in the habit of reusing or recycling passwords across multiple sites. But any service to which you entrust sensitive information can get hacked, and enabling multi-factor authentication is a good hedge against having leaked or stolen credentials used to plunder your account.

What’s more, a great many online sites and services that do support multi-factor authentication are completely automated and extremely difficult to reach for help when account takeovers occur. This is doubly so if the attackers also can modify and/or remove the original email address associated with the account.

KrebsOnSecurity has long steered readers to the site twofactorauth.org, which details the various MFA options offered by popular websites. Currently, twofactorauth.org lists nearly 900 sites that have some form of MFA available. These range from authentication options like one-time codes sent via email, phone calls, SMS or mobile app, to more robust, true “2-factor authentication” or 2FA options (something you have and something you know), such as security keys or push-based 2FA such as Duo Security (an advertiser on this site and a service I have used for years).

Email, SMS and app-based one-time codes are considered less robust from a security perspective because they can be undermined by a variety of well-established attack scenarios, from SIM-swapping to mobile-based malware. So it makes sense to secure your accounts with the strongest form of MFA available. But please bear in mind that if the only added authentication options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password to secure your account.

Cryptogram - Security and Human Behavior (SHB) 2020

Today is the second day of the thirteenth Workshop on Security and Human Behavior. It's being hosted by the University of Cambridge, which in today's world means we're all meeting on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. We've done pretty well translating this format to video chat, including using the random breakout feature to put people into small groups.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year's schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, and twelfth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Planet Debian - Ingo Juergensmann: Jitsi Meet and ejabberd

Since the more or less global lockdown caused by Covid-19, there has been a lot of talk about video conferencing solutions that can be used by, e.g., people trying to coordinate with coworkers while working from home. One of these solutions is Jitsi Meet, which is packaged in Debian. But there are also Debian packages provided by Jitsi itself.

Jitsi relies on an XMPP server. You can see the network overview in the docs. Jitsi itself uses Prosody as its XMPP server, and their docs only cover that one, but it basically doesn't matter which XMPP server you want to use. The only catch is that you can't follow the official Jitsi documentation when you are not using Prosody but instead, e.g., ejabberd. As always, it's sometimes difficult to find the correct/best unofficial documentation or how-to, so I'll try to describe what helped me in configuring Jitsi Meet with ejabberd as the XMPP server and my own coturn STUN/TURN server...

This is not a step-by-step description, but anyway... here we go with some links:

One of the first issues I stumbled across was that my Java was too old, but this can be quickly solved with update-alternatives:

$ sudo update-alternatives --config java
There are 3 choices for the alternative java (providing /usr/bin/java).

Selection Path Priority Status
------------------------------------------------------------
* 0 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 auto mode
1 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 manual mode
2 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 1081 manual mode
3 /usr/lib/jvm/jre-7-oracle-x64/bin/java 316 manual mode

It was set to jre-7, but I guess this was from years ago when I ran OpenFire as XMPP server.

If something is not working with Jitsi Meet, it helps to not only watch the log files, but also to open the Debug Console in your web browser. That way I caught some XMPP failures and saw that it tried to connect to some components for which the DNS records were missing:

meet IN A yourIP
chat.meet IN A yourIP
focus.meet IN A yourIP
pubsub.meet IN A yourIP

Of course you also need to add some config to your ejabberd.yml:

host_config:
  domain.net:
    auth_password_format: scram
  meet.domain.net:
    ## Disable s2s to prevent spam
    s2s_access: none
    auth_method: anonymous
    allow_multiple_connections: true
    anonymous_protocol: both
    modules:
      mod_bosh: {}
      mod_caps: {}
      mod_carboncopy: {}
      #mod_disco: {}
      mod_stun_disco:
        secret: "YOURSECRETTURNCREDENTIALS"
        services:
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: udp
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: tcp
          -
            host: yourIP # Your coturn's public address.
            type: turn
            transport: udp
      mod_muc:
        access: all
        access_create: local
        access_persistent: local
        access_admin: admin
        host: "chat.@HOST@"
      mod_muc_admin: {}
      mod_ping: {}
      mod_pubsub:
        access_createnode: local
        db_type: sql
        host: "pubsub.@HOST@"
        ignore_pep_from_offline: false
        last_item_cache: true
        max_items_node: 5000 # For Jappix this must be set to 1000000
        plugins:
          - "flat"
          - "pep" # requires mod_caps
        force_node_config:
          "eu.siacs.conversations.axolotl.*":
            access_model: open
          ## Avoid buggy clients to make their bookmarks public
          "storage:bookmarks":
            access_model: whitelist

There is more config that needs to be done, but one of the XMPP failures I spotted in the debug console in Firefox was that it tried to connect to conference.domain.net while I prefer to use chat.domain.net for its brevity. If you prefer conference instead of chat, then you shouldn't be affected by this. Just make sure that your config is consistent with what Jitsi Meet wants to connect to. Maybe not all of the above lines are necessary, but this works for me.

Jitsi config is like this for me with short domains (/etc/jitsi/meet/meet.domain.net-config.js):

var config = {

    hosts: {
        domain: 'domain.net',
        anonymousdomain: 'meet.domain.net',
        authdomain: 'meet.domain.net',
        focus: 'focus.meet.domain.net',
        muc: 'chat.meet.domain.net'
    },

    bosh: '//meet.domain.net:5280/http-bind',
    //websocket: 'wss://meet.domain.net/ws',
    clientNode: 'http://jitsi.org/jitsimeet',
    focusUserJid: 'focus@domain.net',

    useStunTurn: true,

    p2p: {
        // Enables peer to peer mode. When enabled the system will try to
        // establish a direct connection when there are exactly 2 participants
        // in the room. If that succeeds the conference will stop sending data
        // through the JVB and use the peer to peer connection instead. When a
        // 3rd participant joins the conference will be moved back to the JVB
        // connection.
        enabled: true,

        // Use XEP-0215 to fetch STUN and TURN servers.
        useStunTurn: true,

        // The STUN servers that will be used in the peer to peer connections
        stunServers: [
            //{ urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' },
            //{ urls: 'stun:stun.l.google.com:19302' },
            //{ urls: 'stun:stun1.l.google.com:19302' },
            //{ urls: 'stun:stun2.l.google.com:19302' },
            { urls: 'stun:turn.domain.net:5349' },
            { urls: 'stun:turn.domain.net:3478' }
        ],

....

In the above config snippet one of the issues solved was to add the port to the bosh setting. Of course you can also take care that your bosh requests get forwarded by your webserver as reverse proxy. Using the port in the config might be a quick way to test whether or not your config is working. It's easier to solve one issue after the other and make one config change at a time instead of needing to make changes in several places.
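If you later want to drop the explicit port from the bosh setting, a reverse-proxy rule along these lines would do it. This is a sketch assuming nginx in front of ejabberd's BOSH listener on port 5280; adjust names and ports to your setup:

```nginx
# Hypothetical nginx location inside the server block for meet.domain.net:
# forward BOSH requests to ejabberd so clients can use port 443.
location /http-bind {
    proxy_pass       http://127.0.0.1:5280/http-bind;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

With that in place, bosh: '//meet.domain.net/http-bind' (no port) should work.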

/etc/jitsi/jicofo/sip-communicator.properties:

org.jitsi.jicofo.auth.URL=XMPP:meet.domain.net
org.jitsi.jicofo.BRIDGE_MUC=jvbbrewery@chat.meet.domain.net

/etc/jitsi/videobridge/sip-communicator.properties:

org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.STATISTICS_INTERVAL=5000

org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=domain.net
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=SECRET
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@chat.meet.domain.net
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=videobridge1

org.jitsi.videobridge.DISABLE_TCP_HARVESTER=true
org.jitsi.videobridge.TCP_HARVESTER_PORT=4443
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=yourIP
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=yourIP
org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=turn.domain.net:3478
org.ice4j.ice.harvest.ALLOWED_INTERFACES=eth0

Sometimes there might be stupid errors like (in my case) wrong hostnames like "chat.meet..domain.net" (a double dot in the domain), but you can spot those easily in the debug console of your browser.

There are tons of config options where you can easily make mistakes, but watching your logs and your debug console should really help you in sorting out these kinds of errors. The other URLs above are helpful as well and more elaborate than my few lines here. Especially Mike Kuketz has some advanced configuration tips, like disabling third party requests with "disableThirdPartyRequests: true" or limiting the number of video streams and such.

Hope this helps...

Kategorie: 

Planet Debian - Russell Coker: Storage Trends

In considering storage trends for the consumer side I’m looking at the current prices from MSY (where I usually buy computer parts). I know that other stores will have slightly different prices but they should be very similar as they all have low margins and wholesale prices are the main factor.

Small Hard Drives Aren’t Viable

The cheapest hard drive that MSY sells is $68 for 500G of storage. The cheapest SSD is $49 for 120G and the second cheapest is $59 for 240G. SSD is cheaper at the low end and significantly faster. If someone needed about 500G of storage there’s a 480G SSD for $97 which costs $29 more than a hard drive. With a modern PC if you have no hard drives you will notice that it’s quieter. For anyone who’s buying a new PC spending an extra $29 is definitely worthwhile for the performance, low power use, and silence.

The cheapest 1TB disk is $69 and the cheapest 1TB SSD is $159. Saving $90 on the cost of a new PC probably isn’t worthwhile.

For 2TB of storage the cheapest options are Samsung NVMe for $339, Crucial SSD for $335, or a hard drive for $95. Some people would choose to save $244 by getting a hard drive instead of NVMe, but if you are getting a whole system then allocating $244 to NVMe instead of a faster CPU would probably give more benefits overall.

Computer stores typically have small margins and computer parts tend to quickly either become cheaper or be obsoleted by better parts. So stores don’t want to stock parts unless they will sell quickly. Disks smaller than 2TB probably aren’t going to be profitable for stores for very long. The trend of SSD and NVMe becoming cheaper is going to make 2TB disks non-viable in the near future.

NVMe vs SSD

M.2 NVMe devices are at comparable prices to SATA SSDs. For some combinations of quality and capacity NVMe is about 50% more expensive, and for some it’s slightly cheaper (e.g. an Intel 1TB NVMe being cheaper than a Samsung EVO 1TB SSD). Last time I checked about half the motherboards on sale had a single M.2 socket, so for a new workstation that doesn’t need more than 2TB of storage (the largest NVMe that MSY sells) it wouldn’t make sense to use anything other than NVMe.

The benefit of NVMe is NOT throughput (even though NVMe devices can often sustain over 4GB/s), it’s low latency. Workstations can’t properly take advantage of this because RAM is so cheap ($198 for 32G of DDR4) that compiles etc mostly come from cache, and because most filesystem writes on workstations aren’t synchronous. For servers a large portion of writes are synchronous; for example a mail server can’t acknowledge receiving mail until it knows that it’s really on disk, so there are a lot of small writes that block server processes, and the low latency of NVMe really improves performance. If you are doing a big compile on a workstation (the most common workstation task that uses a lot of disk IO) then the writes aren’t synchronised to disk, and if the system crashes you will just do all the compilation again. While NVMe doesn’t give a lot of benefit over SSD for workstation use (I’ve used laptops with SSD and NVMe and not noticed a great difference) of course I still want better performance. ;)
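The distinction between synchronous and buffered writes can be made concrete with a short sketch (Python used for illustration; the function and file names are made up):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and block until the device reports it durable, the
    way a mail server must before acknowledging receipt of a message.
    The fsync() call is the synchronous step whose latency dominates
    on spinning disks and is tiny on NVMe."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        os.fsync(fd)  # blocks until the data is on stable storage
    finally:
        os.close(fd)

# A compiler's output files, by contrast, are ordinary buffered writes:
# the kernel may keep them in the page cache for seconds, which is why
# a crash during a big compile just means re-running the compile.
```

The storage device only matters for the `os.fsync()` line; everything else completes at RAM speed regardless of what disk is underneath.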

Last time I checked I couldn’t easily buy a PCIe card that supported 2*NVMe cards, I’m sure they are available somewhere but it would take longer to get and probably cost significantly more than twice as much. That means a RAID-1 of NVMe takes 2 PCIe slots if you don’t have an M.2 socket on the motherboard. This was OK when I installed 2*NVMe devices on a server that had 18 disks and lots of spare PCIe slots. But for some systems PCIe slots are an issue.

My home server has all PCIe slots used by a video card and Ethernet cards and the BIOS probably won’t support booting from NVMe. It’s a Dell server so I can’t just replace the motherboard with one that has more PCIe slots and M.2 on the motherboard. As it’s running nicely and doesn’t need replacing any time soon I won’t be using NVMe for home server stuff.

Small Servers

Most servers that I am responsible for have less than 2TB of storage. For my clients I now only recommend SSD storage for small servers and am recommending SSD for replacing any failed disks.

My home server has 2*500G SSDs in a BTRFS RAID-1 for the root filesystem, and 3*4TB disks in a BTRFS RAID-1 for storing big files. I bought the SSDs when 500G SSDs were about $250 each and bought 2*4TB disks when they were about $350 each. Currently that server has about 3.3TB of space used and I could probably get it down to about 2.5TB if I deleted things I don’t really need. If I was getting storage for that server now I’d use 2*2TB SSDs and 3*1TB hard drives for the stuff that doesn’t fit on SSDs (I have some spare 1TB disks that came with servers). If I didn’t have spare hard drives I’d get 3*2TB SSDs for that sort of server which would give 3TB of BTRFS RAID-1 storage.

Last time I checked Dell servers had a card for supporting M.2 as an optional extra so Dells probably won’t boot from NVMe without extra expense.

Ars Technica has an informative article about WD selling SMR disks as “NAS” disks [1]. The Shingled Magnetic Recording technology allows greater storage density on a platter which leads to either larger capacity or cheaper disks but at the cost of lower write performance and apparently extremely bad latency in some situations. NAS disks are supposed to be low latency as the expectation is that they will be used in a RAID array and kicked out of the array if they have problems. There are reports of ZFS kicking SMR disks from RAID sets. I think this will end the use of hard drives for small servers. For a server you don’t want to deal with this sort of thing, by definition when a server goes down multiple people will stop work (small server implies no clustering). Spending extra to get SSDs just to avoid the risk of unexpected SMR would be a good plan.

Medium Servers

The largest SSD and NVMe devices that are readily available are 2TB, but 10TB disks are commodity items. There are reports of 20TB hard drives being available, but I can’t find anyone in Australia selling them.

If you need to store dozens or hundreds of terabytes then hard drives have to be part of the mix at this time. There’s no technical reason why SSDs larger than 10TB can’t be made (the 2.5″ SATA form factor has more than 5* the volume of a 2TB M.2 card) and it’s likely that someone sells them outside the channels I buy from, but probably at a price higher than what my clients are willing to pay. If you want 100TB of affordable storage then a mid range server like the Dell PowerEdge T640 which can have up to 18*3.5″ disks is good. One of my clients has a PowerEdge T630 with 18*3.5″ disks in the 8TB-10TB range (we replace failed disks with the largest new commodity disks available, it used to have 6TB disks). ZFS version 0.8 introduced a “Special VDEV Class” which stores metadata and possibly small data blocks on faster media. So you could have some RAID-Z groups on hard drives for large storage and the metadata on a RAID-1 on NVMe for fast performance. For medium size arrays on hard drives having a “find /” operation take hours is not uncommon, and for large arrays having it take days isn’t that uncommon. So far it seems that ZFS is the only filesystem to have taken the obvious step of storing metadata on SSD/NVMe while bulk data is on cheap large disks.
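As a rough sketch of the layout described above (ZFS 0.8 or later required; the pool and device names here are hypothetical):

```
# Bulk data on a RAID-Z2 group of large hard drives, metadata on a
# mirrored NVMe "special" vdev (hypothetical device names):
zpool create tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally also store small data blocks on the special vdev:
zfs set special_small_blocks=32K tank
```

With this arrangement a “find /” walks metadata that lives entirely on NVMe, while large file contents stay on the cheap disks.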

One problem with large arrays is that the vibration of disks can affect the performance and reliability of nearby disks. The ZFS server I run with 18 disks was originally setup with disks from smaller servers that never had ZFS checksum errors, but when disks from 2 small servers were put in one medium size server they started getting checksum errors presumably due to vibration. This alone is a sufficient reason for paying a premium for SSD storage.

Currently the cost of 2TB of SSD or NVMe is between the prices of 6TB and 8TB hard drives, and the ratio of price/capacity for SSD and NVMe is improving dramatically while the increase in hard drive capacity is slow. 4TB SSDs are available for $895 compared to a 10TB hard drive for $549, so SSD is about 4* more expensive per TB. This is probably good for Windows systems, but for Linux systems where ZFS and “special VDEVs” are an option it’s probably not worth considering. Most Linux use cases where 4TB SSDs would work well would be better served by smaller NVMe and 10TB disks running ZFS. I don’t think that 4TB SSDs are at all popular at the moment (MSY doesn’t stock them), but prices will come down and they will become common soon enough. Probably by the end of the year SSDs will halve in price and no hard drives less than 4TB will be viable.

For rack mounted servers 2.5″ disks have been popular for a long time. It’s common for vendors to offer 2 versions of a rack mount server for 2.5″ and 3.5″ disks where the 2.5″ version takes twice as many disks. If the issue is total storage in a server 4TB SSDs can give the same capacity as 8TB HDDs.

SMR vs Regular Hard Drives

Rumour has it that you can buy 20TB SMR disks; I haven’t been able to find a reference to anyone who’s selling them in Australia (please comment if you know who sells them, and especially if you know the price). I expect that the ZFS developers will soon develop a work-around to solve the problems with SMR disks. Then arrays of 20TB SMR disks with NVMe for “special VDEVs” will be an interesting possibility for storage. I expect that SMR disks will be the majority of the hard drive market by 2023 – if hard drives are still on the market. SSDs will be large enough and cheap enough that only SMR disks will offer enough capacity to be worth using.

I think that it is a possibility that hard drives won’t be manufactured in a few years. The volume of a 3.5″ disk is significantly greater than that of 10 M.2 devices so current technology obviously allows 20TB of NVMe or SSD storage in the space of a 3.5″ disk. If the price of 16TB NVMe and SSD devices comes down enough (to perhaps 3* the price of a 20TB hard drive) almost no-one would want the hard drive and it wouldn’t be viable to manufacture them.

It’s not impossible that in a few years time 3D XPoint and similar fast NVM technologies occupy the first level of storage (the ZFS “special VDEV”, OS swap device, log device for database servers, etc) and NVMe occupies the level for bulk storage with no space left in the market for spinning media.

Computer Cases

For servers I expect that models supporting 3.5″ storage devices will disappear. A 1RU server with 8*2.5″ storage devices or a 2RU server with 16*2.5″ storage devices will probably be of use to more people than a 1RU server with 4*3.5″ or a 2RU server with 8*3.5″.

My first IBM PC compatible system, in 1988, had a 5.25″ hard drive, a 5.25″ floppy drive, and a 3.5″ floppy drive. My current PC is almost the same size and has a DVD drive (that I almost never use), 5 other 5.25″ drive bays that have never been used, and 5*3.5″ drive bays that I have never used (I have only used 2.5″ SSDs). It would make more sense to have PC cases designed around 2.5″ and maybe 3.5″ drives, with no more than one 5.25″ drive bay.

The Intel NUC SFF PCs are going in the right direction. Many of them only have a single storage device but some of them have 2*M.2 sockets allowing RAID-1 of NVMe and some of them support ECC RAM so they could be used as small servers.

A USB DVD drive costs $36; it doesn’t make sense to design every PC around the size of an internal DVD drive that will probably only be used to install the OS, when a $36 USB DVD drive can be shared between every PC you own.

The only reason I don’t have a NUC for my personal workstation is that I get my workstations from e-waste. If I was going to pay for a PC then a NUC is the sort of thing I’d pay to have on my desk.

CryptogramNew Hacking-for-Hire Company in India

Citizen Lab has a new report on Dark Basin, a large hacking-for-hire company in India.

Key Findings:

  • Dark Basin is a hack-for-hire group that has targeted thousands of individuals and hundreds of institutions on six continents. Targets include advocacy groups and journalists, elected and senior government officials, hedge funds, and multiple industries.

  • Dark Basin extensively targeted American nonprofits, including organisations working on a campaign called #ExxonKnew, which asserted that ExxonMobil hid information about climate change for decades.

  • We also identify Dark Basin as the group behind the phishing of organizations working on net neutrality advocacy, previously reported by the Electronic Frontier Foundation.

  • We link Dark Basin with high confidence to an Indian company, BellTroX InfoTech Services, and related entities.

  • Citizen Lab has notified hundreds of targeted individuals and institutions and, where possible, provided them with assistance in tracking and identifying the campaign. At the request of several targets, Citizen Lab shared information about their targeting with the US Department of Justice (DOJ). We are in the process of notifying additional targets.

BellTroX InfoTech Services has assisted clients in spying on over 10,000 email accounts around the world, including accounts of politicians, investors, journalists and activists.

News article. Boing Boing post.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV June 2020 Workshop: Emergency Security Discussion

Jun 20 2020 12:30
Jun 20 2020 14:30
Location: 
Online event (TBA)

On Friday morning, our prime minister held an unprecedented press conference to warn Australia (Governments, Industry & Individuals) about a sophisticated cyber attack that is currently underway.

Linux Users of Victoria is a subcommittee of Linux Australia.


CryptogramZoom Will Be End-to-End Encrypted for All Users

Zoom is doing the right thing: it's making end-to-end encryption available to all users, paid and unpaid. (This is a change; I wrote about the initial decision here.)

...we have identified a path forward that balances the legitimate right of all users to privacy and the safety of users on our platform. This will enable us to offer E2EE as an advanced add-on feature for all of our users around the globe -- free and paid -- while maintaining the ability to prevent and fight abuse on our platform.

To make this possible, Free/Basic users seeking access to E2EE will participate in a one-time process that will prompt the user for additional pieces of information, such as verifying a phone number via a text message. Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts. We are confident that by implementing risk-based authentication, in combination with our current mix of tools -- including our Report a User function -- we can continue to prevent and fight abuse.

Thank you, Zoom, for coming around to the right answer.

And thank you to everyone for commenting on this issue. We are learning -- in so many areas -- the power of continued public pressure to change corporate behavior.

EDITED TO ADD (6/18): Let's do Apple next.

Worse Than FailureError'd: Fast Hail and Round Wind

"He's not wrong. With wind and hail like this, an isolated tornado definitely ranks third in severity," Rob K. writes.

 

"Upon linking my Days of Wonder account with Steam, I was initially told that I had 7 days to verify my email before account deletion and then I was told something else..." Ian writes.

 

Harvey wrote, "Great. Thanks for the warm welcome to your site ${AUCTION_WEBSITE}"

 

Peter G. writes, "In this case, I imagine the art department did something like 'OK Google, find image of Pentagon, insert into document'."

 

"I'm happy with my efforts but I feel for Terri. 1,400km in 21 days, 200km in the lead and she's barely overcome by this 'NaN' individual," wrote Roger G.

 

Sam writes, "While I admire the honesty of this particular scammer, I do rather think they missed the point."

 

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet DebianReproducible Builds (diffoscope): diffoscope 148 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 148. This version includes the following changes:

[ Daniel Fullmer ]
* Fix a regression in the CBFS comparator due to changes in our_check_output.

[ Chris Lamb ]
* Add a remark in the deb822 handling re. potential security issue in the
  .changes, .dsc, .buildinfo comparator.

You can find out more by visiting the project homepage.

,

TEDConversations on the future of vaccines, tech, government and art: Week 5 of TED2020

Week 5 of TED2020 featured wide-ranging discussions on the quest for a coronavirus vaccine, the future of the art world, what it’s like to lead a country during a pandemic and much more. Below, a recap of insights shared.

Jerome Kim, Director General of the International Vaccine Institute, shares an update on the quest for a coronavirus vaccine in conversation with TED science curator David Biello at TED2020: Uncharted on June 15, 2020. (Photo courtesy of TED)

Jerome Kim, Director General of the International Vaccine Institute

Big idea: There’s a lot of work still to be done, but the world is making progress on developing a COVID-19 vaccine. 

How? A normal vaccine takes five to 10 years to develop and costs about a billion dollars, with a failure rate of 93 percent. Under the pressure of the coronavirus pandemic, however, we’re being asked to speed things up to within a window of 12 to 18 months, says Jerome Kim. How are things going? He updates us on the varied field of vaccine candidates and approaches, from Moderna’s mRNA vaccine to AstraZeneca’s vectored vaccine to whole inactivated vaccines, and how these companies are innovating to develop and manufacture their products in record time. In addition to the challenge of making a sufficient amount of a safe, effective vaccine (at the right price), Kim says we must think about how to distribute it for the whole world — not just rich nations. The question of equity and access is the toughest one of all, he says, but the answer will ultimately lead us out of this pandemic.


Bioethicist Nir Eyal discusses the mechanism and ethics of human challenge trials in vaccine development with head of TED Chris Anderson at TED2020: Uncharted on June 15, 2020. (Photo courtesy of TED)

Nir Eyal, Bioethicist

Big idea: Testing vaccine efficacy is normally a slow, years-long process, but we can ethically accelerate COVID-19 vaccine development through human challenge trials.

How? Thousands of people continue to die every day from COVID-19 across the globe, and we risk greater death and displacement if we rely on conventional vaccine trials, says bioethicist Nir Eyal. While typical trials observe experimental and control groups over time until they see meaningful differences between the two, Eyal proposes using human challenge trials in our search for a vaccine — an approach that deliberately exposes test groups to the virus in order to quickly determine efficacy. Human challenge trials might sound ethically ambiguous or even immoral, but Eyal suggests the opposite is true. Patients already take informed risks by participating in drug trials and live organ donations; if we look at statistical risk and use the right bioethical framework, we can potentially hasten vaccine development while maintaining tolerable risks. The key, says Eyal, is the selection criteria: by selecting young participants who are free from risk factors like hypertension, for example, the search for a timely solution to this pandemic is possible. “The dramatic number of people who could be aided by a faster method of testing vaccines matters,” he says. “It’s not the case that we are violating the rights of individuals to maximize utility. We are both maximizing utility and respecting rights, and this marriage is very compelling in defending the use of these accelerated [vaccine trial] designs.”


“What is characteristic of our people is the will to overcome the past and to move forward. Poverty is real. Inequality is real. But we also have a very determined population that embraces the notion of the Republic and the notion of citizenship,” says Ashraf Ghani, president of Afghanistan. He speaks with head of TED Chris Anderson at TED2020: Uncharted on June 16, 2020. (Photo courtesy of TED)

Ashraf Ghani, President of Afghanistan

Big Idea: Peacemaking is a discipline that must be practiced daily, both in life and politics. 

How? Having initiated sweeping economic, trade and social reforms, Afghanistan president Ashraf Ghani shares key facets of peacemaking that he relies on to navigate politically sensitive relationships and the ongoing health crisis: mutual respect, listening and humanity. Giving us a glimpse of Afghanistan that goes beyond the impoverished, war-torn image painted in the media, he describes the aspirations, entrepreneurship and industry that’s very much alive there, especially in its youth and across all genders. “What I hear from all walks of life, men and women, girls and boys, [is] a quest for normalcy. We’re striving to be normal. It’s not we who are abnormal; it’s the circumstances in which we’ve been caught. And we are attempting to carve a way forward to overcome the types of turbulence that, in interaction with each other, provide an environment of continuous uncertainty. Our goal is to overcome this, and I think with the will of the people, we will be able to,” he says. President Ghani also shares perspective on Afghanistan’s relationship to China, the Taliban and Pakistan — expressing a commitment to his people and long term peace that fuels every conversation. “The ultimate goal is a sovereign, democratic, united Afghanistan at peace with itself in the world,” he says. 


“How do we make it so that if you’re having a conversation with someone and you have to be separated by thousands of miles, it feels as close to face-to-face?” asks Will Cathcart, CEO of WhatsApp. He speaks with head of TED Chris Anderson at TED2020: Uncharted on June 16, 2020. (Photo courtesy of TED)

Will Cathcart, CEO of WhatsApp

Big idea: Tech platforms have a responsibility to provide privacy and security to users.

Why? On WhatsApp, two billion users around the world send more than 100 billion messages every day. All of them are protected by end-to-end encryption, which means that the conversations aren’t stored and no one can access them — not governments, companies or even WhatsApp itself. Due to the COVID-19 pandemic, more and more of our conversations with family, friends and coworkers have to occur through digital means. This level of privacy is a fundamental right that has never been more important, says Cathcart. To ensure their encryption services aren’t misused to promote misinformation or conduct crime, WhatsApp has developed tools and protocols that keep users safe without disrupting the privacy of all of its users. “It’s so important that we match the security and privacy you have in-person, and not say, ‘This digital world is totally different: we should change all the ways human beings communicate and completely upend the rules.’ No, we should try to match that as best we can, because there’s something magical about people talking to each other privately.”


“Museums are among the few truly public democratic spaces for people to come together. We’re places of inspiration and learning, and we help expand empathy and moral thinking. We are places for difficult and courageous conversations. I believe we can, and must be, places in real service of community,” says Anne Pasternak, director of the Brooklyn Museum. She speaks with TED design curator Chee Pearlman at TED2020: Uncharted on June 17, 2020. (Photo courtesy of TED)

Anne Pasternak, Director of the Brooklyn Museum

Big idea: We need the arts to be able to document and reflect on what we’re living through, express our pain and joy and imagine a better future.

How? Museums are vital community institutions that reflect the memories, knowledge and dreams of a society. Located in a borough of more than 2.5 million people, the Brooklyn Museum is one of the largest and most influential museums in the world, and it serves a community that has been devastated by the COVID-19 pandemic. Pasternak calls on museums to take a leading role in manifesting community visions of a better world. In a time defined by dramatic turmoil and global suffering, artists will help ignite the radical imagination that leads to cultural, political and social change, she says. Museums also have a responsibility to uplift a wide variety of narratives, taking special care to highlight communities who have historically been erased from societal remembrance and artmaking. The world has been irreversibly changed and devastated by the pandemic. It’s time to look to art as a medium of collective memorializing, mourning, healing and transformation.


“Art changes minds, shifts mentalities, changes the behavior of people and the way they think and how they feel,” says Honor Harger. She speaks with TED current affairs curator Whitney Pennington Rodgers at TED2020: Uncharted on June 17, 2020. (Photo courtesy of TED)

Honor Harger, Executive Director of the ArtScience Museum

Big Idea: Cultural institutions can care for their communities by listening to and amplifying marginalized voices.

How: The doors of Singapore’s famed ArtScience Museum building are closed — but online, the museum is engaging with its community more deeply than ever. Executive director Honor Harger shares how the museum has moved online with ArtScience at Home, a program offering online talks, streamed performances and family workshops addressing COVID-19 and our future. Reflecting on the original meaning of “curator” (from the Latin curare, or “to care”), Harger shares how ArtScience at Home aims to care for its community by listening to underrepresented groups. The program seeks out marginalized voices and provides a global platform for them to tell their own stories, unmediated and unedited, she says. Notably, the program included a screening of Salary Day by Ramasamy Madhavan, the first film made by a migrant worker in Singapore. The programming will have long-lasting effects on the museum’s curation in the future and on its international audience, Harger says. “Art changes minds, shifts mentalities, changes the behavior of people and the way they think and how they feel,” she says. “We are seeing the power of culture and art to both heal and facilitate dramatic change.”

Krebs on SecurityFEMA IT Specialist Charged in ID Theft, Tax Refund Fraud Conspiracy

An information technology specialist at the Federal Emergency Management Agency (FEMA) was arrested this week on suspicion of hacking into the human resource databases of University of Pittsburgh Medical Center (UPMC) in 2014, stealing personal data on more than 65,000 UPMC employees, and selling the data on the dark web.

On June 16, authorities in Michigan arrested 29-year-old Justin Sean Johnson in connection with a 43-count indictment on charges of conspiracy, wire fraud and aggravated identity theft.

Federal prosecutors in Pittsburgh allege that in 2013 and 2014 Johnson hacked into the Oracle PeopleSoft databases for UPMC, a $21 billion nonprofit health enterprise that includes more than 40 hospitals.

According to the indictment, Johnson stole employee information on all 65,000 then-current and former employees, including their names, dates of birth, Social Security numbers, and salaries.

The stolen data also included federal form W-2 data that contained income tax and withholding information, records that prosecutors say Johnson sold on dark web marketplaces to identity thieves engaged in tax refund fraud and other financial crimes. The fraudulent tax refund claims made in the names of UPMC identity theft victims caused the IRS to issue $1.7 million in phony refunds in 2014.

“The information was sold by Johnson on dark web forums for use by conspirators, who promptly filed hundreds of false form 1040 tax returns in 2014 using UPMC employee PII,” reads a statement from U.S. Attorney Scott Brady. “These false 1040 filings claimed hundreds of thousands of dollars of false tax refunds, which they converted into Amazon.com gift cards, which were then used to purchase Amazon merchandise which was shipped to Venezuela.”

Johnson could not be reached for comment. At a court hearing in Pittsburgh this week, a judge ordered the defendant to be detained pending trial. Johnson’s attorney declined to comment on the charges.

Prosecutors allege Johnson’s intrusion into UPMC was not an isolated occurrence, and that for several years after the UPMC hack he sold personally identifiable information (PII) to buyers on dark web forums.

The indictment says Johnson used the hacker aliases “DS” and “TDS” to market the stolen records to identity thieves on the Evolution and AlphaBay dark web marketplaces. However, archived copies of the now-defunct dark web forums indicate those aliases are merely abbreviations that stand for “DearthStar” and “TheDearthStar,” respectively.

“You can expect good things come tax time as I will have lots of profiles with verified prior year AGIs to make your refund filing 10x easier,” TheDearthStar advertised in an August 2015 message to AlphaBay members.

In some cases, it appears these DearthStar identities were actively involved in not just selling PII and tax refund fraud, but also stealing directly from corporate payrolls.

In an Aug. 2015 post to AlphaBay titled “I’d like to stage a heist but…,” TheDearthStar solicited people to help him cash out access he had to the payroll systems of several different companies:

“… I have nowhere to send the money. I’d like to leverage the access I have to payroll systems of a few companies and swipe a chunk of their payroll. Ideally, I’d like to find somebody who has a network of trusted individuals who can receive ACH deposits.”

When another AlphaBay member asks how much he can get, TheDearthStar responds, “Depends on how many people end up having their payroll records ‘adjusted.’ Could be $1,000 could be $100,000.”

2014 and 2015 were particularly bad years for tax refund fraud, a form of identity theft which cost taxpayers and the U.S. Treasury billions of dollars. In April 2014, KrebsOnSecurity wrote about a spike in tax refund fraud perpetrated against medical professionals that caused many to speculate that one or more major healthcare providers had been hacked.

A follow-up story that same month examined the work of a cybercrime gang that was hacking into HR departments at healthcare organizations across the country and filing fraudulent tax refund requests with the IRS on employees of those victim firms.

The Justice Department’s indictment quotes from Johnson’s online resume as stating that he is proficient at installing and administering Oracle PeopleSoft systems. A LinkedIn resume for a Justin Johnson from Detroit says the same, and that for the past five months he has served as an information technology specialist at FEMA. A Facebook profile with the same photo belongs to a Justin S. Johnson from Detroit.

Johnson’s resume also says he was self-employed for seven years as a “cyber security researcher / bug bounty hunter” who was ranked in the top 1,000 by reputation on HackerOne, a program that rewards security researchers who find and report vulnerabilities in software and web applications.

Planet DebianEnrico Zini: Missing Qt5 designer library in cross-build development

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

The problem

While testing the cross-compiler, we noticed that the designer library was not being built.

The designer library is needed to build designer plugins, which allow loading, dynamically at runtime, .ui interface files that use custom widgets.

The error the customer got at runtime is: QFormBuilder was unable to create a custom widget of the class '…'; defaulting to base class 'QWidget'.

The library with the custom widget implementation was correctly linked, and indeed the same custom widget was used by the application in other parts of its interface not loaded via .ui files.

It turns out that this is not sufficient: to load custom widgets automatically, QUiLoader wants to read their metadata from plugin libraries containing objects that implement the QDesignerCustomWidgetInterface interface.
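For context, such a plugin looks roughly like the following sketch (class and widget names are hypothetical; this is the kind of code the customer needs to build):

```cpp
// Sketch of a custom widget plugin (hypothetical names). Building this
// requires "QT += designer" in the .pro file, i.e. exactly the library
// that the cross-build was not producing.
#include <QtUiPlugin/QDesignerCustomWidgetInterface>
#include "mywidget.h"  // the custom widget implementation (hypothetical)

class MyWidgetPlugin : public QObject, public QDesignerCustomWidgetInterface
{
    Q_OBJECT
    Q_PLUGIN_METADATA(IID "org.qt-project.Qt.QDesignerCustomWidgetInterface")
    Q_INTERFACES(QDesignerCustomWidgetInterface)
public:
    QString name() const override        { return "MyWidget"; }
    QString includeFile() const override { return "mywidget.h"; }
    QString group() const override       { return "Custom Widgets"; }
    QIcon icon() const override          { return QIcon(); }
    QString toolTip() const override     { return {}; }
    QString whatsThis() const override   { return {}; }
    bool isContainer() const override    { return false; }
    QWidget *createWidget(QWidget *parent) override
    {
        return new MyWidget(parent);  // what QUiLoader ultimately calls
    }
};
```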

Sadly, building such a library requires using QT += designer, and the designer library, which was not being built by Qt5's build system. This looks very much like a Qt5 bug.

A workaround would be to subclass QUiLoader, extending createWidget to teach it how to create the custom widgets we need. Unfortunately, the customer has many custom widgets.
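That workaround would look roughly like this (a sketch with hypothetical widget names; it needs one branch per widget, which is why it doesn't scale here):

```cpp
#include <QUiLoader>
#include "mywidget.h"  // hypothetical custom widget

// Sketch of the workaround: teach QUiLoader about each custom widget
// by hand, bypassing the designer plugin mechanism entirely.
class CustomLoader : public QUiLoader
{
public:
    using QUiLoader::QUiLoader;

    QWidget *createWidget(const QString &className,
                          QWidget *parent = nullptr,
                          const QString &name = QString()) override
    {
        if (className == QLatin1String("MyWidget")) {
            QWidget *w = new MyWidget(parent);
            w->setObjectName(name);
            return w;
        }
        // ...one branch per custom widget...
        return QUiLoader::createWidget(className, parent, name);
    }
};
```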

The investigation

To find out why designer was not being built, I added -d to the qmake invocation at the end of qtbase/configure, and trawled through the 3.1G build output.

The needle in the haystack seems to be here:

DEBUG 1: /home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/src.pro:18: SUBDIRS := uiplugin uitools lib components designer plugins
DEBUG 1: /home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/src.pro:23: calling qtNomakeTools(lib components designer plugins)

As far as I can understand, qtNomakeTools seems to be intended to disable building those components if QT_BUILD_PARTS doesn't contain tools. For cross-building, QT_BUILD_PARTS is libs examples, so designer does not get built.

However, designer contains the library part needed for QDesignerCustomWidgetInterface and that really needs to be built. I assume that part should really be built as part of libs, not tools.

The fixes/workarounds

I tried removing designer from the qtNomakeTools invocation at qttools/src/designer/src/src.pro:23, to see if qttools/src/designer/src/designer/ would get built.

It did get built, but then the build failed, with designer/src/designer and designer/src/uitools both claiming the designer plugin.

I tried editing qttools/src/designer/src/uitools/uitools.pro not to claim the designer plugin when tools is not a build part.

I added the tweaks to the Qt5 build system as debian/patches.

2 hours of build time later...

make check is broken:

make[6]: Leaving directory '/home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/uitools'
make[5]: *** No rule to make target 'sub-components-check', needed by 'sub-designer-check'.  Stop.

But since make check doesn't do anything in this build, we can simply override dh_auto_test to skip that step.
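In debian/rules, skipping it is the usual empty debhelper override target (a sketch, assuming the package uses the dh sequencer):

```makefile
# make check fails on sub-components-check and does nothing useful in
# this build, so turn the test step into a no-op.
override_dh_auto_test:
```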

Finally, this patch builds a new executable, of an architecture that makes dh_shlibdeps struggle:

dpkg-shlibdeps: error: cannot find library libQt5DesignerComponentssystem.so.5 needed by debian/qtbase5system-armhf-dev/opt/qt5system-armhf/bin/designer (ELF format: 'elf32-little' abi: '0101002800000000'; RPATH: '')
dpkg-shlibdeps: error: cannot find library libQt5Designersystem.so.5 needed by debian/qtbase5system-armhf-dev/opt/qt5system-armhf/bin/designer (ELF format: 'elf32-little' abi: '0101002800000000'; RPATH: '')

And we can just skip running dh_shlibdeps on the designer executable.
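Again in debian/rules, dh_shlibdeps' -X (exclude) option can do that (a sketch; the path fragment matches the error messages above):

```makefile
# Don't compute shared-library dependencies for the designer executable,
# which dpkg-shlibdeps cannot resolve for this cross-built architecture.
override_dh_shlibdeps:
	dh_shlibdeps -Xbin/designer
```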

The result is in the qt5custom git repository.

Cryptogram: Theft of CIA's "Vault Seven" Hacking Tools Due to Its Own Lousy Security

The Washington Post is reporting on an internal CIA report about its "Vault 7" security breach:

The breach -- allegedly committed by a CIA employee -- was discovered a year after it happened, when the information was published by WikiLeaks, in March 2017. The anti-secrecy group dubbed the release "Vault 7," and U.S. officials have said it was the biggest unauthorized disclosure of classified information in the CIA's history, causing the agency to shut down some intelligence operations and alerting foreign adversaries to the spy agency's techniques.

The October 2017 report by the CIA's WikiLeaks Task Force, several pages of which were missing or redacted, portrays an agency more concerned with bulking up its cyber arsenal than keeping those tools secure. Security procedures were "woefully lax" within the special unit that designed and built the tools, the report said.

Without the WikiLeaks disclosure, the CIA might never have known the tools had been stolen, according to the report. "Had the data been stolen for the benefit of a state adversary and not published, we might still be unaware of the loss," the task force concluded.

The task force report was provided to The Washington Post by the office of Sen. Ron Wyden (D-Ore.), a member of the Senate Intelligence Committee, who has pressed for stronger cybersecurity in the intelligence community. He obtained the redacted, incomplete copy from the Justice Department.

It's all still up on WikiLeaks.

Planet Debian: Ian Jackson: BountySource have turned evil - alternatives?

I need an alternative to BountySource, who have done an evil thing. Please post recommendations in the comments.

From: Ian Jackson <*****>
To: support@bountysource.com
Subject: Re: Update to our Terms of Service
Date: Wed, 17 Jun 2020 16:26:46 +0100

Bountysource writes ("Update to our Terms of Service"):
> You are receiving this email because we are updating the Bountysource Terms of
> Service, effective 1st July 2020.
>
> What's changing?
> We have added a Time-Out clause to the Bounties section of the agreement:
>
> 2.13 Bounty Time-Out.
> If no Solution is accepted within two years after a Bounty is posted, then the
> Bounty will be withdrawn and the amount posted for the Bounty will be retained
> by Bountysource. For Bounties posted before June 30, 2018, the Backer may
> redeploy their Bounty to a new Issue by contacting support@bountysource.com
> before July 1, 2020. If the Backer does not redeploy their Bounty by the
> deadline, the Bounty will be withdrawn and the amount posted for the Bounty
> will be retained by Bountysource.
>
> You can read the full Terms of Service here
>
> What do I need to do?
> If you agree to the new terms, you don't have to do anything.
>
> If you have a bounty posted prior to June 30, 2018 that is not currently being
> solved, email us at support@bountysource.com to redeploy your bounty.  Or, if
> you do not agree with the new terms, please discontinue using Bountysource.

I do not agree to this change to the Terms and Conditions.
Accordingly, I will not post any more bounties on BountySource.

I currently have one outstanding bounty of $200 on
   https://www.bountysource.com/issues/86138921-rfe-add-a-frontend-for-the-rust-programming-language

That was posted in December 2019.  It is not clear now whether that
bounty will be claimed within your 2-year timeout period.

Since I have not accepted the T&C change, please can you confirm that

(i) My bounty will not be retained by BountySource even if no solution
    is accepted by December 2021.

(ii) As a backer, you will permit me to vote on acceptance of that
    bounty should a solution be proposed before then.

I suspect that you intend to rely on the term in the previous T&C
giving you unlimited ability to modify the terms and conditions.  Of
course such a term is an unfair contract term, because if it were
effective it would give you the power to do whatever you like.  So it
is not binding on me.

I look forward to hearing from you by the 8th of July.  If I do not
hear from you I will take the matter up with my credit card company.

Thank you for your attention.

Ian.

They will try to say "oh it's all governed by US law" but of course section 75 of the Consumer Credit Act makes the card company jointly liable for Bountysource's breach of contract and a UK court will apply UK consumer protection law even to a contract which says it is to be governed by US law - because you can't contract out of consumer protection. So the card company are on the hook and I can use them as a lever.

Update - BountySource have changed their mind

From: Bountysource <support@bountysource.com>
To: *****
Subject: Re: Update to our Terms of Service
Date: Wed, 17 Jun 2020 18:51:11 -0700

Hi Ian

The new terms of service has with withdrawn.

This is not the end of the matter, I'm sure. They will want to get long-unclaimed bounties off their books (and having the cash sat forever at BountySource is not ideal for backers either). Hopefully they can engage in a dialogue and find a way that is fair, and that doesn't incentivise BountySource to sabotage bounty claims(!) I think that means that whatever it is, BountySource mustn't keep the money. There are established ways of dealing with similar problems (eg ancient charitable trusts; unclaimed bank accounts).

I remain wary. That BountySource is now owned by a cryptocurrency company is not encouraging. That they would even try what they just did is a really bad sign.

Edited 2020-06-17 16:28 for a typo in Bountysource's email address
Update added 2020-06-18 11:40 for BountySource's change of mind.




Planet Debian: Gunnar Wolf: On masters and slaves, whitelists and blacklists…

LWN published today yet another great piece of writing, Loaded terms in free software. I am sorry, the content will not be immediately available to anybody following at home, as LWN is based on a subscription model — But a week from now, the article will be open for anybody to read. Or you can ask me (you most likely can find my contact addresses, as they are basically everywhere) for a subscriber link, I will happily provide it.

In consonance with the current mood that started with the killing of George Floyd and sparked worldwide revolts against police brutality, racism (mostly related to police and law enforcement forces, but social as well) and the like, the debate that already started some months ago in technical communities has re-sparked:

We have many terms that come with long histories attached to them, and we are usually oblivious to their obvious meaning. We? Yes, we, the main users and creators of technology. I never felt using master and slave to refer to different points of a protocol, bus, clock or whatever (do refer to the Wikipedia article for a fuller explanation) had any negative connotations — but then again, those terms have never tainted my personal family. That is, I understand I speak from a position of privilege.

A similar –although less heated– issue goes around the blacklist and whitelist terms, or other uses that use white to refer to good, law-abiding citizens, and black to refer to somewhat antisocial uses (i.e. the white hat and black hat hackers).

For several years, this debate has been sparking and dying off. Some important changes have been made — Particularly, in 2017 the Internet Systems Consortium started recommending Primary and Secondary, Python dropped master/slave pairs after a quite thorough and deep review throughout 2018, GitHub changed the default branch from master to main earlier this week. The Internet Engineering Task Force has a draft (that lapsed and thus sadly didn’t become an RFC, but still, is archived), Terminology, Power and Oppressive Language that lists suggested alternatives:

There are also many other relationships that can be used as metaphors, Eglash’s research calls into question the accuracy of the master-slave metaphor. Fortunately, there are ample alternatives for the master-slave relationship. Several options are suggested here and should be chosen based on the pairing that is most clear in context:

  • Primary-secondary
  • Leader-follower
  • Active-standby
  • Primary-replica
  • Writer-reader
  • Coordinator-worker
  • Parent-helper

I’ll add that I think we Spanish-speakers are not fully aware of the issue’s importance, because the most common translation I have seen for master/slave is maestro/esclavo: Maestro is the word for teacher (although we do keep our slaves in place). But think whether it sounds any worse if you refer to device pairs, or members of a database high-availability cluster, or whatever as Amo and Esclavo. It does sound much worse…

I cannot add much of value to this debate. I am just happy issues like this are being recognized and dealt with. If the topic interests you, do refer to the LWN article! Some excerpts:

I consider the following to be the core of Jonathan Corbet’s writeup:

Recent events, though, have made it clear — even to those of us who were happy to not question this view — that the story of slavery and the wider racist systems around it is not yet finished. There are many people who are still living in the middle of it, and it is not a nice place to be. We are not so enlightened as we like to think we are.

If there is no other lesson from the events of the last few weeks, we should certainly take to heart the point that we need to be listening to the people who have been saying, for many years, that they are still suffering. If there are people who are telling us that terms like “slave” or “blacklist” are a hurtful reminder of the inequities that persist in our society, we need to accept that as the truth and act upon it. Etymological discussions on what, say, “master” really means may be interesting, but they miss the point and are irrelevant to this discussion.

Part of a comment by user yokem_55:

Often, it seems to me that the replacement words are much more descriptive and precise than the old language. ‘Allowlist’ is far more obviously a list of explicitly authorized entities than ‘whitelist’. ‘Mainline’ has a more obvious meaning of a core stream of development than ‘master’.

The benefit of moving past this language is more than just changing cultural norms, it’s better, more precise communication across the board.

Another spot-on comment, by user alan:

From my perspective as a Black American male, I think that it’s nice to see people willing to see and address racism in various spheres. I am concerned that some of these steps will be more performative than substantial. Terminology changes in software so as to be more welcoming is a nice thing. Ensuring that oppressed minorities have access to the tools and resources to help reduce inequity and ensuring equal protection under the laws is better. We’ll get there one day I’m sure. The current ask is much simpler, its just to stop randomly killing and terrorizing us. Please and thank you.

So… Maybe the protests of this year caught special notoriety because the society is reacting after (or during, for many of us) the lockdown. In any case, I hope for their success in changing the planet’s culture of oppression.

Comments

Tomas Janousek 2020-06-19 10:04:32 +0200

In the blog post “On masters and slaves, whitelists and blacklists…” you claim that “GitHub changed the default branch from master to main earlier this week” but I don’t think that change is in effect yet. When you create a repo, the default branch is still named “master”.

Gunnar Wolf 2020-06-19 11:52:30 -0500

Umh, seems you are right. Well, what can I say? I’m reporting only what I have been able to find / read…

Now, given that said master branch does not carry any Git-specific meaning and is just a commonly used configuration… I hope people start picking it up.

No, I have not renamed master branches in any of my repos… but intend to do so soonish.

Tomas Janousek 2020-06-19 20:01:52 +0200

Yeah, don’t worry. I just find it sad that so much inaccurate news is spreading from a single CEO tweet, and I wanted to help stop that. I’m sure some change will happen eventually, but until it does, we shouldn’t speak about it in the past tense. :-)

Worse Than Failure: CodeSOD: Rings False

There are times when a code block needs a lot of setup, and there are some where it mostly speaks for itself. Today’s anonymous submitter found this JavaScript in a React application, coded by one of the senior team-members.

if (false === false){
    startSingleBasedApp();
} else {
    startTabNavigation();
}

Look, I know how this code got there. At some point, they planned to check a configuration or a feature flag, but during development, it was just faster to do it this way. Then they forgot, and then it got released to production.

Had our submitter not gone poking, it would have sat there in production until someone tried to flip the flag and nothing happened.
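A sketch of what the guard presumably should have been: read the flag from configuration so flipping it actually changes behavior (the config shape and names here are invented for illustration):

```javascript
// Hypothetical config object; in the real app this would come from a
// feature-flag service or a settings file, not a literal.
const config = { singleBasedApp: false };

// Return which startup path the flag selects, instead of
// hardcoding `false === false`.
function pickStartup(cfg) {
  return cfg.singleBasedApp ? "startSingleBasedApp" : "startTabNavigation";
}

console.log(pickStartup(config));                   // prints "startTabNavigation"
console.log(pickStartup({ singleBasedApp: true })); // prints "startSingleBasedApp"
```

With the decision isolated in one function, flipping the flag is a config change rather than a code change, and the dead branch is visible in review.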

This is why you do code reviews.


Cryptogram: Security Analysis of the Democracy Live Online Voting System

New research: "Security Analysis of the Democracy Live Online Voting System":

Abstract: Democracy Live's OmniBallot platform is a web-based system for blank ballot delivery, ballot marking, and (optionally) online voting. Three states -- Delaware, West Virginia, and New Jersey -- recently announced that they will allow certain voters to cast votes online using OmniBallot, but, despite the well established risks of Internet voting, the system has never been the subject of a public, independent security review.

We reverse engineered the client-side portion of OmniBallot, as used in Delaware, in order to detail the system's operation and analyze its security. We find that OmniBallot uses a simplistic approach to Internet voting that is vulnerable to vote manipulation by malware on the voter's device and by insiders or other attackers who can compromise Democracy Live, Amazon, Google, or Cloudflare. In addition, Democracy Live, which appears to have no privacy policy, receives sensitive personally identifiable information -- including the voter's identity, ballot selections, and browser fingerprint -- that could be used to target political ads or disinformation campaigns. Even when OmniBallot is used to mark ballots that will be printed and returned in the mail, the software sends the voter's identity and ballot choices to Democracy Live, an unnecessary security risk that jeopardizes the secret ballot. We recommend changes to make the platform safer for ballot delivery and marking. However, we conclude that using OmniBallot for electronic ballot return represents a severe risk to election security and could allow attackers to alter election results without detection.

News story.

EDITED TO ADD: This post has been translated into Portuguese.

Cryptogram: Facebook Helped Develop a Tails Exploit

This is a weird story:

Hernandez was able to evade capture for so long because he used Tails, a version of Linux designed for users at high risk of surveillance and which routes all inbound and outbound connections through the open-source Tor network to anonymize it. According to Vice, the FBI had tried to hack into Hernandez's computer but failed, as the approach they used "was not tailored for Tails." Hernandez then proceeded to mock the FBI in subsequent messages, two Facebook employees told Vice.

Facebook had tasked a dedicated employee to unmasking Hernandez, developed an automated system to flag recently created accounts that messaged minors, and made catching Hernandez a priority for its security teams, according to Vice. They also paid a third party contractor "six figures" to help develop a zero-day exploit in Tails: a bug in its video player that enabled them to retrieve the real I.P. address of a person viewing a clip. Three sources told Vice that an intermediary passed the tool onto the FBI, who then obtained a search warrant to have one of the victims send a modified video file to Hernandez (a tactic the agency has used before).

[...]

Facebook also never notified the Tails team of the flaw -- breaking with a long industry tradition of disclosure in which the relevant developers are notified of vulnerabilities in advance of them becoming public so they have a chance at implementing a fix. Sources told Vice that since an upcoming Tails update was slated to strip the vulnerable code, Facebook didn't bother to do so, though the social media company had no reason to believe Tails developers had ever discovered the bug.

[...]

"The only acceptable outcome to us was Buster Hernandez facing accountability for his abuse of young girls," a Facebook spokesperson told Vice. "This was a unique case, because he was using such sophisticated methods to hide his identity, that we took the extraordinary steps of working with security experts to help the FBI bring him to justice."

I agree with that last paragraph. I'm fine with the FBI using vulnerabilities: lawful hacking, it's called. I'm less okay with Facebook paying for a Tails exploit, giving it to the FBI, and then keeping its existence secret.

Another article.

EDITED TO ADD: This post has been translated into Portuguese.


Krebs on Security: When Security Takes a Backseat to Productivity

“We must care as much about securing our systems as we care about running them if we are to make the necessary revolutionary change.” -CIA’s Wikileaks Task Force.

So ends a key section of a report the U.S. Central Intelligence Agency produced in the wake of a mammoth data breach in 2016 that led to Wikileaks publishing thousands of classified documents stolen from the agency’s offensive cyber operations division. The analysis highlights a shocking series of security failures at one of the world’s most secretive entities, but the underlying weaknesses that gave rise to the breach also unfortunately are all too common in many organizations today.

The CIA produced the report in October 2017, roughly seven months after Wikileaks began publishing Vault 7 — reams of classified data detailing the CIA’s capabilities to perform electronic surveillance and cyber warfare. But the report’s contents remained shrouded from public view until earlier this week, when heavily redacted portions of it were included in a letter by Sen. Ron Wyden (D-Ore.) to the Director of National Intelligence.

The CIA acknowledged its security processes were so “woefully lax” that the agency probably would never have known about the data theft had Wikileaks not published the stolen documents online. What kind of security failures created an environment that allegedly allowed a former CIA employee to exfiltrate so much sensitive data? Here are a few, in no particular order:

  • Failing to rapidly detect security incidents.
  • Failing to act on warning signs about potentially risky employees.
  • Moving too slowly to enact key security safeguards.
  • A lack of user activity monitoring or robust server audit capability.
  • No effective removable media controls.
  • No single person empowered to ensure IT systems are built and maintained securely throughout their lifecycle.
  • Historical data available to all users indefinitely.

Substitute the phrase “cyber weapons” with “productivity” or just “IT systems” in the CIA’s report and you might be reading the post-mortem produced by a security firm hired to help a company recover from a highly damaging data breach.

A redacted portion of the CIA’s report on the Wikileaks breach.

DIVIDED WE STAND, UNITED WE FALL

A key phrase in the CIA’s report references deficiencies in “compartmentalizing” cybersecurity risk. At a high level (not necessarily specific to the CIA), compartmentalizing IT environments involves important concepts such as:

  • Segmenting one’s network so that malware infections or breaches in one part of the network can’t spill over into other areas.
  • Not allowing multiple users to share administrative-level passwords
  • Developing baselines for user and network activity so that deviations from the norm stand out more prominently.
  • Continuously inventorying, auditing, logging and monitoring all devices and user accounts connected to the organization’s IT network.

“The Agency for years has developed and operated IT mission systems outside the purview and governance of enterprise IT, citing the need for mission functionality and speed,” the CIA observed. “While often fulfilling a valid purpose, this ‘shadow IT’ exemplifies a broader cultural issue that separates enterprise IT from mission IT, has allowed mission system owners to determine how or if they will police themselves.”

All organizations experience intrusions, security failures and oversights of key weaknesses. In large enough enterprises, these failures likely happen multiple times each day. But by far the biggest factor that allows small intrusions to morph into a full-on data breach is a lack of ability to quickly detect and respond to security incidents.

Also, because employees tend to be the most abundant security weakness in any organization, instituting some kind of continuing security awareness training for all employees is a good idea. Some security experts I know and respect dismiss security awareness programs as a waste of time and money, observing that no matter how much training a company does, there will always be some percentage of users who will click on anything.

That may or may not be accurate, but even if it is, at least the organization then has a much better idea which employees probably need more granular security controls (i.e. more compartmentalizing) to keep them from becoming a serious security liability.

Sen. Wyden’s letter (PDF), first reported on by The Washington Post, is worth reading because it points to a series of continuing security weaknesses at the CIA, many of which have already been addressed by other federal agencies, including multi-factor authentication for domain names and access to classified/sensitive systems, and anti-spam protections like DMARC.

Planet Debian: Ulrike Uhlig: On Language

Language is a tool of power

In school, we read the diary of the philologist Victor Klemperer about the changes in the German language during the Third Reich, LTI - Lingua Tertii Imperii, a book which makes it clear that the use of language is political, creates realities, and has reverse repercussions on the concepts of an entire society. Language was one of the tools that supported Nazism in insidiously pervading all parts of society.

Language shapes our concepts of society

Around the same time, a friend of mine proposed to read Egalia's daughters by Gerd Brantenberg, a book in which gendered words were reversed: so that human becomes huwim, for example. This book made me take notice of gendered concepts that often go unnoticed.

Language shapes the way we think and feel

I spent a large part of my adult life in France, which confronted me with the realization that a language provides its speakers with certain concepts. If a concept does not exist in a language, people cannot easily feel or imagine this concept either.

Back then (roughly 20 years ago), even though I was aware of gender inequality, I hated using gender neutral language because in German and French it felt unnatural, and, or so I thought, we were all alike. One day, at a party, we played a game that consisted in guessing people's professions by asking them Yes/No questions. Turns out that we were unable to guess that the woman we were talking with was a doctor, because we could simply not imagine this profession for a young woman. In French, docteur is male and almost nobody would use the word doctoresse, or femme docteur.

Unimaginable are also the concepts of words in German that have no equivalent in French or vice versa:

  • Sehnsucht composed of longing (sich sehnen) and obsession (Sucht). In English, this word is translated as longing. In French it is translated as nostalgie (nostalgia), but nostalgia is directed towards the past, while Sehnsucht in German can be used to designate a longing for people, places, food, even feelings, and can be used in all temporal directions. There are other approximate translations to French, for example aspiration.
  • Das Unheimliche is a German word and an essay by Sigmund Freud from 1919. The translation of the title had been subject to a lot of debate before the text was published in French under the name "L'inquiétante étrangeté", something that would translate to English as worrisome unfamiliarity. In English, there is a word for unheimlich, which is uncanny; however, canny does not carry the German concept of heimlich, which is related to home, familiarity, and secrecy.
  • Dépaysement. This French word is a negation of feeling at home in one's own country (pays): it describes the, generally positively connoted, feelings one experiences when changing habits or environment.

Or, to make all this a bit less serious, Italian has the word gattara (female) or gattaro (male), which one could translate to English roughly as cat person, most often designating old women who feed stray cats.

But really, the way language shapes our concepts and ideas goes much further, as well explained by Lera Boroditsky in a talk in which she explains how language influences concepts of space, time, and blame, among other things.

Building new models

This quote by Buckminster Fuller is pinned on the wall over my desk:

You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.

A change in language is such a new model: it can make oppression and inequalities visible. Words do not only describe our world, they are a vehicle of ideas, and utopias. Analyzing and criticizing our use of language means paving the way for ideas and concepts of inclusion, equality, and unity.

You might be guessing where I am going with this… Right: I am in favor of acknowledging past mistakes, and replacing oppressive metaphors in computing. As noted in the IETF draft about Terminology, Power and Oppressive Language, by Niels Ten Oever and Mallory Knodel, the metaphors "master/slave" and "blacklist/whitelist" associate "white with good and black with evil [which] is known as the 'bad is black effect'", all the while being technically inaccurate.

I acknowledge that this will take time. There is a lot of work to do.

Sociological Images: What’s Trending? The Happiness Drop

One important lesson from political science and sociology is that public opinion often holds steady. This is because it is difficult to get individual people to change their minds. Instead, people tend to keep consistent views as “settled dispositions” over time, and mass opinion changes slowly as new people age into taking surveys and older people age out.

Sometimes public opinion does change quickly, though, and these rapid changes are worth our attention precisely because they are rare. For example, one of the most notable recent changes is the swing toward majority support for same-sex marriage in the United States in just the last decade.

That’s why a new finding is so interesting and so troubling: NORC is reporting a pretty big swing in self-reported happiness since the pandemic broke out using a new 2020 survey conducted in late May. Compared to earlier trends from the General Social Survey, fewer people are reporting they are “very happy,” optimism about the future is down, and feelings of isolation and loneliness are up. The Associated Press has dynamic charts here, and I made an open-access, creative commons version of one visualization using GSS data and NORC’s estimates:

As with any survey trend, we will need more data to get the true shape of the change and see whether it will persist over time. Despite this, one important point here is the consistency before the new 2020 data. Think about all the times aggregated happiness reports didn’t really change: we don’t see major shifts around September 11th, 2001, and there are only small changes around the Gulf War in 1990 or the 2008 financial crisis.

There is something reassuring about such a dramatic drop now, given this past resilience. If you’re feeling bad, you’re not alone. We have to remember that emotions are social. People have a remarkable ability to persist through all kinds of trying times, but that is often because they can connect with others for support. The unprecedented isolation of physical distancing and quarantine has a unique impact on our social relationships and, in turn, it could have a dramatic impact on our collective wellbeing. The first step to fixing this problem is facing it honestly.

Inspired by demographic facts you should know cold, “What’s Trending?” is a post series at Sociological Images featuring quick looks at what’s up, what’s down, and what sociologists have to say about it.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Cryptogram: Bank Card "Master Key" Stolen

South Africa's Postbank experienced a catastrophic security failure. The bank's master PIN key was stolen, forcing it to cancel and replace 12 million bank cards.

The breach resulted from the printing of the bank's encrypted master key in plain, unencrypted digital language at the Postbank's old data centre in the Pretoria city centre.

According to a number of internal Postbank reports, which the Sunday Times obtained, the master key was then stolen by employees.

One of the reports said that the cards would cost about R1bn to replace. The master key, a 36-digit code, allows anyone who has it to gain unfettered access to the bank's systems, and allows them to read and rewrite account balances, and change information and data on any of the bank's 12-million cards.

The bank lost $3.2 million in fraudulent transactions before the theft was discovered. Replacing all the cards will cost an estimated $58 million.

Planet Debian: Wouter Verhelst: Software available through Extrepo

Just over 7 months ago, I blogged about extrepo, my answer to the "how do you safely install software on Debian without downloading random scripts off the Internet and running them as root" question. I also held a talk during the recent "MiniDebConf Online" that was held, well, online.

The most important part of extrepo is "what can you install through it". If the number of available repositories is too low, there's really no reason to use it. So, I thought, let's look what we have after 7 months...

To cut to the chase, there's a bunch of interesting content there, although not all of it has a "main" policy. Each of these can be enabled by installing extrepo, and then running extrepo enable <reponame>, where <reponame> is the name of the repository.

Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo is already quite useful in its current state:

Free software

  • The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the deb.debian.org alias for CDN-backed package mirrors.
  • The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
  • elastic: the elasticsearch software.
  • Some repositories, such as dovecot, winehq and bareos, contain upstream versions of their respective software. These repositories contain software that is available in Debian, too; but their upstreams package their most recent releases independently, and some people might prefer to run those instead.
  • The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more), contain more recent versions of software already packaged in Debian. For the sury repository, that is PHP; for the others, the name should give it away.

    The difference between these repositories and the ones above is that it is the official Debian maintainer for the same software who maintains the repository, which is not the case for the others.

  • The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., the codium version of Visual Studio Code is to code as the chromium browser is to chrome: it is a build of the same software, but without the non-free bits that make code not entirely Free Software.
  • While Debian ships with at least two browsers (Firefox and Chromium), additional browsers are available through extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy.
  • Speaking of privacy, perhaps you might want to try out the torproject repository.
  • For those who want to do Cloud Computing on Debian in ways that aren't covered by Openstack, there is a kubernetes repository that contains the Kubernetes stack, as well as the google_cloud one containing the Google Cloud SDK.

Non-free software

While these are available to be installed through extrepo, please note that non-free and contrib repositories are disabled by default. In order to use these repositories, you must first enable the relevant policies; this can be accomplished through /etc/extrepo/config.yaml.
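For reference, enabling those policies happens in extrepo's own configuration file. A rough sketch of what that might look like; the key name enabled_policies is my assumption from extrepo's documentation, so check the comments in the shipped /etc/extrepo/config.yaml before editing:

```yaml
# /etc/extrepo/config.yaml (sketch; the enabled_policies key is assumed)
enabled_policies:
- main
- contrib
- non-free
```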

  • In case you don't care about freedom and want the official build of Visual Studio Code, the vscode repository contains it.
  • While we're on the subject of Microsoft, there's also Microsoft Teams available in the msteams repository. And, hey, skype.
  • For those who are not satisfied with the free browsers in Debian or any of the free repositories, there's opera and google_chrome.
  • The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests for rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
  • For gamers, there's Valve's steam repository.

Again, the above lists are not meant to be exhaustive.

Special thanks go out to Russ Allbery, Kim Alvefur, Vincent Bernat, Nick Black, Arnaud Ferraris, Thorsten Glaser, Thomas Goirand, Juri Grabowski, Paolo Greppi, and Josh Triplett, for helping me build the current list of repositories.

Is your favourite repository not listed? Create a configuration based on template.yaml, and file a merge request!

Worse Than FailureCodeSOD: Going on an Exceptional Date

Here’s a puzzler for you: someone has written bad date handling code, but honestly, the bad date handling isn’t the real WTF. I mean, it’s bad, but it highlights something worse.

Cid inherited this method, along with a few others which we’ll probably look at in the future. It’s Java, so let’s just start with the method signature.

public static void checkTimestamp(String timestamp, String name)
  throws IOException

Honestly, that pretty much covers it. What, you don’t see it? Well, let’s break out the whole method:

public static void checkTimestamp(String timestamp, String name)
  throws IOException {
    if (timestamp == null) {
      return;
    }
    String msg = new String(
        "Wrong date or time. (" + name + "=\"" + timestamp + "\")");
    int len = timestamp.length();
    if (len != 15) {
      throw new IOException(msg);
    }
    for (int i = 0; i < (len - 1); i++) {
      if (! Character.isDigit(timestamp.charAt(i))) {
        throw new IOException(msg);
      }
    }
    if (timestamp.charAt(len - 1) != 'Z') {
      throw new IOException(msg);
    }
    int year = Integer.parseInt(timestamp.substring(0,4));
    int month = Integer.parseInt(timestamp.substring(4,6));
    int day = Integer.parseInt(timestamp.substring(6,8));
    int hour = Integer.parseInt(timestamp.substring(8,10));
    int minute = Integer.parseInt(timestamp.substring(10,12));
    int second = Integer.parseInt(timestamp.substring(12,14));
    if (day < 1) {
      throw new IOException(msg);
    }
    if ((month < 1) || (month > 12)) {
      throw new IOException(msg);
    }
    if (month == 2) {
      if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
        if (day > 29) {
          throw new IOException(msg);
        }
      }
      else {
        if (day > 28) {
          throw new IOException(msg);
    	}
      }
    }
    if (month == 1 || month == 3 || month == 5 || month == 7
    || month == 8 || month == 10 || month == 12) {
      if (day > 31) {
        throw new IOException(msg);
      }
    }
    if (month == 4 || month == 6 || month == 9 || month == 11) {
      if (day > 30) {
        throw new IOException(msg);
      }
    }
    if ((hour < 0) || (hour > 24)) {
      throw new IOException(msg);
    }
    if ((minute < 0) || (minute > 59)) {
      throw new IOException(msg);
    }
    if ((second < 0) || (second > 59)) {
      throw new IOException(msg);
    }
  }

Now, one of Java’s “interesting” ideas was adding checked exceptions to the language. If a method could throw an exception, it needs to announce what that exception is. This lets the compiler check and make sure that any exception which might be thrown is caught.

It’s also a pain for developers.

This developer felt that pain, and spent about three seconds thinking about it. "Well, something gave this method a timestamp as input, and if that input is wrong… we should throw an IOException."

Which is a choice, I guess. Better than Exception.

To “help” the calling code decide what to do, the exception helpfully sets the same exact message regardless of what went wrong: “Wrong date or time”.

But the real icing on this particular soggy pie is really the method name: checkTimestamp. From that name, we know that we don't expect it to receive a valid timestamp; we need to check. So "an incorrectly formatted timestamp" isn't an exceptional condition, it's an expected behavior. This method should return a boolean value.

Oh, and also, it should just use built-in date handling, but that really seems secondary to this abuse of exceptions.
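A boolean version is a small rewrite on top of java.time. The sketch below is mine, not Cid's (the name isValidTimestamp is invented); it uses strict resolution, which rejects impossible dates such as February 30, and is slightly stricter than the original in that it rejects hour 24:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

public class TimestampCheck {
    // "uuuu" (rather than "yyyy") is required for STRICT resolution
    // without specifying an era.
    private static final DateTimeFormatter FMT =
        DateTimeFormatter.ofPattern("uuuuMMddHHmmss'Z'")
                         .withResolverStyle(ResolverStyle.STRICT);

    public static boolean isValidTimestamp(String timestamp) {
        if (timestamp == null) {
            return true; // the original treated null as acceptable
        }
        try {
            LocalDateTime.parse(timestamp, FMT);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }
}
```

The caller then decides whether a bad timestamp is worth an exception, instead of having an IOException forced on it.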

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianRuss Allbery: Review: Network Effect

Review: Network Effect, by Martha Wells

Series: Murderbot Diaries #5
Publisher: Tor
Copyright: May 2020
ISBN: 1-250-22984-7
Format: Kindle
Pages: 351

Network Effect is the first Murderbot novel, although the fifth story of the series. The previous stories, beginning with All Systems Red, were novellas. Under no circumstances should you start reading the series here. Network Effect builds significantly on the story arc that ended with Exit Strategy and resolves some important loose ends from Artificial Condition. It's meant to be read in series order.

I believe this is the first time in my life that I've started reading a book on the night of its release. I was looking forward to this novel that much, and it does not disappoint.

I'll try not to spoil the previous books too much in this review, but at this point it's a challenge. Just go read them. They're great.

The big question I had about the first Murderbot novel was how would it change the plot dynamic of the series. All of the novellas followed roughly the same plot structure: Murderbot would encounter some humans who need help, somewhat grudgingly help them while pursuing its own agenda, snark heavily about human behavior in the process, once again prove its competence, and do a little bit of processing of its feelings and a lot of avoiding them. This formula works great at short length. Would Wells change it at novel length, or if not, would it get tedious or strained?

The answer is that Wells added in quite a bit more emotional processing and relationship management to flesh out the core of the book and created a plot with more layers and complexity than the novella plots, and the whole construction works wonderfully. This is exactly the book I was hoping for when I heard there would be a Murderbot novel. If you like the series, you'll like this, and should feel free to read it now without reading the rest of the review.

Overse added, "Just remember you're not alone here."

I never know what to say to that. I am actually alone in my head, and that's where 90 plus percent of my problems are.

Many of the loose ends in the novellas were tied up in the final one, Exit Strategy. The biggest one that wasn't, at least in my opinion, was ART, the research transport who helped Murderbot considerably in Artificial Condition and clearly was more than it appeared to be. That is exactly the loose end that Wells resolves here, to great effect. I liked the dynamic between ART and Murderbot before, but it's so much better with an audience to riff off of (and yet better still when there are two audiences, one who already knew Murderbot and one who already knew ART). I like ART almost as much as Murderbot, and that's saying a lot.

The emotional loose end of the whole series has been how Murderbot will decide to interact with other humans. I think that's not quite resolved by the end of the novel, but we and Murderbot have both learned considerably more. The novellas, except for the first, are mostly solo missions even when Murderbot is protecting clients. This is something more complicated; the interpersonal dynamics hearken back to the first novella and then go much deeper, particularly in the story-justified flashbacks. Wells uses Murderbot's irritated avoidance to keep some emotional dynamics underplayed and indirect, letting the reader discover them at opportune moments, and this worked beautifully for me. And Murderbot's dynamic with Amena is just wonderful, mostly because of how smart, matter-of-fact, trusting, and perceptive Amena is.

That's one place where the novel length helps: Wells has more room to expand the characterization of characters other than Murderbot, something that's usually limited in the novellas to a character or two. And these characters are great. Murderbot is clearly the center of the story, but the other characters aren't just furniture for it to react to. They have their own story arcs, they're thoughtful, they learn, and it's a delight to watch them slot Murderbot into various roles, change their minds, adjust, and occasionally surprise it in quite touching ways, all through Murderbot's eyes.

Thiago had said he felt like he should apologize and talk to me more about it. Ratthi had said, "I think you should let it go for a while, at least until we get ourselves out of this situation. SecUnit is a very private person, it doesn't like to discuss its feelings."

This is why Ratthi is my friend.

I have some minor quibbles. The targetSomething naming convention Murderbot comes up with and then is stuck with because it develops too much momentum is entertaining but confusing. A few of the action sequences were just a little on the long side; I find the emotional processing much more interesting. There's also a subplot with a character with memory holes and confusion that I thought dragged on too long, mostly because I found the character intensely irritating for some reason. But these are just quibbles. Network Effect is on par with the best of the novellas that precede it, and that's a high bar indeed.

In this series, Wells has merged the long-running science fiction thread of artificial intelligences and the humanity of robots with the sarcastic and introspective first-person narration of urban fantasy, gotten the internal sensation of emotional avoidance note-perfect without making it irritating (that's some deep magic right there), and added in some top-tier negotiation of friendship and relationships without losing the action and excitement of a great action movie. It's a truly impressive feat and the novel is the best installment so far. I will be stunned if Network Effect doesn't make most of the award lists next year.

Followed by Fugitive Telemetry, due out in April of 2021. You can believe that I have already preordered it.

Rating: 9 out of 10

,

TEDConversations on rebuilding society: Week 4 of TED2020

For week 4 of TED2020, leaders in international development, history, architecture and public policy explored how we might rebuild during the COVID-19 pandemic and the ongoing protests against racial injustice in the United States. Below, a recap of their insights.

Achim Steiner, head of the UNDP, discusses how the COVID-19 pandemic is leading people to reexamine the future of society. He speaks at TED2020: Uncharted on June 8, 2020. (Photo courtesy of TED)

Achim Steiner, head of the United Nations Development Programme

Big idea: The public and private sectors must work together to rebuild communities and economies from the COVID-19 pandemic.

Why? When the coronavirus hit, many governments and organizations were unprepared and ill-equipped to respond effectively, says Achim Steiner. He details the ways the UNDP is partnering with both private companies and state governments to help developing countries rebuild, including delivering medicine and supplies, setting up Zoom accounts for governing bodies and building virus tracking systems. Now that countries are beginning to think broadly about life after COVID-19, Steiner says that widespread disenchantment with the state is leading people to question the future of society. They’re rethinking the relationship between the state and its citizens, the role of the private sector and the definition of a public good. He believes that CEOs and business leaders need to step forward and forge alliances with the public sector in order to address societal inequalities and shape the future of economies. “It is not that the state regulates all the problems and the private sector is essentially best off if it can just focus on its own shareholders or entrepreneurial success,” he says. “We need both.”


“The heartbeat of antiracism is confession,” says author and historian Ibram X. Kendi. He speaks at TED2020: Uncharted on June 9, 2020. (Photo courtesy of TED)

Ibram X. Kendi, Author and historian

Big idea: To create a more just society, we need to make antiracism part of our everyday lives.

How? There is no such thing as being “not racist,” says Ibram X. Kendi. He explains that an idea, behavior or policy is either racist (suggesting that any racial group is superior or inferior in any way) or antiracist (suggesting that the racial groups are equals in all their apparent differences). In this sense, “racist” isn’t a fixed identity — a bad, evil person — but rather a descriptive term, highlighting what someone is doing in a particular moment. Anyone can be racist or antiracist; the difference is found in how we choose to see ourselves and others. Antiracism is vulnerable work, Kendi says, and it requires persistent self-awareness, self-examination and self-criticism, grounded in a willingness to concede your privileges and admit when you’re wrong. As we learn to more clearly recognize, take responsibility for and reject prejudices in our public policies, workplaces and personal beliefs, we can actively use this awareness to uproot injustice and inequality in the world — and replace it with love. “The heartbeat of racism itself has always been denial,” he says. “The heartbeat of antiracism is confession.” Watch the full discussion on TED.com.


What’s the connection between poetry and policy? Aaron Maniam explains at TED2020: Uncharted on June 10, 2020. (Photo courtesy of TED)

Aaron Maniam, Poet and policymaker

Big idea: By crafting a range of imaginative, interlocking metaphors, we can better understand COVID-19, its real-time impacts and how the pandemic continues to change our world.

How? As a poet and a policymaker in Singapore, Maniam knows the importance of language to capture and evoke the state of the world — and to envision our future. As people across the world share their stories of the pandemic’s impact, a number of leading metaphors have emerged. In one lens, humanity has “declared war” on COVID-19 — but that angle erases any positive effects of the pandemic, like how many have been able to spend more time with loved ones. In another lens, COVID-19 has been a global “journey” — but that perspective can simplify the way class, race and location severely impact how people move through this time. Maniam offers another lens: that the pandemic has introduced a new, constantly evolving “ecology” to the world, irrevocably changing how we live on local, national and global levels. But even the ecology metaphor doesn’t quite encompass the entirety of this era, he admits. Maniam instead encourages us to examine and reflect on the pandemic across a number of angles, noting that none of these lenses, or any others, are mutually exclusive. Our individual and collective experiences of this unprecedented time deserve to be told and remembered in expansive, robust and inclusive ways. “Each of us is never going to have a monopoly on truth,” he says. “We have to value the diversity that others bring by recognizing their identity diversity … and their competent diversity — the importance of people coming from disciplines like engineering, history, public health, etc. — all contributing to a much richer understanding and totality of the situation we’re in.”


Vishaan Chakrabarti explores how the coronavirus pandemic might reshape life in cities. He speaks at TED2020: Uncharted on June 10, 2020. (Photo courtesy of TED)

Vishaan Chakrabarti, Architect

Big idea: Cities are facing a crisis of inequity and a crisis in health. To recover and heal, we need to plan our urban areas around inclusion and equality. 

How? In order to implement a new urban agenda rooted in equity, Vishaan Chakrabarti says that we need to consider three components: affordable housing and accessible health care; sustainable urban mobility; and attainable social and cultural resources. Chakrabarti shatters the false narrative of having to choose between an impoverished city or a prosperous one, instead envisioning one whose urban fabric is diverse with reformed housing policies and budgets. “Housing is health,” he says. “You cannot have a healthy society if people are under housing stress or have homelessness.” With a third of public space dedicated to private cars in many cities, Chakrabarti points to the massive opportunity we have to dedicate more space to socially distanced ways to commute and ecologically conscious modes of transportation, like walking or biking. We will need to go directly to communities and ask what their needs are to build inclusive, eco-friendly and scalable solutions. “We need a new narrative of generosity, not austerity,” he says.

Planet DebianEmmanuel Kasper: Test a webcam from the command line on Linux with VLC

Since this info was too well hidden on the internet, here is the information:
cvlc v4l2://
and there you go.

CryptogramEavesdropping on Sound Using Variations in Light Bulbs

New research is able to recover sound waves in a room by observing minute changes in the room's light bulbs. This technique works from a distance, even from a building across the street through a window.

Details:

In an experiment using three different telescopes with different lens diameters from a distance of 25 meters (a little over 82 feet) the researchers were successfully able to capture sound being played in a remote room, including The Beatles' Let It Be, which was distinguishable enough for Shazam to recognize it, and a speech from President Trump that Google's speech recognition API could successfully transcribe. With more powerful telescopes and a more sensitive analog-to-digital converter, the researchers believe the eavesdropping distances could be even greater.

It's not expensive: less than $1,000 worth of equipment is required. And unlike other techniques like bouncing a laser off the window and measuring the vibrations, it's completely passive.

News articles.

Planet DebianHideki Yamane: excitement kills thinking

"master is wrong word!!! Stop to use it in tech world!!!"

Oh, such activity reminds me of the Great Proletarian Cultural Revolution (无产阶级文化大革命).

Just changing the words does not solve the problems in the real-world, IMHO (of course, it's my opinion, may it be different from yours).

Worse Than FailureCodeSOD: Dates Float

In a lot of legacy code, I've come across "integer dates". It's a pretty common way to store dates in a compact format: an integer in the form "YYYYMMDD", e.g., 20200616. It's relatively compact, and it remains human readable (unlike a Unix epoch). It's not too difficult to play with the modulus and rounding operators to split it back into date parts, if you need to, though mostly we'd use something like this as an ID-like value, or for sorting.

Thanks to Katie E I've learned about a new format: decimal years. The integer portion is the year, and the decimal portion is how far through that year you are, e.g. 2020.4547. This is frequently used in statistical modeling to manage time-series data. Once again, it's not meant to be parsed back into an actual date, but if you're careful, you can do it.

Unless you're whoever wrote this C++ code, which Katie found.


*unYear = (unsigned short)m_fEpoch;
*unMonth = (unsigned short)(((m_fEpoch - (float)*unYear) * 365.0) / 12.0) + 1;

Right off the bat, we can see that they're using pointers to these values: *unYear tells us that unYear must be a pointer. This isn't wrong, but it's a code smell. I've got to wonder why they're doing that. It's not wrong, it just makes me suspicious.

The goal, as you can see from the variable names, is to figure out which month we're in. So the first step is to remove the year portion- (unsigned short)m_fEpoch will truncate the value to just the year, which means the next expression gets us the progress through the year, the decimal portion: (m_fEpoch - (float)*unYear).

So far, so good. Then we make our first mistake: we multiply by 365. So, on leap years, you'll sometimes be a day off. Still, that gives us the day of the year, give or take a bit. And then we make our second mistake: we divide by 12. That'd be great if every month were the same length, but they're not.

Except wait, no, that wouldn't be great, because we've just gotten our divisors backwards. 365/12 gives us 30.416667. We're not dividing the year into twelve equally sized months, we're dividing it into thirty equally sized months.

I've seen a lot of bad date handling code, and it's so rare to see something I've never seen before, an interesting new way to mess up dates. This block manages to fail to do its job in a surprising number of ways.

In any case, summer approaches, so I hope everyone enjoys being nearly through the month of Tredecimber. Only 17 more months to go before 2020 is finally over.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianRitesh Raj Sarraf: Kodi PS3 BD Remote

Setting up a Sony PS3 Blu-Ray Disc Remote Controller with Kodi

TLDR; Since most of the articles on the internet were either obsolete or broken, I’ve chosen to write these notes down in the form of a blog post so that it helps me now and in future, and hopefully others too.

Raspberry Pi

All this time, I have been using the Raspberry Pi for my HTPC needs. The first RPi I acquired was in 2014 and I have been very very happy with the amount of support in the community and quality of the HTPC offering it has. I also appreciate the RPi's form factor and the power consumption limits. And then, to add more sugar to it, it uses a derivative of Debian, Raspbian, which was very familiar and felt good to me.

Raspberry Pi Issues

So primarily, I use my RPi with Kodi. There are a bunch of other (daemon) services, but the primary use case is HTPC only. RPi + Kodi has a very very annoying issue wherein it loses its audio level during video playback. The loss is so bad that the audio is barely audible. The workaround is to seek the video playback either way, after which the audio comes back to its actual level, just to fade again in a while.

My suspicion was that it may be a problem with Kodi. Or at least, Kodi would have a workaround in software. But unfortunately, I wasted a lot of time in dealing with my suspicion with no fruitful result.

This started becoming a PITA over time. And it seems the issue is with the hardware itself because after I moved my setup to a regular laptop, the audio loss is gone.

Laptop with Kodi

Since I had my old Lenovo Yoga 2 13 lying around, powered on all the time, it made sense to make some more use of it, as the HTPC. This machine comes with a Micro-HDMI Out port, so it felt ideal for my High Definition video rendering needs.

It comes stock with just Intel HD graphics, which has good driver support in Linux, so it was quite quick and easy getting Kodi up and running on it. And as I mentioned above, the sound issues are not seen on this setup.

Some added benefits are that I get to run stock Debian on this machine. And I must say a big THANK YOU to the Debian Multimedia Maintainers, who’ve done a pretty good job maintaining Kodi under Debian.

HDMI CEC

Only after I decommissioned my RPi did I come to notice how convenient the HDMI CEC functionality is. It turns out no standard laptops ship with CEC functionality. Even my laptop, which has a Micro-HDMI Out port, still has no CEC capabilities. As far as I know, the RPi came with the Pulse-Eight CEC module, so the obvious first thought was to opt for a compatible external module of the same; but it comes with a nice price tag, which I'm not willing to spend.

WiFi Remotes

Kodi has a very well implemented network interface for almost all its features. One could use the Yatse or Music Pump Kodi Remote Android applications, which work very very well with Kodi.

But wifi can be flaky sometimes. In particular, my experience with Realtek network devices hasn't been very good. The driver support in Linux is okay, but there are many firmware bugs to deal with. In my case, the machine will lose the wifi signal/network every once in a while. And it turns out that, for this machine, with this network device type, I'm not the only one running into such problems.

And to add to that, this is an UltraBook, which means it doesn't have an Ethernet port. So I've not had much choice other than to live with it and deal with it.

The WiFi chip also provides the Bluetooth module, which so far I had not used much. In my /etc/modprobe.d/blacklist-memstick.conf, all the relevant BT modules had been blacklisted all this time.

rrs@lenovo:~$ cat /etc/modprobe.d/blacklist-memstick.conf 
blacklist memstick
blacklist rtsx_usb_ms

# And bluetooth too
#blacklist btusb
#blacklist btrtl
#blacklist btbcm
#blacklist btintel
#blacklist bluetooth
21:21 ♒♒♒   ☺ 😄    

Also to keep in mind is that the driver for my card gives a very misleading kernel message, which is one of the many reasons for this blog post, so that I don’t forget it a couple of months later. The missing firmware error message is okay to ignore, as per this upstream comment.

Jun 14 17:17:08 lenovo kernel: usbcore: registered new interface driver btusb
Jun 14 17:17:08 lenovo systemd[1]: Mounted /boot/efi.
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: examining hci_ver=06 hci_rev=000b lmp_ver=06 lmp_subver=8723
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: rom_version status=0 version=1
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8723b_fw.bin
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: firmware: direct-loading firmware rtl_bt/rtl8723b_fw.bin
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8723b_config.bin
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: firmware: failed to load rtl_bt/rtl8723b_config.bin (-2)
Jun 14 17:17:08 lenovo kernel: firmware_class: See https://wiki.debian.org/Firmware for information about missing firmware
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: Direct firmware load for rtl_bt/rtl8723b_config.bin failed with error -2
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: cfg_sz -2, total sz 22496

This device’s network + bt are on the same chip.

01:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8723BE PCIe Wireless Network Adapter

And then, when the btusb module is initialized (along with the misleading driver message), you'll get the following in your USB device listing

Bus 002 Device 005: ID 0bda:b728 Realtek Semiconductor Corp. Bluetooth Radio

Sony PlayStation 3 BD Remote

Almost 10 years ago, I bought the PS3 and many of its accessories. The remote has just been rotting on the shelf. It had rusted so badly that it is better described with these pics.

There was so much rust that the battery-holding spring gave up.

A little bit of scrubbing and cleaning has gotten it working. I hope it lasts for some time before I find time to open it up and give it a full clean-up.

Pairing the BD Remote to laptop

Honestly, with the condition of the hardware and software on both ends, I did not have much hope of getting this to work. And in all my years of computer usage, I hardly recall many days when I've made use of BT. Probably because the full BT stack wasn't that well integrated in Linux earlier. And I mostly used to disable it in hardware and software to save on battery.

All the results the internet yielded talked about tools/scripts that were either not working or pointed to broken links, etc.

These days, bluez comes with a nice utility, bluetoothctl. It was a nice experience using it.

First, start your bluetooth service and ensure that the device talks well with the kernel

rrs@lenovo:~$ systemctl status bluetooth                                                                                                          
● bluetooth.service - Bluetooth service
     Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)                                                      
     Active: active (running) since Mon 2020-06-15 12:54:58 IST; 3s ago                                                                           
       Docs: man:bluetoothd(8)                                                                                                                    
   Main PID: 310197 (bluetoothd)                                                                                                                  
     Status: "Running"                                                                                                                            
      Tasks: 1 (limit: 9424)                                                                                                                      
     Memory: 1.3M                                                                                                                                 
     CGroup: /system.slice/bluetooth.service                                                                                                      
             └─310197 /usr/lib/bluetooth/bluetoothd                                                                                               
                                                                                                                                                  
Jun 15 12:54:58 lenovo systemd[1]: Starting Bluetooth service...                                                                                  
Jun 15 12:54:58 lenovo bluetoothd[310197]: Bluetooth daemon 5.50                                                                                  
Jun 15 12:54:58 lenovo systemd[1]: Started Bluetooth service.                                                                                     
Jun 15 12:54:58 lenovo bluetoothd[310197]: Starting SDP server                                                                                    
Jun 15 12:54:58 lenovo bluetoothd[310197]: Bluetooth management interface 1.15 initialized                                                        
Jun 15 12:54:58 lenovo bluetoothd[310197]: Sap driver initialization failed.                                                                      
Jun 15 12:54:58 lenovo bluetoothd[310197]: sap-server: Operation not permitted (1)                                                                
12:55 ♒♒♒   ☺ 😄                                                                                                                               

Next is to discover and connect to your device:

rrs@lenovo:~$ bluetoothctl 
Agent registered
[bluetooth]# devices
Device E6:3A:32:A4:31:8F MI Band 2
Device D4:B8:FF:43:AB:47 MI RC
Device 00:1E:3D:10:29:0F BD Remote Control
[CHG] Device 00:1E:3D:10:29:0F Connected: yes

[BD Remote Control]# info 00:1E:3D:10:29:0F
Device 00:1E:3D:10:29:0F (public)
        Name: BD Remote Control
        Alias: BD Remote Control
        Class: 0x0000250c
        Paired: no
        Trusted: yes
        Blocked: no
        Connected: yes
        LegacyPairing: no
        UUID: Human Interface Device... (00001124-0000-1000-8000-00805f9b34fb)
        UUID: PnP Information           (00001200-0000-1000-8000-00805f9b34fb)
        Modalias: usb:v054Cp0306d0100
[bluetooth]# 

In the case of the Sony BD Remote, there’s no need to pair. In fact, trying to pair fails. It prompts for the PIN code, but neither 0000 nor 1234 is accepted.

So, the working steps so far are to Trust the device and then Connect the device.
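For reference, that trust-then-connect sequence inside bluetoothctl is just the following (using the remote’s address from above; put the remote into discoverable mode first by holding Start + Enter):

```
[bluetooth]# scan on
[bluetooth]# trust 00:1E:3D:10:29:0F
[bluetooth]# connect 00:1E:3D:10:29:0F
```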

For the sake of future use, I also populated /etc/bluetooth/input.conf based on suggestions on the internet. Note: the advertised keymappings in this config file do not work; I’m only using it for the power-saving measure of instructing the BT connection to sleep after 3 minutes.

rrs@priyasi:/tmp$ cat input.conf 
# Configuration file for the input service

# This section contains options which are not specific to any
# particular interface
[General]

# Set idle timeout (in minutes) before the connection will
# be disconnect (defaults to 0 for no timeout)
IdleTimeout=3

# Enable HID protocol handling in userspace input profile
# Defaults to false (HIDP handled in HIDP kernel module)
#UserspaceHID=true

# Limit HID connections to bonded devices
# The HID Profile does not specify that devices must be bonded, however some
# platforms may want to make sure that input connections only come from bonded
# device connections. Several older mice have been known for not supporting
# pairing/encryption.
# Defaults to false to maximize device compatibility.
#ClassicBondedOnly=true

# LE upgrade security
# Enables upgrades of security automatically if required.
# Defaults to true to maximize device compatibility.
#LEAutoSecurity=true
#

#[00:1E:3D:10:29:0F]
[2c:33:7a:8e:d6:30]

[PS3 Remote Map]
# When the 'OverlayBuiltin' option is TRUE (the default), the keymap uses
# the built-in keymap as a starting point.  When FALSE, an empty keymap is
# the starting point.
#OverlayBuiltin = TRUE
#buttoncode = keypress    # Button label = action with default key mappings
#OverlayBuiltin = FALSE
0x16 = KEY_ESC            # EJECT = exit
0x64 = KEY_MINUS          # AUDIO = cycle audio tracks
0x65 = KEY_W              # ANGLE = cycle zoom mode
0x63 = KEY_T              # SUBTITLE = toggle subtitles
0x0f = KEY_DELETE         # CLEAR = delete key
0x28 = KEY_F8             # /TIME = toggle through sleep
0x00 = KEY_1              # NUM-1
0x01 = KEY_2              # NUM-2
0x02 = KEY_3              # NUM-3
0x03 = KEY_4              # NUM-4
0x04 = KEY_5              # NUM-5
0x05 = KEY_6              # NUM-6
0x06 = KEY_7              # NUM-7
0x07 = KEY_8              # NUM-8
0x08 = KEY_9              # NUM-9
0x09 = KEY_0              # NUM-0
0x81 = KEY_F2             # RED = red
0x82 = KEY_F3             # GREEN = green
0x80 = KEY_F4             # BLUE = blue
0x83 = KEY_F5             # YELLOW = yellow
0x70 = KEY_I              # DISPLAY = show information
0x1a = KEY_S              # TOP MENU = show guide
0x40 = KEY_M              # POP UP/MENU = menu
0x0e = KEY_ESC            # RETURN = back/escape/cancel
0x5c = KEY_R              # TRIANGLE/OPTIONS = cycle through recording options
0x5d = KEY_ESC            # CIRCLE/BACK = back/escape/cancel
0x5f = KEY_A              # SQUARE/VIEW = Adjust Playback timestretch
0x5e = KEY_ENTER          # CROSS = select
0x54 = KEY_UP             # UP = Up/Skip forward 10 minutes
0x56 = KEY_DOWN           # DOWN = Down/Skip back 10 minutes
0x57 = KEY_LEFT           # LEFT = Left/Skip back 5 seconds
0x55 = KEY_RIGHT          # RIGHT = Right/Skip forward 30 seconds
0x0b = KEY_ENTER          # ENTER = select
0x5a = KEY_F10            # L1 = volume down
0x58 = KEY_J              # L2 = decrease the play speed
0x51 = KEY_HOME           # L3 = commercial skip previous
0x5b = KEY_F11            # R1 = volume up
0x59 = KEY_U              # R2 = increase the play speed
0x52 = KEY_END            # R3 = commercial skip next
0x43 = KEY_F9             # PS button = mute
0x50 = KEY_M              # SELECT = menu (as per PS convention)
0x53 = KEY_ENTER          # START = select / Enter (matches terminology in mythwelcome)
0x30 = KEY_PAGEUP         # PREV = jump back (default 10 minutes)
0x76 = KEY_J              # INSTANT BACK (newer RCs only) = decrease the play speed
0x75 = KEY_U              # INSTANT FORWARD (newer RCs only) = increase the play speed
0x31 = KEY_PAGEDOWN       # NEXT = jump forward (default 10 minutes)
0x33 = KEY_COMMA          # SCAN BACK =  decrease scan forward speed / play
0x32 = KEY_P              # PLAY = play/pause
0x34 = KEY_DOT            # SCAN FORWARD decrease scan backard speed / increase playback speed; 3x, 5, 10, 20, 30, 60, 120, 180
0x60 = KEY_LEFT           # FRAMEBACK = Left/Skip back 5 seconds/rewind one frame
0x39 = KEY_P              # PAUSE = play/pause
0x38 = KEY_P              # STOP = play/pause
0x61 = KEY_RIGHT          # FRAMEFORWARD = Right/Skip forward 30 seconds/advance one frame
0xff = KEY_MAX
21:48 ♒ ♅ ⛢  ☺ 😄    

I have not spent much time finding out why not all the key presses work, especially given that most places on the internet mention these mappings. For me, some of the key scan codes aren’t even reported: keys like L1, L2, L3, R1, R2, R3, Next_Item and Prev_Item generate no codes in the kernel.
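To sanity-check which scan codes the keymap above claims to map, a small parser for the `0xNN = KEY_X  # comment` format is enough (a sketch; the sample lines are copied from the file above):

```python
import re

# Parse `0xNN = KEY_X  # comment` lines into a {scancode: keyname} dict.
SAMPLE = """\
0x16 = KEY_ESC            # EJECT = exit
0x64 = KEY_MINUS          # AUDIO = cycle audio tracks
0xff = KEY_MAX
"""

def parse_keymap(text):
    mapping = {}
    for line in text.splitlines():
        m = re.match(r"\s*(0x[0-9A-Fa-f]+)\s*=\s*(KEY_\w+)", line)
        if m:
            mapping[int(m.group(1), 16)] = m.group(2)
    return mapping

print(parse_keymap(SAMPLE)[0x16])  # → KEY_ESC
```

Comparing such a parsed dict against what evtest/the kernel actually reports would show exactly which buttons are silent.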

If anyone has suggestions, ideas or fixes, I’d appreciate if you can drop a comment or email me privately.

But given my limited aim of getting a simple remote ready to use with Kodi, I was content with only some of the keys working.

Mapping the keys in Kodi

With the limited number of keys detected, mapping those keys to what Kodi could use was the next step. Kodi has a very nice and easy to use module, Keymap Editor. It is very simple to use and map detected keys to functionalities you want. With it, I was able to get a functioning remote to use with my Kodi HTPC setup.

Update: Wed Jun 17 11:38:20 2020

One annoying problem that breaks the overall experience is the following bug on the driver side, which results in connections not being established instantly.

Once the device goes into sleep mode, in random attempts, waking up and re-establishing a BT connection can be a multi-poll affair. This can last from a couple of seconds to well over a minute.

Random suggestions on the internet mention disabling the autosuspend functionality for the device in the driver with btusb.enable_autosuspend=n, but that did not help in this case.

Given that this device is enumerated over the USB bus, it probably needs this feature applied to the whole USB tree of the device’s chain. Something to investigate over the weekend.

Jun 16 20:41:23 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 7
Jun 16 20:41:43 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 8
Jun 16 20:41:59 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 9
Jun 16 20:42:18 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:10/0005:054C:030>
Jun 16 20:42:18 lenovo kernel: sony 0005:054C:0306.0006: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
Jun 16 20:51:59 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:11/0005:054C:030>
Jun 16 20:51:59 lenovo kernel: sony 0005:054C:0306.0007: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Supervising 3 threads of 1 processes of 1 users.
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Successfully made thread 32747 of process 1646 owned by '1000' RT at priority 5.
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Supervising 4 threads of 1 processes of 1 users.
Jun 16 21:05:56 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 12
Jun 16 21:06:12 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 1
Jun 16 21:06:34 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 2
Jun 16 21:06:59 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:3/0005:054C:0306>
Jun 16 21:06:59 lenovo kernel: sony 0005:054C:0306.0008: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30

Others

There’s a package, kodi-eventclients-ps3, which can be used to talk to the BD Remote. Unfortunately, it isn’t up-to-date. When trying to make use of it, I ran into a couple of problems.

First, the easy one is:

rrs@lenovo:~/ps3pair$ kodi-ps3remote localhost 9777
usr/share/pixmaps/kodi//bluetooth.png
Traceback (most recent call last):
  File "/usr/bin/kodi-ps3remote", line 220, in <module>
  File "/usr/bin/kodi-ps3remote", line 208, in main
    xbmc.connect(host, port)
    packet = PacketHELO(self.name, self.icon_type, self.icon_file)
  File "/usr/lib/python3/dist-packages/kodi/xbmcclient.py", line 285, in __init__
    with open(icon_file, 'rb') as f:
11:16 ♒♒♒    ☹ 😟=> 1  

This one was simple as it was just a broken path.

The second issue with the tool is a leftover from the Python 2 to Python 3 conversion.

rrs@lenovo:/etc/bluetooth$ kodi-ps3remote localhost
/usr/share/pixmaps/kodi//bluetooth.png
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
Redmi (E8:5A:8B:73:57:44) in range
Living Room TV (E4:DB:6D:24:23:E9) in range
Could not find BD Remote Control. Trying again...
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
Living Room TV (E4:DB:6D:24:23:E9) in range
Redmi (E8:5A:8B:73:57:44) in range
Could not find BD Remote Control. Trying again...
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
BD Remote Control (00:1E:3D:10:29:0F) in range
Found BD Remote Control with address 00:1E:3D:10:29:0F
Attempting to pair with remote
Remote Paired.
Traceback (most recent call last):
  File "/usr/bin/kodi-ps3remote", line 221, in <module>
    main()
  File "/usr/bin/kodi-ps3remote", line 212, in main
    if process_keys(remote, xbmc):
  File "/usr/bin/kodi-ps3remote", line 164, in process_keys
    keycode = data.encode("hex")[10:12]
AttributeError: 'bytes' object has no attribute 'encode'
11:24 ♒♒♒    ☹ 😟=> 1  
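The fix for that traceback is simple: in Python 3, bytes objects no longer have .encode("hex"), but they do have a .hex() method (binascii.hexlify is the other option). A sketch on a made-up payload (not a real HID report from the remote):

```python
# Python 2: keycode = data.encode("hex")[10:12]
# Python 3 equivalent, shown on a made-up payload:
data = b"\xa1\x01\x00\x00\x00\x16\x00"
keycode = data.hex()[10:12]  # same slice the original code used
print(keycode)  # → 16
```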

Fixing that too did not give me the desired result of using the BD Remote the way I want. So eventually, I gave up and used Kodi’s Keymap Editor instead.

Next

Next in line, when I can manage to get some free time, is to improve the Kodi video scraper to have a fallback mode. Currently, for files where it cannot determine the content, it rejects the file, resulting in those files not showing up in your collection at all. A better approach would be a fallback mode: when the scraper cannot determine the content, it should fall back to the filename scraper.

Planet Linux AustraliaJames Morris: Linux Security Summit North America 2020: Online Schedule

Just a quick update on the Linux Security Summit North America (LSS-NA) for 2020.

The event will take place over two days as an online event, due to COVID-19.  The dates are now July 1-2, and the full schedule details may be found here.

The main talks are:

There are also short (30 minute) topics:

This year we will also have a Q&A panel at the end of each day, moderated by Elena Reshetova. The panel speakers are:

  • Nayna Jain
  • Andrew Lutomirski
  • Dmitry Vyukov
  • Emily Ratliff
  • Alexander Popov
  • Christian Brauner
  • Allison Marie Naaktgeboren
  • Kees Cook
  • Mimi Zohar

LSS-NA this year is included with OSS+ELC registration, which is USD $50 all up.  Register here.

Hope to see you soon!

Planet DebianDirk Eddelbuettel: Rcpp 1.0.5 in two+ weeks: Please help test

rcpp logo

With the current four-month release cycle, the next Rcpp release is due in July following the 1.0.4 release in March. Just prior to the 1.0.4 release I had asked this:

It would be particularly beneficial if those with “unusual” build dependencies tested it as we would increase overall coverage beyond what I get from testing against 1800+ CRAN packages. BioConductor would also be welcome.

but only on the rcpp-devel list, and only about a good week prior to the release.

I remain rather disappointed and disillusioned about what happened after 1.0.4 was released. Two PRs in that release were soon seen to have side effects on more ‘marginal’ test systems, precisely what added testing could have revealed. An additional issue arose from changes in R’s make system, which is harder to anticipate or test. Each and every infelicity was fixed within a day or so, and we always make candidate releases available—the current Rcpp as of this writing is 1.0.4.12 meaning twelve microreleases were made since 1.0.4. And those microreleases are always available for normal download and install.packages use via the Rcpp drat repository accessible to all. So it was truly troubling to see some, especially those with experience in setting up or running testing / ci platforms, pretend to be unable to access, install, and provide these for their own tests, or the tests of their users. It just doesn’t pass a basic logic test: it takes a single call to install.packages(), or, even more easily, a single assignment of an auxiliary repo. All told this was a rather sad experience.

So let’s try to not repeat this. If you, or maybe users of a build or CI system you maintain, rely on Rcpp, and especially if you do so on systems outside the standard CRAN grid of three OSs and the triplet of “previous, current, next” releases of R, then please help by testing. I maintain these releases as a volunteer, unpaid at that, and I simply cannot expand to more systems. We take reverse dependency checks seriously (and I just ran two, taking about a day each), but if you insist on building on stranger hardware or much older releases it will be up to you to ensure Rcpp passes. We prep for CRAN, and try our best to pass at CRAN. For nearly a dozen years.

To install the current microrelease from the Rcpp drat repository, just do
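For example, assuming the usual RcppCore drat URL (an assumption; adjust the repos argument if the repository lives elsewhere):

```r
install.packages("Rcpp", repos = "https://rcppcore.github.io/drat")
```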

That is all there is to it. You could even add the Rcpp drat repository to your repository list.

Rcpp has become successful because so many people help with suggestions, documentation, and code. It is used by (as of today) 1958 CRAN packages, 205 BioConductor packages, and downloaded around a million times per month. So if you can, please help now with some more testing.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

TEDWHAAAAAT?: The talks of TED2020 Session 4

For Session 4 of TED2020, experts in biohacking, synthetic biology, psychology and beyond explored topics ranging from discovering the relationship between the spinal cord and asparagus to using tools of science to answer critical questions about racial bias. Below, a recap of the night’s talks and performances.

“Every scientist can tell you about the time they ignored their doubts and did the experiment that would ‘never’ work,” says biomedical researcher Andrew Pelling. “And the thing is, every now and then, one of those experiments works out.” He speaks at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Andrew Pelling, biomedical researcher

Big idea: Could we use asparagus to repair spinal cords?

How? Andrew Pelling researches how we might use fruits, vegetables and plants to reconstruct damaged or diseased human tissues. (Check out his 2016 talk about making ears out of apples.) His lab strips these organisms of their DNA and cells, leaving just the fibers behind, which are then used as “scaffolds” to reconstruct tissue. Now, they’re busy working with asparagus, experimenting to see if the vegetable’s microchannels can guide the regeneration of cells after a spinal cord injury. There’s evidence in rats that it’s working, the first data of its kind to show that plant tissues might be capable of repairing such a complex injury. Pelling is also the cofounder of Spiderwort, a startup that’s translating these innovative discoveries into real-world applications. “Every scientist can tell you about the time they ignored their doubts and did the experiment that would ‘never’ work,” he says. “And the thing is, every now and then, one of those experiments works out.”


Synthetic designer Christina Agapakis shares projects that blur the line between art and science at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Christina Agapakis, synthetic designer

Big idea: Synthetic biology isn’t an oxymoron; it investigates the boundary between nature and technology — and it could shape the future.

How? From teaching bacteria how to play sudoku to self-healing concrete, Christina Agapakis introduces us to the wonders of synthetic biology: a multidisciplinary science that seeks to create and sometimes redesign systems found in nature. “We have been promised a future of chrome, but what if the future is fleshy?” asks Agapakis. She delves into the ways biology could expand technology and alter the way we understand ourselves, exposing the surprisingly blurred lines between art, science and society. “It starts by recognizing that we as synthetic biologists are also shaped by a culture that values ‘real’ engineering more than any of the squishy stuff. We get so caught up in circuits and what happens inside of computers that we sometimes lose sight of the magic that’s happening inside of us,” says Agapakis.

Jess Wolfe and Holly Laessig of Lucius perform “White Lies” and “Turn It Around” at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED.)

Jess Wolfe and Holly Laessig of indie pop band Lucius provide an enchanting musical break between talks, performing their songs “White Lies” and “Turn It Around.”


“[The] association with blackness and crime … makes its way into all of our children, into all of us. Our minds are shaped by the racial disparities we see out in the world, and the narratives that help us to make sense of the disparities we see,” says psychologist Jennifer L. Eberhardt. She speaks at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Jennifer L. Eberhardt, psychologist

Big idea: We can use science to break down the societal and personal biases that unfairly target Black people.

How? When Jennifer Eberhardt flew with her five-year-old son one day, he turned to her after looking at the only other Black man on the plane and said, “I hope he doesn’t rob the plane” — showing Eberhardt undeniable evidence that racial bias seeps into every crack of society. For Eberhardt, a MacArthur-winning psychologist specializing in implicit bias, this surfaced a key question at the core of our society: How do we break down the societal and personal biases that target blackness? Just because we’re vulnerable to bias doesn’t mean we need to act on it, Eberhardt says. We can create “friction” points that eliminate impulsive social media posts based on implicit bias, such as when Nextdoor fought back against its “racial profiling problem” that required users to answer a few simple questions before allowing them to raise the alarm on “suspicious” visitors to their neighborhoods. Friction isn’t just a matter of online interaction, either. With the help of similar questions, the Oakland Police Department instituted protocols that reduce traffic stops of African-Americans by 43 percent. “Categorization and the bias that it seeds allow our brains to make judgments more quickly and efficiently,” Eberhardt says. “Just as the categories we create allow us to make quick decisions, they also reinforce bias — so the very things that help us to see the world also can blind us to it. They render our choices effortless, friction-free, yet they exact a heavy toll.”


 

Biological programmer Michael Levin (right) speaks with head of TED Chris Anderson about the wild frontiers of cellular memory at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Michael Levin, biological programmer

Big idea: DNA isn’t the only builder in the biological world — there’s also an invisible electrical matrix directing cells to change into organs, telling tadpoles to become frogs, and instructing flatworms to regenerate new bodies once sliced in half. If Michael Levin and his colleagues can learn this cellular “machine language,” human beings may be one step closer to curing birth defects, eliminating cancer and evading aging.

How? As cells become organs, systems and bodies, they communicate via an electrical system dictating where the finished parts will go. Guided by this cellular network, organisms grow, transform and even build new limbs (or bodies) after trauma. At Michael Levin’s lab, scientists are cracking this code — and have even succeeded in creating autonomous organisms out of skin cells by altering the cell electrically without genetic manipulation. Mastering this code could not only allow humans to create microscopic biological “xenobots” to rebuild and medicate our bodies from the inside but also let us to grow new organs — and perhaps rejuvenate ourselves as we age. “We are now beginning to crack this morphogenetic code to ask: How is it that these tissues store a map of what to do?” Levin asks. “[How can we] go in and rewrite that map to new outcomes?”


“My vision for the future is that when things come to life, they do so with joy,” says Ali Kashani. He speaks at TED2020: Uncharted on June 11, 2020. (Photo courtesy of TED)

Ali Kashani, VP of special projects at Postmates

Big idea: Robots are becoming a part of everyday life in urban centers, which means we’ll have to design them to be accessible, communicative and human-friendly.

How? On the streets of San Francisco and Los Angeles, delivery robots bustle along neighborhood sidewalks to drop-off packages and food. With potential benefits ranging from environmental responsibility to community-building, these robots offer us an incredible glimpse into the future. The challenge now is ensuring that robots can move out of the lab and fit into our world and among us as well, says Kashani. At Postmates, Kashani designs robots with human reaction in mind. Instead of frightening, dystopian imagery, he wants people to understand robots as familiar and friendly. This is why Postmates’s robots are reminiscent of beloved characters like the Minions and Wall-E; they can use their eyes to communicate with humans and acknowledge obstacles like traffic stops in real-time. There are so many ways robots can help us and our communities: picking up extra food from restaurants for shelters, delivering emergency medication to those in need and more. By designing robots to integrate into our physical and social infrastructures, we can welcome them to the world seamlessly and create a better future for all. “My vision for the future is that when things come to life, they do so with joy,” Kashani says.

Planet DebianUtkarsh Gupta: GSoC Phase 1

Hello,

Earlier last month, I got selected as a Google Summer of Code student for Debian again! \o/
And as Chandler would say,

Could I be any happier?

Well, this time, my project is basically to write a linter (an extension to RuboCop). This tool is mostly to help the Debian Ruby team. And that is the best part, I love working in/for/with the Ruby team!
(I’ve been an active part of the team for 18 months now :))

More details about the project can be found here, on the wiki.
And also, I have got the best mentors I could’ve possibly asked for: Antonio Terceiro and David Rodríguez 💖

So, the program began on 1st June and I’ve been working since then. I log my daily updates at gsocwithutkarsh2102.tk.

Whilst the daily updates are available at the above site^, I’ll break down the important parts here:

  • During the first three days, I looked for a potential solution to the usage of git ls-files in the gemspec files. This has been the most problematic thing for us.

    • Apart from the option of using Dir or Dir.glob, the best (closest) possible solution (right now) is to use Rake::FileList, which tries to respect the .gitignore file.
    • I stumbled upon this interesting gem, fast_ignore. It is the exact thing which we want to use but unfortunately, to use it inside other gemspec files, it should be vendored inside bundler’s code.
  • We had our first meeting on the fourth day and we decided to hold meetings every Thursday for the next 12 weeks.

  • For the next five days, I learned more of Ruby and figured out what to do and how to do it.
    If you’d like to know what exactly I did in these 5 days, I’d suggest you read the daily logs for those respective days.

  • During the next two days, the first part of the project, the GemspecGit Cop, was implemented.
    This cop will correctly determine the usage of “git” in the gemspec files and tell the developers and maintainers to replace it with pure Ruby alternatives, giving them a proper reason to do so. Much thanks to Dana for her help – she took out time to pair-program with me! 💖

  • We had our second weekly meeting where I finally told Antonio and David that the first part is already done (\o/) and we discussed some things and even pair-programmed :D

  • I took the weekend off (and something terrible happened) but anyways, I managed to get together some time and energy to document the source code and raised this PR #2.

  • And here I am on the 15th day :)
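To make the git ls-files point above concrete, here is a minimal sketch (a hypothetical file layout, not the actual cop or any real gemspec) of the pure-Ruby replacement in action:

```ruby
require "tmpdir"
require "fileutils"

# Build a throwaway project tree, then list its files the way a gemspec
# could, without shelling out to git. (Rake::FileList goes further and
# also tries to respect .gitignore.)
listed = nil
Dir.mktmpdir do |root|
  FileUtils.mkdir_p(File.join(root, "lib"))
  File.write(File.join(root, "lib", "foo.rb"), "")
  File.write(File.join(root, "README.md"), "")

  # Instead of: spec.files = `git ls-files`.split("\n")
  listed = Dir.chdir(root) { Dir.glob("{lib/**/*.rb,*.md}") }
end
puts listed.sort.join(" ")  # → README.md lib/foo.rb
```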

It has been a lot of fun so far! Though I am a little worried about how to implement the next part of the project, as I am not sure how to check only a particular directory for some relative require calls.
But I think that’s okay; somehow, something will work out. And I can always ask around and check other cops to see how it’s done! ¯\_(ツ)_/¯


Until next time.
:wq for today.

Planet DebianEnrico Zini: Qt5 custom build of Qt Creator

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

These are instructions for building Qt Creator with the custom Qt, so that it can load and use Designer plugins built with it.

Sadly, because of the requirement of being able to load Designer plugins, and because of the requirement of being able to compile custom widgets using the custom Qt and use them in the Designer, we need to also rebuild Qt Creator.

The resulting packaging is at https://github.com/Truelite/qt5custom.

Set up sources

Open the source tarball, and add the Qt Creator packaging:

tar axf qt-creator-enterprise-src-4.12.2.tar.xz
cp -a debian-qtcreator qt-creator-enterprise-src-4.12.2/debian
ln -s qt-creator-enterprise-src-4.12.2.tar.xz qt-creator-enterprise-src_4.12.2.orig.tar.xz

If needed, install the Qt license:

cp qt-license.txt ~/.qt-license

Install build dependencies

You can use apt build-dep to install dependencies manually:

cd qt-creator-enterprise-src-4.12.2
apt build-dep .

Alternatively, you can create an installable .deb metapackage that depends on the build dependencies:

apt install devscripts
mk-build-deps debian-qtcreator/control
apt -f install qt-creator-enterprise-src-build-deps_4.12.2-1_all.deb

Package build

The package is built by debian/rules, based on the excellent work done by the Debian Qt5 maintainers.

After installing the build dependencies, you can build like this:

cd qt-creator-enterprise-src-4.12.2
debuild -us -uc -rfakeroot

In debian/rules you can configure NUMJOBS with the number of available CPUs in the machine, to have parallel builds.
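A common way to derive that value automatically (an assumption about how one might set it; the actual debian/rules may hard-code it) is:

```shell
# Sketch: set the parallel job count from the machine's CPU count
NUMJOBS=$(nproc)
echo "building with ${NUMJOBS} jobs"
```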

debian/rules automatically picks qt5custom as the Qt version to use for the build.

NOTE: Qt Creator 4.12.2 will NOT build if qtbase5custom-armhf-dev is installed. One needs to make sure to have qtbase5custom-dev installed, but NOT qtbase5custom-armhf-dev. Despite quite a bit of investigation, I have been unable to understand why, if both are installed, Qt Creator's build chooses the wrong one, and fails the build.

Build output

Building sources generates 4 packages:

  • qtcreator-qt5custom: the program
  • qtcreator-qt5custom-data: program data
  • qtcreator-qt5custom-doc: documentation
  • qtcreator-qt5custom-dbgsym: debugging symbols

Using the custom Qt Creator Enterprise

The packages are built with qt5custom and install their content in /opt/qt5custom.

The packages are coinstallable with the version of Qt Creator packaged in Debian.

The custom Qt Creator executable is installed in /opt/qt5custom/bin/qtcreator, which is not in $PATH by default. To run it, you can explicitly use /opt/qt5custom/bin/qtcreator. Running qtcreator without an explicit path runs the standard Debian version.

Installing Designer plugins

Designer plugins can be compiled with qt5custom and installed in /opt/qt5custom/plugins/designer/.

Cross-building with Qt Creator

Once the cross-build Qt5 packages are installed, the new Qt version should appear in the Qt Creator kit configuration, where it can be selected and used normally.

If one sets device type to "Generic Linux Device", chooses a compiler for "arm 32bit" and sets Qt Version to "qt5custom-armhf", one can smoothly cross-compile and execute and debug the built program directly on the device.

Planet DebianMark Brown: Book Club: Zettelkasten

Recently I was part of a call with Daniel and Lars to discuss Zettelkasten, a system for building up a cross-referenced archive of notes to help with research and study that has been getting a lot of discussion recently, the key thing being the building of links between ideas. Tomas Vik provided an overview of the process that we all found very helpful, and the information vs knowledge picture in Eugene Yan’s blog on the topic (by @gapingvoid) really helped us crystalize the goals. It’s not at all new and, as Lars noted, has a lot of similarities with wikis in terms of what it produces, but it couples this with an emphasis on the process and constant generation of new entries, which Daniel found similar to some of the Getting Things Done recommendations. We all liked the emphasis on constant practice and how that can help build skills around effective note taking, clear writing and building links between ideas.

Both Daniel and Lars already have note-taking practices that they find useful, combinations of journalling and building up collections of notes of learnings over time, and felt that there could be value in integrating aspects of Zettelkasten into these practices, so we talked quite a bit about how that could be done. There was a consensus that journalling is useful, so the main idea we had was to keep maintaining the journal, using it as an inbox and setting aside time to write entries into a Zettelkasten. This is also a useful way to approach recording things when away from a computer: take notes and then write them up later. Daniel suggested that one way to migrate existing notes might be to simply start anew, moving things over from old notes as required, and then after a suitably long period (for example a year) review anything that was left and migrate whatever was still needed.

We were all concerned about the idea of using any of the non-free solutions for something that is intended to be used long term, especially where the database isn’t in an easily understood format. Fortunately there are free software tools like Zettlr which seem to address these concerns well.

This was a really useful discussion, it really helps to bounce ideas off each other and this was certainly an interesting topic to learn about with some good ideas which will hopefully be helpful to us.

Planet DebianBits from Debian: Report of the Debian Perl Sprint 2020

Eight members of the Debian Perl team met online between May 15 and May 17 2020, in lieu of a planned physical sprint meeting. Work focussed on preparations for bullseye, and continued maintenance of the large number of perl modules maintained by the team.

Whilst an online sprint cannot fully replace an in-person sprint in terms of focussing attention, the weekend was still very productive, and progress was made on a range of topics including:

  • Reducing technical debt by removing unmaintained packages
  • Beginning packaging and QA for the next major release of perl, 5.32
  • Deciding on a team policy for hardening flags
  • Addressing concerns with Alien::*, a set of packages designed to download source code
  • Developing a proposal for debian/NEWS.Developer, to complement debian/NEWS
  • Developing a plan to enable SSL verification in HTTP::Tiny by default

The full report was posted to the relevant Debian mailing lists.

The participants would like to thank OpusVL for providing the Jitsi instance for the weekend.

CryptogramExamining the US Cyber Budget

Jason Healey takes a detailed look at the US federal cybersecurity budget and reaches an important conclusion: the US keeps saying that we need to prioritize defense, but in fact we prioritize attack.

To its credit, this budget does reveal an overall growth in cybersecurity funding of about 5 percent above the fiscal 2019 estimate. However, federal cybersecurity spending on civilian departments like the departments of Homeland Security, State, Treasury and Justice is overshadowed by that going toward the military:

  • The Defense Department's cyber-related budget is nearly 25 percent higher than the total going to all civilian departments, including the departments of Homeland Security, Treasury and Energy, which not only have to defend their own critical systems but also partner with critical infrastructure to help secure the energy, finance, transportation and health sectors ($9.6 billion compared to $7.8 billion).

  • The funds to support just the headquarters element -- that is, not even the operational teams in facilities outside of headquarters -- of U.S. Cyber Command are 33 percent higher than all the cyber-related funding to the State Department ($532 million compared to $400 million).

  • Just the increased funding to Defense was 30 percent higher than the total Homeland Security budget to improve the security of federal networks ($909 million compared to $694.1 million).

  • The Defense Department is budgeted two and a half times as much just for cyber operations as the Cybersecurity and Infrastructure Security Agency (CISA), which is nominally in charge of cybersecurity ($3.7 billion compared to $1.47 billion). In fact, the cyber operations budget is higher than the budgets for the CISA, the FBI and the Department of Justice's National Security Division combined ($3.7 billion compared to $2.21 billion).

  • The Defense Department's cyber operations have nearly 10 times the funding as the relevant Homeland Security defensive operational element, the National Cybersecurity and Communications Integration Center (NCCIC) ($3.7 billion compared to $371.4 million).

  • The U.S. government budgeted as much on military construction for cyber units as it did for the entirety of Homeland Security ($1.9 billion for each).

We cannot ignore what the money is telling us. The White House and National Cyber Strategy emphasize the need to protect the American people and our way of life, yet the budget does not reflect those values. Rather, the budget clearly shows that the Defense Department is the government's main priority. Of course, the exact Defense numbers for how much is spent on offense are classified.


Planet DebianArturo Borrero González: A better Toolforge: a technical deep dive

Logos

This post was originally published in the Wikimedia Tech blog, and is authored by Arturo Borrero Gonzalez and Brooke Storm.

In the previous post, we shared the context on the recent Kubernetes upgrade that we introduced in the Toolforge service. Today we would like to dive a bit more in the technical details.

Custom admission controllers

One of the key components of Toolforge Kubernetes is our set of custom admission controllers. We use them to validate and enforce that the usage of the service is what we intended. Basically, we have two of them: the Ingress admission controller and the registry admission controller.

The source code is written in Golang, which is pretty convenient for natively working in a Kubernetes environment. Both code repositories include extensive documentation: how to develop, test, use, and deploy them. We decided to go with custom admission controllers because we couldn’t find any native (or built-in) Kubernetes mechanism to accomplish the same sort of checks on user activity.

With the Ingress controller, we want to ensure that Ingress objects only handle traffic to our internal domains, which at the time of this writing are toolforge.org (our new domain) and tools.wmflabs.org (legacy). We safe-list the kube-system namespace and the tool-fourohfour namespace because both need special consideration. More on the Ingress setup later.

The registry controller is pretty simple as well. It ensures that only our internal Docker registry is used for user-scheduled containers running in Kubernetes. Again, we exclude containers running in the kube-system namespace (those used by Kubernetes itself) from the checks. Other than that, the validation itself is pretty easy. For some extra containers we run (like those related to Prometheus metrics), we simply upload those Docker images to our internal registry. The controls provided by this admission controller help us validate that only FLOSS software is run in our environment, which is one of the core rules of Toolforge.
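
The core of such a registry check is small enough to sketch. The following is a hedged Python illustration only: the real Toolforge controllers are written in Golang, and the registry host and namespace names here are illustrative assumptions, not the production values.

```python
# Illustrative sketch of a registry admission check (the actual
# controllers are in Go; names below are assumptions for the example).
ALLOWED_REGISTRY = "docker-registry.example.org"
EXEMPT_NAMESPACES = {"kube-system"}

def review_pod(namespace, pod_spec):
    """Return (allowed, reason) for an incoming Pod admission request."""
    if namespace in EXEMPT_NAMESPACES:
        return True, "namespace exempt from registry checks"
    for container in pod_spec.get("containers", []):
        image = container.get("image", "")
        if not image.startswith(ALLOWED_REGISTRY + "/"):
            return False, f"image {image!r} is not from the internal registry"
    return True, "all images come from the internal registry"
```

A real admission webhook would wrap this decision in the Kubernetes AdmissionReview request/response envelope; the validation logic itself stays this simple.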

RBAC and Pod Security Policy setup

I would like to comment next on our RBAC and Pod Security Policy setup. Using the Pod Security Policies (or PSP) we establish a set of constraints on what containers can and can’t do in our cluster. We have many PSP configured in our setup:

  • Privileged policy: used by Kubernetes containers themselves—basically a very relaxed set of constraints that are required for the system itself to work.
  • Default policy: a bit more restricted than the privileged policy; it is intended for admins to deploy services, but it isn’t currently in use.
  • Toolforge user policies: this applies to user-scheduled containers, and there are some obvious restrictions here: we only allow unprivileged pods, we control which HostPath is available for pods, use only default Linux capabilities, etc.

Each user can interact with their own namespace (this is how we achieve multi-tenancy in the cluster). Kubernetes knows about each user by means of TLS certs, and for that we have RBAC. Each user has a rolebinding to a shared cluster-role that defines how Toolforge tools can use the Kubernetes API. The following diagram shows the design of our RBAC and PSP in our cluster:

RBAC and PSP for Toolforge diagram

RBAC and PSP for Toolforge, original image in wikitech

I mentioned that we know about each user by means of TLS certificates. This is true, and in fact, there is a key component in our setup called maintain-kubeusers. This custom piece of Python software runs as a pod inside the cluster and is responsible for reading our external user database (LDAP) and generating the required credentials, namespaces, and other configuration bits for them. With the TLS cert, we basically create a kubeconfig file that is then written into the homes NFS share, so each Toolforge user has it in their shell home directory.
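
The kubeconfig-generation step can be sketched in a few lines. The field names below follow the standard kubeconfig schema; the cluster and context names are illustrative assumptions, not necessarily what maintain-kubeusers actually emits.

```python
import base64

def make_kubeconfig(user, namespace, api_url, ca_pem, cert_pem, key_pem):
    """Build a minimal kubeconfig dict for one TLS-authenticated user.

    Cluster/context names here are illustrative, not Toolforge's values.
    """
    b64 = lambda pem: base64.b64encode(pem.encode()).decode()
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": "toolforge",
                      "cluster": {"server": api_url,
                                  "certificate-authority-data": b64(ca_pem)}}],
        "users": [{"name": user,
                   "user": {"client-certificate-data": b64(cert_pem),
                            "client-key-data": b64(key_pem)}}],
        "contexts": [{"name": "default",
                      "context": {"cluster": "toolforge",
                                  "user": user,
                                  "namespace": namespace}}],
        "current-context": "default",
    }
```

Serialized to YAML (or JSON, which kubectl also accepts), a dict like this is what would land in each user's home directory.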

Networking and Ingress setup

With the basic security controls in place, we can move on to explaining our networking and Ingress setup. Yes, the Ingress word might be a bit overloaded already, but we refer here to Ingress as the path that end-users follow from their web browser in their local machine to a webservice running in the Toolforge cluster.

Some additional context here. Toolforge is not only Kubernetes, but we also have a Son of GridEngine deployment, a job scheduler that covers some features not available in Kubernetes. The grid can also run webservices, although we are encouraging users to migrate them to Kubernetes. For compatibility reasons, we needed to adapt our Ingress setup to accommodate the old web grid. Deciding the layout of the network and Ingress was definitely something that took us some time to figure out because there is not a single way to do it right.

The following diagram can be used to explain the different steps involved in serving a web service running in the new Toolforge Kubernetes.

Toolforge k8s network topology diagram

Toolforge k8s network topology, original image in Wikitech

The end-user HTTP/HTTPS request first hits our front proxy in (1). Running here is NGINX with a custom piece of Lua code that decides whether to contact the web grid or the new Kubernetes cluster. TLS termination happens here as well, for both domains (toolforge.org and tools.wmflabs.org). Note this proxy is reachable from the internet, as it uses a public IPv4 address, a floating IP from CloudVPS, the infrastructure service we provide based on OpenStack. Remember that our Kubernetes is built directly on virtual machines, a bare-metal type deployment.

If the request is directed to a webservice running in Kubernetes, the request now reaches haproxy in (2), which knows the cluster nodes that are available for Ingress. The original 80/TCP packet is now translated to 30000/TCP; this is the TCP port we use internally for the Ingress traffic. This haproxy instance also provides load balancing for the Kubernetes API, using 6443/TCP. It’s worth mentioning that, unlike the Ingress, the API is only reachable from within the cluster and not from the internet.

We have a NGINX-Ingress NodePort service listening in 30000/TCP in every Kubernetes worker node in (3); this helps the request to eventually reach the actual NGINX-Ingress pod in (4), which is listening in 8080/TCP. You can see in the diagram how in the API server (5) we hook the Ingress admission controller (6) to validate Kubernetes Ingress configuration objects before allowing them in for processing by NGINX-Ingress (7).

The NGINX-Ingress process knows which tool webservices are online and how to contact them by means of an intermediate Service object in (8). This last Service object means the request finally reaches the actual tool pod in (9). At this point, it is worth noting that our Kubernetes cluster internally uses kube-proxy and Calico, both of which use Netfilter components to handle traffic.

tools-webservice

Most user-facing operations are simplified by means of another custom piece of Python code: tools-webservice. This package provides users with the webservice command line utility in our shell bastion hosts. Typical usage is to just run webservice start|stop|status. This utility creates all the required Kubernetes objects on-demand like Deployment, ReplicaSet, Ingress and Service to ease deploying web apps in Toolforge. Of course, advanced users can interact directly with Kubernetes API and create their custom configuration objects. This utility is just a wrapper, a shortcut.
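
To give an idea of what "creates all the required Kubernetes objects" means in practice, here is a hedged Python sketch of two of those objects. The field names follow the standard Kubernetes Deployment and Service schemas, but the label convention, namespace prefix, image, and port are illustrative assumptions, not tools-webservice's actual choices.

```python
def webservice_manifests(tool, image, port=8000):
    """Sketch the Deployment and Service a webservice wrapper might create.

    Namespace prefix, labels, image, and port are illustrative only.
    """
    labels = {"tool": tool}
    deployment = {
        "apiVersion": "apps/v1", "kind": "Deployment",
        "metadata": {"name": tool, "namespace": f"tool-{tool}"},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [
                    {"name": "webservice", "image": image,
                     "ports": [{"containerPort": port}]}]}},
        },
    }
    service = {
        "apiVersion": "v1", "kind": "Service",
        "metadata": {"name": tool, "namespace": f"tool-{tool}"},
        "spec": {"selector": labels,
                 "ports": [{"port": port, "targetPort": port}]},
    }
    return deployment, service
```

The wrapper's value is exactly this: users type one command, and boilerplate objects like these (plus the Ingress) get created consistently.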

tool-fourohfour and tool-k8s-status

The last couple of custom components we would like to mention are the tool-fourohfour and tool-k8s-status web services. These two utilities run inside the cluster as if they were any other user-created tool. The fourohfour tool allows for a controlled handling of HTTP 404 errors, and it works as the default NGINX-Ingress backend. The k8s-status tool shows plenty of information about the cluster itself and each tool running in the cluster, including links to the Server Admin Log, an auto-generated grafana dashboard for metrics, and more.

For metrics, we use an external Prometheus server that contacts the Kubernetes cluster to scrape metrics. We created a custom metrics namespace in which we deploy all the different components we use to observe the behavior of the system:

  • metrics-server: used by some utilities like kubectl top.
  • kube-state-metrics: provides advanced metrics about the state of the cluster.
  • cadvisor: to obtain fine-grained metrics about pods, deployments, nodes, etc.

All the Prometheus data we collect is used in several different Grafana dashboards, some of them aimed at users, like the ones linked by the k8s-status tool, and others meant for internal use by us engineers. The latter are still public, like the Ingress-specific dashboard or the cluster state dashboard. Working publicly, in a transparent way, is key for the success of CloudVPS in general and Toolforge in particular. As we mentioned in the previous post, all the engineering work that was done here was shared by community members.

By the community, for the community

We think this post sheds some light on how the Toolforge Kubernetes service works, and we hope it can inspire others trying to build similar services or, even better, help us improve Toolforge itself. Since this was first put into production some months ago, we have already identified some room for improvement in a couple of the components. As in many other engineering products, we will follow an iterative approach for evolving the service. Note that Toolforge is maintained by the Wikimedia Foundation, but you can think of it as a service by the community, for the community. We will keep an eye on it and maintain a list of feature requests and things to improve in the future. We are looking forward to it!

This post was originally published in the Wikimedia Tech blog, and is authored by Arturo Borrero Gonzalez and Brooke Storm.

Worse Than FailureFaking the Grade

Report Card - The Noun Project

Our friend and frequent submitter Argle once taught evening classes in programming at his local community college. These classes tended to be small, around 20-30 students. Most of them were already programmers and were looking to expand their knowledge. Argle enjoyed helping them in that respect.

The first night of each new semester, Argle had everyone introduce themselves and share their goals for the class. One of his most notable students was a confident, charismatic young man named Emmanuel. "Manny," as he preferred to be called, told everyone that he was a contract programmer who'd been working with a local company for over a year.

"I don't really need to be here," he said. "My employer thought it would be nice if I brushed up on the basics."

Argle's first assignment for the class was a basic "Hello, world" program to demonstrate knowledge of the development environment. Manny handed it in with an eye-roll—then failed to turn in any more homework for the rest of the semester. He skipped lectures and showed up only for exams, each time smirking like he had the crib sheet to the universe in his back pocket. And yet he bombed every test in spectacular fashion, even managing to score below 50% on the true/false midterm. A layperson off the street could've outperformed him with random guessing.

Argle made attempts to offer help during office hours, all of which Manny ignored. This being college and not grade school, there wasn't much else Argle could do. Manny was an adult who'd turned in an F performance, so that was the grade he ended up with.

A few days after final grades had been submitted, Argle received a phone call from Manny. "I don't understand why you failed me," he began with full sincerity.

Baffled, Argle was speechless at first. Is this a joke? he wondered. "You didn't turn in any assignments," he explained, trying to keep emotion out of his voice. "Assignments were worth two-thirds of the grade. It's in the syllabus, and I discussed it on the first day of class."

"I thought my test grades would carry me," Manny replied.

Argle's bafflement only grew. "Even if you'd gotten perfect scores on every test, you still would've failed the class. And you had nowhere near perfect scores on the tests."

Manny broke down crying. He kept talking, almost incomprehensible through his sobs. "My employer paid for the class! They're going to see the grade! I'm not losing my job over this. I'm contesting the F!" He abruptly hung up.

Argle made a quick preemptive phone call to his department head to explain the situation, and was assured that everything would be taken care of. Upon ending the call, he shook his head in astonishment. Had Manny's employer suspected that their contractor wasn't as skilled with programming as he pretended to be? Or would his F come as a total shock to them?

A programmer who doesn't know how to program, he mused to himself, and management who can't tell the difference. Sounds like a match made in heaven.

Argle never heard about the issue again, so he never learned Manny's fate. But once he discovered our website, he came to understand that Manny was far from the only "brillant" programmer out there, and far from the only one whose incompetence went undetected for so long.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianRuss Allbery: Radical haul

Along with the normal selection of science fiction and fantasy, a few radical publishers have done book giveaways due to the current political crisis in the United States. I've been feeling for a while like I've not done my homework on diverse political theory, so I downloaded those. (That's the easy part; making time to read them is the hard part, and we'll see how that goes.)

Yarimar Bonilla & Marisol LeBrón (ed.) — Aftershocks of Disaster (non-fiction anthology)
Jordan T. Camp & Christina Heatherton (ed.) — Policing the Planet (non-fiction anthology)
Zachary D. Carter — The Price of Peace (non-fiction)
Justin Akers Chacón & Mike Davis — No One is Illegal (non-fiction)
Grace Chang — Disposable Domestics (non-fiction)
Suzanne Collins — The Ballad of Songbirds and Snakes (sff)
Angela Y. Davis — Freedom is a Constant Struggle (non-fiction)
Danny Katch — Socialism... Seriously (non-fiction)
Naomi Klein — The Battle for Paradise (non-fiction)
Naomi Klein — No is Not Enough (non-fiction)
Naomi Kritzer — Catfishing on CatNet (sff)
Derek Künsken — The Quantum Magician (sff)
Rob Larson — Bit Tyrants (non-fiction)
Michael Löwy — Ecosocialism (non-fiction)
Joe Macaré, Maya Schenwar, et al. (ed.) — Who Do You Serve, Who Do You Protect? (non-fiction anthology)
Tochi Onyebuchi — Riot Baby (sff)
Sarah Pinsker — A Song for a New Day (sff)
Lina Rather — Sisters of the Vast Black (sff)
Marta Russell — Capitalism and Disability (non-fiction)
Keeanga-Yamahtta Taylor — From #BlackLivesMatter to Black Liberation (non-fiction)
Keeanga-Yamahtta Taylor (ed.) — How We Get Free (non-fiction anthology)
Linda Tirado — Hand to Mouth (non-fiction)
Alex S. Vitale — The End of Policing (non-fiction)
C.M. Waggoner — Unnatural Magic (sff)
Martha Wells — Network Effect (sff)
Kai Ashante Wilson — Sorcerer of the Wildeeps (sff)

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 06)

Here’s part six of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3


Planet Linux AustraliaDavid Rowe: Codec 2 HF Data Modes 1

Since “attending” MHDC last month I’ve taken an interest in open source HF data modems. So I’ve been busy refactoring the Codec 2 OFDM modem for use with HF data.

The major change is from streaming small (28 bit) voice frames to longer (few hundred byte) packets of data. In some ways data is easier than PTT voice: latency is no longer an issue, and I can use nice long FEC codewords that ride over fades. On the flip side we really care about bit errors with data, for voice it’s acceptable to pass frames with errors to the speech decoder, and let the human ear work it out.

As a first step I’ve been working with GNU Octave simulations, and have developed 3 candidate data modes that I have been testing against simulated HF channels. In simulation they work well with up to 4ms of delay and 2.5Hz of Doppler.

Here are the simulation results for 10% Packet Error Rate (PER). The multipath channel has 2ms delay spread and 2Hz Doppler (CCITT Multipath Poor channel).

Mode     Est Bytes/min   AWGN SNR (dB)   Multipath Poor SNR (dB)
datac1   6000             3              12
datac2   3000             1               7
datac3   1200            -3               0

The bytes/minute metric is commonly used by Winlink (divide by 7.5 for bits/s). I’ve assumed a 20% overhead for ARQ and other overheads. HF data isn’t fast – it’s a tough, narrow channel to push data through. But for certain applications (e.g. if you’re off the grid, or when the lights go out) it may be all you have. Even these low rates can be quite useful, 1200 bytes/minute is 8.5 tweets or SMS texts/minute.
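
The bytes/minute to bits/s conversion mentioned above (divide by 7.5, i.e. 8 bits per byte over 60 seconds per minute) can be checked directly; the mode names and rates come from the table above.

```python
def bytes_per_min_to_bits_per_s(bpm):
    # 8 bits/byte over 60 s/minute gives the factor 8/60, i.e. dividing by 7.5.
    return bpm * 8 / 60

# The three candidate modes from the table above:
rates = {name: bytes_per_min_to_bits_per_s(bpm)
         for name, bpm in [("datac1", 6000), ("datac2", 3000), ("datac3", 1200)]}
# datac1 works out to 800 bits/s, datac2 to 400, datac3 to 160.
```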

The modem waveforms are pilot assisted coherent PSK using LDPC FEC codes. Coherent PSK can have gains of up to 6dB over differential PSK (DPSK) modems commonly used on HF.
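
As a rough AWGN-only illustration (this is not the modem code from this post), the textbook bit error rate formulas for coherent BPSK and differential BPSK can be compared directly. Note that on a plain AWGN channel the gap between the two is modest (under about 1 dB at usable error rates); the larger gains referred to above show up on fading HF channels.

```python
import math

def ber_coherent_bpsk(ebno_db):
    """Theoretical AWGN bit error rate for coherent BPSK."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

def ber_dbpsk(ebno_db):
    """Theoretical AWGN bit error rate for differential BPSK (DBPSK)."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.exp(-ebno)
```

Evaluating both at a given Eb/N0 shows DBPSK always needing slightly more SNR for the same error rate on AWGN.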

Before I get too far along I wanted to try them over real HF channels, to make sure I was on the right track. So much can go wrong with DSP in the real world!

So today I sent the new data waveforms over the air for the first time, using an 800km path on the 40m band from my home in Adelaide South Australia to a KiwiSDR about 800km away in Melbourne, Victoria.

Mode     Est Bytes/min   Power (Wrms)   Est SNR (dB)   Packets Tx/Rx
datac1   6000            10             10-15          15/15
datac2   3000            10             5-15           8/8
datac3   1200            0.5            -2             20/25

The Tx power is the RMS measured on my spec-an, for the 10W RMS samples it was 75W PEP. The SNR is measured in a 3000Hz noise bandwidth, I have a simple dipole at my end, not sure what the KiwiSDR was using.

I’m quite happy with these results. To give the c3 waveform a decent workout I dropped the power down to just 0.5W (listen), and I could still get 30% of the packets through at 100mW. A few of the tests had significant fading, however it was not very fast. My simulations are far tougher. Maybe I’ll try an NVIS path to give the modem a decent test on fast fading channels.

Here is the spectrogram (think waterfall on its side) for the -2dB datac3 sample:

Here are the uncoded (raw) errors, and the errors after FEC. Most of the frames made it. This mode employs a rate 1/3 LDPC code that was developed by Bill, VK5DSP. It can work at up to 16% raw BER! The errors at the end are due to the Tx signal ending, at this stage of development I just have a simple state machine with no “squelch”.

We have also been busy developing an API for the Codec 2 modems, see README_data.md. The idea is to allow developers of HF data protocols and applications to use the Codec 2 modems. As well as the “raw” HF data API, there is a very nice Ethernet style framer for VHF packet developed by Jeroen Vreeken.

If anyone would like to try running the modem Octave code take a look at the GitHub PR.

Reading Further

QAM and Packet Data for OFDM Pull Request for this work. Includes lots of notes. The waveform designs are described in this spreadsheet.
README for the Codec 2 OFDM modem, includes examples and more links.
Test Report of Various Winlink Modems
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2

Planet DebianEnrico Zini: Culture links

Those of you who watch a lot of Hollywood movies may have noticed a certain trend that has consumed the industry in the last few years.  It ...
Video Essay Catalog No. 91 by Kevin B. Lee. Featured on the New York Times and other outlets. Originally published December 13, 2011 on Fandor. https://carpetbagger.blogs.nytimes.com/2011/12/19/staring-in-awe-its-the-spielberg-face/?_r=0
The Korowai cannibals live on top of trees. But is it true?
Bandicoot Cabbagepatch, Bandersnatch Cumberbund, and even Wimbledon Tennismatch: there seem to be endless variations on the name of Benedict Cumberbatch. [...] But how is a normal internet citizen supposed to know, when they hear someone say “I just can’t stop looking at gifs of Bombadil Rivendell” that this person isn’t talking about some other actor with a name and a voice and cheekbones? Or in other words, what makes for a reasonable variation of the name Bendandsnap Calldispatch?

Planet DebianDirk Eddelbuettel: T^4 #6: Byobu Sessions

The next video in our T^4 series of video lightning talks with tips, tricks, tools, and toys (where we had seen the announcement, shell sessions one, two, and three, as well as byobu sessions one and two) is now up at YouTube. It covers session management for the wonderful byobu tool that is both a ‘text-based window manager’ and a ‘terminal multiplexer’:

The slides are here.

This repo at GitHub support the series: use it to open issues for comments, criticism, suggestions, or feedback.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: RVowpalWabbit 0.0.14: More Keeping CRAN happy

Another maintenance RVowpalWabbit package update brings us to version 0.0.14. This time CRAN asked us to replace the (long obsoleted C-library) function ftime(). Along the way, we also updated links in the DESCRIPTION file to the (spiffy!!) new vowpalwabbit.org website, updated Travis use and fine-tuned some autoconf code in configure.ac.

There is a newer package rvw based on the excellent GSoC 2018 and beyond work by Ivan Pavlov (mentored by James and myself) so if you are into Vowpal Wabbit from R go check it out. It should go to CRAN “eventually” once we have better mechanisms to support external libraries.

CRANberries provides a summary of changes to the previous version. More information is on the RVowpalWabbit page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianSteve Kemp: Writing a brainfuck compiler.

So last night I had the idea that it might be fun to write a Brainfuck compiler, to convert BF programs into assembly language, where they'd run nice and quickly.

I figured I could allocate a day to do the work, and it would be a pleasant distraction on a Sunday afternoon. As it happened it only took me three hours from start to finish.

There are only a few instructions involved in brainfuck:

  • >
    • increment the data pointer (to point to the next cell to the right).
  • <
    • decrement the data pointer (to point to the next cell to the left).
  • +
    • increment (increase by one) the byte at the data pointer.
  • -
    • decrement (decrease by one) the byte at the data pointer.
  • .
    • output the byte at the data pointer.
  • ,
    • accept one byte of input, storing its value in the byte at the data pointer.
  • [
    • if the byte at the data pointer is zero, then instead of moving the instruction pointer forward to the next command, jump it forward to the command after the matching ] command.
  • ]
    • if the byte at the data pointer is nonzero, then instead of moving the instruction pointer forward to the next command, jump it back to the command after the matching [ command.
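The eight commands above are easy to exercise with a tiny interpreter. This is a minimal Python sketch for checking one's understanding (it is not the compiler described in this post):

```python
def run_bf(program, stdin=""):
    """Interpret a Brainfuck program and return its output as a string."""
    # Pre-compute matching-bracket positions for [ and ].
    jumps, stack = {}, []
    for pos, ch in enumerate(program):
        if ch == "[":
            stack.append(pos)
        elif ch == "]":
            start = stack.pop()
            jumps[start], jumps[pos] = pos, start

    tape = [0] * 30000          # the data cells
    ptr = pc = inp = 0          # data pointer, program counter, input index
    out = []
    while pc < len(program):
        ch = program[pc]
        if ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".":
            out.append(chr(tape[ptr]))
        elif ch == ",":
            tape[ptr] = ord(stdin[inp]) if inp < len(stdin) else 0
            inp += 1
        elif ch == "[" and tape[ptr] == 0:
            pc = jumps[pc]      # jump past the matching ]
        elif ch == "]" and tape[ptr] != 0:
            pc = jumps[pc]      # jump back to the matching [
        pc += 1
    return "".join(out)
```

Feeding it the "Hello World!" program shown later in this post produces the expected greeting.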

The Wikipedia page linked earlier shows how each instruction converts cleanly to C, so my first version just did that:

  • Read a BF program.
  • Convert to a temporary C-source file.
  • Compile with gcc.
  • End result you have an executable which you can run.

The second step was just as simple:

  • Read a BF program.
  • Convert to a temporary assembly language file.
  • Compile with nasm, link with ld.
  • End result you have an executable which you can run.

The following program prints the string "Hello World!" to the console:

++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.

My C-compiler converted that to the program:

extern int putchar(int);
extern int getchar();

char array[30000];
int idx = 0;

int main (int argc, char *argv[]) {
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;

  while (array[idx]) {
    idx++;
    array[idx]++;
    array[idx]++;
    array[idx]++;
    array[idx]++;

    while (array[idx]) {
      idx++;
      array[idx]++;
      array[idx]++;
      idx++;
      array[idx]++;
      array[idx]++;
      array[idx]++;
    ..
    ..

The assembly language version is even longer:


global _start
section .text

_start:
  mov r8, stack
  add byte [r8], 8
label_loop_8:
  cmp byte [r8], 0
  je close_loop_8
  add r8, 1
  add byte [r8], 4
label_loop_14:
  cmp byte [r8], 0
  je close_loop_14
  add r8, 1
  add byte [r8], 2
  add r8, 1
  add byte [r8], 3
  add r8, 1
  add byte [r8], 3
  add r8, 1
  add byte [r8], 1
  sub r8, 4
  sub byte [r8], 1
  jmp label_loop_14
close_loop_14:
  add r8, 1
  add byte [r8], 1
  ..

  mov rax, 60
  mov rdi, 0
  syscall
section .bss
stack: resb 300000

Annoyingly the assembly language version ran slower than the C-version, which I was sneakily compiling with "gcc -O3 .." to ensure it was optimized.

The first thing that I did was to convert it to fold adjacent instructions. Instead of generating separate increment instructions for ">>>" I instead started to generate "add xx, 3". That didn't help as much as I'd hoped, but it was still a big win.
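That folding pass is a simple run-length encoding over the program text. This is my own Python sketch of the idea, not the compiler's actual code: collapse runs of +, -, <, > into (op, count) pairs, so a later code generator can emit one "add r8, 3" instead of three "add r8, 1" instructions.

```python
from itertools import groupby

def fold(program):
    """Collapse runs of foldable Brainfuck instructions into (op, count)."""
    ops = []
    for op, run in groupby(program):
        n = len(list(run))
        if op in "+-<>":
            # These commute with themselves, so a run becomes one add/sub.
            ops.append((op, n))
        else:
            # Loops and I/O must stay as individual instructions.
            ops.extend((op, 1) for _ in range(n))
    return ops

print(fold("+++>>>--"))  # prints [('+', 3), ('>', 3), ('-', 2)]
```

Note that `[`, `]`, `.` and `,` are deliberately not folded: two adjacent `.` really are two output operations.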

After that I made a minor tweak to the way that loops are handled to compare at the end of the loop as well as the start, and that shaved off a fraction of a second.

As things stand I think I'm "done". It might be nice to convert the generated assembly language to something gcc can handle, to drop the dependency on nasm, but I don't feel a pressing need for that yet.

Was a fun learning experience, and I think I'll come back to optimization later.

Planet DebianEvgeni Golov: naked pings 2020

ajax' post about "ping" etiquette is over 10 years old, but holds true to this day. So true that my IRC client at work has a script that replies with a link to it each time I get a naked ping.

But IRC is not the only means of communication. There is also mail, (video) conferencing, and GitHub/GitLab. Well, at least in the software engineering context. Oh and yes, it's 2020 and I still (proudly) have no Slack account.

Thankfully, (naked) pings are not really a thing for mail or conferencing, but I see an increasing number of them on GitHub and it bothers me, a lot. As there is no direct messaging on GitHub, you might rightfully ask why, since there is always context in the form of the issue or PR the ping happened in. So lean back and listen ;-)

notifications become useless

While there might be context in the issue/PR, there is none (besides the title) in the notification mail, and not even the title in the notification from the Android app (which I have installed as I use it a lot for smaller reviews). So the ping will always force a full context switch to open the web view of the issue in question, removing the possibility to just swipe away the notification/mail as "not important right now".

even some context is not enough context

Even after visiting the issue/PR, the ping quite often remains non-actionable. Do you want me to debug/fix the issue? Review the PR? Merge it? Close it? I don't know!

The only actionable ping is when the previous message is directed at me and has an actionable request in it and the ping is just a reminder that I have to do it. And even then, why not write "hey @evgeni, did you have time to process my last question?" or something similar?

BTW, this is also what I dislike about ajax' minimal example "ping re bz 534027" - what am I supposed to do with that BZ?!

why me anyways?!

Unless I am the only maintainer of a repo or the author of the issue/PR, there is usually no reason to ping me directly. I might be sick, or on holiday, or currently not working on that particular repo/topic or whatever. Any of that will result in you thinking that your request will be prioritized, while in reality it won't. Even worse, somebody might come across it, see me mentioned and think "ok, that's Evgeni's playground, I'll look elsewhere".

Most organizations have groups of people working on specific topics. If you know the group name and have enough permissions (I am not exactly sure which, just that GitHub has limits to avoid spam, sorry) you can ping @organization/group and everyone in that group will get a notification. That's far from perfect, but at least this will get the attention of the right people. Sometimes there is also a bot that will either automatically ping a group of people or that you can trigger to do so.

Oh, and I'm getting paid for work on open source. So if you end up pinging me in a work-related repository, there is a high chance I will only process that during work hours, while another co-worker might have been available to help you out almost immediately.

be patient

Unless we talked on another medium before and I am waiting for it, please don't ping directly after creation of the issue/PR. Maintainers get notifications about new stuff and will triage and process it at some point.

conclusion

If you feel called out, please don't take it personally. Instead, please try to provide as much actionable information as possible and be patient, that's the best way to get a high quality result.

I will ignore pings where I don't immediately know what to do, and so should you.

one more thing

Oh, and if you ping me on IRC, with context, and then disconnect before I can respond…

In the past you would sometimes get a reply by mail. These days the request will be most probably ignored. I don't like talking to the void. Sorry.

Krebs on SecurityPrivnotes.com Is Phishing Bitcoin from Users of Private Messaging Service Privnote.com

For the past year, a site called Privnotes.com has been impersonating Privnote.com, a legitimate, free service that offers private, encrypted messages which self-destruct automatically after they are read. Until recently, I couldn’t quite work out what Privnotes was up to, but today it became crystal clear: Any messages containing bitcoin addresses will be automatically altered to include a different bitcoin address, as long as the Internet addresses of the sender and receiver of the message are not the same.

Earlier this year, KrebsOnSecurity heard from the owners of Privnote.com, who complained that someone had set up a fake clone of their site that was fooling quite a few regular users of the service.

And it’s not hard to see why: Privnotes.com is confusingly similar in name and appearance to the real thing, and comes up second in Google search results for the term “privnote.” Also, anyone who mistakenly types “privnotes” into Google search may see at the top of the results a misleading paid ad for “Privnote” that actually leads to privnotes.com.

A Google search for the term “privnotes” brings up a misleading paid ad for the phishing site privnotes.com, which is listed above the legitimate site — privnote.com.

Privnote.com (the legit service) employs technology that encrypts all messages so that even Privnote itself cannot read the contents of the message. And it doesn’t send and receive messages. Creating a message merely generates a link. When that link is clicked or visited, the service warns that the message will be gone forever after it is read.

But according to the owners of Privnote.com, the phishing site Privnotes.com does not fully implement encryption, and can read and/or modify all messages sent by users.

“It is very simple to check that the note in privnoteS is sent unencrypted in plain text,” Privnote.com explained in a February 2020 message, responding to inquiries from KrebsOnSecurity. “Moreover, it doesn’t enforce any kind of decryption key when opening a note and the key after # in the URL can be replaced by arbitrary characters and the note will still open.”

But that’s not the half of it. KrebsOnSecurity has learned that the phishing site Privnotes.com uses some kind of automated script that scours messages for bitcoin addresses, and replaces any bitcoin addresses found with its own bitcoin address. The script apparently only modifies messages if the note is opened from a different Internet address than the one that composed the note.

Here’s an example, using the bitcoin wallet address from bitcoin’s Wikipedia page as an example. The following message was composed at Privnotes.com from a computer with an Internet address in New York, with the message, “please send money to bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq thanks”:

A test message composed on privnotes.com, which is phishing users of the legitimate encrypted message service privnote.com. Pay special attention to the bitcoin address in this message.

When I visited the Privnotes.com link generated by clicking the “create note” button on the above page from a different computer with an Internet address in California, this was the result. As you can see, it lists a different bitcoin address, albeit one with the same first four characters.

The altered message. Notice the bitcoin address has been modified and is not the same address that was sent in the original note.

Several other tests confirmed that the bitcoin modifying script does not seem to change message contents if the sender and receiver’s IP addresses are the same, or if one composes multiple notes with the same bitcoin address in it.

Allison Nixon, the security expert who helped me with this testing, said the script also only seems to replace the first instance of a bitcoin address if it’s repeated within a message, and the site stops replacing a wallet address if it is sent repeatedly over multiple messages.
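The site's actual script is of course not public, but the first-instance-only substitution behavior observed in these tests is trivial to implement. The sketch below is entirely my own illustration; the address patterns and function name are assumptions, not recovered code.

```python
import re

# Rough patterns for legacy base58 ("1..."/"3...") and bech32 ("bc1...")
# bitcoin addresses; real-world matching would be stricter.
BTC_RE = re.compile(r"\b(?:[13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[a-z0-9]{11,71})\b")

def swap_first_address(note, attacker_addr):
    # count=1 replaces only the first match, mirroring the behavior
    # Nixon observed in testing.
    return BTC_RE.sub(attacker_addr, note, count=1)

msg = "please send money to bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq thanks"
print(swap_first_address(msg, "bc1qattackerexampleaddressxxxxxxxxxxxxxxx"))
```

A few lines like these, gated on a comparison of sender and reader IP addresses, would reproduce everything described above.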

“And because of the design of the site, the sender won’t be able to view the message because it self destructs after one open, and the type of people using privnote aren’t the type of people who are going to send that bitcoin wallet any other way for verification purposes,” said Nixon, who is chief research officer at Unit 221B. “It’s a pretty smart scam.”

Given that Privnotes.com is phishing bitcoin users, it’s a fair bet the phony service also is siphoning other sensitive data from people who use their site.

“So if there are password dumps in the message, they would be able to read that, too,” Nixon said. “At first, I thought that was their whole angle, just to siphon data. But the bitcoin wallet replacement is probably much closer to the main motivation for running the fake site.”

Even if you never use or plan to use the legitimate encrypted message service Privnote.com, this scam is a great reminder why it pays to be extra careful about using search engines to find sites that you plan to entrust with sensitive data. A far better approach is to bookmark such sites, and rely exclusively on those instead.

,

Planet DebianNorbert Preining: KDE/Plasma Status Update

Some time has passed since the last update of my KDE/Plasma packages. In the meantime KDE Frameworks 5.70 was uploaded to Debian/unstable, and everyone should have smoothly transitioned to the “official” packages by now.

In the same vein, some packages from the Plasma stack have been updated to 5.18.5 via the official channels, but some haven’t been, so my repos might still be helpful there. Finally, KDE Apps are in the process of being updated to 20.04.0/1 in Debian/unstable, but 20.04.2 was already released the other day, as was Plasma 5.19 a few days ago.

So here is the current status in the repositories I maintain on OBS:

  • KDE Frameworks are at 5.70, but with a version number lower than the one in Debian/unstable
  • KDE/Plasma 5.18.5 is available partly from Debian/unstable, and completely from my repos
  • KDE/Apps 20.04.2 are available from my repos (and partly from unstable suite)
  • all packages are available also for Debian/testing

Concerning Plasma 5.19: I have already written about this, and since then the packages have been updated to Plasma 5.19 as released, but since Debian/unstable still ships Qt 5.12, Plasma 5.19 cannot be installed on it. Also, the repo has changed on OBS, see below.

Repositories:
For Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

For Testing:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Testing/ ./

For Plasma 5.19 (only for amd64):

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Unstable/ ./

As usual, don’t forget that you need to import my OBS gpg key: obs-npreining.asc, best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.

As mentioned in the previous post, collaboration with the Debian Qt/KDE Team unfortunately turned out to be impossible, but I will keep updating the packages here, and try to keep compatibility with the Debian official packages, so that – if finally there are updates – they will take precedence over the packages from my repositories.

Enjoy.

,

Planet DebianEnrico Zini: Qt5 custom build for armhf embedded development

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

I split building Qt5 for armhf development in two parts: one cross-build environment to be installed on amd64 development systems, and a runtime part to be installed on the target armhf hardware.

Building a Qt5 cross-building environment builds a mix of armhf and amd64 binaries: the amd64 tools to use for cross-building, like moc, qmake, plugins for Qt Creator, and so on; armhf headers and libraries to use at cross-build time; armhf shared libraries to use at runtime.

The procedure I came up with builds a devel package for amd64 development machines, which contains everything, and a second package that extracts from it only what is needed at runtime.

The cross-build environment is coinstallable both with the version of Qt distributed with Debian, and with the amd64 custom Qt development package.

The current build is sadly using -skip qtwebengine, because I have had no success so far getting QtWebEngine to compile as part of a cross-build Qt setup (the last road bump I can't overcome is nss and nspr not being coinstallable on amd64 and armhf, while both seem to be needed for it).

The resulting packaging is at https://github.com/Truelite/qt5custom.

Set up sources

Open the source tarball, and add the amd64 packaging:

tar axf qt-everywhere-src-5.15.0.tar.xz
cp -a debian-cross qt-everywhere-src-5.15.0/debian

If needed, install the Qt license:

cp qt-license.txt ~/.qt-license

If debugging information is not needed in armhf development, remove --no-strip from the ./configure invocation in the rules file, to build significantly smaller .deb packages.

Install build dependencies

Install cross-compilers:

dpkg --add-architecture armhf
apt install crossbuild-essential-armhf

You can use apt build-dep to install dependencies manually:

cd qt-everywhere-src-5.15.0
apt build-dep .
apt -a armhf build-dep .

Alternatively, you can create installable .deb metapackages that depends on the build dependencies:

apt install devscripts
mk-build-deps --host-arch amd64 debian-cross/control
mk-build-deps --host-arch armhf debian-cross/control
apt -f install qt-everywhere-cross-build-deps_5.15.0-1_amd64.deb qt-everywhere-cross-cross-build-deps_5.15.0-1_armhf.deb

Note that there are two sets of dependencies: one of amd64 packages, and one of armhf packages.

Building the cross-build environment

After installing the build dependencies, you can build like this:

cd qt-everywhere-src-5.15.0
fakeroot debian/rules binary

In debian/rules you can configure NUMJOBS with the number of available CPUs in the machine, to have parallel builds.

This will build a package with the cross-build development environment for amd64, called qtbase5custom-armhf-dev

Building the runtime environment

To generate the runtime package for armhf, one needs to have the cross-build package (qtbase5custom-armhf-dev) installed in the system together with its build dependencies.

At that point, the armhf runtime package can be built using the debian-armhf directory without further sources:

apt install crossbuild-essential-armhf debhelper qtbase5custom-armhf-dev*_amd64.deb qt-everywhere-src-cross-build-deps*_armhf.deb
mkdir runtime
cp -a debian-armhf runtime/debian
cd runtime
dpkg-buildpackage -a armhf

Building the runtime environment generates:

  • a libqt5custom package for armhf, installable on the target devices, containing the runtime Qt libraries and depending on the packages that they need to run;
  • a libqt5custom-dbgsym package for armhf with debugging symbols, to use for debugging on the target hardware.

If --no-strip was removed while generating the cross-build environment, the libqt5custom-dbgsym package with debugging symbols will not be generated.

Using the cross-build environment

These install their content in /opt, and are coinstallable with the version of Qt distributed in Debian, and with the custom Qt packages for amd64.

One needs to be careful not to create programs that link, either directly or indirectly, with more than one of these coinstalled Qt versions, because the in-memory layout of objects could be different and incompatible, causing unexpected results.

Selecting which Qt version to use: qtchooser

These Qt custom packages integrate with qtchooser to select the version of Qt to use at compile time.

qtchooser --list-versions lists available versions. One can choose what to use by exporting QT_SELECT:

# apt install qtchooser qt5-qmake qt5-default
$ qtchooser --list-versions
4
5
qt4-x86_64-linux-gnu
qt4
qt5-x86_64-linux-gnu
qt5
qt5custom-x86_64-linux-gnu
qt5custom
qt5custom-armhf-x86_64-linux-gnu
qt5custom-armhf

$ qmake --version
QMake version 3.1
Using Qt version 5.11.3 in /usr/lib/x86_64-linux-gnu

$ export QT_SELECT=qt5custom-armhf
$ qmake --version
QMake version 3.1
Using Qt version 5.15.0 in /opt/qt5custom-armhf/lib

Cross-building software using custom Qt

One just needs to export QT_SELECT=qt5custom-armhf in the environment, then proceed to build normally:

export QT_SELECT=qt5custom-armhf
fakeroot ./debian/rules clean binary

Or:

export QT_SELECT=qt5custom-armhf
qmake file.pro

If switching from one Qt to another, it is possible that the makefiles created by one qmake are not working well with the other. In that case, one can just remove them and regenerate them.

The build result is ready to be copied into, and run in, the target armhf device.

CryptogramFriday Squid Blogging: Human Cells with Squid-Like Transparency

I think we need more human organs with squid-like features.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianBits from Debian: DebConf20 moves online, DebConf21 will be in Haifa

The DebConf team has had to take the hard decision that DebConf 20 cannot happen in-person, in Haifa, in August, as originally planned. This decision is based on the status of the venue in Haifa, the local team's view of the local health situation, the existing travel restrictions and the results of a survey of potential participants.

DebConf 20 will be held online instead!

The Debian community can still get together to share ideas, discuss plans in Birds of a Feather sessions, and eat cheese, from the safety of the desks at home.

So, please submit your talk, sprint, and BoF proposals for DebConf 20 Online.

It will be held within the same dates, as before, 23-29 August. The DebConf team expects the event to be significantly shorter than a usual DebCamp + DebConf, but that will depend on the volume of proposals received.

Hopefully in 2021 we can once again hold conferences in person. DebConf 21 is scheduled to take place in Haifa. The following planned DebConfs will be held a year later than originally scheduled: 2022 in Kosovo and 2023 in Kochi, India.

See you online in August!

Planet DebianUlrike Uhlig: The right to demand change

Two women sit in an office, one asks: "What's the difference between being assertive and being aggressive?" The other replies: "Your gender." (Cartoon by Judy Horacek, 1999.)

When a person of a marginalized group (read: a person with less privilege, a person with lower rank) is being framed and blamed as being aggressive, she is being told that her behavior is unacceptable. Marginalized people have learnt that they need to comply to fit, and are likely to suppress their feelings. By being framed as aggressive, the marginalized person is also being told that what they are saying cannot be listened to because the way they are saying it does not comply with expectations. There is a word for this: tone policing. This great comic by Robot Hugs has all the important details. Tone policing is a silencing tactic in which privileged participants of a discussion one-sidedly define the terms of the conversation. This tactic has the interesting side effect of shifting the responsibility to prove that one is not {aggressive, hostile, explosive, a minefield, etc.} to the person being framed and blamed - proving that one is worthy to be listened to. (Some of those words are actual quotes taken from real life.)

Years ago, I worked in a company in which my female developer colleague would put herself in a state of overly expressed sorriness, all the while pretending to be stupid and helpless whenever she needed to ask anything from the sysadmins. When I confronted her with that, she replied: "I do it because it works." In the same company, another woman who generally asked assertively for what she needed ended up being insulted by one of the project managers using the word "dominatrix". While the example comes from my own experience, this kind of thing happens across any oppression/privilege boundaries.

In some conversations, be they verbal or written, frustration and anger of one person are sometimes being mistaken by the communication partner for aggressiveness. Why is this happening? Asking a person with privilege to see, question, or change their behavior, questions their privilege. I'm thinking that it might be that most people think of themselves as "being good"—and when they are being asked to question themselves or their behavior, their self-image is being challenged. "Me? But I did not do anything wrong! It's certainly not my fault if you are being oppressed! I sacrificed myself to reach my current position in life!" This comic by Toby Morris, "On a Plate", explains it quite nicely.

So are we stuck with seeing conversations derail? I'd argue instead that while anger and frustration are unpleasant feelings, they're important: they show us that our boundaries have been crossed, that we want something to change, to stop, or that we need something different right now. We have the right to be angry and to demand change.

CryptogramPhishing Attacks against Trump and Biden Campaigns

Google's threat analysts have identified state-level attacks from China.

I hope both campaigns are working under the assumption that everything they say and do will be dumped on the Internet before the election. That feels like the most likely outcome.

Worse Than FailureError'd: People also ask ...WTF?!

"Exactly which people are asking this question?" Jamie M. wrote.

 

"More like a friendly reminder that for one solid day, 2000 years ago, I was insured," Kyle B. writes.

 

Jordan R. wrote, "Ok, I'll bite, how can NULL solve pain points of private clouds?'

 

"Tell me Jira, is the 'undefined' key anywhere near the 'any' key?" writes Gary A.

 

Quentin G. wrote, "92% sales tax seems a bit high, but shipping? I don't live in Outer Mongolia!"

 

Mike S. writes, "Only $1M a year? Cheap &&%$#$!!"

 

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Planet DebianC.J. Adams-Collier: Recovering videos from DV tapes with Canon ZR80

I am recovering some tapes from back in the day that some of you may enjoy. Here is a log of the process so that maybe you can recover some of your own DV tapes. Seems to work well in modern Debian.

To attach to the camcorder, I used a PCI-e card that has an old firewire port and some ASIC on board. The PCI card came up and loaded the correct kernel drivers.

Here is a search link so that you can buy a similar card.

cjac@server0:~$ sudo lspci | grep 1394
b2:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev 46)

cjac@server0:~$ sudo lsmod | grep -i firewire
firewire_ohci 45056 0
firewire_core 81920 7 firewire_ohci
crc_itu_t 16384 1 firewire_core

The dvgrab program is available on Debian under the dvgrab package.
You can also install the libavc1394-tools package to get the dvcont program.

cjac@server0:~$ sudo apt-get install dvgrab libavc1394-tools

Turn the device to “VCR” mode, attach the firewire cable and wait about five minutes. Have you watered the cat today?

cjac@server0:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
cjac@server0:~$ uname -r
5.2.0-0.bpo.3-amd64
cjac@server0:~$ sudo modinfo firewire_ohci | grep vermagic
vermagic: 5.2.0-0.bpo.3-amd64 SMP mod_unload modversions

cjac@server0:~$ dvcont status
Winding stopped
cjac@server0:~$ dvcont rewind
cjac@server0:~$ dvcont status
Winding reverse
cjac@server0:~$ dvcont status
Winding stopped

# make a directory to store the raw dv tape data and the
# transcodings

cjac@server0:~$ mkdir -p /srv/nfs/cj.backup/dv/oscon2006

# I’ve found that each tape stores around 12 GB of raw data, so be
# sure to perform this on a partition with tens of gigs of spare
# space

cjac@server0:~$ cd /srv/nfs/cj.backup/dv/oscon2006
cjac@server0:/srv/nfs/cj.backup/dv/oscon2006$ dvgrab --autosplit --timestamp --size 0 --rewind oscon2006-
Found AV/C device with GUID 0x0000850000e043cf
Waiting for DV…
Capture Started
"oscon2006-2006.07.26_12-37-44.dv": 266.30 MiB 2327 frames timecode 00:01:17.26 date 2006.07.26 12:39:01
"oscon2006-2006.07.26_12-40-59.dv": 816.76 MiB 7137 frames timecode 00:05:16.01 date 2006.07.26 12:44:57
"oscon2006-2006.07.26_12-45-06.dv": 8420.56 MiB 73580 frames timecode 00:46:11.05 date 2006.07.26 13:26:01
"oscon2006-2006.07.26_13-32-08.dv": 2961.27 MiB 25876 frames timecode 00:00:00.00 date 2020.06.10 10:46:25
Capture Stopped

During the capture, the dvcont status will be “Playing”:

cjac@server0:/srv/nfs/cj.backup/dv/oscon2006$ dvcont status
Playing

In a different window of the screen session or I guess a new gnome-terminal, put together a transcoding environment.

cjac@server0:/srv/nfs/cj.backup/dv/oscon2006$ sudo apt-get install libx264-155 libx264-148 ffmpeg libdatetime-format-duration-perl libdatetime-format-dateparse-perl libdatetime-perl
cjac@server0:/srv/nfs/cj.backup/dv/oscon2006$ wget https://raw.githubusercontent.com/cjac/dvscripts/master/transcode.pl && chmod u+x transcode.pl
# review transcode.pl, change $prefix
./transcode.pl

The script will detect partial transcodes and do the right thing generally, so don’t worry too much about running ./transcode.pl too often.

Results are being stored in various places including

http://web.c9h.org/~cjac/perl/videos/

Planet DebianEnrico Zini: Qt5 custom build for amd64

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

First step, build Qt5 5.15 packages for amd64.

To prevent conflicting with Debian Qt packages, we'll install everything in /opt.

We can install qtchooser configuration files to allow developers to easily switch between Debian's standard Qt version or the custom version, at will.

The resulting packaging is at https://github.com/Truelite/qt5custom.

Set up sources

Open the source tarball, and add the amd64 packaging:

tar axf qt-everywhere-src-5.15.0.tar.xz
cp -a debian-amd64 qt-everywhere-src-5.15.0/debian

If needed, install the Qt license:

cp qt-license.txt ~/.qt-license

Install build dependencies

You can use apt build-dep to install dependencies manually:

cd qt-everywhere-src-5.15.0
apt build-dep .

Alternatively, you can create an installable .deb metapackage that depends on the build dependencies:

apt install devscripts
mk-build-deps debian-amd64/control
apt -f install qt-everywhere-src-build-deps_5.15.0-1_amd64.deb

Package build

The package is built by debian/rules, based on the excellent work done by the Debian Qt5 maintainers.

After installing the build dependencies, you can build like this:

cd qt-everywhere-src-5.15.0
fakeroot debian/rules binary

In debian/rules you can configure NUMJOBS with the number of available CPUs in the machine, to have parallel builds.

Build output

Building sources generates 4 packages:

  • libqt5custom: the runtime environment
  • libqt5custom-dbgsym: debugging symbols for the runtime environment
  • qtbase5custom-dev: the build environment
  • qtbase5custom-dev-dbgsym: debugging symbols for the build environment

qtbase5custom-dev and libqt5custom are needed for development; only libqt5custom is needed to run build programs.

Using custom Qt for amd64

These Qt custom packages install their content in /opt, and are coinstallable with the version of Qt distributed in Debian.

One needs to be careful not to create programs that link, either directly or indirectly, with both the Debian Qt and the custom Qt, because the in-memory layout of objects could be different and incompatible, causing unexpected results.

Selecting which Qt version to use: qtchooser

These Qt custom packages integrate with qtchooser to select the version of Qt to use at compile time.

qtchooser --list-versions lists available versions. One can choose what to use by exporting QT_SELECT:

# apt install qtchooser qt5-qmake qt5-default
$ qtchooser --list-versions
4
5
qt4-x86_64-linux-gnu
qt4
qt5-x86_64-linux-gnu
qt5
qt5custom-x86_64-linux-gnu
qt5custom

$ qmake --version
QMake version 3.1
Using Qt version 5.11.3 in /usr/lib/x86_64-linux-gnu

$ export QT_SELECT=qt5custom
$ qmake --version
QMake version 3.1
Using Qt version 5.15.0 in /opt/qt5custom/lib

Building software using custom Qt

One just needs to export QT_SELECT=qt5custom in the environment, then proceed to build normally:

export QT_SELECT=qt5custom
fakeroot ./debian/rules clean binary

Or:

export QT_SELECT=qt5custom
qmake file.pro

If switching from one Qt to another, it is possible that the Makefiles created by one qmake will not work with the other. In that case, one can simply remove and regenerate them.
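
For example (a sketch, assuming a qmake-based project with a file.pro as above):

```shell
# Remove Makefiles generated against the other Qt, then regenerate
# them with the selected version (file.pro is the project file from above)
rm -f Makefile
export QT_SELECT=qt5custom
qmake file.pro
```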

Planet DebianEnrico Zini: Custom build of Qt5

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

A customer needs a procedure for a custom build of Qt5 5.15, the last LTS release of Qt 5.

They develop for industrial systems that are managed by an amd64 industrial computer. This computer is accessed either through an attached panel touch screen, or through touch screens driven by Raspberry Pi clients connected via an internal ethernet network.

The control interfaces use mostly a full screen Qt5 application. The customer relies heavily on Qt5, has a full Enterprise license, and needs to stay on top of the most recent releases, to make use of new features or bug fixes that have made it upstream since the last Debian stable was released.

This is a list of requirements for this job:

  • Build .deb packages of the custom builds of Qt5, so they can be integrated with the existing provisioning infrastructure
  • Easily repackage hopefully at least new Qt minor versions
  • Custom builds should be coinstallable with the standard Debian Qt5 packages, to be able to use existing Qt-based Debian packages without rebuilding them
  • One needs to be able to develop custom widgets, and use them in the Form Editor in Qt Creator
  • One needs to be able to load custom widgets via .ui files at runtime
  • One needs to develop amd64 Qt5 applications
  • One needs to develop armhf Qt5 applications
  • One needs to develop armhf Qt5 applications from Qt Creator, with the nice feature it has to cross-compile them on the fast amd64 development machine, and run and debug them directly on a network-connected device
  • One needs to package the resulting amd64 or armhf binaries in .deb format, so they can be integrated with the existing provisioning infrastructure.

To make things easier, .deb packages are for internal use only and do not need to be compliant with Debian policy.

I estimate a difficulty level of: "Bring the One Ring to Mount Doom and remember to get milk on the way back".

The journey begins.

The resulting packaging is at https://github.com/Truelite/qt5custom.

Planet DebianMarkus Koschany: My Free Software Activities in May 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in June) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I decided to upgrade Nethack to version 3.6.6, which fixed several security vulnerabilities and a GCC 10 FTBFS bug. Unfortunately the Debian-specific lisp fork of Nethack is no longer compatible with the most recent changes. I could fix some errors but really didn’t want to maintain something that should better be upstreamed. I filed Debian bug #961932 because nethack-lisp is unusable now. In my opinion the lisp fork prevents more regular updates and it really needs a maintainer who likes to care for the code. But the best solution would be to merge the code upstream. Anyone interested in a challenge?
  • This month I could update a couple of games that haven’t seen much love in the past years, but to be fair, all of them still just worked fine. They just needed some modifications due to the switch to debhelper-compat = 13, or they could not be reproducibly built or cross-built from source. And then there were also some GCC 10 bugs that are currently severity normal but will become release-critical soon. So there was briquolo (#960386, reproducible-build patch by Chris Lamb), a 3D breakout game, empire (#957172, GCC-10), asc (#957013, GCC-10), asc-music, ace-of-penguins (#956976, GCC-10), foobillardplus (#914622, cross-build, patch by Helmut Grohne), vodovod (cross-build, patch by Helmut Grohne), holotz-castle (cross-build, patch by Helmut Grohne), kball (cross-build, patch by Helmut Grohne), zaz, an action puzzle game, xgalaga (cross-build, patch by Helmut Grohne), xmahjongg and plee-the-bear (Boost FTBFS, patch by Giovanni Mascellani and a cross-build issue, patch by Helmut Grohne).
  • I was contacted by Martin Gerhardy, upstream maintainer of caveexpress and former lead-developer of ufoai. He is currently working on a new free software voxel game engine and its tools. He asked me to take a look at the Debian packaging but I couldn’t promise to package it yet, although this is certainly something that interests me. I will provide some feedback for the preliminary Debian packaging though, which he has prepared already. In the meantime he released a new version of caveexpress and I hope that we can find a solution for a ufoai RC-bug quite soon, but at least before Debian freezes.
  • I sponsored bzflag and supertux for Reiner Herrman. Greatly appreciated!
  • Ryan Tandy contributed an overhauled mgba package, a Game Boy Advance emulator. Thanks a lot!
  • I also packaged new versions of hexalate, hitori and peg-e.

Debian Java

  • New upstream versions this month: undertow, jboss-xnio and libapache-mod-jk. The latter package contained a wrongly named file that prevented the apache tools a2enmod and a2dismod from symlinking that file. I corrected the error by preparing a stable point-update as well.

Misc

  • I packaged new versions of wabt, privacybadger and https-everywhere. I would like to update ublock-origin as well but the package is still stuck in the NEW queue. I don’t know why.
  • I packaged a new upstream version of xarchiver and applied a patch to address Debian bug #959914. There is still a problem with multi-part encrypted 7zip files but since it is already known upstream, I am confident there will be a fix eventually.

Debian LTS

This was my 51st month as a paid contributor and I have been paid to work 25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2209-1. Issued a security update for tomcat8 fixing 4 CVE. The update was delayed due to an error, which was not discovered by the test suite and a new CVE, CVE-2020-9484.
  • squid3: I have almost completed the update and prepared patches for 16 different security vulnerabilities in Stretch and Jessie. Due to the in part invasive changes I will publish a request for testing on the debian-lts mailing list first. If there are no negative reports, the update should happen next week now.
  • imagemagick: I am currently working on a complete update of the popular image manipulation program. I have already completed 10 patches but I intend to release a full update by the end of the month.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my 24th month and I have been paid to work 9.25 hours on ELTS.

  • ELA-232-1. Issued a security update for nss fixing 1 CVE.
  • ELA-233-1. Issued a security update for openjdk-7 fixing 1 CVE.
  • Prepared the last security update of linux for Wheezy. The new kernel will be available on Saturday, 13.06.2020, after it passes the usual tests.

Thanks for reading and see you next time.

Planet DebianAntoine Beaupré: CVE-2020-13777 GnuTLS audit: be scared

So CVE-2020-13777 came out while I wasn't looking last week. The GnuTLS advisory (GNUTLS-SA-2020-06-03) is pretty opaque so I'll refer instead to this tweet from @FiloSottile (Go team security lead):

PSA: don't rely on GnuTLS, please.

CVE-2020-13777 Whoops, for the past 10 releases most TLS 1.0–1.2 connections could be passively decrypted and most TLS 1.3 connections intercepted. Trivially.

Also, TLS 1.2–1.0 session tickets are awful.

You are reading this correctly: supposedly encrypted TLS connections made with affected GnuTLS releases are vulnerable to passive cleartext recovery attack (and active for 1.3, but who uses that anyways). That is extremely bad. It's pretty close to just switching everyone to HTTP instead of HTTPS, more or less. I would have a lot more to say about the security of GnuTLS in particular -- and security in general -- but I am mostly concerned about patching holes in the roof right now, so this article is not about that.

This article is about figuring out what, exactly, was exposed in our infrastructure because of this.

Affected packages

Assuming you're running Debian, this will show a list of packages that Depends on GnuTLS:

apt-cache --installed rdepends libgnutls30 | grep '^ ' | sort -u

This assumes you run this only on hosts running Buster or above. Otherwise you'll need to figure out a way to pick machines running GnuTLS 3.6.4 or later.
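
One way to do that is to check the installed libgnutls30 version directly — a sketch (not from the original post; dpkg --compare-versions does the version arithmetic):

```shell
# Flag hosts whose libgnutls30 is in the affected range (>= 3.6.4)
ver=$(dpkg-query -W -f='${Version}' libgnutls30 2>/dev/null)
if [ -n "$ver" ] && dpkg --compare-versions "$ver" ge 3.6.4; then
    echo "potentially vulnerable: libgnutls30 $ver"
fi
```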

Note that this list shows only first-level dependencies! It is perfectly possible that another package uses GnuTLS without being listed here. For example, in the above list I have libcurl3-gnutls, so to be really thorough, I would actually need to recurse down the dependency tree.
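
That recursion can be done with apt-cache's --recurse flag — a sketch (the output can be very large):

```shell
# Walk reverse dependencies transitively instead of one level deep
apt-cache --installed --recurse rdepends libgnutls30 | grep '^ ' | sort -u
```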

On my desktop, this shows an "interesting" list of targets:

  • apt
  • cadaver - AKA WebDAV
  • curl & wget
  • fwupd - another attack on top of this one
  • git (through the libcurl3-gnutls dependency)
  • mutt - all your emails
  • weechat - your precious private chats

Arguably, fetchers like apt, curl, fwupd, and wget rely on HTTPS for "authentication" more than secrecy, although apt has its own OpenPGP-based authentication so that wouldn't matter anyways. Still, this is truly distressing. And I haven't mentioned here things like gobby, network-manager, systemd, and others - the scope of this is broad. Hell, even good old lynx links against GnuTLS.

In our infrastructure, the magic command looks something like this:

cumin -o txt -p 0  'F:lsbdistcodename=buster' "apt-cache --installed rdepends libgnutls30 | grep '^ ' | sort -u" | tee gnutls-rdepds-per-host | awk '{print $NF}' | sort | uniq -c | sort -n

There, the result is even more worrisome, as those important packages seem to rely on GnuTLS for their transport security:

  • mariadb - all MySQL traffic and passwords
  • mandos - full disk encryption
  • slapd - LDAP passwords

mandos is especially distressing although it's probably not vulnerable because it seems it doesn't store the cleartext -- it's encrypted with the client's OpenPGP public key -- so the TLS tunnel never sees the cleartext either.

Other reports have also mentioned the following servers link against GnuTLS and could be vulnerable:

  • exim
  • rsyslog
  • samba
  • various VNC implementations

Not affected

Those programs are not affected by this vulnerability:

  • apache2
  • gnupg
  • python
  • nginx
  • openssh

This list is not exhaustive, naturally, but serves as an example of common software you don't need to worry about.

The vulnerability only exists in GnuTLS, as far as we know, so programs linking against other libraries are not vulnerable.

Because the vulnerability affects session tickets -- and those are set on the server side of the TLS connection -- only users of GnuTLS as a server are vulnerable. This means, for example, that while weechat uses GnuTLS, it will only suffer from the problem when acting as a server (which it does, in relay mode) or, of course, if the remote IRC server also uses GnuTLS. Same with apt, curl, wget, or git: it is unlikely to be a problem because it is only used as a client; the remote server is usually a webserver -- not git itself -- when using TLS.

Caveats

Keep in mind that just because a package links against GnuTLS doesn't mean it actually uses it. For example, I have been told that, on Arch Linux, if both GnuTLS and OpenSSL are available, the mutt package will use the latter, so it's not affected. I haven't confirmed that myself nor have I checked on Debian.

Also, because it relies on session tickets, there's a time window after which the ticket gets cycled and properly initialized. But that is apparently 6 hours by default so it is going to protect only really long-lasting TLS sessions, which are uncommon, I would argue.

My audit is limited. For example, it might have been better to walk the shared library dependencies directly, instead of relying on Debian package dependencies.
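
For reference, walking the shared-library dependencies directly could look something like this (my sketch, not part of the audit above):

```shell
# For each running process's binary, check whether it links libgnutls.
# Note: running ldd on untrusted binaries has its own risks.
for exe in /proc/[0-9]*/exe; do
    target=$(readlink "$exe" 2>/dev/null) || continue
    if ldd "$target" 2>/dev/null | grep -q libgnutls; then
        echo "$target"
    fi
done | sort -u
```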

Other technical details

It seems the vulnerability might have been introduced in this merge request, itself following an (entirely reasonable) feature request to make it easier to rotate session tickets. The merge request was open for a few months and was thoroughly reviewed by a peer before being merged. Interestingly, the vulnerable function (_gnutls_initialize_session_ticket_key_rotation) explicitly says:

 * This function will not enable session ticket keys on the server side. That is done
 * with the gnutls_session_ticket_enable_server() function. This function just initializes
 * the internal state to support periodical rotation of the session ticket encryption key.

In other words, it thinks it is not responsible for session ticket initialization, yet it is. Indeed, the merge request fixing the problem unconditionally does this:

memcpy(session->key.initial_stek, key->data, key->size);

I haven't reviewed the code and the vulnerability in detail, so take the above with a grain of salt.

The full patch is available here. See also the upstream issue 1011, the upstream advisory, the Debian security tracker, and the Redhat Bugzilla.

Moving forward

The impact of this vulnerability depends on the affected packages and how they are used. It can range from "meh, someone knows I downloaded that Debian package yesterday" to "holy crap my full disk encryption passwords are compromised, I need to re-encrypt all my drives", including "I need to change all LDAP and MySQL passwords".

It promises to be a fun week for some people at least.

Looking ahead, however, one has to wonder whether we should follow @FiloSottile's advice and stop using GnuTLS altogether. There are at least a few programs that link against GnuTLS because of the OpenSSL licensing oddities, but that was first announced in 2015, then definitely and clearly resolved in 2017 -- or maybe that was in 2018? Anyways it's fixed, pinky-promise-I-swear, except if you're one of those weirdos still using GPL-2, of course. Even though OpenSSL isn't the simplest and most secure TLS implementation out there, it could be preferable to GnuTLS and maybe we should consider changing Debian packages to use it in the future.

But then again, the last time something like this happened, it was Heartbleed and GnuTLS wasn't affected, so who knows... It is likely that people don't have OpenSSL in mind when they suggest moving away from GnuTLS and instead think of other TLS libraries like mbedtls (previously known as PolarSSL), NSS, BoringSSL, LibreSSL and so on. Not that those are totally sinless either...

"This is fine", as they say...

Planet DebianHolger Levsen: 20200611-stress-management

Stress management

I've got a note hanging in my kitchen which is from an unknown source. So while I still can share it happily, I sadly cannot give proper credit.

(Update: it was pointed out to me privately that the story is probably coming from Kathy Hadley, a life coach. Thanks for sharing, Kathy!)

It reads:

A psychologist walked around a room while teaching stress management to an
audience. As she raised a glass of water, everyone expected they'd be asked
the "half empty or half full" question. Instead, with a smile on her face, she
inquired: "How heavy is this glass of water?"

Answers called out ranged from 8oz to 20oz.

She replied, "The absolute weight doesn't matter. It depends on how long I
hold it. If I hold it for a minute, it's not a problem. If I hold if for an
hour, I'll have an ache in my arm. If I hold it for a day, my arm will feel
numb and paralyzed. In each case, the weight of the glass doesn't change, but
the longer I hold it, the heavier it becomes."

She continued, "The stresses and worries in life are like that glass of water.
Think about them for a while and nothing happens. Think about them a bit
longer and they will begin to hurt. And if you think about them all day long,
you will feel paralyzed - incapable of doing anything."

Remember to put the glass down.

Especially in times like these, do remember to put the glass down!

CryptogramAnother Intel Speculative Execution Vulnerability

Remember Spectre and Meltdown? Back in early 2018, I wrote:

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

That has turned out to be true. Here's a new vulnerability:

On Tuesday, two separate academic teams disclosed two new and distinctive exploits that pierce Intel's Software Guard eXtension, by far the most sensitive region of the company's processors.

[...]

The new SGX attacks are known as SGAxe and CrossTalk. Both break into the fortified CPU region using separate side-channel attacks, a class of hack that infers sensitive data by measuring timing differences, power consumption, electromagnetic radiation, sound, or other information from the systems that store it. The assumptions for both attacks are roughly the same. An attacker has already broken the security of the target machine through a software exploit or a malicious virtual machine that compromises the integrity of the system. While that's a tall bar, it's precisely the scenario that SGX is supposed to defend against.

Another news article.

Worse Than FailureThe Time-Delay Footgun

A few years back, Mike worked at Initech. Initech has two major products: the Initech Creator and the Initech Analyzer. The Creator, as the name implied, let you create things. The Analyzer could take what you made with the Creator and test them.

For business reasons, these were two separate products, and it was common for one customer to have many more Creator licenses than Analyzer licenses, or upgrade them each on a different cadence. But the Analyzer depended on the Creator, so someone might have two wildly different versions of both tools installed.

Initech wasn’t just incrementing the version number and charging for a new seat every year. Both products were under active development, with a steady stream of new features. The Analyzer needed to be smart enough to check what version of Creator was installed, and enable/disable the appropriate set of features. Which meant the Analyzer needed to check the version string.

From a user’s perspective, the version numbers were simple: a new version was released every year, numbered for the year. So the 2009 release was version 9, the 2012 was version 12, and so on. Internally, however, they needed to track finer-grained versions, patch levels, and whether the build was intended as an alpha, beta, or release version. This meant that they looked more like “12.3g31”.

Mike was tasked with prepping Initech Analyzer 2013 for release. Since the company used an unusual version numbering schema, they had also written a suite of custom version parsing functions, in the form: isCreatorVersion9_0OrLater, isCreatorVersion11_0OrLater, etc. He needed to add isCreatorVersion12_0OrLater.

“Hey,” Mike suggested to his boss, “I notice that all of these functions are unique, we could make a general version that uses a regex.”

“No, don’t do that,” his boss said. “You know what they say, ‘I had a problem, so I used regexes, now I have two problems.’ Just copy-paste the version 11 version, and use that. It uses string slicing, which performs way better than regex anyway.”

“Well, I think there are going to be some problems-”

“It’s what we’ve done every year,” his boss said. “Just do it. It’s the version check, don’t put any thought into it.”

“Like, I mean, really problems- the way it-”

His boss cut him off and spoke very slowly. “It is just the version check. It doesn’t need to be complicated. And we know it can’t be wrong, because all the tests are green.”

Mike did not just copy the version 11 check. He also didn’t use regexes, but patterned his logic off the version 11 check, with some minor corrections. But he did leave the version 11 check alone, because he wasn’t given permission to change that block of code, and all of the tests were green.

So how did isCreatorVersion11_0OrLater work? Well, given a version string like 9.0g57 or 10.0a12, or 11.0b9, it would start by checking the second character. If it was a ., clearly we had a single digit version number which must be less than 11. If the second character was a 0, then it must be 10, which clearly is also less than 11, and there couldn't possibly be any numbers larger than 11 which have a "0" as their second character. Any other number must be greater than or equal to 11.
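
The described check can be reconstructed as a short sketch (the function name and the language are mine; the article doesn't show the actual code):

```python
def is_creator_version_11_or_later(version: str) -> bool:
    """Hypothetical reconstruction of the flawed string-slicing check."""
    if version[1] == '.':   # "9.0g57": single-digit major, so < 11
        return False
    if version[1] == '0':   # "10.0a12": assumed to be 10, so < 11
        return False
    return True             # anything else assumed >= 11

# Correct for versions 9 through 19 ... but "20.0a1" also has '0' as its
# second character, so version 20 is misreported as older than 11.
```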

Mike describes this as a “time-delayed footgun”. Because it was “right” for about a decade. Unfortunately, Initech Analyzer 2020 might be having some troubles right now…

Mike adds:

Now, I no longer work at Initech, so unfortunately I can’t tell you the fallout of what happened when that foot-gun finally went off this year.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet DebianSandro Tosi: Installing prometheus snmp_exporter on a QNAP nas

I've this project in the background about creating a grafana dashboard for a QNAP nas.

As the summer is ramping up, i wanted to figure out the temperature of the nas throughout the day, so.. prometheus + grafana to the rescue!

I wrote the instructions to install snmp_exporter on a linux-based QNAP nas. For now i'm using an already existing dashboard, but it's the first step to create my own.

Planet DebianLouis-Philippe Véronneau: How to capture a remote IRC session live

DebConf20 will be held online this year and I've started doing some work for the DebConf videoteam to prepare what's to come.

One thing I want us to do is capture a live IRC session and use it as a video input in Voctomix, the live video mixer we use. This way, at the end of a talk we could show both the attendees asking questions on IRC and the presenter replying to them side-by-side.

A mockup of a side-by-side voctogui window with someone on the left and a terminal running weechat on the right

Capturing a live video of an IRC client on a remote headless server is somewhat more complicated than you might think; as far as I know, neither ffmpeg nor gstreamer support recording a live ssh pseudoterminal.

Worse, neither weechat nor irssi run on X: they use ncurses... Although you can capture an X11 window with ffmpeg -f x11grab, I wasn't able to get them to run with Xvfb.

Capturing the framebuffer

One thing I dislike with this method is the framebuffer isn't always easy to access on remote machines. If you don't have a serial connection, you can try using a VNC server that can access it.

I did my tests in a VM on a KVM hypervisor and used virt-manager to access the framebuffer.

I had a hard time setting the framebuffer resolution to a 16:9 aspect ratio. The winning combination ended up being to pass the nomodeset kernel parameter at boot and to set these parameters in /etc/default/grub:

GRUB_GFXMODE=1280x720
GRUB_GFXPAYLOAD_LINUX=keep

To make the text more readable, this is the /etc/default/console-setup file that seemed to make the most sense:

# CONFIGURATION FILE FOR SETUPCON

# Consult the console-setup(5) manual page.

ACTIVE_CONSOLES="/dev/tty[1-6]"

CHARMAP="UTF-8"

CODESET="Lat15"
FONTFACE="TerminusBold"
FONTSIZE="12x24"

Once that is done, the only thing left is to run the IRC client and launch ffmpeg. The magic command to record the framebuffer seems to be something like:

ffmpeg -f fbdev -framerate 60 -i /dev/fb0 -c:v libvpx -crf 10 -b:v 1M -auto-alt-ref 0 output.webm

Here is what I ended up with:

Planet DebianReproducible Builds (diffoscope): diffoscope 147 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 147. This version includes the following changes:

[ Chris Lamb ]

* New features:

  - Add output from strings(1) to ELF binaries. It is intended this will
    expose build paths that are hidden somewhere within the objdump(1)
    output. (Closes: reproducible-builds/diffoscope#148)
  - Add basic zsh shell tab-completion support.
    (Closes: reproducible-builds/diffoscope#158)

* Bug fixes:

  - Prevent a traceback when comparing a PDF document that does not contain
    any metadata, ie. it is missing a PDF "/Info" stanza.
    (Closes: reproducible-builds/diffoscope#150)
  - Fix compatibility with jsondiff 1.2.0 which was causing a traceback and
    log the version of jsondiff we are using to aid debugging in the future.
    (Closes: reproducible-builds/diffoscope#159)
  - Fix an issue in GnuPG keybox handling that left filenames in the diff.
  - Don't mask an existing test name; ie. ensure it is actually run.

* Reporting:

  - Log all calls to subprocess.check_output by using our own wrapper utility.
    (Closes: reproducible-builds/diffoscope#151)

* Code improvements:

  - Replace references to "WF" with "Wagner-Fischer" for clarity.
  - Drop a large number of unused imports (list_libarchive,
    ContainerExtractionError, etc.)
  - Don't assign exception to a variable that we do not use.
  - Compare string values with the equality operator, not via "is" identity.
  - Don't alias an open file to a variable when we don't use it.
  - Don't alias "filter" builtin.
  - Refactor many small parts of the HTML generation, dropping explicit
    u"unicode" strings, tidying the generation of the "Offset X, Y lines
    modified" messages, moving to PEP 498 f-strings where appropriate, etc.
  - Inline a number of single-used utility methods.

You find out more by visiting the project homepage.

,

Planet DebianJoey Hess: bracketing and async exceptions in haskell

I've been digging into async exceptions in haskell, and getting more and more concerned. In particular, bracket seems to be often used in ways that are not async exception safe. I've found multiple libraries with problems.

Here's an example:

withTempFile a = bracket setup cleanup a
  where
    setup = openTempFile "/tmp" "tmpfile"
    cleanup (name, h) = do
        hClose h
        removeFile name

This looks reasonably good, it makes sure to clean up after itself even when the action throws an exception.

But, in fact that code can leave stale temp files lying around. If the thread receives an async exception when hClose is running, it will be interrupted before the file is removed.

We normally think of bracket as masking exceptions, but it doesn't prevent async exceptions in all cases. See Control.Exception on "interruptible operations", which can receive async exceptions even when other exceptions are masked.

It's a bit surprising, but hClose is such an interruptible operation, because it flushes the write buffer. The only way to know is to read the code.
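
One direct, if heavy-handed, fix for this particular case is to make the whole cleanup uninterruptible. A sketch (mine, not from the original post; the trade-off is that the program cannot be interrupted while the buffer flushes):

```haskell
import Control.Exception (bracket, uninterruptibleMask_)
import System.IO (Handle, hClose, openTempFile)
import System.Directory (removeFile)

-- Like withTempFile above, but an async exception can no longer land
-- between hClose and removeFile, so no stale temp file is left behind.
withTempFile' :: ((FilePath, Handle) -> IO a) -> IO a
withTempFile' = bracket setup cleanup
  where
    setup = openTempFile "/tmp" "tmpfile"
    cleanup (name, h) = uninterruptibleMask_ $ do
        hClose h
        removeFile name
```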

It can be quite hard to determine if an operation is interruptible, since it can come down to whether it retries an STM transaction, or uses an MVar that is not always full. I've been auditing libraries and I often have to look at code several dependencies away, and even then may not be sure if a library has this problem.

  • process's withCreateProcess could fail to wait on the process, leaving a zombie. Might also leak file descriptors?

  • http-client's withResponse might fail to close a network connection. (If a MVar happened to be empty when it's called.)

    Worth noting that there are plenty of examples of using http-client to eg, race downloading two urls and cancel the slower download. Which is just the kind of use of an async exception that could cause a problem.

  • persistent's withSqlPool and withSqlConn might fail to clean up, when used with persistent-postgresql. (If another thread is using the connection and so a MVar over in postgresql-simple is empty.)

  • concurrent-output has some locking code that is not async exception safe. (My library, so I've fixed part of it, and hope to fix the rest.)

So far, around half of the libraries I've looked at that use bracket or onException or the like probably have this problem.

What can libraries do?

  • Document whether these things are async exception safe. Or perhaps there should be an expectation that "withFoo" always is, but if so the Haskell community has some work ahead of it.

  • Use finally. Good mostly in simple situations; more complicated things would be hard to write this way.

    hClose h `finally` removeFile name
    

  • Use uninterruptibleMask, but it's a big hammer and is often not the right tool for the job. If the operation takes a while to run, the program will not respond to ctrl-c during that time.

  • May be better to run the actions in worker threads, to insulate them from receiving any async exceptions.

    bracketInsulated :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
    bracketInsulated a b = bracket
      (uninterruptibleMask $ \u -> async (u a) >>= u . wait)
      (\v -> uninterruptibleMask $ \u -> async (u (b v)) >>= u . wait)
    
    (Note use of uninterruptibleMask here in case async itself does an interruptable operation. My first version got that wrong.. This is hard!)

My impression of the state of things now is that you should be very cautious using race or cancel or withAsync or the like, unless the thread is small and easy to audit for these problems. Kind of a shame, since I had wanted to be able to cancel a thread that is big and sprawling and uses all the libraries mentioned above.


This work was sponsored by Jake Vosloo and Graham Spencer on Patreon.

Planet DebianDirk Eddelbuettel: binb 0.0.6: Small enhancements

The sixth release of the binb package is now on CRAN. binb regroups four rather nice themes for writing LaTeX Beamer presentations much more easily in (R)Markdown. As a teaser, a quick demo combining all four themes follows; documentation and examples are in the package.

Via two contributed PRs, this release adds titlepage support via the YAML header for Metropolis, and suppresses nags about the changed natbib default. A little polish on the README and Travis rounds everything off.

Changes in binb version 0.0.6 (2020-06-10)

  • Support for YAML option titlegraphic was added in Metropolis (Andras Scraka in #23).

  • The README.md file received another badge (Dirk).

  • The natbib default value was updated to accommodate rmarkdown (Joseph Stachelek in #26).

  • Travis now uses R 4.0.0 and 'bionic' (Dirk).

CRANberries provides a summary of changes to the previous version. For questions or comments, please use the issue tracker at GitHub.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJonathan Dowland: Template Haskell and Stream-processing programs

I've written about what Template Haskell is, and given an example of what it can be used for, it's time to explain why I was looking at it in the context of my PhD work.

Encoding stream-processing programs

StrIoT is an experimental distributed stream-processing system that myself and others are building in order to explore our research questions. A user of StrIoT writes a stream-processing program, using a set of 8 functional operators provided for the purpose. A simple example is

streamFn :: Stream Int -> Stream Int
streamFn = streamFilter (<15)
         . streamFilter (>5)
         . streamMap (*2)

Our system is distributed: we take a stream-processing program and partition it into sub-programs, which are distributed to and run on separate nodes (perhaps cloud instances, or embedded devices like Raspberry Pis etc.). In order to do that, we need to be able to manipulate the stream-processing program as data. We've initially opted for a graph data-structure, with the vertices in the graph defined as

data StreamVertex = StreamVertex
    { vertexId   :: Int
    , operator   :: StreamOperator
    , parameters :: [String]
    , intype     :: String
    , outtype    :: String
    } deriving (Eq,Show)

A stream-processing program encoded this way, equivalent to the first example, looks like this:

path [ StreamVertex 0 Map    ["(*2)"]  "Int" "Int"
     , StreamVertex 1 Filter ["(>5)"]  "Int" "Int"
     , StreamVertex 2 Filter ["(<15)"] "Int" "Int"
     ]

We can easily manipulate instances of such types, rewrite them, partition them and generate code from them. Unfortunately, this is quite a departure from the first simple code example from the perspective of a user writing their program.
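To make the "generate code from them" step concrete, here is a minimal, self-contained sketch (not StrIoT's actual code generator — the types are repeated so the example runs on its own, and renderVertex is an illustrative helper):

```haskell
-- A minimal sketch of generating code text from the graph encoding.
-- Illustrative only; StrIoT's real generator handles much more.
data StreamOperator = Map | Filter deriving (Eq, Show)

data StreamVertex = StreamVertex
    { vertexId   :: Int
    , operator   :: StreamOperator
    , parameters :: [String]
    , intype     :: String
    , outtype    :: String
    } deriving (Eq, Show)

-- Render one vertex back into Haskell source text, e.g. "streamMap (*2)"
renderVertex :: StreamVertex -> String
renderVertex v = "stream" ++ show (operator v) ++ " " ++ unwords (parameters v)

main :: IO ()
main = putStrLn (renderVertex (StreamVertex 0 Map ["(*2)"] "Int" "Int"))
```

The string-based nature of this encoding is exactly what makes the later parts of the post interesting: the parameters are opaque strings that are only checked when the generated code is compiled.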

Template Haskell gives us the ability to manipulate code as a data structure, and also to inspect names to gather information about them (their type, etc.). I started looking at TH to see if we could build something where the user-supplied program was as close to that first case as possible.

TH limitations

There are two reasons that we can't easily manipulate a stream-processing definition written as in the first example. The following expressions are equivalent, in some sense, but are not equal, and so yield completely different expression trees when quasi-quoted:

[| streamFilter (<15) . streamFilter (>5) . streamMap (*2) |]
[| \s -> streamFilter (<15) (streamFilter (>5) (streamMap (*2) s)) |]
[| streamMap (*2) >>> streamFilter (>5) >>> streamFilter (<15) |]
[| \s -> s & streamMap (*2) & streamFilter (>5) & streamFilter (<15) |]
[| streamFn |] -- a named expression, defined outside the quasi-quotes

In theory, reify can give you the definition of a function from its name, but in practice it doesn't, because this was never implemented. So at the very least we would need to insist that a user included the entirety of a stream-processing program within quasi-quotes, and not split it up into separate bits, with some bits defined outside the quotes and references within (as in the last case above). We would probably have to insist on a consistent approach for composing operators together, such as always using (.) and never >>>, &, etc., which is limiting.
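The "equivalent but not equal" point is easy to demonstrate directly: quoting two expressions that mean the same thing yields structurally different trees. A small sketch (runQ works in IO for quotes that don't need reify):

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

main :: IO ()
main = do
  -- Two denotationally equivalent expressions...
  a <- runQ [| negate . abs |]
  b <- runQ [| \s -> negate (abs s) |]
  -- ...produce different Exp trees, so they compare unequal
  print (a == b)
```

Any tooling that wants to treat these as "the same program" has to normalise the trees itself, which is part of why restricting users to one composition style is tempting.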

Incremental approach

After a while ruminating on this, and before moving onto something else, I thought I'd try approaching it from the other side. Could I introduce some TH into the existing approach, and improve it? The first thing I've tried is to change the parameters field to TH's ExpQ, meaning the map instance example above would be

StreamVertex 0 Map [ [| (*2) |] ] "Int" "Int"

I worked this through. It's an incremental improvement in ease and clarity for the user writing a stream-processing program. It catches a class of programming bugs that would otherwise slip through: the expressions in the brackets have to be syntactically valid (although they aren't type-checked). Some of the StrIoT internals are also much improved, particularly the logical operator. Here's an excerpt from a rewrite rule that involves composing code embedded in strings, dealing with all the escaping rules and hoping we've accounted for all possible incoming expression encodings:

let f' = "(let f = ("++f++"); p = ("++p++"); g = ("++g++") in\
         \ \\ (a,b) v -> (f a v, if p v a then g b v else b))"
    a' = "("++a++","++b++")"
    q' = "(let p = ("++p++"); q = ("++q++") in \\v (y,z) -> p v y && q v z)"

And the same section after, manipulating ExpQ types:

let f' = [| \ (a,b) v -> ($(f) a v, if $(p) v a then $(g) b v else b) |]
    a' = [| ($(a), $(b)) |]
    q' = [| \v (y,z) -> $(p) v y && $(q) v z |]

I think the code-generation part of StrIoT could be radically refactored to take advantage of this change but I have not made huge inroads into that.
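The gain is easy to see in miniature. Composing two quoted predicates with splices, in the style of the q' rule above (a hedged sketch; the names are illustrative, not StrIoT's API):

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- Combine two quoted two-argument predicates into one, as in the q' rule:
-- no string concatenation, no escaping, and the pieces must at least parse.
combine :: ExpQ -> ExpQ -> ExpQ
combine p q = [| \v (y, z) -> $(p) v y && $(q) v z |]

main :: IO ()
main = do
  e <- runQ (combine [| (<) |] [| (>) |])
  putStrLn (pprint e)
```

pprint renders the combined expression with fully-qualified names and fresh variable names, which also hints at the naming machinery discussed in the companion post.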

Next steps

This is, probably, where I am going to stop. This work is very interesting to me but not the main thrust of my research. But incrementally improving the representation gave me some ideas of what I could try next:

  • intype and outtype could be TH Types instead of Strings. This would catch some simple problems like typos, etc., but we could possibly go further, and
  • remove the explicit in-and-out-types and infer their values from the parameters field, as it's an expression with some type that should match
  • parameters is a list, because the different stream operators have different arities. streamFilter has one parameter (the filter predicate), so the list should have one element in that case, but streamExpand has none, so it should be empty. We could collapse this to a single ExpQ, which encoded however many parameters are necessary, either in an internal list, or…
  • the operator field could be merged in too, so that the parameters expression was actually a call to the relevant operator with its parameters supplied.

The type would have collapsed down to

data StreamVertex = StreamVertex
    { vertexId   :: Int
    , opAndParams :: ExpQ
    } deriving (Eq,Show)

Example instances might be

StreamVertex 0 [| streamMap (*2) |]
StreamVertex 1 [| streamExpand |]
StreamVertex 2 [| streamScan (\c _ -> c+1) 0 |]

The vertexId field is a bit of a wart, but we require that due to the graph data structure that we are using. A change there could eliminate it, too. By this point we are not that far away from where we started, and certainly much closer to the "pure" function application in the very first example.

Planet DebianJonathan Dowland: template haskell

I've been meaning to write more about my PhD work for absolutely ages, but I've held myself back by wanting to try and keep a narrative running through the blog posts. That's not realistic for a number of reasons so I'm going to just write about different aspects of things without worrying about whether they make sense in the context of recent blog posts or not.

Part of what I am doing at the moment is investigating Template Haskell to see whether it would usefully improve our system implementation. Before I write more about how it might apply to our system, I'll first write a bit about Template Haskell itself.

Template Haskell (TH) is a meta-programming system: you write programs that are executed at compile time and can output code to be spliced into the parent program. The approach used by TH is really nice: you perform your meta-programming in real first-class Haskell, and it integrates really well with the main program.

TH provides two pairs of special brackets. Oxford brackets surrounding any Haskell expression cause the whole expression to be replaced by the result of parsing the expression — an expression tree — which can be inspected and manipulated by the main program:

[| \x -> x + 1 |]

The expression data-type is really a family of mutually-recursive data types that together represent the complete Haskell grammar. The top-level one is Exp, for expression, which has constructors for the different expression types. The above lambda expression is represented as

LamE [VarP x_1]
    (InfixE (Just (VarE x_1))
            (VarE GHC.Num.+)
            (Just (LitE (IntegerL 1))))

Such expressions can be pattern-matched against, constructed, deconstructed etc just like any other data type.
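For instance, a small runnable sketch of pattern-matching on the Exp constructors (using runQ in IO):

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- Inspect a quoted expression like any other algebraic data type
isLambda :: Exp -> Bool
isLambda (LamE _ _) = True
isLambda _          = False

main :: IO ()
main = do
  e <- runQ [| \x -> x + 1 |]
  print (isLambda e)
```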

The other bracket type performs the opposite operation: it takes an expression structure and splices it into code in the main program, to be compiled as normal:

λ> 1 + $( litE (IntegerL 1) )
2

The two are often intermixed, sometimes nested to several levels. What follows is a typical beginner TH meta-program. The standard function fst operates on a 2-tuple and returns the first value. It cannot operate on a tuple of a different arity. However, a meta-program can generate a version of fst specialised for an n-tuple of any n:

genfst n = do
    xs <- replicateM n (newName "x")
    let ntup = tupP (map varP xs)
    [| \ $(ntup) ->  $(varE (head xs)) |]

Used like so

λ> $(genfst 2) (1,2)
1
λ> $(genfst 3) ('a','b','c')
'a'
λ> :t $(genfst 10)
$(genfst 10) :: (a, b, c, d, e, f, g, h, i, j) -> a

That's a high-level gist of how you can use TH. I've skipped over a lot of detail, in particular an important aspect relating to scope and naming, which is key to the problem I am exploring at the moment. Oxford brackets and splice brackets do not operate directly on the simple Exp data-type, but upon an Exp within the Q Monad:

λ> :t [| 1 |]
[| 1 |] :: ExpQ

ExpQ is a synonym for Q Exp. Eagle-eyed Haskellers will have noticed that genfst above was written in terms of some Monad. And you might also have noticed the case discrepancy between constructors like VarE and the lower-case varE, tupP, varP used in that function definition. The latter are convenience functions that wrap the relevant constructor in Q. The point of the Q Monad is (I think) to handle name scoping, and avoid unintended name clashes. Look at the output of these simple expressions, passed through runQ:

λ> runQ [| \x -> x |]
LamE [VarP x_1] (VarE x_1)
λ> runQ [| \x -> x |]
LamE [VarP x_2] (VarE x_2)

Those two xs are not the same x, even though both were evaluated in the same context (a GHCi session). And that's the crux of the problem I am exploring. More in a later blog post!

Planet DebianEvgeni Golov: show your desk

Some days ago I posted a picture of my desk on Mastodon and Twitter.

standing desk with a monitor, laptop etc

After that I got multiple questions about the setup, so I thought "Michael and Michael did posts about their setups, you could too!"

And well, here we are ;-)

desk

The desk is a Flexispot E5B frame with a 200×80×2.6cm oak table top.

The Flexispot E5 (the B stands for black) is a rather cheap (as in not expensive) standing desk frame. It has a retail price of 379€, but you can often get it as low as 299€ on sale.

Add a nice table top from a local store (mine was like 99€), a bit of wood oil and work and you get a nice standing desk for less than 500€.

The frame has three memory positions, but I only use two: one for sitting and one for standing. There is also a "change position" timer that I have never used so far.

The table top has a bit of a swing when in standing position (mine is at 104cm according to the electronics in the table), but not enough to disturb typing on the keyboard or thinking. I certainly wouldn't place a sewing machine up there, but that was not a requirement anyways ;)

To compare: the IKEA Bekant table has a similar, maybe even slightly stronger swing.

chair

Speaking of IKEA… The chair is an IKEA Volmar. They don't seem to sell it since mid 2019 anymore though, so no link here.

hardware

laptop

A Lenovo ThinkPad T480s, i7-8650U, 24GB RAM, running Fedora 32 Workstation. Just enough power while not too big and heavy. Full of stickers, because I ♥ stickers!

It's connected to a Lenovo ThinkPad Thunderbolt 3 Dock (Gen 1). After 2 years with that thing, I'm still not sure what to think about it, as I had various issues with it over time:

  • the internal USB hub just vanishing from existence until a full power cycle of the dock was performed, but that might have been caused by my USB-switch which I recently removed.
  • the NIC negotiating at 100MBit/s instead of 1000MBit/s and then keeping on re-negotiating every few minutes, disconnecting me from the network, but I've not seen that since the Fedora 32 upgrade.
  • the USB-attached keyboard not working during boot as it needs some Thunderbolt magic.

The ThinkPad stands on an Adam Hall Stands SLT001E, a rather simple stand for laptops and other equipment (primarily made for DJs I think). The Dock fits exactly between the two feet of the stand, so that is nice and saves space on the table. Using the stand I can use the laptop screen as a second screen when I want it - but most often I do not and have the laptop lid closed while working.

workstation

A Lenovo ThinkStation P410, Xeon E5-2620 v4, 96GB RAM, running Fedora 32 Workstation. That's my VM playground. Having lots of RAM really helps if you need/want to run many VMs with Foreman/Katello or Red Hat Satellite as they tend to be a bit memory hungry, and throwing hardware at problems tends to be an easy solution for many of them.

The ThinkStation is also connected to the monitor, and I used to have a USB switch to flip my keyboard, mouse and Yubikey from the laptop to the workstation and back. But as noted above, this switch somehow made the USB hub in the laptop dock unhappy (maybe because I was switching too quickly after resume or so), so it's currently removed from the setup and I use the workstation via SSH only.

It's mounted under the table using a ROLINE PC holder. You won't get any design awards with it, but it's easy to assemble and allows the computer to move with the table, minimizing the number of cables that need to have a flexible length.

monitor

The monitor is an older Dell UltraSharp U2515H - a 25" 2560×1440 model. It sits on an Amazon Basics Monitor Arm (which is identical to an Ergotron LX to the best of my knowledge) and is accompanied by a Dell AC511 soundbar.

I don't use the adjustable arm much. It's from the time I had no real standing desk and would use the arm and a cardboard box to lift the monitor and keyboard to a standing level. If you don't want to invest in a standing desk, that's the best and cheapest solution!

The soundbar is sufficient for listening to music while working and for chatting with colleagues.

webcam

A Logitech C920 Pro, what else?

Works perfectly under Linux with the UVC driver and has rather good microphones. Actually, so good that I never use a headset during video calls and so far nobody complained about bad audio.

keyboard

A ThinkPad Compact USB Keyboard with TrackPoint. The keyboard matches the one in my T480s, so my brain doesn't have to switch. It was awful when I still had the "old" model and had to switch between the two.

UK layout. Sue me. I like the big return key.

mouse

A Logitech MX Master 2.

I got the MX Revolution as a gift a long time ago, and at first I was like: WTF, why would anyone pay a hundred bucks for a mouse?! Well, after some time I knew: it's just that good. And when it was time to get a new one (the rubber coating gets all slippery after some time) the decision was rather easy.

I'm pondering if I should try the MX Ergo or the MX Vertical at some point, but not enough to go and buy one of them yet.

other

notepad

I'm terrible at remembering things, so I need to write them down. And I'm terrible at remembering to look at my notes, so they need to be in my view. So there is a regular A5 notepad on my desk, that gets filled with check boxes and stuff, page after page.

coaster

It's a wooden table, you don't want to have liquids on it, right? Thankfully a friend of mine once made coasters out of old Xeon CPUs and epoxy. He gave me one in exchange for a busted X41 ThinkPad. I still think I made the better deal ;)

yubikey

Keep your secrets safe! Mine is used as a GnuPG smart card for both encryption and SSH authentication, U2F on various pages and 2FA for VPN.

headphones

I own a pair of Bose QuietComfort 25 with an aftermarket Bluetooth adapter and Anker SoundBuds Slim+. Both are rather rarely used while working, as my office is usually quiet and no one is disturbed when I listen to music without headphones.

what's missing?

light

I want to add more light to the setup, both to have a better picture during video calls and to have better light when doing something else on the table - like soldering. The plan is to add an IKEA Tertial with some Trådfri smart LED in it, but the Tertial is currently not available for delivery at IKEA and I'm not going to visit one in the current situation.

bigger monitor

Currently pondering getting a bigger (27+ inch) 4K monitor. Still can't really decide which one to get. There are so many, and they all differ in some way. But it seems no affordable one offers an integrated USB switch and a sufficient number of USB ports, so I'll probably get whatever can get me a good picture without any extra features at a reasonable price.

Changing the monitor will probably also mean rethinking the sound output, as I'm sure mounting the Dell soundbar to anything but the designated 5 year old monitor won't work too well.

Sociological ImagesViral Votes & Activism in the New Public Sphere

It is a strange sight to watch politicians working to go viral. Check out this video from the political nonprofit ACRONYM, where Alexis Magnan-Callaway — the Digital Mobilization Director of Kirsten Gillibrand’s presidential campaign — talks us through some key moments on social media. 

Social media content has changed the rules of the game for getting attention in the political world. An entire industry has sprung up around going viral professionally, and politicians are putting these new rules to use for everything from promoting the Affordable Care Act to breaking Twitter's use policy.

In a new paper out at Sociological Theory with Doug Hartmann, I (Evan) argue that part of the reason this is happening is due to new structural transformations in the public sphere. Recent changes in communication technology have created a situation where the social fields for media, politics, academia, and the economy are now much closer together. It is much easier for people who are skilled in any one of these fields to get more public attention by mixing up norms and behaviors from the other three. Thomas Medvetz called people who do this in the policy world “jugglers,” and we argue that many more people have started juggling as well. 

Arm-wrestling a constituent is a long way from the Nixon-Kennedy debates, but there are institutional reasons why this shouldn’t surprise us. Juggling social capital from many fields means that social changes start to accelerate, as people can suddenly be much more successful by breaking the norms in their home fields. Politicians can get electoral gains by going viral, podcasts take off by talking to academics, and ex-policy wonks suddenly land coveted academic positions.


Another good example of this new structural transformation in action is Ziad Ahmed, a Yale undergraduate, business leader, and activist. At the core of his public persona is an interesting mix of both norm-breaking behavior and carefully curated status markers for many different social fields. 

In 2017, Ahmed was accepted to Yale after writing "#BlackLivesMatter" 100 times; this was contemporaneously reported by outlets such as NBC News, CNN, Time, The Washington Post, Business Insider, HuffPost, and Mashable.

A screenshot excerpt of Ahmed’s bio statement from his personal website

Since then, Ahmed has cultivated a long biography featuring many different meaningful status markers: his educational institution; work as the CEO of a consulting firm; founding of a diversity and inclusion organization; a Forbes “30 Under 30” recognition; Ted Talks; and more. The combination of these symbols paints a complex picture of an elite student, activist, business leader, and everyday person on social media. 

Critics have called this mixture “a super-engineered avatar of corporate progressivism that would make even Mayor Pete blush.” We would say that, for better or worse, this is a new way of doing activism and advocacy that comes out of different institutional conditions in the public sphere. As different media, political, and academic fields move closer together, activists like Ahmed and viral moments like those in the Gillibrand campaign show how a much more complicated set of social institutions and practices are shaping the way we wield public influence today.

Bob Rice is a PhD student in sociology at UMass Boston. They’re interested in perceptions of authority, social movements, culture, stratification, mental health, and digital methods. 

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramAvailability Attacks against Neural Networks

New research on using specially crafted inputs to slow down machine-learning neural network systems:

Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief -- from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.

So far, our most spectacular results are against NLP systems. By feeding them confusing inputs we can slow them down over 100 times. There are already examples in the real world where people pause or stumble when asked hard questions but we now have a dependable method for generating such examples automatically and at scale. We can also neutralize the performance improvements of accelerators for computer vision tasks, and make them operate on their worst case performance.

The paper.

Worse Than FailureCodeSOD: Sort Yourself Out

Object-Relational-Mappers (ORMs) are a subject of pain, pleasure, and flamewars. On one hand, they make it trivially easy to write basic persistence logic, as long as it stays basic. But they do this by concealing the broader powers of relational databases, which means that an ORM is a leaky abstraction. Used incautiously or inappropriately, and they stop making your life easy, and make it much, much harder.

That’s bad, unless you’re Tenesha’s co-worker, because you apparently want to suffer.

In addition to new products, Tenesha’s team works on a legacy Ruby-on-Rails application. It’s an ecommerce tool, and thus it handles everything from inventory to order management to billing to taxes.

Taxes can get tricky. Each country may have national tax rates. Individual cities may have different ones. In their database, they have a TaxesRate table which tracks the country name, the city name, the region code name, and the tax rate.

You’ll note that the actual database is storing names, which is problematic when you need to handle localizations. Is it Spain or España? The ugly hack to fix this is to have a lookup file in YAML, like so:

i18n_keys:
  es-ca: "Canary Islands"
  es: "Spain"
  eu: "Inside European Union"
  us: "United States"
  ar: "Argentina"
  la: "Latin America"
  as-oc: "Asia & Oceania"
  row: "Rest of the world"

Those are just the countries, but there were similar structures for regions, cities, and so on.

The YAML file took on more importance when management decided that the sort order of the tax codes within a region needed to be a specific order. They didn’t want it to be sorted alphabetically, or by date added, or by number of orders, or anything: they had a specific order they wanted.

So Tenesha’s co-worker had a bright idea: they could store the lookup keys in the YAML file in the order specified. It meant they didn’t have to add or manage a sort_order field in the database, which sounded easier to them, and would be easier to implement, right?

Well, no. There’s no easy way to tell an SQL order-by clause to sort in an arbitrary order. But our intrepid programmer was using an ORM, so they didn’t need to think about little details like “connecting to a database” or “worrying about round trips” or “is this at all efficient”.

So they implemented it this way:

  # Order locations by their I18n registers to make it easier to reorder
  def self.order_by_location(regions)
    codes = I18n.t("quotation.selectable_taxes_rate_locations").keys.map{ |k| k.to_s }
    regions_ordered = []

    codes.each do |code|
      regions_ordered.push(regions.where(region_code: code))
    end

    # Insert the codes that are not listed at the end
    regions_ordered.push(regions.where("region_code NOT IN (?)", codes)).flatten
  end

This is called like so:

# NOTE: TaxesRate.all_rates returns all records with unique region codes,
#   ignoring cities; something like `TaxesRate.distinct(:region_code)`.
regions = order_by_location(TaxesRate.all_rates)

We should be thankful that they didn't find a way to make this execute N² queries, but as it is, it needs to execute N+1 queries.

First, we pull the rate locations from our internationalization YAML file. Then, for each region code, we run a query to fetch the tax rate for that one region code. This is one query for each code. Based on the internationalization file, it’s just the codes for one country, but that can still be a large number. Finally, we run one final query to fetch all the other regions that aren’t in our list.

This fetches the tax code for all regions, sorted based on the sort order in the localization file (which does mean each locale could have a different sort order, a feature no one requested).

Tenesha summarizes it:

So many things done wrong; in summary:
* Country names stored with different localization on the same database, instead of storing country codes.
* Using redundant data for storing region codes for different cities.
* Hard-coding a new front-end feature using localization keys order.
* Performing N+1 queries to retrieve well known data.

Now, this was a legacy application, so when Tenesha and her team went to management suggesting that they fix this terrible approach, the answer was “Nope!” It was a legacy product, and was only going to get new features and critical bug fixes.

Tenesha scored a minor victory: she did convince them to let her rewrite the method so that it fetched the data from the database and then sorted using the Array#index method, which still wasn’t great, but was far better than hundreds of database round trips.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Krebs on SecurityMicrosoft Patch Tuesday, June 2020 Edition

Microsoft today released software patches to plug at least 129 security holes in its Windows operating systems and supported software, by some accounts a record number of fixes in one go for the software giant. None of the bugs addressed this month are known to have been exploited or detailed prior to today, but there are a few vulnerabilities that deserve special attention — particularly for enterprises and employees working remotely.

June marks the fourth month in a row that Microsoft has issued fixes to address more than 100 security flaws in its products. Eleven of the updates address problems Microsoft deems “critical,” meaning they could be exploited by malware or malcontents to seize complete, remote control over vulnerable systems without any help from users.

A chief concern among the panoply of patches is a trio of vulnerabilities in the Windows file-sharing technology (a.k.a. Microsoft Server Message Block or “SMB” service). Perhaps most troubling of these (CVE-2020-1301) is a remote code execution bug in SMB capabilities built into Windows 7 and Windows Server 2008 systems — both operating systems that Microsoft stopped supporting with security updates in January 2020. One mitigating factor with this flaw is that an attacker would need to be already authenticated on the network to exploit it, according to security experts at Tenable.

The SMB fixes follow closely on news that proof-of-concept code was published this week that would allow anyone to exploit a critical SMB flaw Microsoft patched for Windows 10 systems in March (CVE-2020-0796). Unlike this month’s critical SMB bugs, CVE-2020-0796 does not require the attacker to be authenticated to the target’s network. And with countless company employees now working remotely, Windows 10 users who have not yet applied updates from March or later could be dangerously exposed right now.

Microsoft Office and Excel get several updates this month. Two different flaws in Excel (CVE-2020-1225 and CVE-2020-1226) could be used to remotely commandeer a computer running Office just by getting a user to open a booby-trapped document. Another weakness (CVE-2020-1229) in most versions of Office may be exploited to bypass security features in Office simply by previewing a malicious document in the preview pane. This flaw also impacts Office for Mac, although updates are not yet available for that platform.

After months of giving us a welcome break from patching, Adobe has issued an update for its Flash Player program that fixes a single, albeit critical security problem. Adobe says it is not aware of any active exploits against the Flash flaw. Mercifully, Chrome and Firefox both now disable Flash by default, and Chrome and IE/Edge auto-update the program when new security updates are available. Adobe is slated to retire Flash Player later this year. Adobe also released security updates for its Experience Manager and Framemaker products.

Windows 7 users should be aware by now that while a fair number of flaws addressed this month by Microsoft affect Windows 7 systems, this operating system is no longer being supported with security updates (unless you’re an enterprise taking advantage of Microsoft’s paid extended security updates program, which is available to Windows 7 Professional and Windows 7 enterprise users).

Before you update with this month's patch batch, please make sure you have backed up your system and/or important files. It's not uncommon for a wonky Windows update to hose one's system or prevent it from booting properly, and some updates have even been known to erase or corrupt files. So do yourself a favor and back up before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And if you wish to ensure Windows has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches on its own schedule, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Further reading:

AskWoody and Martin Brinkmann on Patch Tuesday fixes and potential pitfalls

Trend Micro’s Zero Day Initiative June 2020 patch lowdown

U.S-CERT on Active Exploitation of CVE-2020-0796

,

Planet DebianIngo Juergensmann: Jabber vs. XMPP

XMPP is widely - and maybe better - known as Jabber. This was more or less the same until Cisco bought Jabber Inc and the trademark. You can read more about the story on the XMPP.org website. But is there still a Jabber around? Yes, there is!

But Cisco Jabber is a whole infrastructure environment: you can't use the Cisco Jabber client on its own without the other required Cisco infrastructure, such as Cisco CUCM and Cisco IM&P servers. So you can't just set up Prosody or ejabberd on your Debian server and connect Cisco Jabber to it. But what are the differences between Cisco Jabber and "standard" XMPP clients?

Cisco Jabber

The above screenshot from the official Cisco Jabber product webpage shows the new, single view layout of the Cisco Webex Teams client, but you can configure the client to have the old, classic split view layout of Contact List and Chat Window. But as you can already see from the above screenshot, audio & video calls are among the core functions of Cisco Jabber, whereas this feature has only lately been added to the well-known Conversations XMPP client on Android. Conversations uses the Jingle extension to XMPP, whereas Jabber uses SIP for voice/video calls. You can even use Cisco Jabber to control your deskphone via CTI, which is a quite common setup for Jabber. In fact you can configure Jabber to be just a CTI client to your phone, or a fully featured UC client.

When you don't want to run Cisco's full set of on-premise servers, you can also use Cisco Jabber in conjunction with Cisco Webex as Cisco Webex Messenger, or in conjunction with Webex Teams in Teams Messaging Mode. Last month Cisco announced general availability of XMPP federation for Webex Teams/Jabber in Teams Messaging Mode. With that you get basic functionality in Webex Teams. And when I say "basic" I really mean basic: only 1:1 chat, no group chats (MUC) and no presence status. Hopefully this is just the beginning and not the end of XMPP support in Webex Teams.

XMPP Clients

Well, I'm sure many of you know "normal" XMPP clients such as Gajim or Dino on Linux, Conversations on Android, or Siskin/Monal/ChatSecure on Apple iOS. There are plenty of other clients, of course, and maybe you have used an XMPP client in the past without knowing it. For example, Jitsi Meet is based on XMPP, and you can still download the Jitsi Desktop client and use it as a full-featured multi-protocol client, e.g. for XMPP and SIP. Jitsi Desktop is maybe the client that comes closest to Cisco Jabber as a chat/voice/video client; I have even connected Jitsi Desktop to a Cisco CUCM/IM&P infrastructure. Of course you won't be able to use all those Cisco proprietary extensions, but you can see the benefit of open, standardized protocols such as XMPP and SIP: you are free to use any standards-compliant client that you want.

So, while Jitsi has supported voice/video calls for a long time, even before the project focused on Jitsi Meet as a WebRTC-based conference service, Conversations only added this feature last month, as already stated. This had a huge effect on the whole XMPP federation, because these audio/video calls require an XMPP server that supports XEP-0215. The well-known Compliance Tester initially listed the STUN/TURN features as "Informational Tests", but quickly made them mandatory for gaining 100%. You still cannot place SIP calls to other sites, though, because that's a different thing.

As many of you are familiar with standard XMPP clients, I'll now focus on some similarities and differences between Cisco Jabber and standard XMPP...

Similarities & Differences

First, you can federate with Cisco Jabber users. Cisco IM&P can use standard XMPP federation with all other standards-compliant XMPP servers. This is really a big benefit and far better than other solutions, which usually result in vendor lock-in. Depending on the setup, you can even join MUCs (Multi User Chats), which Cisco calls "Persistent Chat Rooms", from your own XMPP client. The other way around is not that simple: it is basically possible to join a MUC on a random server with Cisco Jabber, but it is not as easy as you might think. Cisco Jabber simply lacks a way to enter a room JID (such as those you can find on https://search.jabber.network/). Instead you need to be added as a participant by a moderator or an admin of that third-party MUC.

Managed File Transfer is another issue. Cisco Jabber supports peer-to-peer file transfers and Managed File Transfer, where the uploaded file gets transferred to an SFTP server as the storage backend and the IM&P server handles the transfer via HTTPS. You can find a schematic drawing in the Configuration Guides. Although it appears similar to HTTP Upload as defined in XEP-0363, it is not very likely that the two will interoperate. I haven't tested it yet, because in my test scenario there is a gatekeeper in the path: Cisco Expressway doesn't yet support Managed File Transfer, but you can upvote the idea in Cisco's ideas management, along with other ideas such as OMEMO support.
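For comparison, XEP-0363 takes a much simpler approach: the client asks the server for an upload slot via an IQ stanza and then uploads the file with an HTTP PUT to the returned URL. A slot request looks roughly like this (the addresses and file details below are illustrative, taken from the style of the XEP's own examples):

```xml
<!-- Client requests an HTTP upload slot (XEP-0363); values are made up -->
<iq from='romeo@montague.tld/garden'
    id='step_03'
    to='upload.montague.tld'
    type='get'>
  <request xmlns='urn:xmpp:http:upload:0'
           filename='vacation.jpg'
           size='23456'
           content-type='image/jpeg'/>
</iq>
```

The server replies with a slot containing a PUT URL for the upload and a GET URL, which the sender then simply shares in the chat.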

OMEMO support? Yes: there is currently no end-to-end encryption (E2EE) planned for Cisco Jabber, while it is common nowadays in most modern XMPP clients. I think it would be good for Cisco Jabber to also (optionally) support OMEMO or its successor. Messaging clients without E2EE are no longer state of the art.

Whereas Conversations is the de facto standard on Android, Apple iOS devices still lack a similarly well-working client. See my blog post "XMPP - Fun with Clients" for a summary. In that regard Cisco Jabber might be, to some degree, the best XMPP client for iOS: you get working messaging, voice/video calls, push notifications and integration with Apple's CallKit.

There are most likely many, many more differences and issues between Cisco Jabber and standards-compliant XMPP servers and clients. But basically Cisco Jabber is still based on XMPP, extended with proprietary additions.

Summary

While I have the impression that the free clients and servers are doing well and have seen increased development in the past years (thanks to Conversations and the Compliance Tester), the situation of Cisco Jabber is a little different. As a customer you can sometimes get the impression that Cisco has lost interest in developing Cisco Jabber. It got better in recent years, but when Cisco Spark was introduced some years ago, the impression was that Cisco was heavily focused on Spark (now: Webex Teams). It's not that Cisco isn't listening to customers or that development of Jabber has stopped, but my impression is that most customers don't give feedback or tell Cisco as the vendor what they want. You can either submit ideas via the Collaboration Customer Ideas Tool or provide feedback via your Cisco and partner channels.

I think it is important for the XMPP community to also have a large enterprise level vendor like Cisco. Otherwise the Internet will become more and more an Internet of closed silos like MS Teams, Slack, Facebook, etc. Of course there are other companies like ProcessOne (ejabberd) or Tigase, but I think you agree that Cisco is another level.


Planet DebianJulian Andres Klode: Review: Chromebook Duet

Sporting a beautiful 10.1” 1920x1200 display, the Lenovo IdeaPad Duet Chromebook (or Duet Chromebook for short) is one of the latest Chromebooks released and one of the few slate-style tablets, and it costs only about 300 EUR (300 USD). I’ve had one for about 2 weeks now, and here are my thoughts.

Build & Accessories

The tablet is a fairly Pixel-style affair, in that the back has two components: a softer blue one housing the camera and a gray one with a metallic feel. Build quality is fairly good.

The volume and power buttons are located on the right side of the tablet, and this is one of the main issues: You end up accidentally pressing the power button when you want to turn your volume lower, despite the power button having a different texture.

Alongside the tablet, you also find a kickstand with a textile back and a keyboard, both of which attach via magnets (plus pogo pins for the keyboard). The keyboard is cramped, with punctuation keys halved in size, and it feels mushy compared to my usual experiences of ThinkPads and Model Ms, but it’s on par with other Chromebooks, which is surprising given that it’s a tablet attachment.

fully assembled chromebook duet


I mostly use the Duet as a tablet, and only attach the keyboard occasionally. Typing with the keyboard on your lap is suboptimal.

My first Duet had a few clusters of dead pixels, so I returned it, as I had also ordered a second one that I could not cancel. Oh dear. That one was fine!

Hardware & Connectivity

The Chromebook Duet is powered by a Mediatek Helio P60T SoC, 4GB of RAM, and a choice of 64 or 128 GB of main storage.

The tablet provides one USB-C port used for charging, audio output (a 3.5mm adapter is provided in the box), USB hub functionality, and video output; sadly, the latter is restricted to a maximum of 1080p30, or 1440x900 at 60 Hz. It can be charged with the included 10W charger, or draw up to (I believe) 18W from a higher-powered USB-C PD charger. I’ve successfully used the Chromebook with a USB-C monitor with attached keyboard, mouse, and DAC without any issues.

On the wireless side, the tablet provides 2x2 WiFi AC and Bluetooth 4.2. WiFi reception seemed just fine, though I have not done any speed testing, lacking a sensible connection to test against at the moment. I used Bluetooth to connect to my smartphone for instant tethering, and to my Sony WH1000XM2 headphones, both of which worked without any issues.

The screen is a bright 400-nit, 1920x1200 display with excellent viewing angles, and the speakers do a decent job, meaning you can easily use this for watching a movie when you’re alone in a room and idling around.

The device supports styluses following the USI standard. As of right now, the only such stylus I know about is an HP one, which costs about 70€.

Cameras are provided on the front and the rear, but produce terrible images.

Software: The tablet experience

The Chromebook Duet runs Chrome OS, and comes with access to Android apps using the play store (and sideloading in dev mode) and access to full Linux environments powered by LXD inside VMs.

The screen’s native 1920x1200 is scaled to a ridiculous 1080x675 by default, which is good for being able to tap buttons and the like, but leaves next to no room for content. Scaling it to 1350x844 makes things more balanced.

The Linux integration is buggy. Touches register in different places than where they happened, and the screen is cut off in full screen extremetuxracer, making it hard to recommend for such uses.

Android apps generally work fine. There are some issues with the back gesture not registering, but otherwise I have not found issues I can remember.

One major drawback as a portable media consumption device is that Android apps only get Widevine level 3, and hence do not have access to HD content, and the web apps of Netflix and co do not support downloading. Though one of the Duets actually reported L1 in checker apps at some point (reported in issue 1090330). It’s also worth noting that Amazon Prime Video only renders in SD, unless you change your user agent to say you are Chrome on Windows - bad Amazon!

The tablet experience also lags in some other ways, as the palm rejection is overly extreme, causing it to reject valid clicks close to the edge of the display (reported in issue 1090326).

The on-screen keyboard is terrible. It only does one language at a time, forcing me to switch between German and English all the time, and it does not behave as you’d expect when editing existing words - it does not know about them and thinks you are starting a new one. It does provide a small keyboard that you can move around, as well as a draw-your-letters keyboard, which could come in handy for stylus users, I guess. In any case, it’s miles away from Gboard on Android.

Stability is a mixed bag right now. As of Chrome OS 83, sites (well, only Disney+ so far…) sometimes get killed with SIGILL or SIGTRAP, and the device has rebooted on its own once or twice. Android apps that use DRM sometimes do not start, and the Netflix Android app sometimes reports that it cannot connect to the servers.

Performance

Performance is decent to sluggish, with micro stuttering in a lot of places. The Mediatek CPU is comparable to Intel Atoms, and with only 4GB of RAM, and an entire Android container running, it’s starting to show how weak it is.

I found that Google Docs worked perfectly fine, as did websites such as Mastodon, Twitter, Facebook. Where the device really struggled was Reddit, where closing or opening a post, or getting a reply box could take 5 seconds or more. If you are looking for a Reddit browsing device, this is not for you. Performance in Netflix was fine, and Disney+ was fairly slow but still usable.

All in all, it’s acceptable, and given the price point and the build quality, probably the compromise you’d expect.

Summary

tl;dr:

  • good: Build quality, bright screen, low price, included accessories
  • bad: DRM issues, performance, limited USB-C video output, charging speed, on-screen keyboard, software bugs

The Chromebook Duet, or IdeaPad Duet Chromebook, is a decent tablet that is built well above its price point. Its lackluster performance and DRM woes make it hard to give a general recommendation, though. It’s not a good laptop.

I can see this as the perfect note taking device for students, and as a cheap tablet for couch surfing, or as your on-the-go laptop replacement, if you need it only occasionally.

I cannot see anyone using this as their main laptop, although I guess some people only have phones these days, so: what do I know?

I can see you getting this device if you want to tinker with Linux on ARM, as Chromebooks are quite nice to tinker with, and a tablet is super nice.

Krebs on SecurityFlorence, Ala. Hit By Ransomware 12 Days After Being Alerted by KrebsOnSecurity

In late May, KrebsOnSecurity alerted numerous officials in Florence, Ala. that their information technology systems had been infiltrated by hackers who specialize in deploying ransomware. Nevertheless, on Friday, June 5, the intruders sprang their attack, deploying ransomware and demanding nearly $300,000 worth of bitcoin. City officials now say they plan to pay the ransom demand, in hopes of keeping the personal data of their citizens off of the Internet.

Nestled in the northwest corner of Alabama, Florence is home to roughly 40,000 residents. It is part of a quad-city metropolitan area perhaps best known for the Muscle Shoals Sound Studio that recorded the dulcet tones of many big-name music acts in the 1960s and 70s.

Image: Florenceal.org

On May 26, acting on a tip from Milwaukee, Wisc.-based cybersecurity firm Hold Security, KrebsOnSecurity contacted the office of Florence’s mayor to alert them that a Windows 10 system in their IT environment had been commandeered by a ransomware gang.

Comparing the information shared by Hold Security dark web specialist Yuliana Bellini with the employee directory on the Florence website indicated that the username for the computer the attackers had used to gain a foothold in the network on May 6 belonged to the city’s manager of information systems.

My call was transferred to no fewer than three different people, none of whom seemed eager to act on the information. Eventually, I was routed to the non-emergency line for the Florence police department. When that call went straight to voicemail, I left a message and called the city’s emergency response team.

That last effort prompted a gracious return call the following day from a system administrator for the city, who thanked me for the heads up and said he and his colleagues had isolated the computer and Windows network account Hold Security flagged as hacked.

“I can’t tell you how grateful we are that you helped us dodge this bullet,” the technician said in a voicemail message for this author. “We got everything taken care of now, and some different protocols are in place. Hopefully we won’t have another near scare like we did, and hopefully we won’t have to talk to each other again.”

But on Friday, Florence Mayor Steve Holt confirmed that a cyberattack had shut down the city’s email system. Holt told local news outlets at the time there wasn’t any indication that ransomware was involved.

However, in an interview with KrebsOnSecurity Tuesday, Holt acknowledged the city was being extorted by DoppelPaymer, a ransomware gang with a reputation for negotiating some of the highest extortion payments across dozens of known ransomware families.

The average ransomware payment by ransomware strain. Source: Chainalysis.

Holt said the same gang appears to have simultaneously compromised networks belonging to four other victims within an hour of Florence, including another municipality that he declined to name. Holt said the extortionists initially demanded 39 bitcoin (~USD $378,000), but that an outside security firm hired by the city had negotiated the price down to 30 bitcoin (~USD $291,000).

Like many other cybercrime gangs operating these days, DoppelPaymer will steal reams of data from victims prior to launching the ransomware, and then threaten to publish or sell the data unless a ransom demand is paid.

Holt told KrebsOnSecurity the city can’t afford to see its citizens’ personal and financial data jeopardized by not paying.

“Do they have our stuff? We don’t know, but that’s the roll of the dice,” Holt said.

Steve Price, the Florence IT manager whose Microsoft Windows credentials were stolen on May 6 by a DHL-themed phishing attack and used to further compromise the city’s network, explained that following my notification on May 26 the city immediately took a number of preventative measures to stave off a potential ransomware incident. Price said that when the ransomware hit, they were in the middle of trying to get city leaders to approve funds for a more thorough investigation and remediation.

“We were trying to get another [cybersecurity] response company involved, and that’s what we were trying to get through the city council on Friday when we got hit,” Price said. “We feel like we can build our network back, but we can’t undo things if peoples’ personal information is released.”

A DoppelPaymer ransom note. Image: Crowdstrike.

Fabian Wosar, chief technology officer at Emsisoft, said organizations need to understand that the only step which guarantees a malware infestation won’t turn into a full-on ransomware attack is completely rebuilding the compromised network — including email systems.

“There is a misguided belief that if you were compromised you can get away with anything but a complete rebuild of the affected networks and infrastructure,” Wosar said, noting that it’s not uncommon for threat actors to maintain control even as a ransomware victim organization is restoring their systems from backups.

“They often even demonstrate that they still ‘own’ the network by publishing screenshots of messages talking about the incident,” Wosar said.

Hold Security founder Alex Holden said Florence’s situation is all too common, and that very often ransomware purveyors are inside a victim’s network for weeks or months before launching their malware.

“We often get glimpses of the bad guys beginning their assaults against computer networks and we do our best to let the victims know about the attack,” Holden said. “Since we can’t see every aspect of the attack we advise victims to conduct a full investigation of the events, based on the evidence collected. But when we deal with sensitive situations like ransomware, timing and precision are critical. If the victim will listen and seek out expert opinions, they have a great chance of successfully stopping the breach before it turns into ransom.”

TEDWays of seeing: The talks of TED2020 Session 3

TED’s head of curation Helen Walters (left) and writer, activist and comedian Baratunde Thurston host Session 3 of TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Session 3 of TED2020, hosted by TED’s head of curation Helen Walters and writer, activist and comedian Baratunde Thurston, was a night of something different — a night of camaraderie, cleverness and, as Baratunde put it, “a night of just some dope content.” Below, a recap of the night’s talks and performances.

Actor and performer Cynthia Erivo recites Maya Angelou’s iconic 2006 poem, “A Pledge to Rescue Our Youth.” She speaks at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

In a heartfelt and candid moment to start the session, Tony- and Emmy-winner Cynthia Erivo performs “A Pledge to Rescue Our Youth,” an iconic 2006 poem by Maya Angelou. “You are the best we have. You are all we have. You are what we have become. We pledge you our whole hearts from this day forward,” Angelou writes.

“Drawing has taught me to create my own rules. It has taught me to open my eyes and see not only what is, but what can be. Where there are broken systems … we can create new ones that actually function and benefit all, instead of just a select few,” says Shantell Martin. She speaks at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Shantell Martin, Artist

Big idea: Drawing is more than just a graphic art — it’s a medium of self-discovery that enables anyone to let their hands spin out freestyle lines independent of rules and preconceptions. If we let our minds follow our hands, we can reach mental spaces where new worlds are tangible and art is the property of all – regardless of ethnicity or class.

How? A half-Nigerian, half-English artist growing up in a council estate in southeast London, Martin has firsthand knowledge of the race and class barriers within England’s institutions. Drawing afforded her a way out, taking her first to Tokyo and then to New York, where her large-scale, freestyle black and white drawings (often created live in front of an audience) taught her the power of lines to build new worlds. By using our hands to draw lines that our hearts can follow, she says, we not only find solace, but also can imagine and build worlds where every voice is valued equally. “Drawing has taught me to create my own rules,” Martin says. “It has taught me to open my eyes and see not only what is, but what can be. Where there are broken systems … we can create new ones that actually function and benefit all, instead of just a select few.”


“If we’re not protecting the arts, we’re not protecting our future, we’re not protecting this world,” says Swizz Beatz. He speaks at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Swizz Beatz, Music producer, entrepreneur, art enthusiast

Big idea: Art is for everyone. Let’s make it that way.

Why? Creativity heals us — and everybody who harbors love for the arts deserves access to them, says Swizz Beatz. Interweaving a history of his path as a creative in the music industry, Beatz recounts his many successful pursuits in the art of giving back. In creating these spaces at the intersection of education, celebration, inclusion and support — such as The Dean Collection, No Commissions, The Dean’s Choice and Verzuz — he plans to outsmart lopsided industries that exploit creatives and give the power of art back to the people. “If we’re not protecting the arts, we’re not protecting our future, we’re not protecting this world,” he says.


“In this confusing world, we need to be the bridge between differences. You interrogate those differences, you hold them for as long as you can until something happens, something reveals itself,” says Jad Abumrad. He speaks at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Jad Abumrad, host of RadioLab and Dolly Parton’s America

Big Idea: Storytellers and journalists are the bridge that spans conflict and difference to reveal a new meaning. 

How: When journalist Jad Abumrad began storytelling in 2002, he crafted each story to culminate the same way: mind-blowing science discoveries, paired with ear-tickling auditory creations, resolved into “moments of wonder.” But after 10 years, he began to wonder himself: Is this the only way to tell a story? Seeking an answer, Abumrad turned to more complex, convoluted stories and used science to sniff out the facts. But these stories often ended without an answer or resolution, instead leading listeners to “moments of struggle,” where truth collided with truth. It wasn’t until Abumrad returned to his home of Tennessee where he met an unlikely teacher in the art of storytelling: Dolly Parton. In listening to the incredible insights she had into her own life, he realized that the best stories can’t be summarized neatly and instead should find revelation — or what he calls “the third.” A term rooted in psychotherapy, the third is the new entity created when two opposing forces meet and reconcile their differences. For Abumrad, Dolly had found resolution in her life, fostered it in her fanbase and showcased it in her music — and revealed to him his new purpose in telling stories. “In this confusing world, we need to be the bridge between differences,” Abumrad says. “You interrogate those differences, you hold them for as long as you can until something happens, something reveals itself.”


Aloe Blacc performs “Amazing Grace” at TED2020: Uncharted on June 4, 2020. (Photo courtesy of TED)

Backed by piano from Greg Phillinganes, singer, songwriter and producer Aloe Blacc provides balm for the soul with a gorgeous rendition of “Amazing Grace.”


Congressman John Lewis, politician and civil rights leader, interviewed by Bryan Stevenson, public interest lawyer and founder of the Equal Justice Initiative — an excerpt from the upcoming TED Legacy Project

Big idea: As a new generation of protesters takes to the streets to fight racial injustice, many have looked to the elders of the Civil Rights Movement — like John Lewis — to study how previous generations have struggled not just to change the world but also to maintain morale in the face of overwhelming opposition.

How? In order to truly effect change and move people into a better world, contemporary protestors must learn tactics that many have forgotten — especially nonviolent engagement and persistence. Fortunately, John Lewis sees an emerging generation of new leaders of conscience, and he urges them to have hope, to be loving and optimistic and, most of all, to keep going tirelessly even in the face of setbacks. As interviewer Bryan Stevenson puts it, “We cannot rest until justice comes.”

Planet DebianMolly de Blanc: Racism is a Free Software Issue

Racism is a free software issue. I gave a talk that touched on this at CopyLeft Conf 2019. I also talked a little bit about it at All Things Open 2019 and FOSDEM 2020 in my talk The Ethics Behind Your IoT. I know statistics, theory, and free software. I don’t know about race and racism nearly as well. I might make mistakes – I have made some and I will make more. Please, when I do, help me do better.

I want to look at a few particular technologies and think about how they reinforce systemic racism. Worded another way: how is technology racist? How does technology hurt Black Indigenous People of Color (BIPOC)? How does technology keep us racist? How does technology make it easier to be racist?

Breathalyzers

In the United States, Latinx folks are less likely to drink than white people and, overall, less likely to be arrested for DUIs [3, 4]. However, they are more likely to be stopped by police while driving [5, 6].

Who is being stopped by police is up to the police and they pull over a disproportionate number of Latinx drivers. After someone is pulled over for suspected drunk driving, they are given a breathalyzer test. Breathalyzers are so easy to (un)intentionally mis-calibrate that they have been banned as valid evidence in multiple states. The biases of the police are not canceled out by the technology that should, in theory, let us know whether someone is actually drunk.

Facial Recognition

I could talk about this for quite some time and, in fact, have. So have others. Google’s image recognition software recognized black people as gorillas - and to fix the issue it removed gorillas from its image-labeling technology.

Facial recognition software does a bad job at recognizing black people. In fact, it’s also terrible at identifying indigenous people and other people of color. (Incidentally, it’s also not great at recognizing women, but let’s not talk about that right now.)

As we use facial recognition technology for more things, from automated store checkouts (even more relevant in the socially distanced age of Covid-19), airport ticketing, phone unlocking, police identification, and a number of other things, it becomes a bigger problem that this software cannot tell the difference between two Asian people.

Targeted Advertising

Black kids see 70% more online ads for food than white kids, and twice as many ads for junk food. In general BIPOC youth are more likely to see junk food advertisements online. This is intentional, and happens after they are identified as BIPOC youth.

Technology Reinforces Racism; Racism Builds Technology

The technology we have developed reinforces racism on a society-wide scale because it makes it harder for BIPOC people to interact with this world that is run by computers and software. It’s harder to not be racist when the technology around us is being used to perpetuate racist paradigms. For example, if a store implements facial recognition software for checkout, black women are less likely to be identified. They are then more likely to be suspected of trying to steal from the store. We are more likely to take this to mean that black women are more likely to steal. This is how technology builds racism.

People are being excluded largely because they are not building these technologies, because they are not welcome in our spaces. There simply are not enough Black and Hispanic technologists and that is a problem. We need to care about this because when software doesn’t work for everyone, it doesn’t work. We cannot build on the promise of free and open source software when we are excluding the majority of people.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.900.1.0

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 727 other packages on CRAN.

Conrad recently released a new upstream version 9.900.1 of Armadillo which we packaged and tested as usual first as a ‘release candidate’ build and then as the release. As usual, logs from reverse-depends runs are in the rcpp-logs repo.

Apart from the new upstream release, we updated Travis use, ornamented the README a little, and smoothed over a rough corner from the recent R 4.0.0 release. All changes in the new release are noted below.

Changes in RcppArmadillo version 0.9.900.1.0 (2020-06-08)

  • Upgraded to Armadillo release 9.900.1 (Nocturnal Misbehaviour)

    • faster solve() for under/over-determined systems

    • faster eig_gen() and eig_pair() for large matrices

    • expanded eig_gen() and eig_pair() to optionally provide left and right eigenvectors

  • Switch Travis CI testing to R 4.0.0, use bionic as base distro and test R 3.6.3 and 4.0.0 in a matrix (Dirk in #298).

  • Add two badges to README for indirect use and the CSDA paper.

  • Adapt RcppArmadillo.package.skeleton() to a change in R 4.0.0 affecting what it exports in NAMESPACE.

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: Rcpp Webinar Recording Available

As announced in a few tweets leading up to it, I took the date of what would have been the annual R/Finance conference as an opportunity to hold, as a self-organized webinar, the one-hour tutorial/workshop of introductory Rcpp material which I often present on the first morning preceding the conference. The live-streaming via obs to YouTube actually worked reasonably well (even though the latter’s comprehensive software complained at times about insufficient bitstream rates; the joys of living with a (near) monopolistic broadband provider whom I should leave for fiber…). Apparently around seventy people connected to the stream, which is more than we usually have in the seminar room at UIC for the R/Finance morning.

The recording is now available here, and has already been seen over 200 times:


Planet DebianBits from Debian: Great fonts in Debian 10 (or later)

An example of several fonts in Debian 10

Debian comes with tons of fonts for all kinds of purposes, you can easily list them all (almost) with: apt-cache search ^fonts-

Above you can see a nice composition with examples of several fonts. The composition is published under the MIT (Expat) license and the source SVG (created with Inkscape) can be downloaded here. You will need the fonts to be installed in your system so the SVG is correctly rendered.

If you want to learn more you can have a look at the wiki page about fonts (https://wiki.debian.org/Fonts), and if you want to contribute or maintain fonts in Debian, don't hesitate to join the Fonts Team!

Worse Than FailureRepresentative Line: The Truest Comment

Usually, when posting a representative line, it’s a line of code. Rarely, it’s been a “representative comment”.

Today’s submitter supplied some code, but honestly, the code doesn’t matter- it’s just some method signatures. The comment, however, is representative. Not just representative of this code, but honestly, all code, everywhere.

        // i should have commented this stupid code

As our submitter writes:

I wrote this code. I also wrote that comment. That comment is true.
Why do I do this to myself?

I sometimes ask myself the same question.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

LongNowRacial Injustice & Long-term Thinking

Long Now Community,

Since 01996, Long Now has endeavored to foster long-term thinking in the world by sharing perspectives on our deep past and future. We have too often failed, however, to include and listen to Black perspectives.

Racism is a long-term civilizational problem with deep roots in the past, profound effects in the present, and so many uncertain futures. Solving the multigenerational challenge of racial inequality requires many things, but long-term thinking is undoubtedly one of them. As an institution dedicated to the long view, we have not addressed this issue enough. We can and will do better.

We are committed to surfacing these perspectives on both the long history of racial inequality and possible futures of racial justice going forward, both through our speaker series and in the resources we share online. And if you have any suggestions for future resources or speakers, we are actively looking.

Alexander Rose

Executive Director, Long Now

Learn More

  • A recent episode of This American Life explored Afrofuturism: “It’s more than sci-fi. It’s a way of looking at black culture that’s fantastic, creative, and oddly hopeful—which feels especially urgent during a time without a lot of optimism.”
  • The 1619 Project from The New York Times, winner of the 02020 Pulitzer Prize for Commentary, re-examines the 400-year legacy of slavery.
  • A paper from political scientists at Stanford and Harvard analyzes the long-term effects of slavery on Southern attitudes toward race and politics.
  • Ava DuVernay’s 13th is a Netflix documentary about the link between slavery and the US penal system. It is available to watch on YouTube for free here.
  • “The Case for Reparations” is a landmark essay by Ta-Nehisi Coates about the institutional racism of housing discrimination.
  • I Am Not Your Negro is a documentary exploring the history of racism in the United States through the lens of the life of writer and activist James Baldwin. The documentary is viewable on PBS for free here.

Sociological ImagesParty Affiliation in a Pandemic

Since mid-March 2020, Gallup has been polling Americans about their degree of self-isolation during the pandemic. The percent who said they had “avoided small gatherings” rose from 50% in early March to 81% in early April, dropping slightly to 75% in late April as pressure mounted to loosen stay-at-home orders.

What makes this curve sociologically interesting is that our leaders made the restrictions largely voluntary, hoping social norms would do the job of control. Only a few state and local governments have issued citations for holding social gatherings. Mostly, social norms have been doing the job. But the partisan divide on self-isolation is widening and undermining pandemic precautions. The chart, which appeared in a Gallup report on May 11, 2020, vividly shows the partisan divide on beliefs in distancing as protection from the pandemic. The striking finding is the huge partisan gap, with independents leaning slightly toward Democrats.

Not only did the partisan divide remain wide, but the number of adults practicing “social distancing” dropped from 75% in early April to 58%. This drop in self-reported “social distancing” occurred in states both with and without stay-at-home orders. Elsewhere I argue that “social distancing” is a most unfortunate label for physical distancing.

Republicans have been advocating for opening up businesses early, but it is not a mere intellectual debate. Some held large protests while brandishing firearms; others appeared in public without masks and without observing 6-foot distances. Some businesses that reopened in early May reported customers acting disrespectfully to others, ignoring the stores’ distancing rules. In another incident, an armed militia stood outside a barbershop to keep authorities from closing down the newly reopened shop.

Retail operations in particular are concerned about compliance with social norms because without adequate compliance, other customers will not return. Social norms rely on social trust. If retail operations cannot depend on customers to be respectful, they will lose not only additional customers but employees as well.

The Sad Impact of Pandemic Partisanship    

American society was highly partisan before the pandemic, so it is not surprising that partisan signs remain. For a few weeks in March and April, partisanship took a back seat and signs of cooperation suggested societal solidarity.

We are only months away from the Presidential election, so we do not expect either side to let us forget the contest. However, we can only hope that partisans will not forget that politics cannot resolve the pandemic alone. Without relying heavily on scientists and health system experts, our society can only fail.

Unfortunately, lives hang in the balance if there is a partisan failure to reach consensus on distancing and related precautions. Economists at Stanford and Harvard, using distancing data from smartphones as well as local data on COVID cases and deaths, completed a sophisticated model of the first few months of the pandemic. Their report, “Polarization and Public Health: Partisan Differences in Social Distancing during the Coronavirus Pandemic,” found that (1) Republicans engage in less social distancing, and (2) if this partisan difference continues, the US will end up with more COVID-19 transmission at a higher economic cost. Assuming the researchers’ analytical model is accurate, the Republican ridicule of social distancing is such an ironic tragedy. Not only will lives be lost, but what is done under the banner of promoting economic benefit is actually producing greater economic hardship.

Ron Anderson, Professor Emeritus, University of Minnesota, taught sociology from 1968 to 2005. His early work centered around the social diffusion of technology. Since 2005, his work has focused on compassion and the social dimensions of suffering.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureEditor's Soapbox: On Systemic Debt

I recently caught up with an old co-worker from my first "enterprise" job. In 2007, I was hired to support an application which was "on its way out," as "eventually" we'd be replacing it with a new ERP. September 2019 was when it finally got retired.

Interest on the federal debt

The application was called "Total Inventory Process" and it is my WTF origin story. Elements of that application and the organization have filtered into many of my articles on this site. Boy, did it have more than its share of WTFs.

"Total Inventory Process". Off the bat, you know that this is a case of an overambitious in-house team trying to make a "do everything" application that's gonna solve every problem. Its core was inventory management and invoicing, but it had its own data-driven screen rendering/templating engine (which barely worked), its own internationalization engine (for an application which never got localized), a blend of web, COM+, client-side VB6, and hooks into an Oracle backend but also a mainframe.

TIP was also a lesson in the power of technical debt, how much it really costs, and why technical solutions are almost never enough to address technical debt.

TIP was used to track the consumption of inventory at several customers' factories to figure out how to invoice them. We sold them the paint that goes on their widgets, but it wasn't as simple as "You used this much paint, we bill you this much." We handled the inventory of everything in their paint line, from the paint to the toilet paper in the bathrooms. The customer didn't want to pay for consumption, they wanted to pay for widgets. The application needed to be able to say, "If the customer makes 300 widgets, that's $10 worth of toilet paper."

The application handled millions of dollars of invoices each year, for dozens of customer facilities. When I was hired, I was the second developer hired to work full time on supporting TIP. I wasn't hired to fix bugs. I wasn't hired to add new features. I was hired because it took two developers, working 40 hours a week, just to keep the application from falling over in production.

My key job responsibility was to log into the production database and manually tweak records, because the application was so bug-ridden, so difficult to use, and so locked down that users couldn't correct their own mistakes. With whatever time I had left, I'd respond to user requests for new functionality or bug-fixes.

It's usually hard to quantify exactly what technical debt costs. We talk about it a lot, but it very often takes the form of, "I know it when I see it," or "technical debt is other people's code." Here, we have a very concrete number: technical debt was two full-time salaries to just maintain basic operations.

My first development task was to add a text-box to one screen. Because other developers had tried to do this in the past, the estimate I was given was two weeks of developer time. 80 hours of effort for a new text box.

Once, the users wanted to be able to sort a screen in descending order. The application had a complex sorting/paging system designed to prevent fetching more than one page of data at a time for performance reasons. While it did that, it had no ability to sort in descending order, and adding that functionality would have entailed a full rewrite. My "fix" was to disable paging and sorting entirely on the backend, then re-implement them on the client side in JavaScript for that screen. The users loved it, because suddenly one screen in the application was actually fast.

Oh, and as it was, "sorting" on the backend was literally putting the order-by clause in the URL and then SQL injecting it.
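The standard fix for that anti-pattern is to validate the sort column and direction against a whitelist before building the SQL, since identifiers (unlike values) can't be passed as bound parameters. A minimal Python/sqlite3 sketch of that safer pattern — the table and column names here are purely illustrative, not TIP's actual schema:

```python
import sqlite3

# Hypothetical whitelist; only these identifiers may appear in ORDER BY.
ALLOWED_COLUMNS = {"invoice_id", "customer", "amount"}
ALLOWED_DIRECTIONS = {"asc", "desc"}

def fetch_sorted(conn, column, direction):
    # Identifiers can't be bound like values, so validate them first.
    if column not in ALLOWED_COLUMNS or direction.lower() not in ALLOWED_DIRECTIONS:
        raise ValueError("unsupported sort")
    sql = f"SELECT invoice_id, customer, amount FROM invoices ORDER BY {column} {direction}"
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (invoice_id INTEGER, customer TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                 [(1, "acme", 50.0), (2, "widgetco", 25.0)])
print(fetch_sorted(conn, "amount", "desc"))  # highest amount first
```

With this shape, an attacker-controlled string like `amount; DROP TABLE invoices` is rejected before it ever reaches the database, instead of being spliced into the query from the URL.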

There were side effects to this quantity of technical debt. Since we were manually changing data in production, we were working closely with a handful of stakeholder users. Three, in specific, who were the "triumvirate" of TIP. Those three people were the direct point of contact with the developers. They were the direct point of contact to management. They set priorities. They entered bug reports. They made data change requests. They ruled the system.

They had a lot of power over this system, and this was a system handling millions of dollars. I think they liked that power. To this day, one of them holds that the "concept" of TIP was "unbeatable", and its replacement system is "cheap" and "crap". Now, from a technical perspective, you can't really get crappier than TIP, but from the perspective of a super-user, I can understand how great TIP felt.

I was a young developer, and didn't really like this working environment. Oh, I liked the triumvirate just fine, they were lovely people to work with, but I didn't like spending my days as nothing more than an extension of them. They'd send me emails containing UPDATE statements, and just ask that I execute them in production. There's not a lot of job satisfaction in being nothing more than "the person with permission to change data."

Frustrated, I immediately started looking for ways to pay down that technical debt. There was only so much I could do, for a number of reasons. The obvious one was the software. If "adding a textbox" could reasonably take two weeks, "restructuring data access so that you can page data sorted in descending order" felt impossible. Even "easy" changes could unleash chaos as they triggered some edge case no one knew about.

Honestly, though, the technical obstacles were the small ones. The big challenges were the cultural and political ones.

First, the company knew that much of the code running in production was bad. So they created policies which erected checkpoints to confirm code quality, making it harder to deploy new code. Much harder, and much longer: you couldn't release to production until someone with authority signed off on it, and that might take days. Instead of resulting in better code, it instead meant the old, bad code stayed bad, and new, bad code got rushed through the process. "We have a hard go-live date, we'll improve the code after that." Or people found other workarounds: your web code had to go through those checkpoints, but stored procedures didn't, so if you had the authority to modify things in production, like I did, you could change stored procedures to your heart's content.

Second, the organization as a whole was risk-averse. The application was handling millions of dollars of invoices, and while it required a lot of manual babysitting, by the time the invoices went out, they were accurate. The company got paid. No one wanted to make sweeping changes which could impact that.

Third, the business rules were complicated. "How many rolls of toilet paper do you bill per widget?" is a hard question to answer. It's fair to say that no one fully understood the system. On the business side, the triumvirate probably had the best handle on it, but even they could be blindsided by its behavior. It was a complex system that needed to stay functioning, because it was business critical, but no one knew exactly what all of those functions were.

Fourth, the organization viewed "support" and "enhancement" the way other companies might view "operational" and "capital" budgets. They were different pools of money, and you weren't supposed to use the support money to pay for new features, nor could enhancement money be used to fix bugs.

Most frustrating was that I would sometimes get push-back from the triumvirate. Oh, they loved it when I could give them a button to self-service some feature which had previously required manual intervention, so long as the interface was just cumbersome enough that only they could use it. They hated it when a workflow got so streamlined that any user could do it. Worse, for them, was that as the technical debt got paid down, we started transitioning more and more of the "just change production data" to low-level contractors. Now the triumvirate no longer had a developer capable of changing code at their beck and call. They had to surrender power.

Fortunately, management was sensitive to the support costs around TIP. Once we started to build a track-record of reduced costs, management started to remove some of the obstacles. It was a lot of politics. I suspect some management were wary of how much power the triumvirate had, and were happy when that could get reduced. After a few years of work on TIP, I mostly rolled off of it onto other projects. Usually, I'd just pop in for month-end billing support, or being the expert who understood some of the bizarre technical intricacies. Obviously, the application continued working just fine for years without me, and I can't say that I miss it.

TIP accrued technical debt far more quickly than most systems would. Some of that comes from ambition: it tried to be everything, including reinventing functions which had nothing to do with its core business. This led to a tortured development process, complete with death marches, team restructurings, "throw developers at this until it gets back on track," and several ultimatums like "If we don't get something in production by the end of the month, everyone is fired." It was born in severe debt, within an organization which didn't have good mechanisms to manage that debt. And the main obstacles to paying down that debt weren't technical: they were social and political.

I was lucky. I was given the freedom to tackle that debt (or, if we're being fully honest, I also took some freedom under the "it's easier to seek forgiveness than to ask permission" principle). In a lot of systems, the technical debt accrues until the costs of servicing the debt are untenable. You end up paying so much in "interest" that you stop being able to actually do anything. This is a failure mode for a lot of projects, and that's usually when a "ground up" rewrite happens. Rewrites have a mixed record, though: TIP itself was actually a rewrite of an older system and promised to "do it right" this time.

Technical debt has been on my mind because I've been thinking a lot lately about broader, systemic debt. Any systems humans build—software systems, mechanical systems, or even social systems—are going to accrue debt. Decisions made in the history of the system are going to create problems in the future, whether those decisions were wrong from the get-go, or had unforeseen consequences, or just the world changed and they're no longer workable.

The United States, right now, is experiencing a swell of protests unlike anything I've seen in my lifetime. I think it's fair to say that these protests are rooted in historical inequities and injustices which constitute a form of social debt. Our social systems, especially around the operation of police, but broadly in the realm of race, are loaded with debt from the past.

I see similarities in the obstacles to paying down that debt. Attempts to make changes by policy end up creating roadblocks to change, or simply fail to accomplish their goals. Resistance to change because change entails risk, and managing or mitigating those risks feels more important than actually fixing the problems. The systems we're talking about are complicated, and it's difficult to even build consensus on what better versions look like, because it's difficult to even understand what they currently are. And finally, there are groups that have power, and in paying down the social debt, they would have to give up some of that power.

That's a big, hard-to-grapple-with quantity of debt. It's connected to a set of interlocking systems which are difficult to pick apart. And it's frighteningly important to get right.

All systems accrue systemic debt. Software systems accrue debt. Social systems accrue debt. Even your personal mental systems, your psychological health, accrue debt. Without action, the debt will grow, the interest on that debt grows, and more energy ends up servicing that debt. One of the ways in which systems fail is when the accumulated debt gets too high. Bankruptcy occurs. Of all of these kinds of systems, the software debt tends to be the trivial one. Software products become unsupportable and get replaced. Psychological debt can lead towards things like depression or other mental health problems. Social debt can lead to unrest or perpetuations of injustice.

Systemic debt may be a technical problem at its root, but its solution is always going to require politics. TIP couldn't be improved without overcoming political challenges. US social debt can't be resolved without significant political changes.

What I hope people take away from this story is an awareness of systemic debt, and an understanding of some of the long-term costs of that debt. I encourage you to look around you and at the systems you interact with. What sources of debt are there? What is that debt costing the system and the people who depend on it? What are the obstacles to paying down that debt? What are the challenges of making the systems we use better?

Systemic debt doesn't go away by itself, and left unmanaged, it will only grow. If you don't pay attention to it, broken systems end up staying in production for way too long. TIP was used for nearly 18 years. Don't use broken things for that long, please.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 05)

Here’s part five of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

There’s more of Kurt in this week’s episode; as I mentioned in last week’s intro, Kurt is loosely based on my old friend Darren Atkinson, who pulled down a six-figure income by recovering, repairing and reselling high-tech waste from Toronto’s industrial suburbs. Darren was the subject of the first feature I ever sold to Wired, Dumpster Diving.

But Kurt was also based loosely on Igor Kenk, a friend of mine who turned out to be one of Toronto’s most prolific bike thieves (I knew him as a bike repair guy). Igor was a strange and amazing guy, and Richard Poplak and Nick Marinkovich’s 2010 graphic novel biography of him is a fantastic read.

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

Planet DebianEnrico Zini: Cooperation links

This comic was based on this essay from Augusten Burroughs: How to live unhappily ever after. In addition to the essay, I highly recommend reading his books. It's also been described in psychology as flow.

  • With full English subtitles
  • Conflict – where it comes from and how to deal with it
  • Communication skills
  • Other languages
  • Distributed teams are where the people you work with aren’t physically co-located, i.e. they’re at another office building, at home, or at an outsourced company abroad. They’re becoming increasingly popular, for DevOps and other teams, due to recruitment, diversity, flexibility and cost savings. Challenges arise due to timezones, language barriers, cultures and ways of working. People actively participating in Open Source communities tend to be effective in distributed teams. This session looks at how to apply core Open Source principles to distributed teams in Enterprise organisations, and the importance of shared purposes/goals, (mis)communication, leading vs managing teams, sharing and learning. We'll also look at practical aspects of what's worked well for others, such as alternatives to daily standups, promoting video conferencing, time management and virtual coffee breaks. This session is relevant for those leading or working in distributed teams, wanting to know how to cultivate an inclusive culture of increased trust and collaboration that leads to increased productivity and performance.

Krebs on SecurityOwners of DDoS-for-Hire Service vDOS Get 6 Months Community Service

The co-owners of vDOS, a now-defunct service that for four years helped paying customers launch more than two million distributed denial-of-service (DDoS) attacks that knocked countless Internet users and websites offline, each have been sentenced to six months of community service by an Israeli court.

vDOS as it existed on Sept. 8, 2016.

A judge in Israel handed down the sentences plus fines and probation against Yarden Bidani and Itay Huri, both Israeli citizens arrested in 2016 at age 18 in connection with an FBI investigation into vDOS.

Until it was shuttered in 2016, vDOS was by far the most reliable and powerful DDoS-for-hire or “booter” service on the market, allowing even completely unskilled Internet users to launch crippling assaults capable of knocking most websites offline.

vDOS advertised the ability to launch attacks at up to 50 gigabits of data per second (Gbps) — well more than enough to take out any site that isn’t fortified with expensive anti-DDoS protection services.

The Hebrew-language sentencing memorandum (PDF) has redacted the names of the defendants, but there are more than enough clues in the document to ascertain the identities of the accused. For example, it says the two men earned a little more than $600,000 running vDOS, a fact first reported by this site in September 2016 just prior to their arrest, when vDOS was hacked and KrebsOnSecurity obtained a copy of its user database.

In addition, the document says the defendants were initially apprehended on September 8, 2016, arrests which were documented here two days later.

Also, the sentencing mentions the supporting role of a U.S. resident named only as “Jesse.” This likely refers to 23-year-old Jesse Wu, who KrebsOnSecurity noted in October 2016 pseudonymously registered the U.K. shell company used by vDOS, and ran a tiny domain name registrar called NameCentral that vDOS and many other booter services employed.

Israeli prosecutors say Wu also set up their payment infrastructure, and received 15 percent of vDOS’s total revenue for his trouble. NameCentral no longer appears to be in business, and Wu could not be reached for comment.

Although it is clear Bidani and Huri are defendants in this case, it is less clear which is referenced as Defendant #1 or Defendant #2. Both were convicted of “corrupting/disturbing a computer or computer material,” charges that the judge said had little precedent in Israeli courts, noting that “cases of this kind have not been discussed in court so far.” Defendant #1 also was convicted of sharing nude pictures of a 14 year old girl.

vDOS also sold API access to their backend attack infrastructure to other booter services to further monetize their excess firepower, including Vstress, Ustress, PoodleStresser, and LizardStresser.

Yarden Bidani. Image: Facebook.

Both defendants received the lowest possible sentence (the maximum was two years in prison) — six months of community service under the watch of the Israeli prison service — mainly because the accused were minors during the bulk of their offenses. The judge also imposed small fines on each, noting that more than $175,000 dollars worth of profits had already been seized from their booter business.

The judge observed that while Defendant #2 had shown remorse for his crimes and an understanding of how his actions affected others — even sobbing throughout one court proceeding — Defendant #1 failed to participate in the therapy sessions previously ordered by the court, and that he has “a clear and daunting boundary for recurrence of further offenses in the future.”

Boaz Dolev, CEO of ClearSky Cyber Security, said he’s disappointed in the lightness of the sentences given how much damage the young men caused.

“I think that such an operation that caused big damage to so many companies should have been dealt differently by the Israeli justice system,” Dolev said. “The fact that they were under 18 when committing their crimes saved them from much harder sentences.”

While DDoS attacks typically target a single website or Internet host, they often result in widespread collateral Internet disruption. Less than two weeks after the 2016 arrest of Bidani and Huri, KrebsOnSecurity.com suffered a three-day outage as a result of a record 620 Gbps attack that was alleged to have been purchased in retribution for my reporting on vDOS. That attack caused stability issues for other companies using the same DDoS protection firm my site enjoyed at the time, so much so that the provider terminated my service with them shortly thereafter.

To say that vDOS was responsible for a majority of the DDoS attacks clogging up the Internet between 2012 and 2016 would be an understatement. The various subscription packages for the service were sold based in part on how many seconds the denial-of-service attack would last. And in just four months between April and July 2016, vDOS was responsible for launching more than 277 million seconds of attack time, or approximately 8.81 years worth of attack traffic.
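That conversion is easy to sanity-check. Using a round 277 million seconds (the article reports "more than" that, which presumably accounts for its slightly higher 8.81 figure), the arithmetic comes out to roughly 8.8 years:

```python
# Convert vDOS's reported attack time (just over 277 million seconds)
# into years of continuous attack traffic.
ATTACK_SECONDS = 277_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.6 million seconds

years = ATTACK_SECONDS / SECONDS_PER_YEAR
print(f"about {years:.1f} years of attack traffic")  # prints "about 8.8 years of attack traffic"
```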

It seems likely vDOS was responsible for several decades worth of DDoS years, but it’s impossible to say for sure because vDOS’s owners routinely wiped attack data from their servers.

Prosecutors in the United States and United Kingdom have in recent years sought tough sentences for those convicted of running booter services. While a number of current charges against alleged offenders have not yet been fully adjudicated, only a handful of defendants in these cases have seen real jail time.

The two men responsible for creating and unleashing the Mirai botnet (the same duo responsible for building the massive crime machine that knocked my site offline in 2016) each avoided jail time thanks to their considerable cooperation with the FBI.

Likewise, Pennsylvania resident David Bukoski recently got five years probation and six months of “community confinement” after pleading guilty to running the Quantum Stresser booter service. Lizard Squad member and PoodleStresser operator Zachary Buchta was sentenced to three months in prison and ordered to pay $350,000 in restitution for his role in running various booter services.

On the other end of the spectrum, last November 21-year-old Illinois resident Sergiy Usatyuk was sentenced to 13 months in jail for running multiple booter services that launched millions of attacks over several years. And a 20-year-old U.K. resident in 2017 got two years in prison for operating the Titanium Stresser service.

For their part, authorities in the U.K. have sought to discourage would-be customers of these booter services by purchasing Google ads warning that such services are illegal. The goal is to steer customers away from committing further offenses that could land them in jail, and toward more productive uses of their skills and/or curiosity about cybersecurity.

,

CryptogramFriday Squid Blogging: Shark vs. Squid

National Geographic has a photo of a 7-foot-long shark that fought a giant squid and lived to tell the tale. Or, at least, lived to show off the suction marks on his skin.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDThe bill has come due for the US’s legacy of racism: Week 3 of TED2020

In response to the historic moment of mourning and anger over the ongoing violence inflicted on Black communities by police in the United States, four leaders in the movement for civil rights — Dr. Phillip Atiba Goff, CEO of Center for Policing Equity; Rashad Robinson, president of Color Of Change; Dr. Bernice Albertine King, CEO of the King Center; and Anthony D. Romero, executive director of the American Civil Liberties Union — joined TED2020 to explore how we can dismantle the systems of oppression and racism. Watch the full discussion on TED.com, and read a recap below.

“The history that we have in this country is not just a history of vicious neglect and targeted abuse of Black communities. It’s also one where we lose our attention for it,” says Dr. Phillip Atiba Goff. He speaks at TED2020: Uncharted on June 3, 2020. (Photo courtesy of TED)

Dr. Phillip Atiba Goff, CEO of the Center for Policing Equity

Big idea: The bill has come due for the unpaid debts the United States owes to its Black residents. But we’re not going to get to where we need to go just by reforming police.

How? What we’re seeing now isn’t just the response to one gruesome, cruel, public execution — a lynching. And it’s not just the reaction to three of them: Ahmaud Arbery, Breonna Taylor and George Floyd. What we’re seeing is the bill come due for the unpaid debts that the US owes to its Black residents, says Dr. Phillip Atiba Goff, CEO of the Center for Policing Equity (CPE). In addition to the work that CPE is known for — working with police departments to use their own data to improve relationships with the communities they serve — Goff and his team are encouraging departments and cities to take money from police budgets and instead invest it directly in public resources for the community, so people don’t need the police for public safety in the first place. Learn more about how you can support the Center for Policing Equity »


“This is the time for White allies to stand up in new ways, to do the type of allyship that truly dismantles structures, not just provides charity,” says Rashad Robinson, president of Color of Change. He speaks at TED2020: Uncharted on June 3, 2020. (Photo courtesy of TED)

Rashad Robinson, president of Color Of Change

Big idea: In the wake of the murders of George Floyd, Breonna Taylor and Ahmaud Arbery, people are showing up day after day in support of the Movement for Black Lives and in protest of police brutality against Black communities. We need to channel that presence and energy into power and material change.

How? The presence and visibility of a movement can often lead us to believe that progress is inevitable. But building power and changing the system requires more than conversations and retweets. To create material change in the racist systems that enable and perpetuate violence against Black communities, we need to translate the energy of these global protests into specific demands and actions, says Robinson. We have to pass new laws and hold those in power — from our police chiefs to our city prosecutors to our representatives in Congress — accountable to them. If we want to disentangle these interlocking systems of violence and complicity, Robinson says, we need to get involved in local, tangible organizing and build the power necessary to change the rules. “You can’t sing our songs, use our hashtags and march in our marches if you are on the other end supporting the structures that put us in harm’s way, that literally kill us,” Robinson says. “This is the time for White allies to stand up in new ways, to do the type of allyship that truly dismantles structures, not just provides charity.”


“We can do this,” says Dr. Bernice Albertine King. “We can make the right choice to ultimately build the beloved community.” She speaks at TED2020: Uncharted on June 3, 2020. (Photo courtesy of TED)

Dr. Bernice Albertine King, CEO of The King Center

Big idea: To move towards a United States rooted in benevolent coexistence, equity and love, we must destroy and replace systems of oppression and violence towards Black communities. Nonviolence, accountability and love must pave the way.

How? The US needs a course correction that involves both hard work and “heart work” — and no one is exempt from it, says Dr. Bernice Albertine King. King continues to spread and build upon the wisdom of her father, Dr. Martin Luther King Jr., and she believes the US can work towards unity and collective healing. To do so, racism, systemic oppression, militarism and violence must end. She calls for a revolution of values, allies that listen and engage and a world where anger is given space to be rechanneled into creating social and economic change. In this moment, as people have reached a boiling point and are being asked to restructure the nature of freedom, King encourages us to follow her father’s words of nonviolent coexistence, and not continue on the path of violent coannihilation. “You as a person may want to exempt yourself, but every generation is called,” King says. “And so I encourage corporations in America to start doing anti-racism work within corporate America. I encourage every industry to start doing anti-racism work and pick up the banner of understanding nonviolent change personally and from a social change perspective. We can do this. We can make the right choice to ultimately build the beloved community.”


“Can we really become an equal people, equally bound by law?” asks Anthony D. Romero, executive director of the ACLU. He speaks at TED2020: Uncharted on June 3, 2020. (Photo courtesy of TED)

Anthony D. Romero, executive director of the American Civil Liberties Union (ACLU)

Big idea: No matter how frightened we are by the current turmoil, we must stay positive, listen to and engage with unheard or silenced voices, and help answer what’s become the central question of democracy in the United States: Can we really become an equal people, equally bound by law, when so many of us are beaten down by racist institutions and their enforcers?

How? This is no time for allies to disconnect — it’s time for them to take a long look in the mirror, ponder viewpoints they may not agree with or understand and engage in efforts to dismantle institutional white supremacy, Romero says. Reform is not enough anymore. Among many other changes, the most acute challenge the ACLU is now tackling is how to defund militarized police forces that more often look like standing armies than civil servants — and bring them under civilian control. “For allies in this struggle, and those of us who don’t live this experience every day, it is time for us to lean in,” Romero says. “You can’t change the channel, you can’t tune out, you can’t say, ‘This is too hard.’ It is not that hard for us to listen and learn and heed.”

CryptogramZoom's Commitment to User Security Depends on Whether you Pay It or Not

Zoom was doing so well.... And now we have this:

Corporate clients will get access to Zoom's end-to-end encryption service now being developed, but Yuan said free users won't enjoy that level of privacy, which makes it impossible for third parties to decipher communications.

"Free users for sure we don't want to give that because we also want to work together with FBI, with local law enforcement in case some people use Zoom for a bad purpose," Yuan said on the call.

This is just dumb. Imagine the scene in the terrorist/drug kingpin/money launderer hideout: "I'm sorry, boss. We could have strong encryption to secure our bad intentions from the FBI, but we can't afford the $20." This decision will only affect protesters and dissidents and human rights workers and journalists.

Here's advisor Alex Stamos doing damage control:

Nico, it's incorrect to say that free calls won't be encrypted and this turns out to be a really difficult balancing act between different kinds of harms. More details here:

Some facts on Zoom's current plans for E2E encryption, which are complicated by the product requirements for an enterprise conferencing product and some legitimate safety issues. The E2E design is available here: https://github.com/zoom/zoom-e2e-whitepaper/blob/master/zoom_e2e.pdf

I read that document, and it doesn't explain why end-to-end encryption is only available to paying customers. And note that Stamos said "encrypted" and not "end-to-end encrypted." He knows the difference.
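The distinction is worth spelling out: with link (transport) encryption, each hop is encrypted but the server decrypts in the middle and can read everything it relays; with end-to-end encryption, only the endpoints hold the key. A toy sketch makes this concrete — a stand-in XOR "cipher" replaces AES-GCM here, and nothing below is Zoom's actual protocol:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad-style XOR 'cipher' -- for illustration only."""
    return bytes(b ^ k for b, k in zip(data, key))

msg = b"meeting audio frame"

# --- Link encryption (what "AES 256 GCM ... for all users" describes) ---
# Each hop is encrypted, but the server holds the keys for both hops,
# so it sees the plaintext in the middle.
client_server_key = secrets.token_bytes(len(msg))
server_client_key = secrets.token_bytes(len(msg))

on_the_wire = xor(msg, client_server_key)
at_the_server = xor(on_the_wire, client_server_key)  # server decrypts...
assert at_the_server == msg                          # ...and can read it
relayed = xor(at_the_server, server_client_key)      # then re-encrypts

# --- End-to-end encryption ---
# Only the two endpoints share the key; the server just relays ciphertext
# it cannot decrypt.
endpoint_key = secrets.token_bytes(len(msg))
on_the_wire = xor(msg, endpoint_key)
received = xor(on_the_wire, endpoint_key)
assert received == msg
```

In the first case the relay can read every frame it forwards (and hand it to anyone); in the second it only ever sees ciphertext.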

Anyway, people were rightly incensed by his remarks. And yesterday, Yuan tried to clarify:

Yuan sought to assuage users' concerns Wednesday in his weekly webinar, saying the company was striving to "do the right thing" for vulnerable groups, including children and hate-crime victims, whose abuse is sometimes broadcast through Zoom's platform.

"We plan to provide end-to-end encryption to users for whom we can verify identity, thereby limiting harm to vulnerable groups," he said. "I wanted to clarify that Zoom does not monitor meeting content. We do not have backdoors where participants, including Zoom employees or law enforcement, can enter meetings without being visible to others. None of this will change."

Notice that he specifically did not say that he was offering end-to-end encryption to users of the free platform. Only to "users we can verify identity," which I'm guessing means users that give him a credit card number.

The Twitter feed was similarly sloppily evasive:

We are seeing some misunderstandings on Twitter today around our encryption. We want to provide these facts.

Zoom does not provide information to law enforcement except in circumstances such as child sexual abuse.

Zoom does not proactively monitor meeting content.

Zoom does not have backdoors where Zoom or others can enter meetings without being visible to participants.

AES 256 GCM encryption is turned on for all Zoom users -- free and paid.

Those facts have nothing to do with any "misunderstanding." That was about end-to-end encryption, which the statement very specifically left out of that last sentence. The corporate communications have been anything but clear and consistent.

Come on, Zoom. You were doing so well. Of course you should offer premium features to paying customers, but please don't include security and privacy in those premium features. They should be available to everyone.

And, hey, this is kind of a dumb time to side with the police over protesters.

I have emailed the CEO, and will report back if I hear back. But for now, assume that the free version of Zoom will not support end-to-end encryption.

EDITED TO ADD (6/4): Another article.

EDITED TO ADD (6/4): I understand that this is complicated, both technically and politically. (Note, though, that Jitsi is doing it.) And, yes, lots of people confused end-to-end encryption with link encryption. (My readers tend to be more sophisticated than that.) My worry is that the "we'll offer end-to-end encryption only to paying customers we can verify, even though there's plenty of evidence that 'bad purpose' people will just get paid accounts" story plays into the dangerous narrative that encryption itself is dangerous when widely available. And I disagree with the notion that the possibility of child exploitation is a valid reason to deny security to large groups of people.

Matthew Green on this issue. An excerpt:

Once the precedent is set that E2E encryption is too "dangerous" to hand to the masses, the genie is out of the bottle. And once corporate America accepts that private communications are too politically risky to deploy, it's going to be hard to put it back.

From Signal:

Want to help us work on end-to-end encrypted group video calling functionality that will be free for everyone? Zoom on over to our careers page....

TEDConversations on social progress: Week 3 of TED2020

For week 3 of TED2020, global leaders in technology, vulnerability research and activism gathered for urgent conversations on how to foster connection, channel energy into concrete social action and work to end systemic racism in the United States. Below, a recap of their insights.

“When we see the internet of things, let’s make an internet of beings. When we see virtual reality, let’s make it a shared reality,” says Audrey Tang, Taiwan’s digital minister for social innovation. She speaks with TED science curator David Biello at TED2020: Uncharted on June 1, 2020. (Photo courtesy of TED)

Audrey Tang, Taiwan’s digital minister for social innovation

Big idea: Digital innovation rooted in communal trust can create a stronger, more transparent democracy that is fast, fair — and even fun.

How? Taiwan has built a “digital democracy” where digital innovation drives active, inclusive participation from all its citizens. Sharing how she’s helped transform her government, Audrey Tang illustrates the many creative and proven ways technology can be used to foster community. In responding to the coronavirus pandemic, Taiwan created a collective intelligence system that crowdsources information and ideas, which allowed the government to act quickly and avoid a nationwide shutdown. They also generated a publicly accessible map that shows the availability of masks in local pharmacies to help people get supplies, along with a “humor over rumor” campaign that combats harmful disinformation with comedy. In reading her job description, Tang elegantly lays out the ideals of digital citizenship that form the bedrock of this kind of democracy: “When we see the internet of things, let’s make an internet of beings. When we see virtual reality, let’s make it a shared reality. When we see machine learning, let’s make it collaborative learning. When we see user experience, let’s make it about human experience. And whenever we hear the singularity is near, let us always remember the plurality is here.”


Brené Brown explores how we can harness vulnerability for social progress and work together to nurture an era of moral imagination. She speaks with TED’s head of curation Helen Walters at TED2020: Uncharted on June 2, 2020. (Photo courtesy of TED)

Brené Brown, Vulnerability researcher, storyteller

Big question: The United States is at its most vulnerable right now. Where do we go from here?

Some ideas: As the country reels from the COVID-19 pandemic and the murder of George Floyd, along with the protests that have followed, Brené Brown offers insights into how we might find a path forward. Like the rest of us, she’s in the midst of processing this moment, but believes we can harness vulnerability for progress and work together to nurture an era of moral imagination. Accountability must come first, she says: people have to be held responsible for their racist behaviors and violence, and we have to build safe communities where power is shared. Self-awareness will be key to this work: the ability to understand your emotions, behaviors and actions lies at the center of personal and social change and is the basis of empathy. This is hard work, she admits, but our ability to experience love, belonging, joy, intimacy and trust — and to build a society rooted in empathy — depend on it. “In the absence of love and belonging, there’s nothing left,” she says.


Dr. Phillip Atiba Goff, Rashad Robinson, Dr. Bernice King and Anthony D. Romero share urgent insights into this historic moment. Watch the discussion on TED.com.

In a time of mourning and anger over the ongoing violence inflicted on Black communities by police in the US and the lack of accountability from national leadership, what is the path forward? In a wide-ranging conversation, Dr. Phillip Atiba Goff, the CEO of Center for Policing Equity; Rashad Robinson, the president of Color of Change; Dr. Bernice Albertine King, the CEO of the King Center; and Anthony D. Romero, the executive director of the American Civil Liberties Union, share urgent insights into how we can dismantle the systems of oppression and racism responsible for tragedies like the murders of Ahmaud Arbery, Breonna Taylor, George Floyd and far too many others — and explore how the US can start to live up to its ideals. Watch the discussion on TED.com.

CryptogramNew Research: "Privacy Threats in Intimate Relationships"

I just published a new paper with Karen Levy of Cornell: "Privacy Threats in Intimate Relationships."

Abstract: This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships. Many common assumptions about privacy are upended in the context of these relationships, and many otherwise effective protective measures fail when applied to intimate threats. Those closest to us know the answers to our secret questions, have access to our devices, and can exercise coercive power over us. We survey a range of intimate relationships and describe their common features. Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.

This is an important issue that has gotten much too little attention in the cybersecurity community.

Worse Than FailureError'd: Just a Big Mixup

Daniel M. writes, "How'd they make this mistake? Simple. You add the prices into the bowl and turn the mixer on."


"I'm really glad to see a retailer making sure that I get the most accurate discount possible," Kelly K. wrote.


"I sure hope they're not also receiving invalid maintenance," Steven S. wrote.


Ernie writes, "Recently, I was looking for some hints on traditional bread making and found some interesting sources. Some of them go back to the middle ages."


"Tried to get a refund travel voucher through KLM, and well, obviously they know more than everyone else," Matthias wrote.


Roger G. writes, "I'm planning my ride in Mapometer and apparently I'm descending 100,000ft into the earth's core. I'll let you know what I find..."


[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

TEDConversations on rebuilding a healthy economy: Week 1 of TED2020

To kick off TED2020, leaders in business, finance and public health joined the TED community for lean-forward conversations to answer the question: “What now?” Below, a recap of the fascinating insights they shared.

“If you don’t like the pandemic, you are not going to like the climate crisis,” says Kristalina Georgieva, Managing Director of the International Monetary Fund. She speaks with head of TED Chris Anderson at TED2020: Uncharted on May 18, 2020. (Photo courtesy of TED)

Kristalina Georgieva, Managing Director of the International Monetary Fund (IMF)

Big idea: The coronavirus pandemic shattered the global economy. To put the pieces back together, we need to make sure money is going to countries that need it the most — and that we rebuild financial systems that are resilient to shocks.

How? Kristalina Georgieva is encouraging an attitude of determined optimism to lead the world toward recovery and renewal amid the economic fallout of COVID-19. The IMF has one trillion dollars to lend — it’s now deploying these funds to areas hardest hit by the pandemic, particularly in developing countries, and it’s also put a debt moratorium into effect for the poorest countries. Georgieva admits recovery is not going to be quick, but she thinks that countries can emerge from this “great transformation” stronger than before if they build resilient, disciplined financial systems. Within the next ten years, she hopes to see positive shifts towards digital transformation, more equitable social safety nets and green recovery. And as the environment recovers while the world grinds to a halt, she urges leaders to maintain low carbon footprints — particularly since the pandemic foreshadows the devastation of global warming. “If you don’t like the pandemic, you are not going to like the climate crisis,” Georgieva says. Watch the interview on TED.com »


“I’m a big believer in capitalism. I think it’s in many ways the best economic system that I know of, but like everything, it needs an upgrade. It needs tuning,” says Dan Schulman, president and CEO of PayPal. He speaks with TED business curator Corey Hajim at TED2020: Uncharted on May 19, 2020. (Photo courtesy of TED)

Dan Schulman, President and CEO of PayPal

Big idea: Employee satisfaction and consumer trust are key to building the economy back better.

How? A company’s biggest competitive advantage is its workforce, says Dan Schulman, explaining how PayPal instituted a massive reorientation of compensation to meet the needs of its employees during the pandemic. The ripple effects of this shift have included increased productivity, financial health and more trust. Building further on the concept of trust, Schulman traces how the pandemic has transformed the managing and moving of money — and how it will require consumers to renew their focus on privacy and security. And he shares thoughts on the new roles of corporations and CEOs, the cashless economy and the future of capitalism. “I’m a big believer in capitalism. I think it’s in many ways the best economic system that I know of, but like everything, it needs an upgrade. It needs tuning,” Schulman says. “For vulnerable populations, just because you pay at the market [rate] doesn’t mean that they have financial health or financial wellness. And I think everyone should know whether or not their employees have the wherewithal to be able to save, to withstand financial shocks and then really understand what you can do about it.”


Biologist Uri Alon shares a thought-provoking idea on how we could get back to work: a two-week cycle of four days at work followed by 10 days of lockdown, which would cut the virus’s reproductive rate. He speaks with head of TED Chris Anderson at TED2020: Uncharted on May 20, 2020. (Photo courtesy of TED)

Uri Alon, Biologist

Big idea: We might be able to get back to work by exploiting one of the coronavirus’s key weaknesses. 

How? By adopting a two-week cycle of four days at work followed by 10 days of lockdown, bringing the virus’s reproductive rate (R₀ or R naught) below one. The approach is built around the virus’s latent period: the three-day delay (on average) between when a person gets infected and when they start spreading the virus to others. So even if a person got sick at work, they’d reach their peak infectious period while in lockdown, limiting the virus’s spread — and helping us avoid another surge. What would this approach mean for productivity? Alon says that by staggering shifts, with groups alternating their four-day work weeks, some industries could maintain (or even exceed) their current output. And having a predictable schedule would give people the ability to maximize the effectiveness of their in-office work days, using the days in lockdown for more focused, individual work. The approach can be adopted at the company, city or regional level, and it’s already catching on, notably in schools in Austria.
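A toy calculation shows why the cycle is so effective. The four-day infectious window below is an illustrative assumption on my part, not a figure from Alon's talk:

```python
# Days 0-3 of a repeating 14-day cycle are workdays; days 4-13 are lockdown.
CYCLE = 14
WORKDAYS = range(4)
LATENT = 3       # average delay before an infected person becomes infectious
INFECTIOUS = 4   # assumed length of the infectious window (illustrative)

def exposed_workdays(infection_day: int) -> int:
    """Count infectious days that land on workdays, when workplace spread can occur."""
    start = infection_day + LATENT
    return sum((d % CYCLE) in WORKDAYS for d in range(start, start + INFECTIOUS))

# Workplace infections can only start on workdays, so check each one.
for day in WORKDAYS:
    print(f"infected on cycle day {day}: "
          f"{exposed_workdays(day)} infectious workday(s)")
```

With these numbers, someone infected at work spends at most one of their four infectious days back at work; the rest fall inside the lockdown, which is the mechanism that pushes the effective reproduction number down.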


“The secret sauce here is good, solid public health practice … this one was a bad one, but it’s not the last one,” says Georges C. Benjamin, Executive Director of the American Public Health Association. He speaks with TED science curator David Biello at TED2020: Uncharted on May 20, 2020. (Photo courtesy of TED)

Georges C. Benjamin, Executive Director of the American Public Health Association

Big idea: We need to invest in a robust public health care system to lead us out of the coronavirus pandemic and prevent the next outbreak.

How? The coronavirus pandemic has tested the public health systems of every country around the world — and, for many, exposed shortcomings. Georges C. Benjamin details how citizens, businesses and leaders can put public health first and build a better health structure to prevent the next crisis. He envisions a well-staffed and equipped governmental public health entity that runs on up-to-date technology to track and relay information in real time, helping to identify, contain, mitigate and eliminate new diseases. Looking to countries that have successfully lowered infection rates, such as South Korea, he emphasizes the importance of early and rapid testing, contact tracing, self-isolation and quarantining. Our priority, he says, should be testing essential workers and preparing now for a spike of cases during the summer hurricane and fall flu seasons. “The secret sauce here is good, solid public health practice,” Benjamin says. “We should not be looking for any mysticism or anyone to come save us with a special pill … because this one was a bad one, but it’s not the last one.”

TEDConversations on climate action and contact tracing: Week 2 of TED2020

For week 2 of TED2020, global leaders in climate, health and technology joined the TED community for insightful discussions around the theme “build back better.” Below, a recap of the week’s fascinating and enlightening conversations about how we can move forward, together.

“We need to change our relationship to the environment,” says Chile’s former environment minister Marcelo Mena. He speaks with TED current affairs curator Whitney Pennington Rodgers at TED2020: Uncharted on May 26, 2020. (Photo courtesy of TED)

Marcelo Mena, environmentalist and former environment minister of Chile

Big idea: People power is the antidote to climate catastrophe.

How? With a commitment to transition to zero emissions by 2050, Chile is at the forefront of resilient and inclusive climate action. Mena shares the economic benefits instilling green solutions can have on a country: things like job creation and reduced cost of mobility, all the result of sustainability-minded actions (including phasing out coal-fired power plants and creating fleets of energy-efficient buses). Speaking to the air of social unrest across South America, Mena traces how climate change fuels citizen action, sharing how protests have led to green policies being enacted. There will always be those who do not see climate change as an imminent threat, he says, and economic goals need to align with climate goals for unified and effective action. “We need to change our relationship to the environment,” Mena says. “We need to protect and conserve our ecosystems so they provide the services that they do today.”


“We need to insist on the future being the one that we want, so that we unlock the creative juices of experts and engineers around the world,” says Nigel Topping, UK High Level Climate Action Champion, COP26. He speaks with TED Global curator Bruno Giussani at TED2020: Uncharted on May 26, 2020. (Photo courtesy of TED)

Nigel Topping, UK High Level Climate Action Champion, COP26

Big idea: The COVID-19 pandemic presents a unique opportunity to break from business as usual and institute foundational changes that will speed the world’s transition to a greener economy. 

How? Although postponed, the importance of COP26 — the UN’s international climate change conference — has not diminished. Instead it’s become nothing less than a forum on whether a post-COVID world should return to old, unsustainable business models, or instead “clean the economy” before restarting it. In Topping’s view, economies that rely on old ways of doing business jeopardize the future of our planet and risk becoming non-competitive as old, dirty jobs are replaced by new, cleaner ones. By examining the benefits of green economics, Topping illuminates the positive transformations happening now and leverages them to inspire businesses, local governments and other economic players to make radical changes to business as usual. “From the bad news alone, no solutions come. You have to turn that into a motivation to act. You have to go from despair to hope, you have to choose to act on the belief that we can avoid the worst of climate change… when you start looking, there is evidence that we’re waking up.”


“Good health is something that gives us all so much return on our investment,” says Joia Mukherjee. She speaks with head of TED Chris Anderson at TED2020: Uncharted on May 27, 2020. (Photo courtesy of TED)

Joia Mukherjee, Chief Medical Officer, Partners in Health (PIH)

Big idea: We need to massively scale up contact tracing in order to slow the spread of COVID-19 and safely reopen communities and countries.

How? Contact tracing is the process of identifying people who come into contact with someone who has an infection, so that they can be quarantined, tested and supported until transmission stops. The earlier you start, the better, says Mukherjee — but, since flattening the curve and easing lockdown measures depend on understanding the spread of the disease, it’s never too late to begin. Mukherjee and her team at PIH are currently supporting the state of Massachusetts to scale up contact tracing for the most vulnerable communities. They’re employing 1,700 full-time contact tracers to investigate outbreaks in real-time and, in partnership with resource care coordinators, ensuring infected people receive critical resources like health care, food and unemployment benefits. With support from The Audacious Project, a collaborative funding initiative housed at TED, PIH plans to disseminate its contact tracing expertise across the US and support public health departments in slowing the spread of COVID-19. “Good health is something that gives us all so much return on our investment,” Mukherjee says. See what you can do for this idea »


Google’s Chief Health Officer Karen DeSalvo shares the latest on the tech giant’s critical work on contact tracing. She speaks with head of TED Chris Anderson at TED2020: Uncharted on May 27, 2020. (Photo courtesy of TED)

Karen DeSalvo, Chief Health Officer, Google

Big idea: We can harness the power of tech to combat the pandemic — and reshape the future of public health.

How? Google and Apple recently announced an unprecedented partnership on the COVID-19 Exposure Notifications API, a Bluetooth-powered technology that would tell people they may have been exposed to the virus. The technology is designed with privacy at its core, DeSalvo says: it doesn’t use GPS or location tracking and isn’t an app but rather an API that public health agencies can incorporate into their own apps, which users could opt in to — or not. Since smartphones are so ubiquitous, the API promises to augment contact tracing and help governments and health agencies reduce the spread of the coronavirus. Overall, the partnership between tech and public health is a natural one, DeSalvo says; communication and data are pillars of public health, and a tech giant like Google has the resources to distribute those at a global scale. By helping with the critical work of contact tracing, DeSalvo hopes to ease the burden on health workers and give scientists time to create a vaccine. “Having the right information at the right time can make all the difference,” DeSalvo says. “It can literally save lives.”
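A simplified sketch of that decentralized design helps show why no GPS or central tracking is needed. This is an illustration of the general idea only — the actual Apple/Google specification uses AES- and HKDF-based key derivations and rolling schedules, not the hashing below:

```python
import hashlib
import secrets

def daily_ids(daily_key: bytes, n: int = 96) -> set:
    """Derive the rotating anonymous IDs a phone broadcasts over Bluetooth
    during one day (96 ~= one rotation every 15 minutes)."""
    return {hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(n)}

# Each phone generates its own secret daily key; no identity or location leaves it.
alice_key = secrets.token_bytes(16)

# Bob's phone records the IDs it hears nearby -- some of Alice's, plus strangers'.
heard_by_bob = set(list(daily_ids(alice_key))[:5]) | {secrets.token_bytes(16)}

# Alice tests positive and consents to upload only her daily key.
published_keys = [alice_key]

# Bob's phone re-derives IDs from the published keys and matches them locally.
exposed = any(daily_ids(k) & heard_by_bob for k in published_keys)
print("possible exposure:", exposed)
```

The matching happens entirely on Bob's phone; the server only ever stores the short-lived keys of people who chose to report a diagnosis, which is what lets public health agencies build notification apps without collecting location data.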

After the conversation, Karen DeSalvo was joined by Joia Mukherjee to further discuss how public health entities can partner with tech companies. Both DeSalvo and Mukherjee emphasize the importance of knitting together the various aspects of public health systems — from social services to housing — to create a healthier and more just society. They also both emphasize the importance of celebrating community health workers, who provide on-the-ground information and critical connection with people across the world.

,

Cory DoctorowRave for “Poesy the Monster Slayer”

No matter how many books I write (20+ now!), the first review for a new one is always scary. That goes double when the book is a first as well – like Poesy the Monster Slayer, my first-ever picture book, which comes out from First Second on Jul 14.

https://us.macmillan.com/books/9781626723627

So it was with delight and relief that I read Publishers Weekly’s (rave) review of Poesy:

“Some children fear monsters at bedtime, but Poesy welcomes them. Her pink ‘monster lair’ features gothic art and stuffed animals, and she makes her father read The Book of Monsters from cover to cover before lights out. ‘PLEASE stay in bed tonight,’ he pleads as he leaves, but there’s no chance: the werewolf who soon enters her window is the size of a grizzly. ‘Werewolves HATED silver,’ Poesy knows, ‘and they feared the light’ — armed with a Princess Frillypants silver tiara and a light-up wand, she vanquishes the beast. And that’s just the beginning of her tear through monsterdom. ‘Poesy Emmeline Russell Schnegg,’ her mother growls from the doorway (in a funny turn, the girl gains a middle name every time a parent appears). Assured panels by Rockefeller (Pop!) combine frilly with threatening, illuminated by eerie light sources. Doctorow, making his picture book debut, strikes a gently edgy tone (‘He was so tired,’ Poesy sees, ‘that he stepped on a Harry the Hare block and said some swears. Poor Daddy!’), and his blow-by-blow account races to its closing spread: of two tired parents who resemble yet another monster. Ages 4-6.”

Whew!

I had planned to do a launch party at Dark Delicacies, my neighborhood horror bookstore, on Jul 11, but that’s off (obviously).

So we’re doing the next-best thing: preorder from the store and you’ll get a signature and dedication from me AND my daughter, Poesy (the book’s namesake).

https://www.darkdel.com/store/p1562/_July%3A_Poesy_the_Monster_Slayer.html

Sociological ImagesConflict Brings Us Together

For a long time, political talk at the “moderate middle” has focused on a common theme that goes something like this: 

There is too much political polarization and conflict. It’s tearing us apart. People aren’t treating each other with compassion. We need to come together, set aside our differences, and really listen to each other.

I have heard countless versions of this argument in my personal life and in public forums. It is hard to disagree with them at first. Who can be against seeking common ground?

But as a political sociologist, I am also skeptical of this argument because we have good research showing how it keeps people and organizations from working through important disagreements. When we try to avoid conflict above all, we often end up avoiding politics altogether. It is easy to confuse common ground with occupied territory — social spaces where legitimate problems and grievances are ignored in the name of some kind of pleasant consensus. 

A really powerful sociological image popped up in my Twitter feed that makes the point beautifully. We actually did find some common ground this week through a trend that united the country across red states and blue states:

It is tempting to focus on protests as a story about conflict alone, and conflict certainly is there. But it is also important to realize that this week’s protests represent a historic level of social consensus. The science of cooperation and social movements reminds us that getting collective action started is hard. And yet, across the country, we see people not only stepping up, but self-organizing groups to handle everything from communication to community safety and cleanup. In this way, the protests also represent a remarkable amount of agreement that the current state of policing in this country is simply neither just nor tenable. 

I was struck by this image because I don’t think nationwide protests are the kind of thing people have in mind when they call for everyone to come together, but right now protesting itself seems like one of the most unifying trends we’ve got. That’s the funny thing about social cohesion and cultural consensus. It is very easy to call for setting aside our differences and working together when you assume everyone will be rallying around your particular way of life. But social cohesion is a group process, one that emerges out of many different interactions, and so none of us ever have that much control over when and where it actually happens.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramThermal Imaging as Security Theater

Seems like thermal imaging is the security theater technology of today.

These features are so tempting that thermal cameras are being installed at an increasing pace. They're used in airports and other public transportation centers to screen travelers, increasingly used by companies to screen employees and by businesses to screen customers, and even used in health care facilities to screen patients. Despite their prevalence, thermal cameras have many fatal limitations when used to screen for the coronavirus.

  • They are not intended for medical purposes.
  • Their accuracy can be reduced by their distance from the people being inspected.
  • They are "an imprecise method for scanning crowds" now put into a context where precision is critical.
  • They will create false positives, leaving people stigmatized, harassed, unfairly quarantined, and denied rightful opportunities to work, travel, shop, or seek medical help.
  • They will create false negatives, which, perhaps most significantly for public health purposes, "could miss many of the up to one-quarter or more people infected with the virus who do not exhibit symptoms," as the New York Times recently put it. Thus they will abjectly fail at the core task of slowing or preventing the further spread of the virus.

Worse Than FailureCodeSOD: Scheduling your Terns

Mike has a co-worker who’s better at Code Golf than I am. They needed to generate a table with 24 column headings, one for each hour of the day, formatted in HAM- the hour and AM/PM. As someone bad at code golf, my first instinct is honestly to use two for loops, but in practice I’d probably do a 24 iteration loop with a branch to decide if it’s AM/PM and handle it appropriately, as well as a branch to handle the fact that hour 0 should be printed as 12.

Which, technically, is more or less what Mike’s co-worker did, but they did it in golf style, using PHP.

<tr>
<?php for ($i = 0; $i < 24; $i++) {
echo '<th><div>'.($i%12?$i%12:12).($i/12>=1?'pm':'am').'</div></th><th></th>';
}
?>
</tr>

This is code written by someone who just recently discovered ternaries. It’s not wrong. It’s not even a complete and utter disaster. It’s just annoying. Maybe I’m jealous of their code golf skills, but this is the kind of code that makes me grind my teeth when I see it.

It’s mildly… clever? $i%12?$i%12:12- $i%12 will be zero when $i is 0 or 12, which PHP treats as false, and our false branch says to output 12, while our true branch says to output $i%12. So that’s sorted, handles all 24 hours of the day.

Then, for AM/PM, they use ($i/12>=1?'pm':'am')- which also works. Values less than 12 fail the condition, so our false path is 'am'; values of 12 or greater pass it and get 'pm'.

But wait a second. We don’t need the >= or the division in there. This could just be ($i>11?'pm':'am').
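If you want to convince yourself the two expressions agree, a brute-force check over all 24 hours settles it. Here’s a quick sketch — translated into Java purely for illustration, with class and method names of my own invention, not from the original PHP:

```java
// Hypothetical sketch: verify that the original am/pm condition (i / 12 >= 1)
// and the simplified one (i > 11) produce identical labels for every hour.
public class HourLabelCheck {
    static String label(int i, boolean simplified) {
        int hour = (i % 12 != 0) ? i % 12 : 12;  // hours 0 and 12 both render as 12
        String suffix = simplified ? (i > 11 ? "pm" : "am")
                                   : (i / 12 >= 1 ? "pm" : "am");
        return hour + suffix;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 24; i++) {
            String original = label(i, false);
            String simpler = label(i, true);
            if (!original.equals(simpler)) {
                throw new AssertionError("labels differ at hour " + i);
            }
            System.out.print(original + " ");
        }
        System.out.println();
    }
}
```

Running it prints 12am through 11pm with no assertion failure, so the division really was dead weight.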

Well, maybe I am good at Code Golf.

I still hate it.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Krebs on SecurityRomanian Skimmer Gang in Mexico Outed by KrebsOnSecurity Stole $1.2 Billion

An exhaustive inquiry published today by a consortium of investigative journalists says a three-part series KrebsOnSecurity published in 2015 on a Romanian ATM skimming gang operating in Mexico’s top tourist destinations disrupted their highly profitable business, which raked in an estimated $1.2 billion and enjoyed the protection of top Mexican authorities.

The multimedia investigation by the Organized Crime and Corruption Reporting Project (OCCRP) and several international journalism partners detailed the activities of the so-called Riviera Maya crime gang, allegedly a mafia-like group of Romanians who until very recently ran their own ATM company in Mexico called “Intacash” and installed sophisticated electronic card skimming devices inside at least 100 cash machines throughout Mexico.

According to the OCCRP, Riviera Maya’s skimming devices allowed thieves to clone the cards, which were used to withdraw funds from ATMs in other countries — often halfway around the world in places like India, Indonesia, and Taiwan.

Investigators say each skimmer captured on average 1,000 cards per month, siphoning about $200 from individual victim accounts. This allowed the crime gang to steal approximately $20 million monthly.

“The gang had little tricks,” OCCRP reporters recounted in their video documentary (above). “They would use the cards in different cities all over the globe and wait three months so banks would struggle to trace where the card had originally been cloned.”

In September 2015, I traveled to Mexico’s Yucatan Peninsula to find and document almost two dozen ATMs in the region that were compromised with Bluetooth-based skimming devices. Unlike most skimmers — which can be detected by looking for out-of-place components attached to the exterior of a compromised cash machine — these skimmers were hooked to the internal electronics of ATMs operated by Intacash’s competitors by authorized personnel who’d reportedly been bribed or coerced by the gang.

But because the skimmers were Bluetooth-based, allowing thieves periodically to collect stolen data just by strolling up to a compromised machine with a mobile device, I was able to detect which ATMs had been hacked using nothing more than a cheap smart phone.

One of the Bluetooth-enabled PIN pads pulled from a compromised ATM in Mexico. The two components on the left are legitimate parts of the machine. The fake PIN pad, made to be slipped under the legit PIN pad on the machine, is the orange bit, top right. The Bluetooth and data storage chips are in the middle.

Several days of wandering around Mexico’s top tourist areas uncovered these sophisticated skimmers inside ATMs in Cancun, Cozumel, Playa del Carmen and Tulum, including a compromised ATM in the lobby of my hotel in Cancun. OCCRP investigators said the gang also had installed the same skimmers in ATMs at tourist hotspots on the western coast of Mexico, in Puerto Vallarta, Sayulita and Tijuana.

Part III of my 2015 investigation concluded that Intacash was likely behind the scheme. An ATM industry source told KrebsOnSecurity at the time that his technicians had been approached by ATM installers affiliated with Intacash, offering those technicians many times their monthly salaries if they would provide periodic access to the machines they maintained.

The alleged leader of the Riviera Maya organization and principal owner of Intacash, 43-year-old Florian “The Shark” Tudor, is a Romanian with permanent residence in Mexico. Tudor claims he’s an innocent, legitimate businessman who’s been harassed and robbed by Mexican authorities.

Last year, police in Mexico arrested Tudor for illegal weapons possession, and raided his various properties there in connection with an investigation into the 2018 murder of his former bodyguard, Constantin Sorinel Marcu.

According to prosecution documents, Marcu and The Shark spotted my reporting shortly after it was published in 2015, and discussed what to do next on a messaging app:

The Shark: Krebsonsecurity.com See this. See the video and everything. There are two episodes. They made a telenovela.

Marcu: I see. It’s bad.

The Shark: They destroyed us. That’s it. Fuck his mother. Close everything.

The intercepted communications indicate The Shark also wanted revenge on whoever was responsible for leaking information about their operations.

The Shark: Tell them that I am going to kill them.

Marcu: Okay, I can kill them. Any time, any hour.

The Shark: They are checking all the machines. Even at banks. They found over 20.

Marcu: Whaaaat?!? They found? Already??

Throughout my investigation, I couldn’t be sure whether Intacash’s shiny new ATMs — which positively blanketed tourist areas in and around Cancun — also were used to siphon customer card data. I did write about my suspicions that Intacash’s ATMs were up to no good when I found they frequently canceled transactions just after a PIN was entered, and typically failed to provide paper receipts for withdrawals made in U.S. dollars.

But citing some of the thousands of official documents obtained in their investigation, the OCCRP says investigators now believe Intacash installed the same or similar skimming devices in its own ATMs prior to deploying them — despite advertising them as equipped with the latest security features and fraudulent device inhibitors.

Tudor’s organization “had the access that gave The Shark’s crew huge opportunities for fraud,” the OCCRP reports. “And on the Internet, the number of complaints grew. Foreign tourists in Mexico fleeced” by Intacash’s ATMs.

Many of the compromised ATMs I located in my travels throughout Mexico were at hotels, and while Intacash’s ATMs could be found on many street locations in the region, it was rare to find them installed at hotels.

The confidential source with whom I drove from place to place at the time said Intacash avoided installing their machines at hotels — despite such locations being generally far more profitable — for one simple reason: If one’s card is cloned from a hotel ATM, the customer can easily complain to the hotel staff. With a street ATM, not so much.

The investigation by the OCCRP and its partners paints a vivid picture of a highly insular, often violent transnational organized crime ring that controlled at least 10 percent of the $2 billion annual global market for skimmed cards.

It also details how the group laundered their ill-gotten gains, and is alleged to have built a human smuggling ring that helped members of the crime gang cross into the U.S. and ply their skimming trade against ATMs in the United States. Finally, the series highlights how the Riviera Maya gang operated with impunity for several years by exploiting relationships with powerful anti-corruption officials in Mexico.

Tudor and many of his associates maintain their innocence and are still living as free men in Mexico, although Tudor is facing charges in Romania for his alleged involvement with organized crime, attempted murder and blackmail. Intacash is no longer operating in Mexico. In 2019, Intacash’s sponsoring bank in Mexico suspended the company’s contract to process ATM transactions.

For much more on this investigation, check out OCCRP’s multi-part series, How a Crew of Romanian Criminals Conquered the World of ATM Skimming.

TEDIgnite: The talks of TED@WellsFargo

TED curator Cyndi Stivers opens TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

World-changing ideas that unearth solutions and ignite progress can come from anywhere. With that spirit in mind at TED@WellsFargo, thirteen speakers showcased how human empathy and problem-solving can combine with technology to transform lives (and banking) for the better.

The event: TED@WellsFargo, a day of thought-provoking talks on topics including how to handle challenging situations at work, the value of giving back and why differences can be strengths. It’s the first time TED and Wells Fargo have partnered to create inspiring talks from Wells Fargo Team Members.

When and where: Wednesday, February 5, 2020, at the Knight Theater in Charlotte, North Carolina

Opening and closing remarks: David Galloreese, Wells Fargo Head of Human Resources, and Jamie Moldafsky, Wells Fargo Chief Marketing Officer

Performances by: Dancer Simone Cooper and singer/songwriter Jason Jet and his band

The talks in brief:

“What airlines don’t tell you is that putting your oxygen mask on first, while seeing those around you struggle, it takes a lot of courage. But being able to have that self-control is sometimes the only way that we are able to help those around us,” says sales and trading analyst Elizabeth Camarillo Gutierrez. She speaks at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

Elizabeth Camarillo Gutierrez, sales and trading analyst

Big idea: As an immigrant, learning to thrive in America while watching other immigrants struggle oddly echoes what flight attendants instruct us to do when the oxygen masks drop in an emergency landing: if you want to help others put on their masks, you must put on your own mask first.

How? At age 15, Elizabeth Camarillo Gutierrez found herself alone in the US when her parents were forced to return to Mexico, taking her eight-year-old brother with them. For eight years, she diligently completed her education — and grappled with guilt, believing she wasn’t doing enough to aid fellow immigrants. Now working as a sales and trading analyst while guiding her brother through school in New York, she’s learned a valuable truth: in an emergency, you can’t save others until you save yourself.

Quote of the talk: “Immigrants [can’t] and will never be able to fit into any one narrative, because most of us are actually just traveling along a spectrum, trying to survive.”


Matt Trombley, customer remediation supervisor

Big idea: Agonism — “taking a warlike stance in contexts that are not literally war” — plagues many aspects of modern-day life, from the way we look at our neighbors to the way we talk about politics. Can we work our way out of this divisive mindset?

How: Often we think that those we disagree with are our enemies, or that we must approve of everything our loved ones say or believe. Not surprisingly, this is disastrous for relationships. Matt Trombley shows us how to fight agonism by cultivating common ground (working to find just a single shared thread with someone) and by forgiving others for the slights that we believe their values cause us. If we do this, our relationships will truly come to life.

Quote of the talk: “When you can find even the smallest bit of common ground with somebody, it allows you to understand just the beautiful wonder and complexity and majesty of the other person.”


Dorothy Walker, project manager

Big idea: Anybody can help resolve a conflict — between friends, coworkers, strangers, your children — with three simple steps.

How? Step one: prepare. Whenever possible, set a future date and time to work through a conflict, when emotions aren’t running as high. Step two: defuse and move forward. When you do begin mediating the conflict, start off by observing, listening and asking neutral questions; this will cause both parties to stop and think, and give you a chance to shift positive energy into the conversation. Finally, step three: make an agreement. Once the energy of the conflict has settled, it’s time to get an agreement (either written or verbal) so everybody can walk away with a peaceful resolution.

Quote of the talk: “There is a resolution to all conflicts. It just takes your willingness to try.”


Charles Smith, branch manager

Big idea: The high rate of veteran suicide is intolerable — and potentially avoidable. By prioritizing the mental health of military service members both during and after active duty, we can save lives.

How? There are actionable solutions to end the devastating epidemic of military suicide, says Charles Smith. First, by implementing a standard mental health evaluation to military applicants, we can better gauge the preliminary markers of post traumatic stress disorder (PTSD) or depression. Data is a vital part of the solution: if we keep better track of mental health data on service members, we can also predict where support is most needed and create those structures proactively. By identifying those with a higher risk early on in their military careers, we can ensure they have appropriate care during their service and connect them to the resources they need once they are discharged, enabling veterans to securely and safely rejoin civilian life.

Quote of the talk: “If we put our minds and resources together, and we openly talk and try to find solutions for this epidemic, hopefully, we can save a life.”

“We all know retirement is all about saving more now, for later. What if we treated our mental health and overall well-being in the same capacity? Develop and save more of you now, for later in life,” says premier banker Rob Cooke. He speaks at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

Rob Cooke, premier banker

Big idea: Work-related stress costs us a lot, in our lives and the economy. We need to reframe the way we manage stress — both in our workplaces and in our minds.

How? “We tend to think of [stress] as a consequence, but I see it as a culture,” says Rob Cooke. Despite massive global investments in the wellness industry, we are still losing trillions of dollars due to a stress-related decrease in employee productivity and illness. Cooke shares a multifaceted approach to shifting the way stress is managed, internally and culturally. It starts with corporations prioritizing the well-being of employees, governments incentivizing high standards for workplace wellness and individually nurturing our relationship with our own mental health.

Quote of the talk: “We all know retirement is all about saving more now, for later. What if we treated our mental health and overall well-being in the same capacity? Develop and save more of you now, for later in life.”


Aeris Nguyen, learning and development facilitator

Big idea: What would our world be like if we could use DNA to verify our identity?

Why? Every year, millions of people have their identities stolen or misused. This fact got Aeris Nguyen thinking about how to safeguard our information for good. She shares an ambitious thought experiment, asking: Can we use our own bodies to verify our selves? While biometric data such as facial or palm print recognition have their own pitfalls (they can be easily fooled by, say, wearing a specially lighted hat or using a wax hand), what if we could use our DNA — our blood, hair or earwax? Nguyen acknowledges the ethical dilemmas and logistical nightmares that would come with collecting and storing more than seven billion files of DNA, but she can’t help but wonder if someday, in the far future, this will become the norm.

Quote of the talk: “Don’t you find it strange that we carry around these arbitrary, government assigned numbers or pieces of paper with our picture on it and some made-up passwords to prove we are who we say we are?  When, in fact, the most rock-solid proof of our identity is something we carry around in our cells — our DNA.”

“To anyone reeling from forces trying to knock you down and cram you into these neat little boxes people have decided for you — don’t break. I see you. My ancestors see you. Their blood runs through me as they run through so many of us. You are valid. And you deserve rights and recognition. Just like everyone else,” says France Villarta. He speaks at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

France Villarta, communications consultant

Big idea: Modern ideas of gender are much older than we may think.

How? In many cultures around the world, the social construct of gender is binary — man or woman, assigned certain characteristics and traits, all designated by biological sex. But that’s not the case for every culture. France Villarta details the gender-fluid history of his native Philippines and how the influence of colonial rule forced narrow-minded beliefs onto its people. In a talk that’s part cultural love letter, part history lesson, Villarta emphasizes the beauty and need in reclaiming gender identities. “Oftentimes, we think of something as strange only because we’re not familiar with it or haven’t taken enough time to try and understand,” he says. “The good thing about social constructs is that they can be reconstructed — to fit a time and age.”

Quote of the talk: “To anyone reeling from forces trying to knock you down and cram you into these neat little boxes people have decided for you — don’t break. I see you. My ancestors see you. Their blood runs through me as they run through so many of us. You are valid. And you deserve rights and recognition. Just like everyone else.”

Dancer Simone Cooper performs a self-choreographed dance onstage at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

Dean Furness, analytic consultant

Big idea: You can overcome personal challenges by focusing on yourself, instead of making comparisons to others.

How? After a farming accident paralyzed Dean Furness below the waist, he began the process of adjusting to life in a wheelchair. He realized he’d have to nurture and focus on this new version of himself, rather than fixate on his former height, strength and mobility. With several years of rehabilitation and encouragement from his physical therapist, Furness began competing in the Chicago and Boston marathons as a wheelchair athlete. By learning how to own each day, he says, we can all work to get better, little by little.

Quote of the talk: “Take some time and focus on you, instead of others. I bet you can win those challenges and really start accomplishing great things.”


John Puthenveetil, financial advisor

Big idea: Because of the uncertain world we live in, many seek solace from “certainty merchants” — like physicians, priests and financial advisors. Given the complex, chaotic mechanisms of our economy, we’re better off discarding “certainty” for better planning.

How? We must embrace adaptable plans that address all probable contingencies, not just the most obvious ones. This is a crucial component of “scenario-based planning,” says John Puthenveetil. We should always aim for being approximately right rather than precisely wrong. But this only works if we pay attention, heed portents of possible change and act decisively — even when that’s uncomfortable.

Quote of the talk: “It is up to us to use [scenario-based planning] wisely: Not out of a sense of weakness or fear, but out of the strength and conviction that comes from knowing that we are prepared to play the hand that is dealt.”


Johanna Figueira, digital marketing consultant

Big idea: The world is more connected than ever, but some communities are still being cut off from vital resources. The solution? Digitally matching professional expertise with locals who know what their communities really need.

How? Johanna Figueira is one of millions who has left Venezuela due to economic crisis, crumbling infrastructure and decline in health care — but she hasn’t left these issues behind. With the help of those still living in the country, Figueira helped organize Code for Venezuela — a platform that matches experts with communities in need to create simple, effective tools to improve quality of life. She shares two of their most successful projects: MediTweet, an intelligent Twitter bot that helps Venezuelans find medicinal supplies, and Blackout Tracker, a tool that helps pinpoint power cuts in Venezuela that the government won’t report. Her organization shows the massive difference made when locals participate in their own solutions.

Quote of the talk: “Some people in Silicon Valley may look at these projects and say that they’re not major technological innovations. But that’s the point. These projects are not insanely advanced — but it’s what the people of Venezuela need, and they can have a tremendous impact.”


Jeanne Goldie, branch sales manager

Big idea: We’re looking for dynamic hotbeds of innovation in all the wrong places.

How? Often, society looks to the young for the next big thing, leaving older generations to languish in their shadow until being shuffled out altogether, taking their brain power and productivity with them. Instead of discarding today’s senior workforce, Jeanne Goldie suggests we tap into their years of experience and retrain them, just as space flight has moved from the disposable rockets of NASA’s moon launches to today’s reusable SpaceX models.

Quote of the talk: “If we look at data and technology as the tools they are … but not as the answer, we can come up with better solutions to our most challenging problems.”


Rebecca Knill, business systems consultant

Big idea: By shifting our cultural understanding of ability and using technology to connect, we can build a more inclusive and human world.

How? The medical advances of modern technology have improved accessibility for disabled communities. Rebecca Knill, a self-described cyborg who has a cochlear implant, believes the next step to a more connected world is changing our perspectives. For example, being deaf isn’t shameful or pitiful, says Knill — it’s just a different way of navigating the world. To take full advantage of the fantastic opportunities new technology offers us, we must drop our assumptions and meet differences with empathy.

Quote of the talk: “Technology has come so far. Our mindset just needs to catch up.”

“We have to learn to accept where people are and adjust ourselves to handle those situations … to recognize when it is time to professionally walk away from someone,” says business consultant Anastasia Penright. She speaks at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

Anastasia Penright, business consultant

Big idea: No workplace is immune to drama, but there are steps we can follow to remove ourselves from the chatter and focus on what’s really important.

How? No matter your industry, chances are you’ve experienced workplace drama. In a funny and relatable talk, Anastasia Penright shares a better way to coexist with our coworkers using five simple steps she’s taken to leave drama behind and excel in her career. First, we must honestly evaluate our own role in creating and perpetuating conflicts; then evaluate our thoughts and stop thinking about every possible scenario. Next, it’s important to release our negative energy to a trusted confidant (a “venting buddy”) while trying to understand and accept the unique communication styles and work languages of our colleagues. Finally, she says, we need to recognize when we’re about to step into drama and protect our energy by simply walking away.

Quote of the talk: “We have to learn to accept where people are and adjust ourselves to handle those situations … to recognize when it is time to professionally walk away from someone.”

Jason Jet performs the toe-tapping, electro-soul song “Time Machine” at TED@WellsFargo at the Knight Theater on February 5, 2020, in Charlotte, North Carolina. (Photo: Ryan Lash / TED)

CryptogramWallpaper that Crashes Android Phones

This is interesting:

The image, a seemingly innocuous sunset (or dawn) sky above placid waters, may be viewed without harm. But if loaded as wallpaper, the phone will crash.

The fault does not appear to have been maliciously created. Rather, according to developers following Ice Universe's Twitter thread, the problem lies in the way color space is handled by the Android OS.

The image was created using the RGB color space to display image hues, while Android 10 uses the sRGB color space protocol, according to 9to5Google contributor Dylan Roussel. When the Android phone cannot properly convert the Adobe RGB image, it crashes.

Worse Than FailureCodeSOD: Synchronize Your Clocks

Back when it was new, one of the “great features” of Java was that it made working with threads “easy”. Developers learning the language were encouraged to get a grip on threads right away, because that was the new thing which would make their programs so much better.

Well, concurrency is hard. Or, to put it another way, “I had a problem, so I decided to use threads. Now I have two problems.”

Another thing that’s hard in Java is working with dates and times.

Larisa inherited some code which wanted to be able to check the current system time in a threadsafe fashion. They weren’t doing anything fancy- no timezones, no formatting, just getting the Unix Timestamp off the clock. If you’re thinking to yourself, “It’s just a read operation and there’s no need to bring threads into this at all,” you obviously didn’t write today’s code block.

The right way to do this in Java 8+, would be to use the built-in java.time objects, but in older versions of Java you might need to do something like this:

long currentTime = System.currentTimeMillis();

But that doesn’t involve any design patterns, any synchronized code blocks to protect against multiple threads, and simply isn’t Enterprise enough.

public class Clock {
    private static Clock sfClock = null;

    protected static synchronized void register(Clock testClock) {
        sfClock = testClock;
    }

    public static synchronized Clock getIt() {
        if (sfClock == null) {
            sfClock = new Clock();
        }
        return sfClock;
    }

    public static long now() {
        return getIt().nowImpl();
    }

    protected long nowImpl() {
        return System.currentTimeMillis();
    }

}

This is an attempt to implement the Singleton pattern, which is the go-to pattern for people to use, because it’s the easiest to understand and implement and doubles as what is basically a global variable.

You’ll note that there’s no constructor, since there’s no internal state, so there’s no point in making this a singleton.

getIt will create an instance if there isn’t one, but you can also supply an instance via register. You might think that the developer put some thought into how this class would be tested, but again- there’s no internal state or even any internal logic. You could inherit from Clock to make a MockClock that could be used in testing, but that is a long hill to climb to justify this.

The real genius, though, is that our ugly getIt method doesn’t ever have to be directly invoked. Instead, now does that for you. Clock.now() will call getIt, which gets an instance of Clock, then invoke nowImpl, the actual implementation of our now method.

For bonus points, the reason Larisa found this was that there are a lot of threads in this program, and they’re tying timestamps to actions on the regular, so the fact that getIt is synchronized was actually killing performance.

None of this code is necessary. It’s an over-engineered solution to a problem nobody actually had.
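For contrast, here is a minimal sketch of what the same requirement needs (nothing beyond the standard library; and if the unused test hook was the real justification, Java 8’s built-in java.time.Clock already provides an injectable abstraction):

```java
import java.time.Clock;

public class ClockDemo {
    public static void main(String[] args) {
        // Reading the system clock is already thread-safe: no singleton,
        // no synchronized blocks, no registration hook required.
        long direct = System.currentTimeMillis();

        // The injectable alternative that ships with Java 8+.
        Clock clock = Clock.systemUTC();
        long viaClock = clock.millis();

        // Both read the same underlying clock, so they agree to within
        // however long the two calls took.
        System.out.println(Math.abs(viaClock - direct) < 1000); // prints "true"
    }
}
```

A test double then becomes `Clock.fixed(instant, zone)` instead of a bespoke `MockClock` subclass wired in through a static `register` method.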

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Krebs on SecurityREvil Ransomware Gang Starts Auctioning Victim Data

The criminal group behind the REvil ransomware enterprise has begun auctioning off sensitive data stolen from companies hit by its malicious software. The move marks an escalation in tactics aimed at coercing victims to pay up — and publicly shaming those who don’t. But it may also signal that ransomware purveyors are searching for new ways to profit from their crimes as victim businesses struggle just to keep the lights on during the unprecedented economic slowdown caused by the COVID-19 pandemic.

Over the past 24 hours, the crooks responsible for spreading the ransom malware “REvil” (a.k.a. “Sodin” and “Sodinokibi“) used their Dark Web “Happy Blog” to announce their first ever stolen data auction, allegedly selling files taken from a Canadian agricultural production company that REvil says has so far declined its extortion demands.

A partial screenshot from the REvil ransomware group’s Dark Web blog.

The victim firm’s auction page says a successful bidder will get three databases and more than 22,000 files stolen from the agricultural company. It sets the minimum deposit at $5,000 in virtual currency, with the starting price of $50,000.

Prior to this auction, REvil — like many other ransomware gangs — has sought to pressure victim companies into paying up mainly by publishing a handful of sensitive files stolen from their extortion targets, and threatening to release more data unless and until the ransom demand is met.

Experts say the auction is a sign that ransomware groups may be feeling the financial pinch from the current economic crisis, and are looking for new ways to extract value from victims who are now less likely or able to pay a ransom demand.

Lawrence Abrams, editor of the computer help and news Web site BleepingComputer, said while some ransomware groups have a history of selling victim data on cybercrime forums, this latest move by REvil may be just another tactic used by criminals to force victims to negotiate a ransom payment.

“The problem is a lot of victim companies just don’t have the money [to pay ransom demands] right now,” Abrams said. “Others have gotten the message about the need for good backups, and probably don’t need to pay. But maybe if the victim is seeing their data being actively bid on, they may be more inclined to pay the ransom.”

There is some evidence to suggest that the recent economic downturn wrought by COVID-19 has had a measurable impact on ransomware payouts. A report published in mid-April by cryptocurrency research firm Chainalysis found that ransomware payments “have decreased significantly since the COVID-19 crisis intensified in the U.S. and Europe in early March.”

Abrams said other ransomware groups have settled on different methods to increase victim payouts, noting that one prominent gang is now doubly extorting targets — demanding one payment amount in return for a digital key that can unlock files scrambled by the malware, and another payment in exchange for a promise to permanently delete data stolen from the victim.

The implied threat is that victims who pay to recover their files but don’t bite on the deletion payment can expect to see their private data traded, published or sold on the Dark Web.

“Some of these [extortion groups] have said if they don’t get paid they’re going to sell the victim’s data on the Dark Web, in order to recoup their costs,” Abrams said. “Others are now charging a fee not only for the ransomware decryptor, but also a fee to delete the victim’s data. So it’s a double vig.”

The FBI and multiple security firms have advised victims not to pay any ransom demands, as doing so just encourages the attackers and in any case may not result in actually regaining access to encrypted files. In practice, however, many cybersecurity consulting firms are quietly telling their customers that paying up is the fastest route back to business-as-usual.

Here are a few tips that can help reduce the likelihood that you or your organization will fall victim to a ransomware attack:

-Patch, early and often: Many ransomware attacks leverage known security flaws in servers and desktops.

-Disable RDP: Short for Remote Desktop Protocol, this feature of Windows allows a system to be remotely administered over the Internet. A ridiculous number of businesses — particularly healthcare providers — get hit with ransomware because they leave RDP open to the Internet and secured with easy-to-guess passwords. And there are a number of criminal services that sell access to brute-forced RDP installations.

-Filter all email: Invest in security systems that can block executable files at the email gateway.

-Isolate mission-critical systems and data: This can be harder than it sounds. It may be worth hiring a competent security firm to make sure this is done right.

-Backup key files and databases: Bear in mind that ransomware can encrypt any network or cloud-based files or folders that are mapped and have been assigned a drive letter. Backing up to a secondary system that is not assigned a drive letter or is disconnected when it’s not backing up data is key. The old “3-2-1” backup rule comes into play here: Wherever possible, keep three backups of your data, on two different storage types, with at least one backup offsite.

-Disable macros in Microsoft Office: Block external content in Office files. Educate users that ransomware very often succeeds only when a user opens Office file attachment sent via email and manually enables Macros.

-Enable controlled folder access: Create rules to disallow the running of executable files in Windows from local user profile folders (App Data, Local App Data, ProgramData, Temp, etc.)

Sites like nomoreransom.org distribute free decryptor tools that can help some ransomware victims recover files without paying a ransom demand.

CryptogramPassword Changing After a Breach

This study shows that most people don't change their passwords after a breach, and if they do they change it to a weaker password.

Abstract: To protect against misuse of passwords compromised in a breach, consumers should promptly change affected passwords and any similar passwords on other accounts. Ideally, affected companies should strongly encourage this behavior and have mechanisms in place to mitigate harm. In order to make recommendations to companies about how to help their users perform these and other security-enhancing actions after breaches, we must first have some understanding of the current effectiveness of companies' post-breach practices. To study the effectiveness of password-related breach notifications and practices enforced after a breach, we examine -- based on real-world password data from 249 participants -- whether and how constructively participants changed their passwords after a breach announcement.

Of the 249 participants, 63 had accounts on breached domains; only 33% of the 63 changed their passwords and only 13% (of 63) did so within three months of the announcement. New passwords were on average 1.3× stronger than old passwords (when comparing log10-transformed strength), though most were weaker or of equal strength. Concerningly, new passwords were overall more similar to participants' other passwords, and participants rarely changed passwords on other sites even when these were the same or similar to their password on the breached domain. Our results highlight the need for more rigorous password-changing requirements following a breach and more effective breach notifications that deliver comprehensive advice.

News article.

EDITED TO ADD (6/2): Another news article. Slashdot thread.

Cryptogram"Sign in with Apple" Vulnerability

Researcher Bhavuk Jain discovered a vulnerability in the "Sign in with Apple" feature, and received a $100,000 bug bounty from Apple. Basically, forged tokens could gain access to pretty much any account.

It is fixed.

EDITED TO ADD (6/2): Another story.

Worse Than FailureCodeSOD: Try a Different Version

Back when I was still working for a large enterprise company, I did a lot of code reviews. This particular organization didn’t have much interest in code quality, so a lot of the code I was reviewing was just… bad. Often, I wouldn’t even need to read the code to see that it was bad.

In the olden times, inconsistent or unclear indentation was a great sign that the code would be bad. As IDEs started automating indentation, you lost that specific signal, but gained a new one. You can just tell code is bad when it’s shaped like this:

public List<Integer> getDocSectionsChanged(CustomerVersionTag versionTag) {
	Set<Integer> sections = new HashSet<>();
	for (Map.Entry<String, List<String>> entry : getVersionChanges().get(versionTag).entrySet()) {
		for (F.Tuple<CustomerVersioningDocSection, Map<String, List<String>>> tuple : getDocSectionToSdSection()) {
			for (Map.Entry<String, List<String>> entry2 : tuple._2.entrySet()) {
				if (entry.getKey().startsWith(entry2.getKey())) {
					for (String change : entry.getValue()) {
						for (String lookFor : entry2.getValue()) {
							if (change.startsWith(lookFor)) {
								sections.add(getDocSectionNumber(tuple._1));
							}
						}
					}
				}
			}
		}
	}
	return sections.stream().sorted(Integer::compareTo).collect(Collectors.toList());
}

Torvalds might not think much of 80 character lines, but exceedingly long lines are definitely a code smell, especially when they're mostly whitespace.

This is from a document management system. It tracks versions of documents, and a new feature was requested: finer grained reporting on which sections of the document changed between versions. That information was already stored, so all a developer needed to do was extract it into a list of section numbers.

Edda’s entire team agreed that this would be a simple task, and estimated a relatively short time to build it- hours, maybe a day at the outside. Two weeks later when it was finally delivered, the project manager wanted to know how their estimate had gotten so off.

At a glance, you know the code is bad, because it’s shaped badly. That level of indentation is always a quick sign that something’s badly built. But then note the nested loops: a startsWith in a loop in a loop in a startsWith in a loop in a loop in a loop. The loops don’t even always make sense to be nested- the outermost loops across an entrySet to get entries, but the next loop iterates across the result of getDocSectionToSdSection(), which takes no parameters- the 2nd loop isn’t actually driven by anything extracted in the 1st loop. The inner-most pair of loops seem to be an attempt to compare every change in two entry objects to see if there’s a difference at any point.

I don’t know their API, so I certainly don’t know the right approach, but at a glance, it’s clear that this is the wrong approach. With the nested code structures and the deeply nested generics (types like F.Tuple&lt;CustomerVersioningDocSection, Map&lt;String, List&lt;String&gt;&gt;&gt; are another key sign somebody messed up), I don’t have any idea what the developer was thinking or what the purpose of this code was. I don’t know what they were going for, but I hope to the gods they missed.
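Without knowing their API any rewrite is guesswork, but the general shape of the fix is clear: pull each startsWith scan into a named helper, so the nesting reads as intent instead of indentation. A hypothetical sketch, with all names invented for illustration:

```java
import java.util.List;

public class NestedScanDemo {
    // Hypothetical helper: does any change string start with any of the
    // prefixes we look for? Each level of the original loop pyramid
    // collapses into a single anyMatch.
    static boolean anyStartsWith(List<String> changes, List<String> prefixes) {
        return changes.stream()
                .anyMatch(change -> prefixes.stream().anyMatch(change::startsWith));
    }

    public static void main(String[] args) {
        List<String> changes = List.of("body/para1/text", "intro/title");
        List<String> lookFor = List.of("body/", "appendix/");
        System.out.println(anyStartsWith(changes, lookFor)); // prints "true"
    }
}
```

Likewise, declaring the result set as a TreeSet&lt;Integer&gt; would replace the closing HashSet-plus-sorted-stream dance in one line.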

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Cory DoctorowHow Big Tech Monopolies Distort Our Public Discourse

This week, I’m podcasting How Big Tech Monopolies Distort Our Public Discourse, a new article I wrote for the Electronic Frontier Foundation’s Deeplinks blog. It’s the most comprehensive of the articles I’ve written about the problems of surveillance capitalism, a subject I’ve also addressed in a forthcoming, book-length essay. In a nutshell, my dispute with the “surveillance capitalism” hypothesis is that I think it overstates how effective Big Tech is at changing our minds with advanced machine learning techniques, while underplaying the role that monopoly plays in allowing Big Tech to poison and distort our public discourse.

I think this is a distinction with a difference, because if Big Tech has figured out how to use data to rob us of our free will, anti-monopoly enforcement won’t solve the problem – it’ll just create lots of smaller companies with their own Big Data mind-control rays. But if the problem rests in monopoly itself, then we can solve the problem with anti-monopoly techniques that have been used to counter every other species of robber-baron, from oil to aluminum to groceries to telephones.

MP3

Worse Than FailureCodeSOD: Don't be so Negative Online

It's fair to say that regardless of their many advantages, "systems languages", like C, are much harder to use than their more abstract cousins. Vendors know this, which is why they often find a way to integrate across language boundaries. You might write critical functions in C or C++, then invoke them in Python or from Swift or… Visual Basic 6.

And crossing those language boundaries can pose other challenges. For example, Python has a built-in boolean type. C, for quite a long time didn't. Which means a lot of C code has blocks like this:

#define BOOL int
#define FALSE 0
#define TRUE 1
#define FILE_NOT_FOUND 2

Carl C provides that block, just for illustration purposes. Awhile back, he inherited a big pile of antique COM+ and the associated VB6 front end, along with a herd of "COM Wizards" and "Junior VB Programmers" to help maintain it.

The idea of the system was that all the "hard stuff" would be done in C++, while the UI would be a VB6 application. The C++ COM layer talked to some hardware which could be attached to the PC, and the VB6 layer let the user check the status and interact with the device.

Unfortunately, the Junior VB Programmers quickly encountered a problem: they could NEVER get the device online. Plugging, unplugging, rebooting, trying different ports, different computers, it never worked. But when the "COM wizards" tossed them a diagnostic program written in C++, things worked fine.

"Must be a problem in your VB code," was the obvious conclusion.

Dim oHardware as New HardwareServer

' Initialize Hardware
oHardware.Initialize 0

If oHardware.ONLINE = True Then
    Set oActuator = oHardware.Actuator
Else
    MsgBox "Hardware did not initialize correctly."
    End
End If

Reading through that code, it's hard to see at a glance what could be wrong about it. Could the problem be in the COM layer?

interface IHardwareServer : IDispatch
{
    [propget, id(1)] HRESULT Actuator([out, retval] IActuator* *pVal);
    [propget, id(2)] HRESULT ONLINE([out, retval] BOOL *pVal);
    [id(3)] HRESULT Initialize(short interfaceID);
};

coclass HardwareServer
{
    [default] interface IHardwareServer;
};

While the COM approach to defining a property is a little awkward, nothing at a glance looks wrong here, either. ONLINE is a property that returns a BOOL.

But this is a C++ boolean. Defined so that true is one and false is zero.

Visual Basic, in addition to letting arrays start at 1 if you really wanted to, had another quirk. It was pretty loosey-goosey with types (defaulting to the handy Variant type, which is a fancy way of saying "basically no type at all"), and the internal implementation of its types could be surprising.

For example, in VB6, False was zero, much like you'd expect. And True was… -1. Yes, -1. Carl suggests this was to "kinda sorta mostly hide the distinction between bitwise and logical operations", which does sound like the sort of design choice Visual Basic would make. This is also TRWTF.

Now, it's easy to see how the Visual Basic code above is wrong: oHardware.ONLINE = True is testing to see if ONLINE is equal to -1, which is not true. A more correct way of writing the Visual Basic would be simply to test if oHardware.ONLINE then…. Visual Basic is okay with falsy/truthy statements, so whether ONLINE is 1 or -1, that would evaluate as true.
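The failure is easy to reproduce with plain integers, which is all a BOOL is on the wire. A sketch of the mismatch, using the values the post describes (C++ TRUE is 1, VB6 True is -1):

```java
public class BoolMismatchDemo {
    public static void main(String[] args) {
        int comOnline = 1;  // what the C++ COM layer returns for "true" (BOOL TRUE)
        int vb6True = -1;   // VB6's internal representation of True (all bits set)

        // The broken VB6 test: If oHardware.ONLINE = True
        System.out.println(comOnline == vb6True); // prints "false" -- device "never" online

        // The truthy test: If oHardware.ONLINE Then
        System.out.println(comOnline != 0);       // prints "true" -- works for 1 and -1 alike
    }
}
```

Any nonzero value passes the truthy test, which is why it is safe against either convention while strict equality against True is not.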

That doesn't let the COM programmers off the hook though. COM was designed to work across languages, and COM was designed with the understanding that different languages might have different internal representations of boolean values.

As Carl adds:

Of course if they were really COM wizards they would have used the VARIANT_BOOL type in the first place, and returned VARIANT_TRUE or VARIANT_FALSE.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

TEDValues reset: The talks of TED2020 Session 2

There’s a theory that the shock we’re currently experiencing is intense enough to force a radical reset of our values — of how we are and how we act. In an idea-packed session 2 of TED2020, speakers from across disciplines and walks of life looked to this aspiration of a “values reset,” sharing new thinking on topics ranging from corporate responsibility down to our individual responsibilities and the things each of us can do right now. Below, a recap of the night’s inspiring talks and performances.

“Nobody works in a vacuum. The men and women who run companies actively cocreate the reality we all have to share. And just like with global warming, we are each of us responsible for the collective consequences of our individual decisions and actions,” says filmmaker and activist Abigail Disney. She speaks at TED2020: Uncharted on May 28, 2020. (Photo courtesy of TED)

Abigail Disney, Filmmaker, activist

Big idea: Respect, dignity and a guaranteed livable wage are the right of all workers, not the privilege of a select few.

How? As CEO of the Disney Company, Roy Disney believed he had a moral obligation to every person who worked at the company. Though her grandfather wasn’t perfect, Abigail Disney says he believed that workers were worthy of immense respect — and he put that belief into practice by creating jobs with fair wages and benefits. In honor of her grandfather’s legacy, Disney advocates for income equality for all workers — and calls out the company that bears her name, asking them to do better for their workers. Our conscience and empathy should drive us, she says, not profits or economic growth. Disney believes we need a system-wide shift, one that recognizes that all workers deserve the wages, protections and benefits that would enable them to live full, secure and dignified lives.

Quote of the talk: “Nobody works in a vacuum. The men and women who run companies actively cocreate the reality we all have to share. And just like with global warming, we are each of us responsible for the collective consequences of our individual decisions and actions.”


Backed by brilliant illustrations from Laolu Senbanjo, journalist and satirist Adeola Fayehun shares her work exposing corruption in Africa with sharp, incisive humor. She speaks at TED2020: Uncharted on May 28, 2020. (Photo courtesy of TED)

Adeola Fayehun, Journalist, satirist

Big idea: Africa is overflowing with all the natural resources, intellectual skill and talent it needs. To flourish, its people need to hold corrupt leaders accountable.

Why? On her show Keeping It Real With Adeola, Adeola Fayehun exposes corruption in Africa with sharp, incisive humor. She urges those outside Africa to stop seeing the continent through the lens of their biases, and encourages us all to call out false policies and shatter stereotypes. “Please listen more,” she says. “Listen to your African friends without a preconceived notion of what you think they’re going to say. Read African books, watch African movies, visit Africa or, at the very least, learn some of the names of our 54 beautiful countries.”

Quote of the talk: “Africa is like a sleeping giant. The truth is I am trying to wake up this giant. That’s why I air the dirty laundry of those in charge of the giant.”


Rufus Wainwright performs “Peaceful Afternoon” and “Going To A Town” at TED2020: Uncharted on May 28, 2020. (Photo courtesy of TED)

From his home in Los Angeles, songwriter Rufus Wainwright shares intimate versions of his songs “Peaceful Afternoon” and “Going To A Town.” Gorgeous slow pans are courtesy of Jörn Weisbrodt, Wainwright’s husband and videographer for the performances.


“We hate the idea that really important things in life might happen by luck or by chance, that really important things in our life are not under our control,” says psychology professor Barry Schwartz. He speaks at TED2020: Uncharted on May 28, 2020. (Photo courtesy of TED)

Barry Schwartz, Psychology professor

Big idea: Our society is predicated on the idea that the distribution of opportunity is fair — but, in reality, working hard and playing by the rules is no guarantee of success. Good fortune and luck have far more to do with our opportunities (and therefore our future success) than we’re willing to admit.

How? Just look at the ultra-competitive landscape of college admissions, where a dearth of slots for qualified and capable students has created an epidemic of anxiety and depression among teenage university applicants long before they even make it to the job market. Schwartz suggests that the belief that working hard automatically leads to success blinds us to a core injustice: many of us simply will not get what we want. If our educational institutions — and our nation’s employers — were to emphasize this injustice by picking their students and employees randomly from a pool of those most likely to succeed, we might be forced to recognize the role that fortune plays in our lives.

Quote of the talk: “We hate the idea that really important things in life might happen by luck or by chance, that really important things in our life are not under our control.”


“I have a choice, right now, in the midst of the storm, to decide to overcome,” says Seattle Seahawks quarterback Russell Wilson. He speaks at TED2020: Uncharted on May 28, 2020. (Photo courtesy of TED)

Russell Wilson, Seattle Seahawks quarterback

Big idea: “Neutral thinking” can transform your life and help you unlock sustained personal success.

How? Athletes train their bodies to run faster, jump higher, achieve more — so why don’t they train their minds, too? For the past 10 years, Wilson has been doing just that with the assistance of mental conditioning coach Trevor Moawad. By harnessing the technique of “neutral thinking” — a strategy that emphasizes judgment-free acceptance of the present moment — Wilson has been able to maintain focus in high-pressure situations. Positivity can be dangerous and distracting, Wilson says, and negativity is sure to bring you down — but by honing a neutral mental game and executing in the present moment, you set yourself up to succeed.

Quote of the talk: “I have a choice, right now, in the midst of the storm, to decide to overcome.”

Planet Linux AustraliaDavid Rowe: Effective Altruism

Long term readers of the blog may recall my daughter Amy. Well, she has moved on from teenage partying and is now e-volunteering at Effective Altruism Australia. She recently pointed me at the free e-book The Life You Can Save by Peter Singer.

I was already familiar with the work of Peter Singer, having read “The Most Good You Can Do”. Peter puts numbers on altruistic behaviour to evaluate it. This appeals to me – as an engineer I use numbers to evaluate artefacts I build like modems, or other processes going on in the world like COVID-19.

Using technology to help people is a powerful motivator for Geeks. I’ve been involved in a few of these initiatives myself (OLPC and The Village Telco). It’s really tough to create something that helps people long term. A wider set of skills and capabilities are required than just “the technology”.

On my brief forays into the developing world I’ve seen ecologies of people (from the first and developing worlds) living off development dollars. In some cases there is no incentive to report the true outcomes, for example how many government bureaucrats want to report failure? How many consultants want the gig to end?

So I really get the need for scientific evaluation of any development endeavours. Go Peter and the Effective Altruism movement!

I spend around 1000 hours a year writing open source code, a strong argument that I am “doing enough” in the community space. However I have no idea how effective that code is. Is it helping anyone? My inclination to help is also mixed with “itch scratching” – geeky stuff I want to work on because I find it interesting.

So after the reading the book and having a think – I’m sold. I have committed 5% of my income to Effective Altruism Australia, selecting Give Directly as a target for my funds as it appealed to me personally.

I asked Amy to proofread this post – and she suggested that instead of dollars, you can donate time – that’s what she does. She also said:

Effective Altruism opens your eyes to alternative ways to interact with charities. It combines the broad field of social science to explore how many aspects intersect, applying the scientific method to economics, psychology, international development, and anthropology.

Reading Further

Busting Teenage Partying with a Fluksometer
Effective Altruism Australia

,

Planet Linux AustraliaSimon Lyall: AudioBooks – May 2020

Fewer books this month. At home on lockdown and weather a bit worse, so less time to go on walks and listen.

Save the Cat! Writes a Novel: The Last Book On Novel Writing You’ll Ever Need by Jessica Brody

A fairly straight adaptation of the screenplay-writing manual. Lots of examples from well-known books, including full breakdowns of beats. 3/5

Happy Singlehood: The Rising Acceptance and Celebration of Solo Living by Elyakim Kislev

Based on 142 interviews. A lot of summaries of findings, with quotes from interviewees and people’s blogs. The last chapter has some policy push but is a little light. 3/5

Scandinavia: A History by Ewan Butler

Just a 6-hour quick spin through history. The first half suffers a bit from lists of kings, although there is a bit more colour later on. Okay prep for something meatier. 3/5

One Giant Leap: The Impossible Mission That Flew Us to the Moon by Charles Fishman

A bit of a mix. It covers the legacy of Apollo, but the best bits are the chapters on the computers, politics and other behind-the-scenes things. A complement to astronaut and mission orientated books. 4/5

My Scoring System

  • 5/5 = Brilliant, top 5 book of the year
  • 4/5 = Above average, strongly recommend
  • 3/5 = Average, in the middle 70% of books I read
  • 2/5 = Disappointing
  • 1/5 = Did not like at all


,

CryptogramFriday Squid Blogging: Humboldt Squid Communication

Humboldt Squid communicate by changing their skin patterns and glowing.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityCareer Choice Tip: Cybercrime is Mostly Boring

When law enforcement agencies tout their latest cybercriminal arrest, the defendant is often cast as a bravado outlaw engaged in sophisticated, lucrative, even exciting activity. But new research suggests that as cybercrime has become dominated by pay-for-service offerings, the vast majority of day-to-day activity needed to support these enterprises is in fact mind-numbingly boring and tedious, and that highlighting this reality may be a far more effective way to combat cybercrime and steer offenders toward a better path.

Yes, I realize hooded hacker stock photos have become a meme, but that’s the point.

The findings come in a new paper released by researchers at Cambridge University’s Cybercrime Centre, which examined the quality and types of work needed to build, maintain and defend illicit enterprises that make up a large portion of the cybercrime-as-a-service market. In particular, the academics focused on botnets and DDoS-for-hire or “booter” services, the maintenance of underground forums, and malware-as-a-service offerings.

In examining these businesses, the academics stress that the romantic notions of those involved in cybercrime ignore the often mundane, rote aspects of the work that needs to be done to support online illicit economies. The researchers concluded that for many people involved, cybercrime amounts to little more than a boring office job sustaining the infrastructure on which these global markets rely, work that is little different in character from the activity of legitimate system administrators.

Richard Clayton, a co-author of the report and director of Cambridge’s Cybercrime Centre, said the findings suggest policymakers and law enforcement agencies may be doing nobody a favor when they issue aggrandizing press releases that couch their cybercrime investigations as targeting sophisticated actors.

“The way in which everyone looks at cybercrime is they’re all interested in the rockstars and all the exciting stuff,” Clayton told KrebsOnSecurity. “The message put out there is that cybercrime is lucrative and exciting, when for most of the people involved it’s absolutely not the case.”

From the paper:

“We find that as cybercrime has developed into industrialized illicit economies, so too have a range of tedious supportive forms of labor proliferated, much as in mainstream industrialized economies. We argue that cybercrime economies in advanced states of growth have begun to create their own tedious, low-fulfillment jobs, becoming less about charismatic transgression and deviant identity, and more about stability and the management and diffusion of risk. Those who take part in them, the research literature suggests, may well be initially attracted by exciting media portrayals of hackers and technological deviance.”

“However, the kinds of work and practices in which they actually become involved are not reflective of the excitement and exploration which characterized early ‘hacker’ communities, but are more similar to low-level work in drug dealing gangs, involving making petty amounts of money for tedious work in the service of aspirations that they may one day be one of the major players. This creates the same conditions of boredom…which are found in mainstream jobs when the reality emerges that these status and financial goals are as blocked in the illicit economy as they are in the regular job market.”

The researchers drew on interviews with people engaged in such enterprises, case studies on ex- or reformed criminal hackers, and from scraping posts by denizens of underground forums and chat channels. They focused on the activity needed to keep various crime services operating efficiently and free from disruption from interlopers, internecine conflict, law enforcement or competitors.

BOOTER BLUES

For example, running an effective booter service requires a substantial amount of administrative work and maintenance, much of which involves constantly scanning for, commandeering and managing large collections of remote systems that can be used to amplify online attacks.

Booter services (a.k.a. “stressers”) — like many other cybercrime-as-a-service offerings — tend to live or die by their reputation for uptime, effectiveness, treating customers fairly, and for quickly responding to inquiries or concerns from users. As a result, these services typically require substantial investment in staff needed for customer support work (through a ticketing system or a realtime chat service) when issues arise with payments or with clueless customers failing to understand how to use the service.

In one interview with a former administrator of a booter service, the proprietor told researchers he quit and went on with a normal life after getting tired of dealing with customers who took for granted all the grunt work needed to keep the service running. From the interview:

“And after doing [it] for almost a year, I lost all motivation, and really didn’t care anymore. So I just left and went on with life. It wasn’t challenging enough at all. Creating a stresser is easy. Providing the power to run it is the tricky part. And when you have to put all your effort, all your attention. When you have to sit in front of a computer screen and scan, filter, then filter again over 30 amps per 4 hours it gets annoying.”

The researchers note that this burnout is an important feature of customer support work, “which is characterized less by a progressive disengagement with a once-interesting activity, and more by the gradual build-up of boredom and disenchantment, once the low ceiling of social and financial capital which can be gained from this work is reached.”

WHINY CUSTOMERS

Running a malware-as-a-service offering also can take its toll on developers, who quickly find themselves overwhelmed with customer support requests and negative feedback when a well-functioning service has intermittent outages.

Indeed, the author of the infamous ZeuS Trojan — a powerful password stealing tool that paved the way for hundreds of millions of dollars stolen from hacked businesses — is reputed to have quit the job and released the source code for the malware (thus spawning an entire industry of malware-as-a-service offerings) mainly to focus his skills on less tedious work than supporting hundreds of customers.

“While they may sound glamorous, providing these cybercrime services require the same levels of boring, routine work as is needed for many non-criminal enterprises, such as system administration, design, maintenance, customer service, patching, bug-fixing, account-keeping, responding to sales queries, and so on,” the report continues.

To some degree, the ZeuS author’s experience may not be the best example, because his desire to get away from supporting hundreds of customers ultimately led to his focusing attention and resources on building a far more sophisticated malware threat — the peer-to-peer based Gameover malware that he leased to a small group of organized crime groups.

Likewise, the cover story in this month’s Wired magazine profiles Marcus Hutchins, who said he “quickly grew bored with his botnets and his hosting service, which he found involved placating a lot of ‘whiny customers.’ So he quit and began to focus on something he enjoyed far more: perfecting his own malware.”

BORING THEM OUT OF BUSINESS

Cambridge’s Clayton and his colleagues argue the last two examples are more the exception than the rule, and that their research points to important policy implications for fighting cybercrime that are often discounted or overlooked: Namely, interventions that focus on the economics of attention and boredom, and on making such work as laborious and boring as possible.

Many cybersecurity experts often remark that taking down domain names and other infrastructure tied to cybercrime businesses amounts to little more than a game of whack-a-mole, because the perpetrators simply move somewhere else to resume their operations. But the Cambridge researchers note that each takedown creates further repetitive, tedious work for the administrators, who must set up their sites anew.

“Recent research shows that the booter market is particularly susceptible to interventions targeted at this infrastructural work, which make the jobs of these server managers more boring and more risky,” the researchers note.

The paper takes care to note that its depictions of the ‘boredom’ of the untrained administrative work carried out in the illicit economy should not be taken as impugning the valuable and complex work of legitimate system administrators. “Rather, it is to recognize that this is a different kind of knowledge and set of skills from engineering work, which needs to be taught, learned, and managed differently.”

The authors conclude that refocusing interventions in this way might also be supported by changes to the predominant forms of messaging used by law enforcement and policy professionals around cybercrime:

“If participation within these economies is in fact based in deviant aspiration rather than deviant experience, the currently dominant approaches to messaging, which tend to focus on the dangerous and harmful nature of these behaviors, the high levels of technical skill possessed by cybercrime actors, the large amounts of money made in illicit online economies, and the risk of detection, arrest, and prosecution are potentially counterproductive, only feeding the aspiration which drives this work. Conversely, by emphasizing the tedious, low-skilled, low-paid, and low-status reality of much of this work, messaging could potentially dissuade those involved in deviant online subcultures from making the leap from posting on forums to committing low-level crime.”

“Additionally, diversionary interventions that emphasize the shortage of sysadmin and ‘pen tester’ workers in the legitimate economy (“you could be paid really good money for doing the same things in a proper job”) need to recognize that pathways, motivations, and experiences may be rather more prosaic than might be expected.”

“Conceptualizing cybercrime actors as high-skilled, creative adolescents with a deep love for and understanding of technology may in fact mischaracterize most of the people on whom these markets depend, who are often low-skilled administrators who understand fairly little about the systems they maintain and administer, and whose approach is more akin to the practical knowledge of the maintainer than the systematic knowledge of a software engineer or security researcher. Finding all these bored people appropriate jobs in the legitimate economy may be as much about providing basic training as about parachuting superstars into key positions.”

Further reading: Cybercrime is (often) Boring: Maintaining the Infrastructure of Cybercrime Economies (PDF).

CryptogramBogus Security Technology: An Anti-5G USB Stick

The 5GBioShield sells for £339.60, and the description sounds like snake oil:

...its website, which describes it as a USB key that "provides protection for your home and family, thanks to the wearable holographic nano-layer catalyser, which can be worn or placed near to a smartphone or any other electrical, radiation or EMF [electromagnetic field] emitting device".

"Through a process of quantum oscillation, the 5GBioShield USB key balances and re-harmonises the disturbing frequencies arising from the electric fog induced by devices, such as laptops, cordless phones, wi-fi, tablets, et cetera," it adds.

Turns out that it's just a regular USB stick.

TEDTED2020 seeks the uncharted

The world has shifted, and so has TED.

We need brilliant ideas and thinkers more than ever. While we can’t convene in person, we will convene. Rather than a one-week conference, TED2020 will be an eight-week virtual experience — all held in the company of the TED community. Each week will offer signature TED programming and activities, as well as new and unique opportunities for connection and interaction. 

We have an opportunity to rebuild our world in a better, fairer and more beautiful way. In line with TED2020’s original theme, Uncharted, the conference will focus on the roles we all have to play in building back better. The eight-week program will offer ways to deepen community relationships and, together, re-imagine what the future can be.

Here’s what the TED2020 weekly program will look like: On Monday, Tuesday and Wednesday, a series of 45-minute live interviews, talks and debates centered on the theme Build Back Better. TED attendees can help shape the real-time conversation on an interactive, TED-developed virtual platform they can use to discuss ideas, share questions and give feedback to the stage. On Thursday, the community will gather to experience a longer mainstage TED session packed with unexpected moments, performances, visual experiences and provocative talks and interviews. Friday wraps up the week with an all-day, à la carte Community Day featuring an array of interactive choices including Discovery Sessions, speaker meetups and more.

 TED2020 speakers and performers include: 

  • JAD ABUMRAD, RadioLab founder 
  • CHRISTINA AGAPAKIS, Synthetic biology adventurer
  • REFIK ANADOL, Digital arts maestro
  • XIYE BASTIDA, Climate justice activist
  • SWIZZ BEATZ, Hip-hop artist, producer
  • GEORGES C. BENJAMIN, Executive Director, American Public Health Association
  • BRENÉ BROWN, Vulnerability researcher, storyteller 
  • WILL CATHCART, Head of WhatsApp
  • JAMIE DIMON, Global banker
  • ABIGAIL DISNEY, Filmmaker, activist
  • BILL GATES, Technologist, philanthropist
  • KRISTALINA GEORGIEVA, Managing Director, International Monetary Fund
  • JANE GOODALL, Primatologist, conservationist
  • AL GORE, Climate advocate
  • TRACY EDWARDS, Trailblazer
  • ISATA KANNEH-MASON, Pianist
  • SHEKU KANNEH-MASON, Cellist
  • NEAL KATYAL, Supreme Court litigator
  • EMILY KING, Singer, songwriter
  • YANN LECUN, AI pioneer
  • MICHAEL LEVIN, Cellular explorer
  • PHILIP LUBIN, Physicist
  • SHANTELL MARTIN, Artist
  • MARIANA MAZZUCATO, Policy influencer
  • MARCELO MENA, Environment minister of Chile
  • JACQUELINE NOVOGRATZ, Moral leader
  • DAN SCHULMAN, CEO and President, PayPal
  • AUDREY TANG, Taiwan’s digital minister for social innovation
  • DALLAS TAYLOR, Sound designer, podcaster
  • NIGEL TOPPING, Climate action champion
  • RUSSELL WILSON, Quarterback, Seattle Seahawks

The speaker lineup is being unveiled on ted2020.ted.com in waves throughout the eight weeks, as many speakers will be addressing timely and breaking news. Information about accessing the high-definition livestream of the entire conference and TED2020 membership options are also available on ted2020.ted.com.

The TED Fellows class of 2020 will once again be a highlight of the conference, with talks, Discovery Sessions and other special events sprinkled throughout the eight-week program. 

TED2020 members will also receive special access to the TED-Ed Student Talks program, which helps students around the world discover, develop and share their ideas in the form of TED-style talks. TEDsters’ kids and grandkids (ages 8-18) can participate in a series of interactive sessions led by the TED-Ed team and culminating in the delivery of each participant’s very own big idea.

As in the past, TED Talks given during the conference will be made available to the public in the coming weeks. Opening TED up to audiences around the world is foundational to TED’s mission of spreading ideas. Founded in 1984, the first TED conferences were held in Monterey, California. In 2006, TED experimented with putting TED Talk videos online for free — a decision that opened the doors to giving away all of its content. Today there are thousands of TED Talks available on TED.com. What was once a closed-door conference devoted to Technology, Entertainment and Design has become a global platform for sharing talks across a wide variety of disciplines. Thanks to the support of thousands of volunteer translators, TED Talks are available in 116 languages. TEDx, the licensing program that allows communities to produce independently organized TED events, has seen more than 28,000 events held in more than 170 countries. TED-Ed offers close to 1,200 free animated lessons and other learning resources for a youth audience and educators. Collectively, TED content attracts billions of views and listens each year.

TED has partnered with a number of innovative organizations to support its mission and contribute to the idea exchange at TED2020. They are collaborating with the TED team on innovative ways to engage a virtual audience and align their ideas and perspectives with this year’s programming. This year’s partners include: Accenture, BetterUp, Boston Consulting Group, Brightline™ Initiative, Cognizant, Hilton, Lexus, Project Management Institute, Qatar Foundation, Robert Wood Johnson Foundation, SAP, Steelcase and Target.

Get the latest information and updates on TED2020 on ted2020.ted.com.

CryptogramFacebook Announces Messenger Security Features that Don't Compromise Privacy

Note that this is "announced," so we don't know when it's actually going to be implemented.

Facebook today announced new features for Messenger that will alert you when messages appear to come from financial scammers or potential child abusers, displaying warnings in the Messenger app that provide tips and suggest you block the offenders. The feature, which Facebook started rolling out on Android in March and is now bringing to iOS, uses machine learning analysis of communications across Facebook Messenger's billion-plus users to identify shady behaviors. But crucially, Facebook says that the detection will occur only based on metadata -- not analysis of the content of messages -- so that it doesn't undermine the end-to-end encryption that Messenger offers in its Secret Conversations feature. Facebook has said it will eventually roll out that end-to-end encryption to all Messenger chats by default.

That default Messenger encryption will take years to implement.

More:

Facebook hasn't revealed many details about how its machine-learning abuse detection tricks will work. But a Facebook spokesperson tells WIRED the detection mechanisms are based on metadata alone: who is talking to whom, when they send messages, with what frequency, and other attributes of the relevant accounts -- essentially everything other than the content of communications, which Facebook's servers can't access when those messages are encrypted. "We can get pretty good signals that we can develop through machine learning models, which will obviously improve over time," a Facebook spokesperson told WIRED in a phone call. They declined to share more details in part because the company says it doesn't want to inadvertently help bad actors circumvent its safeguards.

The company's blog post offers the example of an adult sending messages or friend requests to a large number of minors as one case where its behavioral detection mechanisms can spot a likely abuser. In other cases, Facebook says, it will weigh a lack of connections between two people's social graphs -- a sign that they don't know each other -- or consider previous instances where users reported or blocked someone as a clue that they're up to something shady.

One screenshot from Facebook, for instance, shows an alert that asks if a message recipient knows a potential scammer. If they say no, the alert suggests blocking the sender, and offers tips about never sending money to a stranger. In another example, the app detects that someone is using a name and profile photo to impersonate the recipient's friend. An alert then shows the impersonator's and real friend's profiles side-by-side, suggesting that the user block the fraudster.

Details from Facebook

Planet Linux AustraliaLev Lafayette: Using Live Linux to Save and Recover Your Data

There are two types of people in the world: those who have lost data and those who are about to. Given that entropy will bite eventually, the objective should be to minimise data loss. Some key rules for this: backup, backup often, and backup with redundancy. Whilst an article on that subject will be produced, at this stage discussion is directed to the very specific task of using Linux to recover data from old machines which may not be accessible anymore. The number of times I've done this in past years is somewhat more than the number of fingers I have - however, like all good things it deserves to be documented in the hope that other people might find it useful.

To do this one will need a Linux live distribution of some sort as an ISO, written to a bootable USB drive. A typical choice would be Ubuntu Live or Fedora Live. If one is dealing with damaged hardware, the old Slackware-derived minimalist distribution Recovery Is Possible (RIP) is worth using; it has certainly saved me in the past. If you need help creating a bootable USB, the good people at HowToGeek provide some simple instructions.

With a Linux bootable disk of some description inserted in one's system, the recovery process can begin. Firstly, boot the machine and change the boot order (in BIOS/UEFI) so that the live drive becomes the first in the boot order. Once the live distribution boots up, usually into a GUI environment, one needs to open the terminal application (e.g., GNOME in Fedora uses Applications, System Tools, Terminal) and change to the root user with the su command (there's no password to become root on a live CD!).

At this point one needs to create a mount point directory, where the data is going to be stored; mkdir /mnt/recovery. After this one needs to identify the disk which one is trying to access. The fdisk -l command will provide a list of all disks in the partition table. Some educated guesswork from the results is required here, which will provide the device filesystem Type; it almost certainly isn't an EFI System, or Linux swap for example. Typically one is trying to access something like /dev/sdaX.

Then one must mount the device to the directory that was just created, for example: mount /dev/sda2 /mnt/recovery. Sometimes a recalcitrant device will need to have the filesystem type explicitly stated; the most common are ext3, ext4, fat, xfs, vfat, and ntfs-3g. To give a recent example, I needed to run mount -t ext3 /dev/sda3 /mnt/recovery. From there one can copy the data from the mount point to a new destination; a USB drive is probably the quickest, although one may take the opportunity to copy it to an external system (e.g., Google Drive) - and that's it! You've recovered your data!
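Pulled together, the whole procedure looks something like the following sketch. The device name, mount point and destination path are examples only (substitute whatever fdisk -l revealed on your machine), and mounting read-only is an extra precaution of mine rather than a requirement; by default the script just prints the commands, and running it as root with DRY_RUN=0 executes them for real.

```shell
#!/bin/bash
# Sketch of the recovery steps above. Device names and paths are examples;
# substitute the partition identified via 'fdisk -l'. With DRY_RUN unset
# (the default) the commands are only printed, not executed.
set -eu

DEVICE="${1:-/dev/sda2}"          # partition holding the data
MOUNTPOINT="${2:-/mnt/recovery}"  # where it will be mounted
DEST="${3:-/media/usb}"           # an already-mounted destination drive

# Print the command in dry-run mode, otherwise execute it.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

run mkdir -p "$MOUNTPOINT"
# Mount read-only so nothing on the ailing disk can be modified; add e.g.
# '-t ext3' (or ext4, vfat, ntfs-3g, ...) if auto-detection fails.
run mount -o ro "$DEVICE" "$MOUNTPOINT"
run cp -a "$MOUNTPOINT/home" "$DEST/"
run umount "$MOUNTPOINT"
```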

Worse Than FailureError'd: A Pattern of Errors

"Who would have thought that a newspaper hired an ex-TV technician to test their new CMS with an actual test pattern!" wrote Yves.

 

"Guess I should throttle back on binging all of Netflix," writes Eric S.

 

Christian K. wrote, "So, does this let me listen directly to my network packets?"

 

"I feel this summarizes very well the current Covid-19 situation in the US," Henrik B. wrote.

 

Steve W. writes, "I don't know if I've been gardening wrong or computing wrong, but at least now I know how best to do it!"

 

"Oh, how silly of me to search a toy reseller's website for 'scrabble' when I really meant to search for 'scrabble'. It's so obvious now!"

 


,

Planet Linux AustraliaFrancois Marier: Fixing locale problem in MythTV 30

After upgrading to MythTV 30, I noticed that the interface of mythfrontend switched from the French language to English, despite having the following in my ~/.xsession for the mythtv user:

export LANG=fr_CA.UTF-8
exec ~/bin/start_mythtv

I noticed a few related error messages in /var/log/syslog:

mythbackend[6606]: I CoreContext mythcorecontext.cpp:272 (Init) Assumed character encoding: fr_CA.UTF-8
mythbackend[6606]: N CoreContext mythcorecontext.cpp:1780 (InitLocale) Setting QT default locale to FR_US
mythbackend[6606]: I CoreContext mythcorecontext.cpp:1813 (SaveLocaleDefaults) Current locale FR_US
mythbackend[6606]: E CoreContext mythlocale.cpp:110 (LoadDefaultsFromXML) No locale defaults file for FR_US, skipping
mythpreviewgen[9371]: N CoreContext mythcorecontext.cpp:1780 (InitLocale) Setting QT default locale to FR_US
mythpreviewgen[9371]: I CoreContext mythcorecontext.cpp:1813 (SaveLocaleDefaults) Current locale FR_US
mythpreviewgen[9371]: E CoreContext mythlocale.cpp:110 (LoadDefaultsFromXML) No locale defaults file for FR_US, skipping

Searching for that non-existent fr_US locale, I found that others have this in their logs and that it's apparently set by QT as a combination of the language and country codes.

I therefore looked in the database and found the following:

MariaDB [mythconverg]> SELECT value, data FROM settings WHERE value = 'Language';
+----------+------+
| value    | data |
+----------+------+
| Language | FR   |
+----------+------+
1 row in set (0.000 sec)

MariaDB [mythconverg]> SELECT value, data FROM settings WHERE value = 'Country';
+---------+------+
| value   | data |
+---------+------+
| Country | US   |
+---------+------+
1 row in set (0.000 sec)

which explains the non-sensical FR-US locale.
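As the InitLocale log line suggests, the bogus name appears to come from simply joining the two database values, which a one-liner makes plain (the joining behaviour here is inferred from the logs, not from the Qt sources):

```shell
# Qt builds its locale name from MythTV's 'Language' and 'Country' settings.
# With Language=FR and Country=US, the combination is the non-existent FR_US,
# for which no locale defaults file exists.
language="FR"; country="US"
qt_locale="${language}_${country}"
echo "$qt_locale"   # prints FR_US
```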

I fixed the country setting like this:

MariaDB [mythconverg]> UPDATE settings SET data = 'CA' WHERE value = 'Country';
Query OK, 1 row affected (0.093 sec)
Rows matched: 1  Changed: 1  Warnings: 0

After logging out and logging back in, the user interface of the frontend is now using the fr_CA locale again and the database setting looks good:

MariaDB [mythconverg]> SELECT value, data FROM settings WHERE value = 'Country';
+---------+------+
| value   | data |
+---------+------+
| Country | CA   |
+---------+------+
1 row in set (0.000 sec)

Krebs on SecurityUK Ad Campaign Seeks to Deter Cybercrime

The United Kingdom’s anti-cybercrime agency is running online ads aimed at young people who search the Web for services that enable computer crimes, specifically trojan horse programs and DDoS-for-hire services. The ad campaign follows a similar initiative launched in late 2017 that academics say measurably dampened demand for such services by explaining that their use to harm others is illegal and can land potential customers in jail.

For example, search in Google for the terms “booter” or “stresser” from a U.K. Internet address, and there’s a good chance you’ll see a paid ad show up on the first page of results warning that using such services to attack others online is illegal. The ads are being paid for by the U.K.’s National Crime Agency, which saw success with a related campaign for six months starting in December 2017.

A Google ad campaign paid for by the U.K.’s National Crime Agency.

NCA Senior Manager David Cox said the agency is targeting its ads to U.K. males age 13 to 22 who are searching for booter services or different types of remote access trojans (RATs), as part of an ongoing effort to help steer young men away from cybercrime and toward using their curiosity and skills for good. The ads link to advertorials and to the U.K.’s Cybersecurity Challenge, which tries to gamify computer security concepts and highlight potential careers in cybersecurity roles.

“The fact is, those standing in front of a classroom teaching children have less information about cybercrime than those they’re trying to teach,” Cox said, noting that the campaign is designed to support so-called “knock-and-talk” visits, where investigators visit the homes of young people who’ve downloaded malware or purchased DDoS-for-hire services to warn them away from such activity. “This is all about showing people there are other paths they can take.”

While it may seem obvious to the casual reader that deploying some malware-as-a-service or using a booter to knock someone or something offline can land one in legal hot water, the typical profile of those who frequent these services is young, male, impressionable and participating in online communities of like-minded people in which everyone else is already doing it.

In 2017, the NCA published “Pathways into Cyber Crime,” a report that drew upon interviews conducted with a number of young men who were visited by U.K. law enforcement agents in connection with various cybercrime investigations.

Those findings, which the NCA said came about through knock-and-talk interviews with a number of suspected offenders, found that 61 percent of suspects began engaging in criminal hacking before the age of 16, and that the average age of suspects and arrests of those involved in hacking cases was 17 years old.

The majority of those engaged in, or on the periphery of, cyber crime, told the NCA they became involved via an interest in computer gaming.

A large proportion of offenders began to participate in gaming cheat websites and “modding” forums, and later progressed to criminal hacking forums.

The NCA learned the individuals visited had just a handful of primary motivations in mind, including curiosity, overcoming a challenge, or proving oneself to a larger group of peers. According to the report, a typical offender faces a perfect storm of ill-boding circumstances, including a perceived low risk of getting caught, and a perception that their offenses in general amounted to victimless crimes.

“Law enforcement activity does not act as a deterrent, as individuals consider cyber crime to be low risk,” the NCA report found. “Debrief subjects have stated that they did not consider law enforcement until someone they knew or had heard of was arrested. For deterrence to work, there must be a closing of the gap between offender (or potential offender) with law enforcement agencies functioning as a visible presence for these individuals.”

Cox said the NCA will continue to run the ads indefinitely, and that it is seeking funding from outside sources — including major companies in the online gaming industry, whose platforms are perhaps the most targeted by DDoS-for-hire services. He called the program a “great success,” noting that in the past 30 days (during 13 of which the ads weren’t running for funding reasons), the ads generated some 5.32 million impressions and more than 57,000 clicks.

FLATTENING THE CURVE

Richard Clayton is director of the University of Cambridge Cybercrime Centre, which has been monitoring DDoS attacks for several years using a variety of sensors across the Internet that pretend to be the types of systems which are typically commandeered and abused to help launch such assaults.

Last year, Clayton and fellow Cambridge researchers published a paper showing that law enforcement interventions — including the NCA’s anti-DDoS ad campaign between 2017 and 2018 — demonstrably slowed the growth in demand for DDoS-for-hire services.

“Our data shows that by running that ad campaign, the NCA managed to flatten out demand for booter services over that period,” Clayton said. “In other words, the demand for these services didn’t grow over the period as we would normally see, and we didn’t see more people doing it at the end of the period than at the beginning. When we showed this to the NCA, they were ever so pleased, because that campaign cost them less than ten thousand [pounds sterling] and it stopped this type of cybercrime from growing for six months.”

The Cambridge study found various interventions by law enforcement officials had measurable effects on the demand for and damage caused by booter and stresser services. Source: Booting the Booters, 2019.

Clayton said part of the problem is that many booter/stresser providers claim they’re offering lawful services, and many of their would-be customers are all too eager to believe this is true. Also, the price point is affordable: A typical booter service will allow customers to launch fairly high-powered DDoS attacks for just a few dollars per month.

“There are legitimate companies that provide these types of services in a legal manner, but there are all types of agreements that have to be in place before this can happen,” Clayton said. “And you don’t get that for ten bucks a month.”

DON’T BE EVIL

The NCA’s ad campaign is competing directly with Google ads taken out by many of the same people running these DDoS-for-hire services. It may surprise some readers to learn that cybercrime services often advertise on Google and other search sites much like any legitimate business would — paying for leads that might attract new customers.

Several weeks back, KrebsOnSecurity noticed that searching for “booter” or “stresser” in Google turned up paid ads for booter services prominently on the first page of results. But as I noted in a tweet about the finding, this is hardly a new phenomenon.

A booter ad I reported to Google that the company subsequently took offline.

Cambridge’s Clayton pointed me to a blog post he wrote in 2018 about the prevalence of such ads, which violate Google’s policies on acceptable advertisements via its platform. Google says it doesn’t allow ads for services that “cause damage, harm or injury,” and that they don’t allow adverts for services that “are designed to enable dishonest behavior.”

Clayton said Google eventually took down the offending ads. But as my few seconds of Googling revealed, the company appears to have decided to play whack-a-mole when people complain, instead of expressly prohibiting the placement of (and payment for) ads with these terms.

Google told KrebsOnSecurity that it relies on a combination of technology and people to enforce its policies.

“We have strict ad policies designed to protect users on our platforms,” Google said in a written statement. “We prohibit ads that enable dishonest behavior, including services that look to take advantage of or cause harm to users. When we find an ad that violates our policies we take action. In this case, we quickly removed the ads.”

Google pointed to a recent blog post detailing its enforcement efforts in this regard, which said in 2019 the company took down more than 2.7 billion ads that violated its policies — or more than 10 million ads per day — and that it removed a million advertiser accounts for the same reason.

The ad pictured above ceased to appear shortly after my outreach to them. Unfortunately, an ad for a different booter service (shown below) soon replaced the one they took down.

An ad for a DDoS-for-hire service that appeared shortly after Google took down the ones KrebsOnSecurity reported to them.

Planet Linux AustraliaMichael Still: Introducing Shaken Fist


The first public commit to what would become OpenStack Nova was made ten years ago today — at Thu May 27 23:05:26 2010 PDT to be exact. So first off, happy tenth birthday to Nova!

A lot has happened in that time — OpenStack has gone from being two separate Open Source projects to a whole ecosystem, developers have come and gone (and passed away), and OpenStack has weathered the cloud wars of the last decade. OpenStack survived its early growth phase by deliberately offering a “big tent” to the community and associated vendors, with an expansive definition of what should be included. This has resulted in most developers being associated with a corporate sponsor, and hence the decrease in the number of developers today as corporate interest wanes — OpenStack has never been great at attracting or retaining hobbyist contributors.

My personal involvement with OpenStack started in November 2011, so while I missed the very early days I was around for a lot and made many of the mistakes that I now see in OpenStack.

What do I see as mistakes in OpenStack in hindsight? Well, embracing vendors who later lose interest has been painful, and has increased the complexity of the code base significantly. Nova itself is now nearly 400,000 lines of code, and that’s after splitting off many of the original features of Nova such as block storage and networking. Additionally, a lot of our initial assumptions are no longer true — for example in many cases we had to write code to implement things, where there are now good libraries available from third parties.

That’s not to say that OpenStack is without value — I am a daily user of OpenStack to this day, and use at least three OpenStack public clouds at the moment. That said, OpenStack is a complicated beast with a lot of legacy that makes it hard to maintain and slow to change.

For at least six months I’ve felt the desire for a simpler cloud orchestration layer — both for my own personal uses, and also as a test bed for ideas for what a smaller, simpler cloud might look like. My personal use case involves a relatively small environment which echoes what we now think of as edge compute — less than 10 RU of machines with a minimum of orchestration and management overhead.

At the time that I was thinking about these things, the Australian bushfires and COVID-19 came along, and presented me with a lot more spare time than I had expected to have. While I’m still blessed to be employed, all of my social activities have been cancelled, so I find myself at home at a loose end on weekends and evenings a lot more than before.

Thus Shaken Fist was born — named for a Simpsons meme, Shaken Fist is a deliberately small and highly opinionated cloud implementation aimed at working well in small deployments such as homes, labs, edge compute locations, deployed systems, and so forth.

I’ve taken a bit of trouble with each feature in Shaken Fist to think through the simplest and highest-value way of doing it. For example, instances always get a config drive and there is no metadata server. There is also only one supported type of virtual networking, and one supported hypervisor. That said, this means Shaken Fist is less than 5,000 lines of code, and small enough that new things can be implemented very quickly by a single middle-aged developer.

Shaken Fist definitely has feature gaps — API authentication and scheduling are the most obvious at the moment — but I have plans to fill those when the time comes.

I’m not sure if Shaken Fist is useful to others, but you never know. It’s Apache 2.0 licensed, and available on GitHub if you’re interested.


Worse Than FailureCodeSOD: This is Your Last Birthday

I have a philosophy on birthdays. The significant ones aren’t the numbers we usually choose- 18, 21, 40, whatever- it’s the ones where you need an extra bit. 2, 4, 8, and so on. By that standard, my next birthday landmark isn’t until 2044, and I’m a patient sort.

Christian inherited some legacy C# code which deals in birthdays. Specifically, it needs to be able to determine when your last birthday was. Now, you have to be a bit smarter than simply “lop off the year and insert this year,” since that could be a future birthday, but not that much smarter.

The basic algorithm most of us would choose, though, might start there. If their birthday is, say, 12/31/1969, then we could ask, is 12/31/2020 in the future? It is. Then their last birthday was on 12/31/2019. Whereas, for someone born on 1/1/1970, we know that 1/1/2020 is in the past, so their last birthday was 1/1/2020.
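That straightforward approach also runs in constant time, instead of looping once per year of the person’s life. A minimal sketch of it (mine, not the article’s code) might look like this:

```csharp
using System;

class BirthdayDemo
{
    // Constant-time version of the "is this year's anniversary in the
    // future?" approach described above. Note that DateTime.AddYears
    // clamps Feb 29 to Feb 28 in non-leap years, which is usually the
    // behavior you want for birthdays anyway.
    static DateTime GetLastBirthday(DateTime dayOfBirth)
    {
        var today = DateTime.Today;
        // Project the birthday into the current year...
        var thisYears = dayOfBirth.AddYears(today.Year - dayOfBirth.Year);
        // ...and step back one year if it hasn't happened yet.
        return thisYears > today ? thisYears.AddYears(-1) : thisYears;
    }

    static void Main()
    {
        Console.WriteLine(GetLastBirthday(new DateTime(1969, 12, 31)));
        Console.WriteLine(GetLastBirthday(new DateTime(1970, 1, 1)));
    }
}
```

Two comparisons and one `AddYears` call, regardless of how old the person is.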

Christian’s predecessor didn’t want to do that. Instead, they found this… “elegant” approach:

static DateTime GetLastBirthday(DateTime dayOfBirth)
{
    var now = DateTime.Now;

    var former = dayOfBirth;
    var current = former.AddYears(1);

    while (current < DateTime.Now)
    {
        former = current;
        current = current.AddYears(1);
    }

    return former;
}

Start with their birthdate. Then add one to the year, and store that as current. While current is in the past, remember it as former, and then add one to current. When current is finally a date in the future, former must be a date in the past, and store their last birthday.

The kicker here, though, is that this isn’t used to calculate birthdays. It’s used to calculate the “Start of the Case Year”. Which operates like birthdays, or any anniversary for that matter.

var currentCaseYearStart = GetLastBirthday(caseStart);

Sure, that’s weird naming, but Christian has this to add:

Anyways, [for case year starts] it has a (sort of) off-by-one error.

Christian doesn’t expand on that, and I’m not entirely certain what the off-by-one-like behavior would be in that case, and I assume it has something to do with their business rules around case start dates.

Christian has simplified the date calculation, but has yet to rename it: it turns out this method is called in several places, but never to calculate a birthday.


,

LongNowLong-term Perspectives During a Pandemic

Long Conversation speakers (from top left): Stewart Brand, Esther Dyson, David Eagleman, Ping Fu, Katherine Fulton, Danny Hillis, Kevin Kelly, Ramez Naam, Alexander Rose, Paul Saffo, Peter Schwartz, Tiffany Shlain, Bina Venkataraman, and Geoffrey West.

On April 14th, 02020, The Long Now Foundation convened a Long Conversation¹ featuring members of our board and invited speakers. Over almost five hours of spirited discussion, participants reflected on the current moment, how it fits into our deeper future, and how we can address threats to civilization that are rare but ultimately predictable. The following are excerpts from the conversation, edited and condensed for clarity.

Stewart Brand is co-founder and President of The Long Now Foundation. Photograph: Mark Mahaney/Redux.

The Pandemic is Practice for Climate Change

Stewart Brand

I see the pandemic as practice for dealing with a much tougher problem, and one that has a different timescale, which is climate change. And this is where Long Now comes in, where now, after this — and it will sort out, it’ll take a lot longer to sort out than people think, but some aspects are already sorting out faster than people expected. As this thing sorts out and people arise and say: Well, that was weird and terrible. And we lost so and so, and so and so, and so and so, and so we grieve. But it looks like we’re coming through it. Humans in a sense caused it by making it so that bat viruses could get into humans more easily, and then connecting in a way that the virus could get to everybody. But also humans are able to solve it.

Well, all of that is almost perfectly mapped onto climate change, only with a different timescale. In a sense everybody’s causing it: by being part of a civilization, running it at a much higher metabolic rate, using that much more energy driven by fossil fuels, which then screwed up the atmosphere enough to screw up the climate enough to where it became a global problem caused by basically the global activity of everybody. And it’s going to engage global solutions.

Probably it’ll be innovators in cities communicating with other innovators in other cities who will come up with the needed set of solutions to emerge, and to get the economy back on its legs, much later than people want. But nevertheless, it will get back, and then we’ll say, “Okay, well what do you got next?” Because there’ll now be this point of reference. And it’ll be like, “If we can put a man on the moon, we should be able to blah, blah, blah.” Well, if we can solve the coronavirus, and stop a plague that affected everybody, we should be able to do the same damn thing for climate.

Watch Stewart Brand’s conversation with Geoffrey West.

Watch Stewart Brand’s conversation with Alexander Rose.


Esther Dyson is an investor, consultant, and Long Now Board member. Photograph: Christopher Michel.

The Impact of the Pandemic on Children’s Ability to Think Long-term

Esther Dyson

We are not building resilience into the population. I love the Long Now; I’m on the board. But at the same time we’re pretty intellectual. Thinking long-term is psychological. It’s what you learn right at the beginning. It’s not just an intellectual, “Oh gee, I’m going to be a long-term thinker. I’m going to read three books and understand how to do it.” It’s something that goes deep back into your past.

You know the marshmallow test, the famous Stanford test where you took a two-year-old and you said, “Okay, here’s a marshmallow. You sit here, the researcher is going to come back in 10 minutes, maybe 15 and if you haven’t eaten the first marshmallow you get a second one.” And then they studied these kids years later. And the ones who waited, who were able to have delayed gratification, were more successful in just about every way you can count. Educational achievement, income, whether they had a job at all, et cetera.

But the kids weren’t just sort of randomly, long-term or short-term. The ones that felt secure and who trusted their mother, who trusted — the kid’s not stupid; if he’s been living in a place where if they give you a marshmallow, grab it because you can’t trust what they say.

We’re raising a generation and a large population of people who don’t trust anybody and so they are going to grab—they’re going to think short-term. And the thing that scares me right now is how many kids are going to go through, whether it’s two weeks, two months, two years of living in these kinds of circumstances and having the kind of trauma that is going to make you into such a short-term thinker that you’re constantly on alert. You’re always fighting the current battle rather than thinking long-term. People need to be psychologically as well as intellectually ready to do that.

Watch Esther Dyson’s conversation with Peter Schwartz.

Watch Esther Dyson’s conversation with Ramez Naam.


David Eagleman is a world-renowned neuroscientist studying the structure of the brain, professor, entrepreneur, and author. He is also a Long Now Board member. Photograph: CNN.

The Neuroscience of the Unprecedented

David Eagleman

I’ve been thinking about this thing that I’m temporarily calling “the neuroscience of the unprecedented.” Because for all of us, in our lifetimes, this was unprecedented. And so the question is: what does that do to our brains? The funny part is it’s never been studied. In fact, I don’t even know if there’s an obvious way to study it in neuroscience. Nonetheless, I’ve been thinking a lot about this issue of our models of the world and how they get upended. And I think one of the things that I have noticed during the quarantine—and everybody I talked to has this feeling—is that it’s impossible to do long-term thinking while everything’s changing. And so I’ve started thinking about Maslow’s hierarchy of needs in terms of the time domain.

Here’s what I mean. If you have a good internal model of what’s happening and you understand how to do everything in the world, then it’s easy enough to think way into the distance. That’s like the top of the hierarchy, the top of the pyramid: everything is taken care of, all your physiologic needs, relationship needs, everything. When you’re at the top, you can think about the big picture: what kind of company you want to start, and why and where that goes, what that means for society and so on. When we’re in a time like this, where people are worried about if I don’t get that next Instacart delivery, I actually don’t have any food left, that kind of thing, it’s very hard to think long-term. When our internal models are frayed, it’s hard to use those to make predictions about the future.

Watch David Eagleman’s conversation with Tiffany Shlain.

Watch David Eagleman’s conversation with Ping Fu.


The Virus as a Common Enemy and the Pandemic as a Whole Earth Event

Danny Hillis and Ping Fu

Danny Hillis: Do you think this is the first time the world has faced a problem, simultaneously, throughout the whole world?

Ping Fu: Well, how do you compare it to World War II? That was also simultaneous, although it didn’t impact every individual. In terms of something that touches every human being on Earth, this may be the first time.

Danny Hillis: Yeah. And also, we all are facing the same problem, whereas during wars, people face each other. They’re each other’s problem.

Ping Fu: I am worried we are making this virus an imaginary war.

Danny Hillis: Yes, I think that’s a problem. On the other hand, we are making it that, or at least our politicians are. But I don’t think people feel that they’re against other people. In fact, I think people realize, much like when they saw that picture of the whole earth, that there’s a lot of other people that are in the same boat they are.

Ping Fu: Well, I’m not saying that this particular imaginary war is necessarily negative, because in history we always see that when there is a common enemy, people get together, right? This feels like the first time the entire earth identified a common enemy, even though viruses have existed forever. We live with viruses all the time, but there was not a political social togetherness in identifying one virus as a common enemy of our humanity.

Danny Hillis: Does that permanently change us, because we all realize we’re facing a common enemy? I think we’ve had a common enemy before, but I don’t think it’s happened quickly enough, or people were aware of that enough. Certainly, one can imagine that global warming might have done that, but it happens so slowly. But this causes a lot of people to realize they’re in it together in real time, in a way, that I don’t think we’ve ever been before.

Ping Fu: When you talk about global warming, or clean air, clean water, it’s also a common enemy. It’s just more long term, so it’s not as urgent. And this one is now. That is making people react to it. But I’m hoping this will make us realize there are many other common enemies to the survival of humanity, so that we will deal with them in a more coherent or collaborative way.

Watch Ping Fu’s conversation with David Eagleman.

Watch Ping Fu’s conversation with Danny Hillis.

Watch Danny Hillis’ conversation with Geoffrey West.


Katherine Fulton is a philanthropist, strategist, author, who works for social change. She is also a Long Now Board member. Photograph: Christopher Michel.

An Opportunity for New Relationships and New Allies

Katherine Fulton

One of the things that fascinates me about this moment is that most big openings in history were not about one thing, they were about the way multiple things came together at the same time in surprising ways. And one of the things that happens at these moments is that it’s possible to think about new relationships and new allies.

For instance, a lot of small business people in this country are going to go out of business. And they’re going to be open to thinking about their future, what they do in the next business they start, and who their allies are — in ways that don’t fit into any old ideological boxes, I would guess.

When I look ahead at what these new institutions might be, I think they’re going to be hybrids. They’re going to bring people together to look at the cross-issue sectors or cross-business and nonprofit and cross-country. It’s going to bring people together in relationship that never would have been in relationship because you’ll need these new capabilities in different ways.

You often have a lot of social movement people who are very suspicious of business and historically very suspicious of technology — not so much now. So how might there be completely new partnerships? How might the tech companies, who are going to be massively empowered, the big tech companies by this, how might they invest in new kinds of partnerships and be much more enlightened in terms of creating the conditions in which their businesses will need to succeed?

So it seems to me we need a different kind of global institution than the ones that were invented after World War II.

Watch Katherine Fulton’s conversation with Ramez Naam.

Watch Katherine Fulton’s conversation with Kevin Kelly.


Kevin Kelly is Senior Maverick at Wired, a magazine he helped launch in 01993. He served as its Executive Editor from its inception until 01999. From 01984–01990 Kelly was publisher and editor of the Whole Earth Review, a journal of unorthodox technical news. He is also a Long Now board member. Photograph: Christopher Michel.

The Loss of Consensus around Truth

Kevin Kelly

We’re in a moment of transition, accelerated by this virus, where we’ve gone from trusting in authorities to this postmodern world we have to assemble truth. This has put us in a position where all of us, myself included, have difficulty in figuring out, like, “Okay, there’s all these experts claiming expertise even among doctors, and there’s a little bit of contradictory information right now.”

Science works well at getting a consensus when you have time to check each other, to have peer review, to go through publications, to take the doubts and to retest. But it takes a lot of time. And in this fast-moving era where this virus isn’t waiting, we don’t have the luxury of having that scientific consensus. So we are having to turn, it’s like, “Well this guy thinks this, this person thinks this, she thinks that. I don’t know.”

We’re moving so fast that we’re moving ahead of the speed of science, even though science is itself accelerating. That results in this moment where we don’t know who to trust. Most of what everybody knows is true. But sometimes it isn’t. So we have a procedure to figure that out with what’s called science where we can have a consensus over time and then we agree. But if we are moving so fast and we have AI come in and we have viruses happening at this global scale, which speeds things up, then we’re outstripping our ability to know things. I think that may be with us longer than just this virus time.

We are figuring out a new way to know, and in what knowledge we can trust. Young people have to understand in a certain sense that they have to assemble what they believe in themselves; they can’t just inherit that from an authority. You actually have to have critical thinking skills, you actually have to understand that for every expert there’s an anti-expert over here and you have to work at trying to figure out which one you’re going to believe. I think, as a society, we are engaged in this process of coming up with an evolution in how we know things and where to place our trust. Maybe we can make some institutions and devices and technologies and social etiquettes and social norms to help us in this new environment of assembling truth.

Watch Kevin Kelly’s conversation with Katherine Fulton.

Watch Kevin Kelly’s conversation with Paul Saffo.


Ramez Naam holds a number of patents in technology and artificial intelligence and was involved in key product development at Microsoft. He was also CEO of Apex Nanotechnologies. His books include the Nexus trilogy of science fiction thrillers. Photograph: Phil Klein/Ramez Naam.

The Pandemic Won’t Help Us Solve Climate Change

Ramez Naam

There’s been a lot of conversations, op-eds, and Twitter threads about what coronavirus teaches us about climate change. And that it’s an example of the type of thinking that we need.

I’m not so optimistic. I still think we’re going to make enormous headway against climate change. We’re not on path for two degrees Celsius but I don’t think we’re on path for the four or six degrees Celsius you sometimes hear talked about. When I look at it, coronavirus is actually a much easier challenge for people to conceptualize. And I think of humans as hyperbolic discounters. We care far more about the very near term than we do about the long-term. And we discount that future at a very, very steep rate.

And so even with coronavirus — well, first the coronavirus got us these incredible carbon emissions reductions and especially air quality changes. You see these pictures of New Delhi in India before and after, like a year ago versus this week. And it’s just brown haze and crystal clear blue skies, it’s just amazing.

But it’s my belief that when the restrictions are lifted, people are going to get back in their cars. And we still have billions of people that barely have access to electricity, to transportation to meet all their needs and so on. And so that tells me something: that even though we clearly can see the benefit of this, nevertheless people are going to make choices that optimize for their convenience, their whatnot, that have this effect on climate.

And that in turn tells me something else, which is: in the environmentalist movement, there’s a couple of different trains of thought of how we address climate change. And on the more far left of the deep green is the notion of de-growth, limiting economic growth, even reducing the size of the economy. And I think what we’re going to see after coronavirus will make it clear that that’s just not going to happen. That people are barely willing to be in lockdown for something that could kill them a couple of weeks from now. They’re going to be even less willing to do that for something that they perceive as a multi-decade threat.

And so the solution still has to be that we make clean choices, clean electricity, clean transportation, and clean industry, cheaper and better than the old dirty ones. That’s the way that we win. And that’s a hard story to tell people to some extent. But it is an area where we’re making progress.

Watch Ramez Naam’s conversation with Esther Dyson.

Watch Ramez Naam’s conversation with Katherine Fulton.


Alexander Rose is an industrial designer and has been working with The Long Now Foundation and computer scientist Danny Hillis since 01997 to build a monument scale, all mechanical 10,000 Year Clock. As the director of Long Now, Alexander founded The Interval and has facilitated a range of projects including The Organizational Continuity Project, The Rosetta Project, Long Bets, Seminars About Long Term Thinking, Long Server and others. Photograph: Christopher Michel.

The Lessons of Long-Lived Organizations

Alexander Rose

Any organization that has lasted for centuries has lived through multiple events like this. Any business that’s been around for just exactly 102 years, lived through the last one of these pandemics that was much more vast, much less understood, came through with much less communication.

I’m talking right now to heads of companies that have been around for several hundred years — in some cases some of the better-run family ones, and some of the corporate ones that have good records — and they’re pulling from those times. But even more important than pulling the exact strategic things that helped them survive those times, they’re able to tell the story that they survived to their own corporate or organizational culture, which is really powerful. It’s a different narrative than what you hear from our government and others who are largely trying in a way to get out from under the gun of saying this was a predictable event even though it was. They’re trying to say that this was a complete black swan, that we couldn’t have known it was going to happen.

There’s two problems with that. One, it discounts all of this previous prediction work and planning work that in some cases has been heeded by some cultures and some cases not. But I think more importantly, it gets people out of this narrative that we have survived, that we can survive, that things are going to come back to normal, that they can come back so far to normal that we are actually going to be bad at planning for the next one in a hundred years if we don’t put in new safeguards.

And I think it’s crucial to get that narrative back in to the story that we do survive these things. Things do go back to normal. We’re watching movies right now on Netflix where you watch people touch, and interact, and it just seems alien. But I think we will forget it quicker than we adopted it.

Watch Alexander Rose’s conversation with Bina Venkataraman.

Watch Alexander Rose’s conversation with Stewart Brand.


Paul Saffo is a forecaster with over two decades experience helping stakeholders understand and respond to the dynamics of large-scale, long-term change. Photograph: Vodafone.

How Do We Inoculate Against Human Folly?

Paul Saffo

I think, in general, yes, we’ve got to work more on long-term thinking. But the failure with this pandemic was not long-term thinking at all. The failure was action.

I think long-term thinking will become more common. The question is can we take that and turn it to action, and can we get the combination of the long-term look ahead, but also the fine grain of understanding when something really is about to change?

I think that this recent event is a demonstration that the whole futurist field has fundamentally failed. All those forecasts had no consequence. All it took was the unharmonic convergence of some short-sighted politicians who had their throat around policy to completely unwind all the foresight and all the preparation. So thinking about what’s different in 50 or 100 or 500 years, I think that the fundamental challenge is how do we inoculate civilization against human folly?

This is the first of pandemics to come, and I think despite the horror that has happened, despite the tragic loss of life, that we’re going to look at this pandemic the way we looked at the ’89 earthquake in San Francisco, and recognize that it was a pretty big one. It wasn’t the big one in terms of virus lethality, it’s more a political pandemic in terms of the idiotic response. The highest thing that could come out of this is if we finally take public health seriously.

Watch Paul Saffo’s conversation with Kevin Kelly.

Watch Paul Saffo’s conversation with Tiffany Shlain.


Peter Schwartz is the Senior Vice President for Global Government Relations and Strategic Planning for Salesforce.com. Prior to that, Peter co-founded Global Business Network, a leader in scenario planning in 01988, where he served as chairman until 02011. Photograph: Christopher Michel.

How to Convince Those in Positions of Power to Trust Scenario Planning

Peter Schwartz

Look, I was a consultant in scenario planning, and I can tell you that it was never a way to get more business to tell a CEO, “Listen, I gave you the scenarios and you didn’t listen.”

My job was to make them listen. To find ways to engage them in such a way that they took it seriously. If they didn’t, it was my failure. Central to the process of thinking about the future like that is finding out how you engage the mind of the decision maker. Good scenario planning begins with a deep understanding of the people who actually have to use the scenarios. If you don’t understand that, you’re not going to have any impact.

[The way to make Trump take this pandemic more seriously would’ve been] to make him a hero early. That is, find a way to tell the story in such a way that Donald Trump in January, as you’re actually warning him, can be a hero to the American people, because of course that is what he wants in every interaction, right? This is a narcissist, so how do you make him be a leader in his narcissism from day one?

The honest truth is that that was part of the strategy with some CEOs that I’ve worked with in the past. I think Donald Trump is an easy person to understand in that respect; he’s very visible. The problem was that he couldn’t see any scenario in which he was a winner, and so he had to deny. You have to give him a route out, a route where he can win, and that’s what I think the folks around him didn’t give him.

Watch Peter Schwartz’s conversation with Bina Venkataraman.

Watch Peter Schwartz’s conversation with Esther Dyson.


Honored by Newsweek as one of the “Women Shaping the 21st Century,” Tiffany Shlain is an Emmy-nominated filmmaker, founder of The Webby Awards and author of 24/6: The Power of Unplugging One Day A Week. Photograph: Anitab.

The Power of Unplugging During the Pandemic

Tiffany Shlain

We’ve been doing this tech shabbat for 10 years now, unplugging on Friday night and having a day off the network. People ask me: “Can you unplug during the pandemic?” Not only can I, but it has been so important for Ken and I and the girls at a moment when there’s such a blur about time. We know all the news out there, we just need to stay inside and have a day to be together without the screens and actually reflect, which is what the Long Now is all about.

I have found these Tech Shabbats a thousand times more valuable for my health because I’m sleeping so well on Friday night. I just feel like I get perspective, which I think I’m losing during the week because it’s so much coming at me all the time. I think this concept of a day of rest has lasted for 3000 years for a reason. And what does that mean today and what does that mean in a pandemic? It means that you go off the screens and be with your family in an authentic way, be with yourself in an authentic way if you’re not married or with kids. Just take a moment to process. There’s a lot going on and it would be a missed opportunity if we don’t put our pen to paper, and I literally mean paper, to write down some of our thoughts right now in a different way. It’s so good to put your mind in a different way.

The reason I started doing Tech Shabbats in the first place is that I lost my father to brain cancer and Ken’s and my daughter was born within days. And it was one of those moments where I felt like life was grabbing me by the shoulders and saying, “Focus on what’s important.” And that series of dramatic events made me change the way I lived. And I feel like this moment that we’re in is the earth and life grabbing us all by the shoulders and saying, “Focus on what’s important. Look at the way you’re living.” And so I’m hopeful that this very intense, painful experience gets us thinking about how to live differently, about what’s important, about what matters. I’m hopeful that real behavioral change can come out of this very dramatic moment we’re in.

Watch Tiffany Shlain’s conversation with Paul Saffo.

Watch Tiffany Shlain’s conversation with David Eagleman.


Bina Venkataraman is the editorial page editor of The Boston Globe. Previously, she served as a senior adviser for climate change innovation in the Obama White House, was the director of Global Policy Initiatives at the Broad Institute of MIT and Harvard, and taught in the Program on Science, Technology, and Society at MIT. Photograph: Podchaser.

We Need a Longer Historical Memory

Bina Venkataraman

We see this pattern—it doesn’t even have to be over generations—that when a natural disaster happens in an area, an earthquake or a flood, we see spikes in people going out and buying insurance right after those events, when they’re still salient in people’s memory, when they’re afraid of those events happening again. But as the historical memory fades, as time goes on, people don’t buy that insurance. They forget these disasters.

I think historical memory is just one component of a larger gap between prediction and action, and what is missing in that gap is imagination. Historical memory is one way to revive the imagination about what’s possible. But I also think it’s beyond that, because with some of the events and threats that are predicted, they might not have perfect analogs in history. I think about climate change as one of those areas where we’ve just never actually had a historical event or anything that even approximates it in human experience, and so cognitively it’s really difficult for people, whether it’s everyday people in our lives or leaders, to be able to take those threats or opportunities seriously.

People think about the moon landing as this incredible feat of engineering, and of course it was, but before it was a feat of engineering, it was a feat of imagination. To accomplish something that’s unprecedented in human experience takes leaps of imagination, and they can come from different sources: from competition, from the knowledge of history, and indeed from technology, from story, from myth.

Watch Bina Venkataraman’s conversation with Alexander Rose.

Watch Bina Venkataraman’s conversation with Peter Schwartz.


Theoretical physicist Geoffrey West was president of Santa Fe Institute from 2005 to 2009 and founded the high energy physics group at Los Alamos National Laboratory. Photograph: Minesh Bacrania Photography.

The Pandemic is a Red Light for Future Planetary Threats

Geoffrey West

If you get sick as an individual, you take time off. You go to bed. You may end up in hospital, and so forth. And then hopefully you recover in a week, or two weeks, a few days. And of course, you have built into your life a kind of capacity, a reserve, that even though you’ve shut down — you’re not working, you’re not really thinking, probably — nevertheless you have that reserve. And then you recover, and there’s been sufficient reserve of energy, metabolism, finances, that it doesn’t affect you. Even if it is potentially a longer illness.

I’ve been trying to think of that as a metaphor for what’s happening to the planet. And of course what you realize is how little we actually have in reserve: it didn’t take very much, in terms of us as a globe, as a planet, going to bed for a few days, a few weeks, a couple months, before we quickly ran out of our resources.

And some of the struggle is to try to come to terms with that, and to reestablish ourselves. And part of the reason for that is of course that we are, as individuals, in a sort of metastable state, whereas the planet is, even on these very short timescales, continually changing and evolving. And we live at a time where the socioeconomic forces are themselves exponentially expanding. And this is what causes the problem for us, the acceleration of time and the continual pressures that are part of the fabric of society.

We’re in a much stronger position than we would have been 100 years ago during the Flu Epidemic of 01918. I’m confident that we’re going to get through this and reestablish ourselves. But I see it as a red light that’s going on. A little rehearsal for some of the bigger questions that we’re going to have to face in terms of threats for the planet.

Watch Geoffrey West’s conversation with Danny Hillis.

Watch Geoffrey West’s conversation with Stewart Brand.


Footnotes

[1] Long Conversation is a relay conversation of 20 minute one-to-one conversations; each speaker has an un-scripted conversation with the speaker before them, and then speaks with the next participant before they themselves rotate off. This relay conversation format was first presented under the auspices of Artangel in London at a Longplayer performance in 02009.

CryptogramWebsites Conducting Port Scans

Security researcher Charlie Belmer is reporting that commercial websites such as eBay are conducting port scans of their visitors.

Looking at the list of ports they are scanning, they are looking for VNC services being run on the host, which is the same thing that was reported for bank sites. I marked out the ports and what they are known for (with a few blanks for ones I am unfamiliar with):

  • 5900: VNC
  • 5901: VNC port 2
  • 5902: VNC port 3
  • 5903: VNC port 4
  • 5279:
  • 3389: Windows remote desktop / RDP
  • 5931: Ammyy Admin remote desktop
  • 5939:
  • 5944:
  • 5950: WinVNC
  • 6039: X window system
  • 6040: X window system
  • 63333: TrippLite power alert UPS
  • 7070: RealAudio

No one seems to know why:

I could not believe my eyes, but it was quickly reproduced by me (see below for my observation).

I surfed around to several sites, and found one more that does this (the citibank site, see below for my observation)

I further see, at least across ebay.com and citibank.com the same ports, in the same sequence getting scanned. That implies there may be a library in use across both sites that is doing this. (I have not debugged into the matter so far.)

The questions:

  • Is this port scanning "a thing" built into some standard fingerprinting or security library? (if so, which?)
  • Is there a plugin for firefox that can block such behavior? (or can such blocking be added to an existing plugin)?

I'm curious, too.

Worse Than FailureCodeSOD: Is We Equal?

Testing for equality is hard. Equal references are certainly equal, but what about equal values? What does it mean for two objects to “equal” each other? It’s especially hard in a language like JavaScript, which is “friendly” about type conversions.

In JavaScript land, you’re likely to favor a tool like “lodash”, which provides utility functions like isEqual.

Mohsin was poking around an old corner of their codebase, which hadn’t been modified in some time. Waiting there was this “helpful” function.

import _ from 'lodash';

export function areEqual(prevProps, nextProps) {
  if (_.isEqual(prevProps, nextProps)) {
    return true;
  }
  return false;
}

In this case, our unknown developer is the best kind of correct: grammatically correct. isEqual should rightly be called areEqual, since we’re testing if two objects “are equal” to each other.

Does that justify implementing a whole new method? Does it justify implementing it with an awkward construct where we use an if to determine if we should return true or false, instead of just, y’know, returning true or false?

isEqual already returns a boolean value, so you don’t need that if: return _.isEqual(…) would be quite enough. Given that functions are data in JavaScript, we could even shorten that by export const areEqual = _.isEqual.
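Here’s the whole refactor in miniature, with a stand-in for lodash’s isEqual so the snippet runs without the dependency (the real lodash handles far more: Dates, Maps, NaN, circular references, and so on):

```javascript
// Stand-in for lodash's _.isEqual: a minimal recursive deep comparison.
// Illustrative only -- use the real library in production code.
function isEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) return false;
  const ka = Object.keys(a), kb = Object.keys(b);
  if (ka.length !== kb.length) return false;
  return ka.every((k) => isEqual(a[k], b[k]));
}

// The redundant wrapper from the article...
function areEqualVerbose(prevProps, nextProps) {
  if (isEqual(prevProps, nextProps)) {
    return true;
  }
  return false;
}

// ...and the two tighter equivalents.
function areEqualDirect(prevProps, nextProps) {
  return isEqual(prevProps, nextProps);
}
const areEqual = isEqual; // functions are values: just alias the original
```

All three behave identically; only one of them needed to be written.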

Or, we could just not do this at all.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet Linux AustraliaRusty Russell: 57 Varieties of Pyrite: Exchanges Are Now The Enemy of Bitcoin

TL;DR: exchanges are casinos and don’t want to onboard anyone into bitcoin. Avoid.

There’s a classic scam in the “crypto” space: advertise Bitcoin to get people in, then sell suckers something else entirely. Over the last few years, this bait-and-switch has become the core competency of “bitcoin” exchanges.

I recently visited the homepage of Australian exchange btcmarkets.net: what a mess. There was a list of dozens of identical-looking “cryptos”, with bitcoin second after something called “XRP”; seems like it was sorted by volume?

Incentives have driven exchanges to become casinos, and they’re doing exactly what you’d expect unregulated casinos to do. This is no place you ever want to send anyone.

Incentives For Exchanges

Exchanges make money on trading, not on buying and holding. Despite the fact that bitcoin is the only real attempt to create an open source money, scams with no future are given false equivalence, because more assets means more trading. Worse than that, they are paid directly to list new scams (the crappier, the more money they can charge!) and have recently taken the logical step of introducing and promoting their own crapcoins directly.

It’s like a gold dealer who also sells 57 varieties of pyrite, which give more margin than selling actual gold.

For a long time, I thought exchanges were merely incompetent. Most can’t even give out fresh addresses for deposits, batch their outgoing transactions, pay competent fee rates, perform RBF or use segwit.

But I misunderstood: they don’t want to sell bitcoin. They use bitcoin to get you in the door, but they want you to gamble. This matters: you’ll find subtle and not-so-subtle blockers to simply buying bitcoin on an exchange. If you send a friend off to buy their first bitcoin, they’re likely to come back with something else. That’s no accident.

Looking Deeper, It Gets Worse.

Regrettably, looking harder at specific exchanges makes the picture even bleaker.

Consider Binance: this mainland China backed exchange pretending to be a Hong Kong exchange appeared out of nowhere with fake volume and demonstrated the gullibility of the entire industry by being treated as if it were a respected member. They lost at least 40,000 bitcoin in a known hack, and they also lost all the personal information people sent them to KYC. They aggressively market their own coin. But basically, they’re just MtGox without Mark Karpelès’ PHP skills or moral scruples and much better marketing.

Coinbase is more interesting: an MBA-run “bitcoin” company which really dislikes bitcoin. They got where they are by spending big on regulatory compliance in the US so they could operate in (almost?) every US state. (They don’t do much to dispel the wide belief that this regulation protects their users, when in practice it seems only USD deposits have any guarantee). Their natural interest is in increasing regulation to maintain that moat, and their biggest problem is Bitcoin.

They have much more affinity for the centralized coins (Ethereum) where they can have influence and control. The anarchic nature of a genuine open source community (not to mention the developers’ oft-stated aim to improve privacy over time) is not culturally compatible with a top-down company run by the Big Dog. It’s a running joke that their CEO can’t say the word “Bitcoin”, but their recent “what will happen to cryptocurrencies in the 2020s” article is breathtaking in its boldness: innovation is mainly happening on altcoins, and they’re going to overtake bitcoin any day now. Those scaling problems which the Bitcoin developers say they don’t know how to solve? This non-technical CEO knows better.

So, don’t send anyone to an exchange, especially not a “market leading” one. Find some service that actually wants to sell them bitcoin, like CashApp or Swan Bitcoin.

,

Krebs on SecurityReport: ATM Skimmer Gang Had Protection from Mexican Attorney General’s Office

A group of Romanians operating an ATM company in Mexico and suspected of bribing technicians to install sophisticated Bluetooth-based skimmers in cash machines throughout several top Mexican tourist destinations has enjoyed legal protection from a top anti-corruption official in the Mexican attorney general’s office, according to a new complaint filed with the government’s internal affairs division.

As detailed this week by the Mexican daily Reforma, several Mexican federal, state and municipal officers filed a complaint saying the attorney general office responsible for combating corruption had initiated formal proceedings against them for investigating Romanians living in Mexico who are thought to be part of the ATM skimming operation.

Florian Tudor (right) and his business associates at a press conference earlier this year. Image: Reforma.

Reforma said the complaint centers on Camilo Constantino Rivera, who heads the unit in the Mexican Special Prosecutor’s office responsible for fighting corruption. It alleges Rivera has an inherent conflict of interest because his brother has served as a security escort and lawyer for Florian Tudor, the reputed boss of a Romanian crime syndicate recently targeted by the FBI for running an ATM skimming and human trafficking network that operates throughout Mexico and the United States.

Tudor, a.k.a. “Rechinu” or “The Shark,” and his ATM company Intacash, were the subject of a three-part investigation by KrebsOnSecurity published in September 2015. That series tracked the activities of a crime gang which was rumored to be bribing and otherwise coercing ATM technicians into installing Bluetooth-based skimming devices inside cash machines throughout popular tourist destinations in and around Mexico’s Yucatan Peninsula — including Cancun, Cozumel, Playa del Carmen and Tulum.

In 2018, 44-year-old Romanian national Sorinel Constantin Marcu was found shot dead in his car in Mexico. Marcu’s older brother told KrebsOnSecurity shortly after the murder that his brother was Tudor’s personal bodyguard but at some point had a falling out with Tudor and his associates over money. Marcu the elder said his brother was actually killed in front of a new apartment complex being built and paid for by Mr. Tudor, and that the dead man’s body was moved to make it look like he was slain in his car instead.

On March 31, 2019, police in Cancun, Mexico arrested 42-year-old Tudor and 37-year-old Adrian Nicholae Cosmin for the possession of an illegal firearm and cash totaling nearly 500,000 pesos (~USD $26,000) in both American and Mexican denominations. Two months later, a judge authorized the search of several of Tudor’s properties.

The Reforma report says Rivera’s office subsequently initiated proceedings against and removed several agents who investigated the crime ring, alleging those agents abused their authority and conducted illegal searches. The complaint against Rivera charges that the criminal protection racket also included the former chief of police in Cancun.

In September 2019, prosecutors with the Southern District of New York unsealed indictments and announced arrests against 18 people accused of running an ATM skimming and money laundering operation that netted $20 million. The defendants in that case — nearly all of whom are Romanians living in the United States and Mexico — included Florian Claudio Martin, described by Romanian newspapers as “the brother of Rechinu,” a.k.a. Tudor.

The news comes on the heels of a public relations campaign launched by Mr. Tudor, who recently denounced harassment from the news media and law enforcement by taking out a full two-page ad in Novedades, the oldest daily newspaper in the Mexican state of Quintana Roo (where Cancun is located). In a news conference with members of the local press, Tudor also reportedly accused this author of having been hired by his enemies to slander him and ruin his legitimate business.

A two-page ad taken out earlier this year in a local newspaper by Florian Tudor, accusing the head of the state police department of spying on businessmen in order to extort and harass them.

Obviously, there is no truth to Tudor’s accusations, and this would hardly be the first time the reputed head of a transnational crime syndicate has insinuated that I was paid by his enemies to disrupt his operations.

Next week, KrebsOnSecurity will publish highlights from an upcoming lengthy investigation into Tudor and his company by the Organized Crime and Corruption Reporting Project (OCCRP), a consortium of investigative journalists operating in Eastern Europe, Central Asia and Central America.

Here’s a small teaser: Earlier this year, I was interviewed on camera by reporters with the OCCRP, who at one point in the discussion handed me a transcript of some text messages shared by law enforcement officials that allegedly occurred between Tudor and his associates directly after the publication of my 2015 investigation into Intacash.

The text messages suggested my story had blown the cover off their entire operation, and that they intended to shut it all down after the series was picked up in the Mexican newspapers. One text exchange seems to indicate the group even briefly contemplated taking out a hit on this author in retribution.

The Mexican attorney general’s office could not be immediately reached for comment. The “contact us” email link on the office’s homepage leads to a blank email address, and a message sent to the one email address listed there as the main contact for the Mexican government portal (gobmx@funcionpublica.gob.mx) bounced back as an attempt to deliver to a non-existent domain name.

Further reading:

Alleged Chief of Romanian ATM Skimming Gang Arrested in Mexico

Tracking a Bluetooth Skimmer Gang in Mexico

Tracking a Bluetooth Skimmer Gang in Mexico, Part II

Who’s Behind Bluetooth Skimming in Mexico?

TEDListening to nature: The talks of TED2020 Session 1

TED looks a little different this year, but much has also stayed the same. The TED2020 mainstage program kicked off Thursday night with a session of talks, performances and visual delights from brilliant, creative individuals who shared ideas that could change the world — and stories of people who already have. But instead of convening in Vancouver, the TED community tuned in to the live, virtual broadcast hosted by TED’s Chris Anderson and Helen Walters from around the world — and joined speakers and fellow community members on an interactive, TED-developed second-screen platform to discuss ideas, ask questions and give real-time feedback. Below, a recap of the night’s inspiring talks, performances and conversations.

Sharing incredible footage of microscopic creatures, Ariel Waldman takes us below meters-thick sea ice in Antarctica to explore a hidden ecosystem. She speaks at TED2020: Uncharted on May 21, 2020. (Photo courtesy of TED)

Ariel Waldman, Antarctic explorer, NASA advisor

Big idea: Seeing microbes in action helps us more fully understand (and appreciate) the abundance of life that surrounds us. 

How: Even in the coldest, most remote place on earth, our planet teems with life. Explorer Ariel Waldman introduces the thousands of organisms that call Antarctica home — and they’re not all penguins. Leading a five-week expedition, Waldman descended below the sea ice and scaled glaciers to investigate and film myriad microscopic, alien-looking creatures. Her footage is nothing short of amazing — like a wildlife documentary at the microbial level! From tiny nematodes to “cuddly” water bears, mini sea shrimp to geometric bugs made of glass, her camera lens captures these critters in color and motion, so we can learn more about their world and ours. Isn’t nature brilliant?

Did you know? Tardigrades, also known as water bears, live almost everywhere on earth and can even survive in the vacuum of space. 


Tracy Edwards, Trailblazing sailor

Big Idea: Despite societal limits, girls and women are capable of creating the future of their dreams. 

How: Though competitive sailing is traditionally dominated by men, women sailors have proven they are uniquely able to navigate the seas. In 1989, Tracy Edwards led the first all-female sailing crew in the Whitbread Round the World Yacht Race. Though hundreds of companies refused to sponsor the team and bystanders warned that an all-female team was destined to fail, Edwards knew she could trust in the ability of the women on her team. Despite the tremendous odds, they completed the trip and finished second in their class. The innovation, kindness and resourcefulness of the women on Edwards’s crew enabled them to succeed together, upending all expectations of women in sailing. Now, Edwards advocates for girls and women to dive into their dream fields and become the role models they seek to find. She believes women should understand themselves as innately capable, that the road to education has infinite routes and that we all have the ability to take control of our present and shape our futures.

Quote of the talk: “This is about teaching girls: you don’t have to look a certain way; you don’t have to feel a certain way; you don’t have to behave a certain way. You can be successful. You can follow your dreams. You can fight for them.”


Classical musicians Sheku Kanneh-Mason and Isata Kanneh-Mason perform intimate renditions of Sergei Rachmaninov’s “Muse” and Frank Bridge’s “Spring Song” at TED2020: Uncharted on May 21, 2020. (Photo courtesy of TED)

Virtuosic cellist Sheku Kanneh-Mason, whose standout performance at the wedding of Prince Harry and Meghan Markle made waves with music fans across the world, joins his sister, pianist Isata Kanneh-Mason, for an intimate living room performance of “Muse” by Sergei Rachmaninov and “Spring Song” by Frank Bridge.

And for a visual break, podcaster and design evangelist Debbie Millman shares an animated love letter to her garden — inviting us to remain grateful that we are still able to make things with our hands.


Dallas Taylor, Host/creator of Twenty Thousand Hertz podcast

Big idea: There is no such thing as true silence.

Why? In a fascinating challenge to our perceptions of sound, Dallas Taylor tells the story of a well-known, highly-debated and perhaps largely misunderstood piece of music penned by composer John Cage. Written in 1952, 4′33″ is more experience than expression, asking the listener to focus on and accept things the way they are, through three movements of rest — or, less technically speaking, silence. In its “silence,” Cage invites us to contemplate the sounds that already exist when we’re ready to listen, effectively making each performance a uniquely meditative encounter with the world around us. “We have a once in a lifetime opportunity to reset our ears,” says Taylor, as he welcomes the audience to settle into the first movement of 4’33” together. “Listen to the texture and rhythm of the sounds around you right now. Listen for the loud and soft, the harmonic and dissonant … enjoy the magnificence of hearing and listening.”

Quote of the talk: “Quietness is not when we turn our minds off to sound, but when we really start to listen and hear the world in all of its sonic beauty.”


Dubbed “the woman who redefined man” by her biographer, Jane Goodall has changed our perceptions of primates, people and the connection between the two. She speaks with head of TED Chris Anderson at TED2020: Uncharted on May 21, 2020. (Photo courtesy of TED)

Jane Goodall, Primatologist, conservationist

Big idea: Humanity’s long-term livelihood depends on conservation.

Why? After years in the field reinventing the way the world thinks about chimpanzees, their societies and their similarities to humans, Jane Goodall began to realize that as habitats shrink, humanity loses not only resources and life-sustaining biodiversity but also our core connection to nature. Worse still, as once-sequestered animals are pulled from their environments and sold and killed in markets, the risk of novel diseases like COVID-19 jumping into the human population rises dramatically. In conversation with head of TED Chris Anderson, Goodall tells the story of a revelatory scientific conference in 1986, where she awakened to the sorry state of global conservation and transformed from a revered naturalist into a dedicated activist. By empowering communities to take action and save natural habitats around the world, Goodall’s institute now gives communities tools they need to protect their environment. As a result of her work, conservation has become part of the DNA of cultures from China to countries throughout Africa, and is leading to visible transformations of once-endangered forests and habitats.

Quote of the talk: “Every day you live, you make an impact on the planet. You can’t help making an impact … If we all make ethical choices, then we start moving towards a world that will be not quite so desperate to leave for our great-grandchildren.”

Rondam RamblingsA review of John Sanford's "Genetic Entropy"

1.  Introduction (Feel free to skip this part.  It's just some context for what comes next.) As regular readers will already know, I put a fair amount of effort into understanding points of view that I don't agree with.  I think if you're going to argue against a position it is incumbent upon you to understand what you're arguing against so that your arguments are actually on point and you're

Planet Linux AustraliaRussell Coker: Cruises and Covid19

Problems With Cruises

GQ has an insightful and detailed article about Covid19 and the Diamond Princess [1], I recommend reading it.

FastCompany has a brief article about bookings for cruises in August [2]. There have been many negative comments about this online.

The first thing to note is that the cancellation policies on those cruises are more lenient than usual and the prices are lower. So it’s not unreasonable for someone to put down a deposit on a half price holiday in the hope that Covid19 goes away (as so many prominent people have been saying it will) in the knowledge that they will get it refunded if things don’t work out. Of course if the cruise line goes bankrupt then no-one will get a refund, but I think people are expecting that won’t happen.

The GQ article highlights some serious problems with the way cruise ships operate. Staff are crammed into small cabins, and the working areas allow transmission of disease. These problems could be alleviated: cruise lines could allocate more space to staff quarters and install more capable air conditioning systems to bring in more fresh air. Significant changes are often made during the life of a cruise ship: engines are replaced with newer, more efficient models, entertainment rooms are resized, new waterslides are installed, and many other changes are routinely made. Changing the staff-only areas to have better ventilation and more separate space (maybe capsule-hotel style cabins with fresh air piped in) would not be a difficult change. It would take some money and some dry-dock time, which would be a significant expense for cruise companies.

Cruises Are Great

People like social environments, they want to have situations where there are as many people as possible without it becoming impossible to move. Cruise ships are carefully designed for the flow of passengers. Both the layout of the ship and the schedule of events are carefully planned to avoid excessive crowds. In terms of meeting the requirement of having as many people as possible in a small area without being unable to move cruise ships are probably ideal.

Because there is a large number of people in a restricted space, there are economies of scale on a cruise ship that aren’t available anywhere else. For example, the main items on the menu are made in a production-line process; this can only be done when you have hundreds of people sitting down to order at the same time.

The same applies to all forms of entertainment on board, they plan the events based on statistical knowledge of what people want to attend. This makes it more economical to run than land based entertainment where people can decide to go elsewhere. On a ship a certain portion of the passengers will see whatever show is presented each night, regardless of whether it’s singing, dancing, or magic.

One major advantage of cruises is that they are all inclusive. If you are on a regular holiday would you pay to see a singing or dancing show? Probably not, but if it’s included then you might as well do it – and it will be pretty good. This benefit is really appreciated by people taking kids on holidays, if kids do things like refuse to attend a performance that you were going to see or reject food once it’s served then it won’t cost any extra.

People Who Criticise Cruises

For the people who sneer at cruises, do you like going to bars? Do you like going to restaurants? Live music shows? Visiting foreign beaches? A cruise gets you all that and more for a discount price.

If Groupon had a deal that gave you a cheap hotel stay with all meals included, free non-alcoholic drinks at bars, day long entertainment for kids at the kids clubs, and two live performances every evening how many of the people who reject cruises would buy it? A typical cruise is just like a Groupon deal for non-stop entertainment from 8AM to 11PM.

Will Cruises Restart?

The entertainment options that cruises offer are greatly desired by many people. Most cruises are aimed at budget travellers; the price is cheaper than a hotel in a major city. Such cruises greatly depend on economies of scale: if they can’t get the ships filled then they would need to raise prices (thus decreasing demand) to try to make a profit. I think that some older cruise ships will be scrapped in the near future and some of the newer ships will be sold to cruise lines that cater to cheap travel (e.g., P&O may scrap some ships and some of the older Princess ships may be transferred to them). Overall I predict a decrease in the number of middle-class cruise ships.

For the expensive cruises (where the cheapest cabins cost over $1000US per person per night) I don’t expect any real changes, maybe they will have fewer passengers and higher prices to allow more social distancing or something.

I am certain that cruises will start again, but it’s too early to predict when. Going on a cruise is about as safe as going to a concert or a major sporting event. No-one is predicting that sporting stadiums will be closed forever or live concerts will be cancelled forever, so really no-one should expect that cruises will be cancelled forever. Whether companies that own ships or stadiums go bankrupt in the mean time is yet to be determined.

One thing that’s been happening for years is themed cruises. A group can book out an entire ship or part of a ship for a themed cruise. I expect this to become much more popular when cruises start again as it will make it easier to fill ships. In the past it seems that cruise lines let companies book their ships for events but didn’t take much of an active role in the process. I think that the management of cruise lines will look to aggressively market themed cruises to anyone who might help, for starters they could reach out to every 80s and 90s pop group – those fans are all old enough to be interested in themed cruises and the musicians won’t be asking for too much money.

Conclusion

Humans are social creatures. People want to attend events with many other people. Covid19 won’t be the last pandemic, and it may not even be eradicated in the near future. The possibility of having a society where no-one leaves home unless they are in a hazmat suit has been explored in science fiction, but I don’t think that’s a plausible scenario for the near future and I don’t think that it’s something that will be caused by Covid19.

CryptogramBluetooth Vulnerability: BIAS

This is new research on a Bluetooth vulnerability (called BIAS) that allows someone to impersonate a trusted device:

Abstract: Bluetooth (BR/EDR) is a pervasive technology for wireless communication used by billions of devices. The Bluetooth standard includes a legacy authentication procedure and a secure authentication procedure, allowing devices to authenticate to each other using a long term key. Those procedures are used during pairing and secure connection establishment to prevent impersonation attacks. In this paper, we show that the Bluetooth specification contains vulnerabilities enabling to perform impersonation attacks during secure connection establishment. Such vulnerabilities include the lack of mandatory mutual authentication, overly permissive role switching, and an authentication procedure downgrade. We describe each vulnerability in detail, and we exploit them to design, implement, and evaluate master and slave impersonation attacks on both the legacy authentication procedure and the secure authentication procedure. We refer to our attacks as Bluetooth Impersonation AttackS (BIAS).

Our attacks are standard compliant, and are therefore effective against any standard compliant Bluetooth device regardless the Bluetooth version, the security mode (e.g., Secure Connections), the device manufacturer, and the implementation details. Our attacks are stealthy because the Bluetooth standard does not require to notify end users about the outcome of an authentication procedure, or the lack of mutual authentication. To confirm that the BIAS attacks are practical, we successfully conduct them against 31 Bluetooth devices (28 unique Bluetooth chips) from major hardware and software vendors, implementing all the major Bluetooth versions, including Apple, Qualcomm, Intel, Cypress, Broadcom, Samsung, and CSR.

News articles.

Worse Than Failure: A Vintage Printer

IBM 1130

Remember Robert, the student who ruined his class curve back in the 1960s? Well, proving the old adage that the guy who graduates last from medical school is still a doctor, he managed to find another part-time job at a small hospital, earning just enough to pay his continued tuition.

Industry standard in those days was the IBM System/360 series, but it was out of the price range of this hospital. Instead, they had an IBM 1130, which was designed to be used in laboratories and small scientific research facilities. It used FORTRAN, which was pretty inappropriate for business use, but a subroutine package offered by IBM included routines for dealing with currency values and formatting. The hospital captured charges on punch cards and those were used as input to a billing program.

The printer was a monstrous beast, spinning a drum of characters and firing hammers to print characters as they went by. In order to print in specific boxes on the billing forms, it was necessary to advance the paper to a specific point on the page. This was done using a loop of paper tape that had 12 channels in its width. A hole was punched at the line in the tape where the printer needed to stop. Wire brushes above the tape would hit the hole, making contact with the metal drum inside the loop and stopping the paper feed.

There was one box in the billing form that was used infrequently, only every few days. When the program issued the code to skip to that channel, paper would begin spewing for a few seconds, and then the printer would shut down with a fault. This required stopping, removing the paper, typing the necessary data into the partially-printed bill, and then restarting the job from the point of failure.

IBM Field Engineering was called, but was unable to find a reason for the problem. Their considered opinion was that it was a software fault. After dealing with the problem on a fairly regular basis, things escalated. The IBM Systems Engineer assigned to the site was brought in.

Robert's boss, the author of the billing software, had relied on an IDEAL subroutine package provided by IBM—technically unsupported, but written by IBM employees, so generally one would assume it was safe to use. The Systems Engineer spent a while looking over that package, but eventually declared it innocent and moved on. He checked over the code Robert's boss had written, but ultimately that, too, failed to provide any answers.

"Then it must be the machine," Robert's boss stated.

This was the wrong thing to say. "It couldn't be the machine!" The Engineer, a prideful young woman, bristled at the insinuation. "These machines are checked. Everything's checked before it leaves the factory!"

Tempers flared, voices on the edge of shouting. Robert ducked back into the room with the computer, followed rapidly by the Field Engineer who had come along earlier in the day to do his own checks. Trying to pretend they couldn't hear the argument, the pair began another once-over on the machine, looking for any sign of mechanical fault.

"Hey, a question," said Robert, holding the thick cable that connected the printer to the computer. "Could it be a problem with the cable?"

The Field Engineer unplugged the cable and examined it. "The pin for that channel doesn't look seated," he admitted sheepishly. "Let's replace it and see what happens."

That day Robert learned two valuable lessons in debugging. Number one: when in doubt, go over each piece of the machine, no matter how unlikely. Number two: never tell an IBM Engineer that the problem is on their end.



Planet Linux Australia: Stewart Smith: op-build v2.5 firmware for the Raptor Blackbird

Well, following on from my post where I excitedly pointed out that Raptor Blackbird support is all upstream in op-build v2.5, I can now do another in my series of (close to) upstream Blackbird firmware builds.

This time, the only difference from straight upstream op-build v2.5 is my fixes for buildroot so that I can actually build it on Fedora 32.

So, head over to https://www.flamingspork.com/blackbird/op-build-v2.5-blackbird-images/ and grab blackbird.pnor to flash on your Blackbird, and let me know how it goes!

Planet Linux Australia: Matthew Oliver: GNS3 FRR Appliance

In my spare time, what little I have, I’ve been wanting to play with some OSS networking projects. For those playing along at home, during the last SUSE hackweek I played with WireGuard, and to test the environment I wanted to set up some routing, for which I used FRR.

FRR is a pretty cool project: it brings the network routing stack to Linux, or rather gives us a full open-source routing stack. Most routers are actually Linux anyway.

Many years ago I happened to work at Fujitsu in a gateway environment, and started playing around with networking. That was my first experience with GNS3, an open-source network simulator. Back then I needed a copy of Cisco IOS images to really play with routing protocols, which made things harder: a great open-source product, but it needed access to proprietary router OSes.

FRR provides a CLI _very_ similar to Cisco’s, which made me think: hey, I wonder if there is an FRR appliance we can use in GNS3?
And there was!!!

When I downloaded it and decompressed the qcow2 image it was 1.5GB!!! For a single router image. It works great, but what if I wanted a bunch of routers to play with things like OSPF or BGP? Surely we can make a smaller one.

Kiwi

At SUSE we use kiwi-ng to build machine images and release media. To make things even easier for me, we already have a kiwi config for small openSUSE Leap JEOS images (JEOS is “just enough OS”). So I hacked one to include FRR. Any extra tweaks needed to the image are easily done with bash hook scripts.

I won’t go into too much detail on how, because I created a git repo with it all, including a detailed README: https://github.com/matthewoliver/frr_gns3

So feel free to check that out, and build and use the image.

But today, I went one step further. openSUSE’s Open Build Service (OBS), which is used to build all RPMs for openSUSE but can also build debs and whatever else you need, also supports building docker containers and system images using kiwi!

So I have now got OBS to build the image for me. The image can be downloaded from: https://download.opensuse.org/repositories/home:/mattoliverau/images/

And if you want to send any OBS requests to change it, the project/package is: https://build.opensuse.org/package/show/home:mattoliverau/FRR-OpenSuse-Appliance

To import it into GNS3 you need the gns3a file, which you can find in my git repo or in the OBS project page.

The best part is this image is only 300MB, which is much better than 1.5GB!
I did have it a little smaller, 200-250MB, but unfortunately the JEOS cut-down kernel doesn’t contain the MPLS modules, so I had to pull in the full default SUSE kernel. If this became a real thing and not a pet project, I could go and build an FRR cut-down kernel to get the size down, but 300MB is already a lot better than where it was.

Hostname Hack

When using GNS3 and you place a router, you want to be able to name it, and when you access the console it’s _really_ nice to see the router name you specified in GNS3 as the hostname. Why? Because if you have a bunch of routers, you don’t want a bunch of tabs all showing the localhost hostname on the command line… that doesn’t really help.

The FRR image uses qemu, and there wasn’t a nice way to access the name of the VM from inside the guest, nor an easy way to insert the name from outside. But I found one approach that seems to be working: enter my dodgy hostname hack!

I also wanted to do it without hacking the gns3server code. I couldn’t easily pass the hostname in directly, but I could pass it in via a null device whose name contains the router name:

/dev/virtio-ports/frr.router.hostname.%vm-name%

So I simply wrote a script that sets the hostname based on the existence of this device. I made the script a systemd oneshot service that runs at boot, and it worked!

This means that after changing the name of the FRR router in the GNS3 interface, all you need to do is restart the router (stop and start the device) and it’ll apply the new name. This saves you having to log in as root and run hostname yourself.

Or better, if you name all your FRR routers before turning them on, then it’ll just work.
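A minimal sketch of what that script and oneshot unit could look like, assuming the /dev/virtio-ports naming scheme described above; the script path and unit name below are my own placeholders, not necessarily what’s in the repo:

```shell
#!/bin/sh
# Sketch of the dodgy hostname hack: GNS3 passes the router name as part
# of a virtio serial port's file name:
#   /dev/virtio-ports/frr.router.hostname.<name>
# We parse the name back out of the device path and apply it.

parse_router_name() {
    # DEV_DIR is overridable for testing; defaults to the real path.
    for dev in "${DEV_DIR:-/dev/virtio-ports}"/frr.router.hostname.*; do
        [ -e "$dev" ] || return 1          # no matching device: bail out
        printf '%s\n' "${dev##*frr.router.hostname.}"
        return 0
    done
}

# At boot: persist the name for future boots and set it for this one.
name="$(parse_router_name)" && {
    printf '%s\n' "$name" > /etc/hostname
    hostname "$name"
}

# Matching systemd oneshot unit, e.g.
# /etc/systemd/system/gns3-hostname.service:
#   [Unit]
#   Description=Set hostname from the GNS3-provided virtio port name
#   [Service]
#   Type=oneshot
#   ExecStart=/usr/local/sbin/gns3-hostname
#   [Install]
#   WantedBy=multi-user.target
```

If no matching device exists (e.g. when run outside GNS3), the script does nothing, so it’s safe to leave enabled.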

In conclusion…

Hopefully now we can have a fully open-source GNS3 + FRR appliance solution for network training, testing, and inspiring network engineers.

Worse Than Failure: CodeSOD: Classic WTF: A Char'd Enum

It's a holiday in the US today, so we're reaching back into the archives while doing some quarantine grilling. This classic has a… special approach to handling enums. Original. --Remy

Ah yes, the enum. It's a convenient way to give an integer a discrete domain of values, without having to worry about constants. But you see, therein lies the problem. What happens if you don't want to use an integer? Perhaps you'd like to use a string? Or a datetime? Or a char?

If that were the case, some might say just make a class that acts similarly, or then you clearly don't want an enum. But others, such as Dan Holmes' colleague, go a different route. They make sure they can fit chars into enums.

'******* Asc Constants ********
Private Const a = 65
Private Const b = 66
Private Const c = 67
Private Const d = 68
Private Const e = 69
Private Const f = 70
Private Const H = 72
Private Const i = 73
Private Const l = 76
Private Const m = 77
Private Const n = 78
Private Const O = 79
Private Const p = 80
Private Const r = 82
Private Const s = 83
Private Const t = 84
Private Const u = 85
Private Const x = 88

  ... snip ...

'******* Status Enums *********
Public Enum MessageStatus
  MsgError = e
  MsgInformation = i
  ProdMsg = p
  UpLoad = u
  Removed = x
End Enum

Public Enum PalletTable
  Shipped = s   'Pallet status code
  Available = a
End Enum

Cory Doctorow: Someone Comes to Town, Someone Leaves Town (part 04)

Here’s part four of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

In this installment, we meet Kurt, the crustypunk high-tech dumpster-diver. Kurt is loosely based on my old friend Darren Atkinson, who pulled down a six-figure income by recovering, repairing and reselling high-tech waste from Toronto’s industrial suburbs. Darren was the subject of the first feature I ever sold to Wired, Dumpster Diving, which was published in the September, 1997 issue.

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3


Planet Linux Australia: Francois Marier: Printing hard-to-print PDFs on Linux

I recently found a few PDFs which I was unable to print due to those files causing insufficient printer memory errors:

I found a detailed explanation of what might be causing this which pointed the finger at transparent images, a PDF 1.4 feature which apparently requires a more recent version of PostScript than what my printer supports.

Using Okular's Force rasterization option (accessible via the print dialog) does work, by essentially rendering everything ahead of time and outputting a big image to be sent to the printer. The quality is not very good, however.

Converting a PDF to DjVu

The best solution I found makes use of a different file format: .djvu

Such files are not PDFs, but can still be opened in Evince and Okular, as well as in the dedicated DjVuLibre application.

As an example, I was unable to print page 11 of this paper. Using pdfinfo, I found that it is in PDF 1.5 format and so the transparency effects could be the cause of the out-of-memory printer error.

Here's how I converted it to a high-quality DjVu file I could print without problems using Evince:

pdf2djvu -d 1200 2002.04049.pdf > 2002.04049-1200dpi.djvu

Converting a PDF to PDF 1.3

I also tried the DjVu trick on a different unprintable PDF, but it failed to print, even after lowering the resolution to 600dpi:

pdf2djvu -d 600 dow-faq_v1.1.pdf > dow-faq_v1.1-600dpi.djvu

In this case, I used a different technique and simply converted the PDF to version 1.3 (from version 1.6 according to pdfinfo):

ps2pdf13 -r1200x1200 dow-faq_v1.1.pdf dow-faq_v1.1-1200dpi.pdf

This eliminates the problematic transparency and rasterizes the elements that version 1.3 doesn't support.
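Putting the two techniques together, here’s a rough helper of my own (not from the original post) that tries the DjVu route first and falls back to a PDF 1.3 downconversion. It assumes pdf2djvu and Ghostscript’s ps2pdf13 are on the PATH:

```shell
#!/bin/sh
# Try to produce a printer-friendly version of a PDF:
#  1. rasterize to DjVu at 1200 dpi (worked for the first paper above);
#  2. if that fails, downconvert to PDF 1.3, which strips transparency.

djvu_name() {
    # 2002.04049.pdf -> 2002.04049-1200dpi.djvu
    printf '%s\n' "${1%.pdf}-1200dpi.djvu"
}

pdf13_name() {
    # dow-faq_v1.1.pdf -> dow-faq_v1.1-1200dpi.pdf
    printf '%s\n' "${1%.pdf}-1200dpi.pdf"
}

print_friendly() {
    in="$1"
    if pdf2djvu -d 1200 "$in" > "$(djvu_name "$in")"; then
        printf '%s\n' "$(djvu_name "$in")"    # print this with Evince/Okular
    else
        ps2pdf13 -r1200x1200 "$in" "$(pdf13_name "$in")" &&
            printf '%s\n' "$(pdf13_name "$in")"
    fi
}

# Usage: print_friendly 2002.04049.pdf
```

Note that as the second example above shows, a successful DjVu conversion can still print badly, so you may want to try the PDF 1.3 route directly in that case.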


Krebs on Security: Riding the State Unemployment Fraud ‘Wave’

When a reliable method of scamming money out of people, companies or governments becomes widely known, underground forums and chat networks tend to light up with activity as more fraudsters pile on to claim their share. And that’s exactly what appears to be going on right now as multiple U.S. states struggle to combat a tsunami of phony Pandemic Unemployment Assistance (PUA) claims. Meanwhile, a number of U.S. states are possibly making it easier for crooks by leaking their citizens’ personal data from the very websites the unemployment scammers are using to file bogus claims.

Last week, the U.S. Secret Service warned of “massive fraud” against state unemployment insurance programs, noting that false filings from a well-organized Nigerian crime ring could end up costing the states and federal government hundreds of millions of dollars in losses.

Since then, various online crime forums and Telegram chat channels focused on financial fraud have been littered with posts from people selling tutorials on how to siphon unemployment insurance funds from different states.

Denizens of a Telegram chat channel newly rededicated to stealing state unemployment funds discussing cashout methods.

Yes, for roughly $50 worth of bitcoin, you too can quickly jump on the unemployment fraud “wave” and learn how to swindle unemployment insurance money from different states. The channel pictured above and others just like it are selling different “methods” for defrauding the states, complete with instructions on how best to avoid getting your phony request flagged as suspicious.

Although, at the rate people in these channels are “flexing” — bragging about their fraudulent earnings with screenshots of recent multiple unemployment insurance payment deposits being made daily — it appears some states aren’t doing a whole lot of fraud-flagging.

A still shot from a video a fraudster posted to a Telegram channel overrun with people engaged in unemployment insurance fraud shows multiple $800+ payments in one day from Massachusetts’ Department of Unemployment Assistance (DUA).

A federal fraud investigator who’s helping to trace the source of these crimes and who spoke with KrebsOnSecurity on condition of anonymity said many states have few controls in place to spot patterns in fraudulent filings, such as multiple payments going to the same bank accounts, or filings made for different people from the same Internet address.

In too many cases, he said, the deposits are going into accounts where the beneficiary name does not match the name on the bank account. Worse still, the source said, many states have dramatically pared back the amount of information required to successfully request an unemployment filing.

“The ones we’re seeing worst hit are the states that aren’t asking where you worked,” the investigator said. “It used to be they’d have a whole list of questions about your previous employer, and you had to show you were trying to find work. But now because of the pandemic, there’s no such requirement. They’ve eliminated any controls they had at all, and now they’re just shoveling money out the door based on Social Security number, name, and a few other details that aren’t hard to find.”

CANARY IN THE GOLDMINE

Earlier this week, email security firm Agari detailed a fraud operation tied to a seasoned Nigerian cybercrime group it dubbed “Scattered Canary,” which has been busy of late bilking states and the federal government out of economic stimulus and unemployment payments. Agari said this group has been filing hundreds of successful claims, all effectively using the same email address.

“Scattered Canary uses Gmail ‘dot accounts’ to mass-create accounts on each target website,” Agari’s Patrick Peterson wrote. “Because Google ignores periods when interpreting Gmail addresses, Scattered Canary has been able to create dozens of accounts on state unemployment websites and the IRS website dedicated to processing CARES Act payments for non-tax filers (freefilefillableforms.com).”

Image: Agari.
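As a rough illustration of the kind of deduplication check this technique exploits the absence of, a filing system could canonicalize Gmail addresses before comparing applicants. This is a sketch of one plausible normalization; the +tag stripping is my addition, not something Agari describes:

```shell
#!/bin/sh
# Collapse Gmail "dot account" variants to one canonical address, so that
# j.o.hn@gmail.com and john@gmail.com count as the same filer.

normalize_gmail() {
    addr="$1"
    local_part="${addr%@*}"
    domain="${addr##*@}"
    case "$domain" in
        gmail.com|googlemail.com)
            local_part="${local_part%%+*}"   # drop any +tag suffix
            # Google ignores dots in the local part, so drop them too.
            local_part="$(printf '%s' "$local_part" | tr -d '.')"
            ;;
    esac
    printf '%s@%s\n' "$local_part" "$domain"
}
```

Comparing normalized addresses (rather than the raw strings) would flag dozens of "different" accounts as a single registrant.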

Indeed, the very day the IRS unveiled its site for distributing CARES Act payments last month, KrebsOnSecurity warned that it was very likely to be abused by fraudsters to intercept stimulus payments from U.S. citizens, mainly because the only information required to submit a claim was name, date of birth, address and Social Security number.

Agari notes that since April 29, Scattered Canary has filed at least 174 fraudulent claims for unemployment with the state of Washington.

“Based on communications sent to Scattered Canary, these claims were eligible to receive up to $790 a week for a total of $20,540 over a maximum of 26 weeks,” Peterson wrote. “Additionally, the CARES Act includes $600 in Federal Pandemic Unemployment Compensation each week through July 31. This adds up to a maximum potential loss as a result of these fraudulent claims of $4.7 million.”

STATE WEB SITE WOES

A number of states have suffered security issues with the PUA websites that exposed personal details of citizens filing unemployment insurance claims. Perhaps the most galling example comes from Arkansas, whose site exposed the SSNs, bank account and routing numbers for some 30,000 applicants.

In that instance, The Arkansas Times alerted the state after hearing from a computer programmer who was filing for unemployment on the site and found he could see other applicants’ data simply by changing the site’s URL slightly. State officials reportedly ignored the programmer’s repeated attempts to get them to fix the issue, and when it was covered by the newspaper the state governor accused the person who found it of breaking the law.

Over the past week, several other states have discovered similar issues with their PUA application sites, including Colorado, Illinois, and Ohio.

Planet Linux Australia: Michael Still: A totally cheating sour dough starter


This is the third in a series of posts documenting my adventures in making bread during the COVID-19 shutdown. I’d like to imagine I was running science experiments in making bread on my kids, but really all I was trying to do was eat some toast.

I’m not sure what it was like in other parts of the world, but during the COVID-19 pandemic Australia suffered a bunch of shortages: toilet paper, flour, and yeast were among the things stores simply didn’t have any stock of. Luckily we’d only just done a Costco shop, so we were OK for toilet paper and flour, but we were definitely getting low on yeast. The obvious answer is a sour dough starter, but I’d never done that before.

In the end my answer was to cheat and use this recipe. However, I found the instructions unclear, so here’s what I ended up doing:

Starting off

  • 2 cups of warm water
  • 2 teaspoons of dry yeast
  • 2 cups of bakers flour

Mix these three items together in a plastic container with enough space for the mix to double in size. Place in a warm place (on the bench on top of the dish washer was our answer), and cover with cloth secured with a rubber band.

Feeding

Once a day you should feed your starter with 1 cup of flour and 1 cup of warm water. Stir thoroughly.

Reducing size

The recipe online says to feed for five days, but the size of my starter was getting out of hand within a couple of days, so I started baking at that point. I’ll describe the baking process in a later post. The early loaves definitely weren’t as good as the more recent ones, but they were still edible.

Hibernation

Once the starter is going, you feed it daily and probably need to bake daily to keep the starter’s size under control. That obviously doesn’t work so well if you can’t eat an entire loaf of bread a day. You can hibernate the starter by putting it in the fridge, which means you only need to feed it once a week.

To wake a hibernated starter up, take it out of the fridge and feed it. I do this at 8am. That means I can start the loaf for baking at about noon, and the starter can either go back in the fridge until next time or stay on the bench being fed daily.

I have noticed that sometimes the starter comes out of the fridge with a layer of dark water on top. It’s worked out OK for us to just ignore that and stir it into the mix as part of the feeding process. Hopefully we won’t die.
