Planet Russell


Worse Than Failure: Yes == No

For decades, I worked in an industry where you were never allowed to say no to a user, no matter how ridiculous the request. You had to suck it up and figure out a way to deliver on insane requests, regardless of the technical debt they inflicted.

[Image: Canadian stop sign]

Users are a funny breed. They say things like "I don't care if the input dialog you have works; the last place I worked had a different dialog to do the same thing, and I want that dialog here!" With only one user saying stuff like that, it's semi-tolerable. When you have 700+ users and each of them wants a different dialog to do the same thing, and nobody in management will say no, you need to start creating table-driven dialogs (x-y coordinates, width, height, label phrasing, field layout within the dialog, different input formats, fonts, colors and so forth). Multiply that by the number of dialogs in your application and it becomes needlessly, pointlessly, impossibly difficult.

But it never stops there. Often, one user will request that you move a field from another dialog onto their dialog - just for them. This creates all sorts of havoc with validation logic. Multiply it by hundreds of users and you're basically creating a different application for each of them - each with its own validation logic, all in the same application.

After just a single handful of users demanding changes like this, it can quickly become a nightmare. Worse, once it starts, the next user to whom you say no tells you that you did it for the other guy and so you have to do it for them too! After all, each user is the most important user, right?

It doesn't matter that saying no is the right thing to do. It doesn't matter that it will put a zero-value load on development and debugging time. It doesn't matter that sucking up development time to do it means there are fewer development hours for bug fixes or actual features.

When management refuses to say no, it can turn your code into a Pandora's-Box-o-WTF™

However, there is hope. There is a way to tell the users no without actually saying no. It's by getting them to say it for you and then withdrawing their urgent, can't-live-without-it, must-have-or-the-world-will-end request.

You may ask how?

The trick is to make them see the actual cost of implementing their teeny tiny little feature.

Yes, we can add that new button to provide all the functionality of Excel in an in-app
calculator, but it will take x months (years) to do it, AND it will push back all of the
other features in the queue. Shall I delay the next release and the other feature requests
so we can build this for you, or would you like to schedule it for a future release?

Naturally you'll have to answer questions like "But it's just a button; why would it take that much effort?"

This is a good thing because it forces them down the rabbit hole into your world where you are the expert. Now you get to explain to them the realities of software development, and the full cost of their little request.

Once they realize the true cost that they'd have to pay, the urgency of the request almost always subsides to "nice to have" and it gets pushed out so as not to delay the scheduled release.

And because you got them to say it for you, you didn't have to utter the word no.


Planet Linux Australia: Michael Still: Adding oslo privsep to a new project, a worked example


You’ve decided that using sudo to run command lines as root is lame and that it is time to step up and do things properly. How do you do that? Well, here’s a simple guide to adding oslo privsep to your project!

In a previous post I showed you how to add a new method that ran with escalated permissions. However, that’s only helpful if you already have privsep added to your project. This post shows you how to do that thing to your favourite python project. In this case we’ll use OpenStack Cinder as a worked example.

Note that Cinder already uses privsep because of its use of os-brick, so the instructions below skip adding oslo.privsep to requirements.txt. If your project has never ever used privsep at all, you’ll need to add a line like this to requirements.txt:

oslo.privsep

For reference, this post is based on OpenStack review 566479, which I wrote as an example of how to add privsep to a new project. If you’re after a complete worked example in a more polished form than this post, the review might be useful to you.

As a first step, let’s add the code we’d want to write to actually call something with escalated permissions. In the Cinder case I chose the cgroups throttling code for this example. So first off we’d need to create the privsep directory with the relevant helper code:

diff --git a/cinder/privsep/__init__.py b/cinder/privsep/__init__.py
new file mode 100644
index 0000000..7f826a8
--- /dev/null
+++ b/cinder/privsep/__init__.py
@@ -0,0 +1,32 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Setup privsep decorator."""
+
+from oslo_privsep import capabilities
+from oslo_privsep import priv_context
+
+sys_admin_pctxt = priv_context.PrivContext(
+    'cinder',
+    cfg_section='cinder_sys_admin',
+    pypath=__name__ + '.sys_admin_pctxt',
+    capabilities=[capabilities.CAP_CHOWN,
+                  capabilities.CAP_DAC_OVERRIDE,
+                  capabilities.CAP_DAC_READ_SEARCH,
+                  capabilities.CAP_FOWNER,
+                  capabilities.CAP_NET_ADMIN,
+                  capabilities.CAP_SYS_ADMIN],
+)

This code defines the permissions that our context (called cinder_sys_admin in this case) has. These specific permissions in the example above should correlate with those that you’d get if you ran a command with sudo. There was a bit of back and forth about what permissions to use and how many contexts to have while we were implementing privsep in OpenStack Nova, but we’ll discuss those in a later post.
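As an aside, the cfg_section value above names a section of cinder.conf that operators can use to control how the helper for this context gets launched. The snippet below is purely illustrative and not part of the change being walked through here; it assumes oslo.privsep's helper_command option and routes the helper through rootwrap, which is what the rootwrap filter added later in this post expects:

[cinder_sys_admin]
# Illustrative only: launch the privsep helper for this context via rootwrap
# instead of plain sudo; privsep appends its own --privsep_context and
# --privsep_sock_path arguments when it starts the daemon.
helper_command = sudo cinder-rootwrap /etc/cinder/rootwrap.conf privsep-helper --config-file /etc/cinder/cinder.conf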

Next we need the code which actually does the privileged thing:

diff --git a/cinder/privsep/cgroup.py b/cinder/privsep/cgroup.py
new file mode 100644
index 0000000..15d47e0
--- /dev/null
+++ b/cinder/privsep/cgroup.py
@@ -0,0 +1,35 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+"""
+Helpers for cgroup related routines.
+"""
+
+from oslo_concurrency import processutils
+
+import cinder.privsep
+
+
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_create(name):
+    processutils.execute('cgcreate', '-g', 'blkio:%s' % name)
+
+
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_limit(name, rw, dev, bps):
+    processutils.execute('cgset', '-r',
+                         'blkio.throttle.%s_bps_device=%s %d' % (rw, dev, bps),
+                         name)

Here we just provide two methods which manipulate cgroups. That allows us to make this change to the throttling implementation in Cinder:

diff --git a/cinder/volume/throttling.py b/cinder/volume/throttling.py
index 39cbbeb..3c6ddaa 100644
--- a/cinder/volume/throttling.py
+++ b/cinder/volume/throttling.py
@@ -22,6 +22,7 @@ from oslo_concurrency import processutils
 from oslo_log import log as logging
 
 from cinder import exception
+import cinder.privsep.cgroup
 from cinder import utils
 
 
@@ -65,8 +66,7 @@ class BlkioCgroup(Throttle):
         self.dstdevs = {}
 
         try:
-            utils.execute('cgcreate', '-g', 'blkio:%s' % self.cgroup,
-                          run_as_root=True)
+            cinder.privsep.cgroup.cgroup_create(self.cgroup)
         except processutils.ProcessExecutionError:
             LOG.error('Failed to create blkio cgroup \'%(name)s\'.',
                       {'name': cgroup_name})
@@ -81,8 +81,7 @@ class BlkioCgroup(Throttle):
 
     def _limit_bps(self, rw, dev, bps):
         try:
-            utils.execute('cgset', '-r', 'blkio.throttle.%s_bps_device=%s %d'
-                          % (rw, dev, bps), self.cgroup, run_as_root=True)
+            cinder.privsep.cgroup.cgroup_limit(self.cgroup, rw, dev, bps)
         except processutils.ProcessExecutionError:
             LOG.warning('Failed to setup blkio cgroup to throttle the '
                         'device \'%(device)s\'.', {'device': dev})

These last two snippets should be familiar from the previous post about privsep in this series. Finally, for the actual implementation of privsep, we need to make sure that rootwrap has permission to start the privsep helper daemon. You’ll get one daemon per unique security context, but in this case we only have one of those, so we’ll only need one rootwrap entry. Note that I also remove the previous rootwrap entries for cgcreate and cgset while I’m here.

diff --git a/etc/cinder/rootwrap.d/volume.filters b/etc/cinder/rootwrap.d/volume.filters
index abc1517..d2d1720 100644
--- a/etc/cinder/rootwrap.d/volume.filters
+++ b/etc/cinder/rootwrap.d/volume.filters
@@ -43,6 +43,10 @@ lvdisplay4: EnvFilter, env, root, LC_ALL=C, LVM_SYSTEM_DIR=, LVM_SUPPRESS_FD_WAR
 # This line ties the superuser privs with the config files, context name,
 # and (implicitly) the actual python code invoked.
 privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.*
+
+# Privsep calls within cinder iteself
+privsep-rootwrap-sys_admin: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, cinder.privsep.sys_admin_pctxt, --privsep_sock_path, /tmp/.*
+
 # The following and any cinder/brick/* entries should all be obsoleted
 # by privsep, and may be removed once the os-brick version requirement
 # is updated appropriately.
@@ -93,8 +97,6 @@ ionice_1: ChainingRegExpFilter, ionice, root, ionice, -c[0-3], -n[0-7]
 ionice_2: ChainingRegExpFilter, ionice, root, ionice, -c[0-3]
 
 # cinder/volume/utils.py: setup_blkio_cgroup()
-cgcreate: CommandFilter, cgcreate, root
-cgset: CommandFilter, cgset, root
 cgexec: ChainingRegExpFilter, cgexec, root, cgexec, -g, blkio:\S+
 
 # cinder/volume/driver.py

And because we’re not bad people we’d of course write a release note about the changes we’ve made…

diff --git a/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
new file mode 100644
index 0000000..e78fb00
--- /dev/null
+++ b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
@@ -0,0 +1,13 @@
+---
+security:
+    Privsep transitions. Cinder is transitioning from using the older style
+    rootwrap privilege escalation path to the new style Oslo privsep path.
+    This should improve performance and security of Nova in the long term.
+  - |
+    privsep daemons are now started by Cinder when required. These daemons can
+    be started via rootwrap if required. rootwrap configs therefore need to
+    be updated to include new privsep daemon invocations.
+upgrade:
+  - |
+    The following commands are no longer required to be listed in your rootwrap
+    configuration: cgcreate; and cgset.

This code will now work. However, we’ve left out one critical piece of the puzzle — testing. If this code was uploaded like this, it would fail in the OpenStack gate, even though it probably passed on your desktop. This is because many of the gate jobs are set up in such a way that they can’t run rootwrapped commands, which in this case means that the rootwrap daemon won’t be able to start.

I found this quite confusing in Nova when I was implementing things and had missed a step. So I wrote a simple test fixture that warns me when I am being silly:

diff --git a/cinder/test.py b/cinder/test.py
index c8c9e6c..a49cedb 100644
--- a/cinder/test.py
+++ b/cinder/test.py
@@ -302,6 +302,9 @@ class TestCase(testtools.TestCase):
         tpool.killall()
         tpool._nthreads = 20
 
+        # NOTE(mikal): make sure we don't load a privsep helper accidentally
+        self.useFixture(cinder_fixtures.PrivsepNoHelperFixture())
+
     def _restore_obj_registry(self):
         objects_base.CinderObjectRegistry._registry._obj_classes = \
             self._base_test_obj_backup
diff --git a/cinder/tests/fixtures.py b/cinder/tests/fixtures.py
index 6e275a7..79e0b73 100644
--- a/cinder/tests/fixtures.py
+++ b/cinder/tests/fixtures.py
@@ -1,4 +1,6 @@
 # Copyright 2016 IBM Corp.
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
 #
 #    Licensed under the Apache License, Version 2.0 (the "License"); you may
 #    not use this file except in compliance with the License. You may obtain
@@ -21,6 +23,7 @@ import os
 import warnings
 
 import fixtures
+from oslo_privsep import daemon as privsep_daemon
 
 _TRUE_VALUES = ('True', 'true', '1', 'yes')
 
@@ -131,3 +134,29 @@ class WarningsFixture(fixtures.Fixture):
                     ' This key is deprecated. Please update your policy '
                     'file to use the standard policy values.')
         self.addCleanup(warnings.resetwarnings)
+
+
+class UnHelperfulClientChannel(privsep_daemon._ClientChannel):
+    def __init__(self, context):
+        raise Exception('You have attempted to start a privsep helper. '
+                        'This is not allowed in the gate, and '
+                        'indicates a failure to have mocked your tests.')
+
+
+class PrivsepNoHelperFixture(fixtures.Fixture):
+    """A fixture to catch failures to mock privsep's rootwrap helper.
+
+    If you fail to mock away a privsep'd method in a unit test, then
+    you may well end up accidentally running the privsep rootwrap
+    helper. This will fail in the gate, but it fails in a way which
+    doesn't identify which test is missing a mock. Instead, we
+    raise an exception so that you at least know where you've missed
+    something.
+    """
+
+    def setUp(self):
+        super(PrivsepNoHelperFixture, self).setUp()
+
+        self.useFixture(fixtures.MonkeyPatch(
+            'oslo_privsep.daemon.RootwrapClientChannel',
+            UnHelperfulClientChannel))

Now if you fail to mock a privsep’ed call, then you’ll get something like this:

==============================
Failed 1 tests - output below:
==============================

cinder.tests.unit.test_volume_throttling.ThrottleTestCase.test_BlkioCgroup
--------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
        return func(*args, **keywargs)
      File "cinder/tests/unit/test_volume_throttling.py", line 66, in test_BlkioCgroup
        throttle = throttling.BlkioCgroup(1024, 'fake_group')
      File "cinder/volume/throttling.py", line 69, in __init__
        cinder.privsep.cgroup.cgroup_create(self.cgroup)
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 206, in _wrap
        self.start()
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 217, in start
        channel = daemon.RootwrapClientChannel(context=self)
      File "cinder/tests/fixtures.py", line 141, in __init__
        raise Exception('You have attempted to start a privsep helper. '
    Exception: You have attempted to start a privsep helper. This is not allowed in the gate, and indicates a failure to have mocked your tests.

The last bit is the most important. The fixture we installed has detected that you’ve failed to mock a privsep’ed call and has informed you. So, the last step of all is fixing our tests. This normally involves changing where we mock, as many unit tests just lazily mock the execute() call. I try to be more granular than that. Here’s what that looked like in this throttling case:

diff --git a/cinder/tests/unit/test_volume_throttling.py b/cinder/tests/unit/test_volume_throttling.py
index 82e2645..edbc2d9 100644
--- a/cinder/tests/unit/test_volume_throttling.py
+++ b/cinder/tests/unit/test_volume_throttling.py
@@ -29,7 +29,9 @@ class ThrottleTestCase(test.TestCase):
             self.assertEqual([], cmd['prefix'])
 
     @mock.patch.object(utils, 'get_blkdev_major_minor')
-    def test_BlkioCgroup(self, mock_major_minor):
+    @mock.patch('cinder.privsep.cgroup.cgroup_create')
+    @mock.patch('cinder.privsep.cgroup.cgroup_limit')
+    def test_BlkioCgroup(self, mock_limit, mock_create, mock_major_minor):
 
         def fake_get_blkdev_major_minor(path):
             return {'src_volume1': "253:0", 'dst_volume1': "253:1",
@@ -37,38 +39,25 @@ class ThrottleTestCase(test.TestCase):
 
         mock_major_minor.side_effect = fake_get_blkdev_major_minor
 
-        self.exec_cnt = 0
+        throttle = throttling.BlkioCgroup(1024, 'fake_group')
+        with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
+                             cmd['prefix'])
 
-        def fake_execute(*cmd, **kwargs):
-            cmd_set = ['cgset', '-r',
-                       'blkio.throttle.%s_bps_device=%s %d', 'fake_group']
-            set_order = [None,
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024),
-                         # a nested job starts; bps limit are set to the half
-                         ('read', '253:0', 512),
-                         ('read', '253:2', 512),
-                         ('write', '253:1', 512),
-                         ('write', '253:3', 512),
-                         # a nested job ends; bps limit is resumed
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024)]
-
-            if set_order[self.exec_cnt] is None:
-                self.assertEqual(('cgcreate', '-g', 'blkio:fake_group'), cmd)
-            else:
-                cmd_set[2] %= set_order[self.exec_cnt]
-                self.assertEqual(tuple(cmd_set), cmd)
-
-            self.exec_cnt += 1
-
-        with mock.patch.object(utils, 'execute', side_effect=fake_execute):
-            throttle = throttling.BlkioCgroup(1024, 'fake_group')
-            with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            # a nested job
+            with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
                 self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
                                  cmd['prefix'])
 
-                # a nested job
-                with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
-                    self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
-                                     cmd['prefix'])
+        mock_create.assert_has_calls([mock.call('fake_group')])
+        mock_limit.assert_has_calls([
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024),
+            # a nested job starts; bps limit are set to the half
+            mock.call('fake_group', 'read', '253:0', 512),
+            mock.call('fake_group', 'read', '253:2', 512),
+            mock.call('fake_group', 'write', '253:1', 512),
+            mock.call('fake_group', 'write', '253:3', 512),
+            # a nested job ends; bps limit is resumed
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024)])

…and we’re done. This post has been pretty long, so I am going to stop here for now. However, hopefully I’ve demonstrated that it’s actually not that hard to implement privsep in a project, even with some slight testing polish.




Planet Debian: Raphaël Hertzog: My Free Software Activities in April 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

pkg-security team

I improved the packaging of openvas-scanner and openvas-manager so that they mostly work out of the box with a dedicated redis database pre-configured and with certificates created in the postinst. I merged a patch for cross-build support in mac-robber and another patch from chkrootkit to avoid unexpected noise in quiet mode.

I prepared an update of openscap-daemon to fix the RC bug #896766 and to update to a new upstream release. I pinged the package maintainer to look into the autopkgtest failure (that I did not introduce). I sponsored hashcat 4.1.0.

Distro Tracker

While it slowed down, I continued to get merge requests. I merged two of them for some newcomer bugs.

I reviewed a merge request suggesting to add a “team search” feature.

I did some work of my own too: I fixed many exceptions that have been seen in production with bad incoming emails and with unexpected maintainer emails. I also updated the contributor guide to match the new workflow with salsa and with the new pre-generated database and its associated helper script (to download it and configure the application accordingly). During this process I also filed a GitLab issue about the latest artifact download URL not working as advertised.

I filed many issues (#13 to #19) for things that were only stored in my personal TODO list.

Misc Debian work

Bug Reports. I filed bug #894732 on mk-build-deps to filter build dependencies to include/install based on build profiles. For reprepro, I always found the explanation about FilterList very confusing (bug #895045). I filed and fixed a bug on mirrorbrain with redirection to HTTPS URLs.

I also investigated #894979 and concluded that the CA certificates keystore file generated with OpenJDK 9 is not working properly with OpenJDK 8. This got fixed in ca-certificates-java.

Sponsorship. I sponsored pylint-plugin-utils 0.2.6-2.

Packaging. I uploaded oca-core (still in NEW) and ccextractor for Freexian customers. I also uploaded python-num2words (dependency for oca-core). I fixed the RC bug #891541 on lua-posix.

Live team. I reviewed better handling of missing host dependency on live-build and reviewed a live-boot merge request to ensure that the FQDN returned by DHCP was working properly in the initrd.

Thanks

See you next month for a new summary of my activities.


Planet Debian: Russ Allbery: podlators 4.11

podlators is the CPAN distribution that contains Pod::Man, Pod::Text, and its subclasses, used for rendering Perl documentation as man pages or text files. It's been part of Perl core for a very long time.

This release cleans up a bunch of long-standing bugs in Pod::Text and its subclasses thanks to a series of excellent bug reports from eponymous alias. The default value of the sentence option was documented incorrectly, the width option was ignored in Pod::Text::Termcap, that module also worked incorrectly when COLUMNS wasn't set in the environment, and both Pod::Text::Termcap and Pod::Text::Color had various problems with wrapping that are now fixed. Long =item text was miswrapped due to an incorrect length calculation, and text attributes are now cleared at the end of each line and then reapplied to work better with common pagers.

In addition, the none value for the errors option of Pod::Man and Pod::Text now works correctly, thanks to a bug report from Olly Betts.

You can get the latest version from the podlators distribution page.

Cory Doctorow: Donald Trump is a pathogen evolved to thrive in an attention-maximization ecosystem


My latest Locus column is The Engagement-Maximization Presidency, and it proposes a theory to explain the political phenomenon of Donald Trump: we live in a world in which communications platforms amplify anything that gets “engagement” and provides feedback on just how much your message has been amplified so you can tune and re-tune for maximum amplification.


Peter Watts’s 2002 novel Maelstrom illustrates a beautiful, terrifying example of this, in which a mindless, self-modifying computer virus turns itself into a chatbot that impersonates patient zero in a world-destroying pandemic; even though the virus doesn’t understand what it’s doing or how it’s doing it, it’s able to use feedback to refine its strategies, gaining control over more resources with which to try more strategies.
It’s a powerful metaphor for the kind of cold reading we see Trump engaging in at his rallies, and for the presidency itself. I think it also explains why getting Trump off Twitter is impossible: it’s his primary feedback tool, and without it, he wouldn’t know what kinds of rhetoric to double down on and what to quietly sideline.

Maelstrom is concerned with a pandemic that is started by its protagonist, Lenie Clark, who returns from a deep ocean rift bearing an ancient, devastating pathogen that burns its way through the human race, felling people by the millions.

As Clark walks across the world on a mission of her own, her presence in a message or news story becomes a signal of the utmost urgency. The filters (firewalls that give priority to some packets and suppress others as potentially malicious) are programmed to give highest priority to any news that might pertain to Lenie Clark, as the authorities try to stop her from bringing death wherever she goes.

Here’s where Watts’s evolutionary biology shines: he posits a piece of self-modifying malicious software – something that really exists in the world today – that automatically generates variations on its tactics to find computers to run on and reproduce itself. The more computers it colonizes, the more strategies it can try and the more computational power it can devote to analyzing these experiments and directing its random walk through the space of all possible messages to find the strategies that penetrate more firewalls and give it more computational power to devote to its task.

Through the kind of blind evolution that produces predator-fooling false eyes on the tails of tropical fish, the virus begins to pretend that it is Lenie Clark, sending messages of increasing convincingness as it learns to impersonate patient zero. The better it gets at this, the more welcoming it finds the firewalls and the more computers it infects.

At the same time, the actual pathogen that Lenie Clark brought up from the deeps is finding more and more hospitable hosts to reproduce in: thanks to the computer virus, which is directing public health authorities to take countermeasures in all the wrong places. The more effective the computer virus is at neutralizing public health authorities, the more the biological virus spreads. The more the biological virus spreads, the more anxious the public health authorities become for news of its progress, and the more computers there are trying to suck in any intelligence that seems to emanate from Lenie Clark, supercharging the computer virus.

Together, this computer virus and biological virus co-evolve, symbiotes who cooperate without ever intending to, like the predator that kills the prey that feeds the scavenging pathogen that weakens other prey to make it easier for predators to catch them.


The Engagement-Maximization Presidency [Cory Doctorow/Locus]


(Image: Kevin Dooley, CC-BY; Trump’s Hair)

Krebs on Security: Study: Attack on KrebsOnSecurity Cost IoT Device Owners $323K

A monster distributed denial-of-service attack (DDoS) against KrebsOnSecurity.com in 2016 knocked this site offline for nearly four days. The attack was executed through a network of hacked “Internet of Things” (IoT) devices such as Internet routers, security cameras and digital video recorders. A new study that tries to measure the direct cost of that one attack for IoT device users whose machines were swept up in the assault found that it may have cost device owners a total of $323,973.75 in excess power and added bandwidth consumption.

My bad.

But really, none of it was my fault at all. It was mostly the fault of IoT makers for shipping cheap, poorly designed products (insecure by default), and the fault of customers who bought these IoT things and plugged them onto the Internet without changing the things’ factory settings (passwords at least.)

The botnet that hit my site in Sept. 2016 was powered by the first version of Mirai, a malware strain that wriggles into dozens of IoT devices left exposed to the Internet and running with factory-default settings and passwords. Systems infected with Mirai are forced to scan the Internet for other vulnerable IoT devices, but they’re just as often used to help launch punishing DDoS attacks.

By the time of the first Mirai attack on this site, the young masterminds behind Mirai had already enslaved more than 600,000 IoT devices for their DDoS armies. But according to an interview with one of the admitted and convicted co-authors of Mirai, the part of their botnet that pounded my site was a mere slice of firepower they’d sold for a few hundred bucks to a willing buyer. The attack army sold to this ne’er-do-well harnessed the power of just 24,000 Mirai-infected systems (mostly security cameras and DVRs, but some routers, too).

These 24,000 Mirai devices clobbered my site for several days with data blasts of up to 620 Gbps. The attack was so bad that my pro-bono DDoS protection provider at the time — Akamai — had to let me go because the data firehose pointed at my site was starting to cause real pain for their paying customers. Akamai later estimated that the cost of maintaining protection for my site in the face of that onslaught would have run into the millions of dollars.

We’re getting better at figuring out the financial costs of DDoS attacks to the victims (5-, 6- or 7-digit dollar losses) and to the perpetrators (zero to hundreds of dollars). According to a report released this year by DDoS mitigation giant NETSCOUT Arbor, fifty-six percent of organizations last year experienced a financial impact from DDoS attacks of between $10,000 and $100,000, almost double the proportion from 2016.

But what if there were also a way to work out the cost of these attacks to the users of the IoT devices which get snared by DDoS botnets like Mirai? That’s what researchers at the University of California, Berkeley School of Information sought to determine in their new paper, “rIoT: Quantifying Consumer Costs of Insecure Internet of Things Devices.”

If we accept the UC Berkeley team’s assumptions about costs borne by hacked IoT device users (more on that in a bit), the total cost of added bandwidth and energy consumption from the botnet that hit my site came to $323,973.75. This may sound like a lot of money, but remember that broken down among 24,000 attacking drones the per-device cost comes to just $13.50.

So let’s review: The attacker who wanted to clobber my site paid a few hundred dollars to rent a tiny portion of a much bigger Mirai crime machine. That attack would likely have cost millions of dollars to mitigate. The consumers in possession of the IoT devices that did the attacking probably realized a few dollars in losses each, if that. Perhaps forever unmeasured are the many Web sites and Internet users whose connection speeds are often collateral damage in DDoS attacks.

Image: UC Berkeley.

Anyone noticing a slight asymmetry here in either costs or incentives? IoT security is what’s known as an “externality,” a term used to describe “positive or negative consequences to third parties that result from an economic transaction. When one party does not bear the full costs of its actions, it has inadequate incentives to avoid actions that incur those costs.”

In many cases negative externalities are synonymous with problems that the free market has a hard time rewarding individuals or companies for fixing or ameliorating, much like environmental pollution. The common theme with externalities is that the pain points to fix the problem are so diffuse and the costs borne by the problem so distributed across international borders that doing something meaningful about it often takes a global effort with many stakeholders — who can hopefully settle upon concrete steps for action and metrics to measure success.

The paper’s authors explain the misaligned incentives on two sides of the IoT security problem:

-“On the manufacturer side, many devices run lightweight Linux-based operating systems that open doors for hackers. Some consumer IoT devices implement minimal security. For example, device manufacturers may use default username and password credentials to access the device. Such design decisions simplify device setup and troubleshooting, but they also leave the device open to exploitation by hackers with access to the publicly-available or guessable credentials.”

-“Consumers who expect IoT devices to act like user-friendly ‘plug-and-play’ conveniences may have sufficient intuition to use the device but insufficient technical knowledge to protect or update it. Externalities may arise out of information asymmetries caused by hidden information or misaligned incentives. Hidden information occurs when consumers cannot discern product characteristics and, thus, are unable to purchase products that reflect their preferences. When consumers are unable to observe the security qualities of software, they instead purchase products based solely on price, and the overall quality of software in the market suffers.”

The UC Berkeley researchers concede that their experiments — in which they measured the power output and bandwidth consumption of various IoT devices they’d infected with a sandboxed version of Mirai — suggested that the scanning and DDoSsing activity prompted by a Mirai malware infection added almost negligible amounts in power consumption for the infected devices.

Thus, most of the loss figures cited for the 2016 attack rely heavily on estimates of how much the excess bandwidth created by a Mirai infection might cost users directly, and as such I suspect the $13.50 per machine estimates are on the high side.

No doubt, some Internet users get online via an Internet service provider that includes a daily “bandwidth cap,” such that over-use of the allotted daily bandwidth amount can incur overage fees and/or relegates the customer to a slower, throttled connection for some period after the daily allotted bandwidth overage.

But for a majority of high-speed Internet users, the added bandwidth use from a router or other IoT device on the network being infected with Mirai probably wouldn’t show up as an added line charge on their monthly bills. I asked the researchers about the considerable wiggle factor here:

“Regarding bandwidth consumption, the cost may not ever show up on a consumer’s bill, especially if the consumer has no bandwidth cap,” reads an email from the UC Berkeley researchers who wrote the report, including Kim Fong, Kurt Hepler, Rohit Raghavan and Peter Rowland.

“We debated a lot on how to best determine and present bandwidth costs, as it does vary widely among users and ISPs,” they continued. “Costs are more defined in cases where bots cause users to exceed their monthly cap. But even if a consumer doesn’t directly pay a few extra dollars at the end of the month, the infected device is consuming actual bandwidth that must be supplied/serviced by the ISP. And it’s not unreasonable to assume that ISPs will eventually pass their increased costs onto consumers as higher monthly fees, etc. It’s difficult to quantify the consumer-side costs of unauthorized use — which is likely why there’s not much existing work — and our stats are definitely an estimate, but we feel it’s helpful in starting the discussion on how to quantify these costs.”

Measuring bandwidth and energy consumption may turn out to be a useful and accepted tool to help more accurately measure the full costs of DDoS attacks. I’d love to see these tests run against a broader range of IoT devices in a much larger simulated environment.

If the Berkeley method is refined enough to become accepted as one of many ways to measure actual losses from a DDoS attack, the reporting of such figures could make these crimes more likely to be prosecuted.

Many DDoS attack investigations go nowhere because targets of these attacks fail to come forward or press charges, making it difficult for prosecutors to prove any real economic harm was done. Since many of these investigations die on the vine for a lack of financial damages reaching certain law enforcement thresholds to justify a federal prosecution (often $50,000 – $100,000), factoring in estimates of the cost to hacked machine owners involved in each attack could change that math.

But the biggest levers for throttling the DDoS problem are in the hands of the people running the world’s largest ISPs, hosting providers and bandwidth peering points on the Internet today. Some of those levers I detailed in the “Shaming the Spoofers” section of The Democratization of Censorship, the first post I wrote after the attack and after Google had brought this site back online under its Project Shield program.

By the way, we should probably stop referring to IoT devices as “smart” when they start misbehaving within three minutes of being plugged into an Internet connection. That’s about how long your average cheapo, factory-default security camera plugged into the Internet has before getting successfully taken over by Mirai. In short, dumb IoT devices are those that don’t make it easy for owners to use them safely without being a nuisance or harm to themselves or others.

Maybe what we need to fight this onslaught of dumb devices are more network operators turning to ideas like IDIoT, a network policy enforcement architecture for consumer IoT devices that was first proposed in December 2017.  The goal of IDIoT is to restrict the network capabilities of IoT devices to only what is essential for regular device operation. For example, it might be okay for network cameras to upload a video file somewhere, but it’s definitely not okay for that camera to then go scanning the Web for other cameras to infect and enlist in DDoS attacks.

So what does all this mean to you? That depends on how many IoT things you and your family and friends are plugging into the Internet and your/their level of knowledge about how to secure and maintain these devices. Here’s a primer on minimizing the chances that your orbit of IoT things become a security liability for you or for the Internet at large.

Cryptogram: Ray Ozzie's Encryption Backdoor

Last month, Wired published a long article about Ray Ozzie and his supposed new scheme for adding a backdoor in encrypted devices. It's a weird article. It paints Ozzie's proposal as something that "attains the impossible" and "satisfies both law enforcement and privacy purists," when (1) it's barely a proposal, and (2) it's essentially the same key escrow scheme we've been hearing about for decades.

Basically, each device has a unique public/private key pair and a secure processor. The public key goes into the processor and the device, and is used to encrypt whatever user key encrypts the data. The private key is stored in a secure database, available to law enforcement on demand. The only other trick is that for law enforcement to use that key, they have to put the device in some sort of irreversible recovery mode, which means it can never be used again. That's basically it.

I have no idea why anyone is talking as if this were anything new. Several cryptographers have already explained why this key escrow scheme is no better than any other key escrow scheme. The short answer is (1) we won't be able to secure that database of backdoor keys, (2) we don't know how to build the secure coprocessor the scheme requires, and (3) it solves none of the policy problems around the whole system. This is the typical mistake non-cryptographers make when they approach this problem: they think that the hard part is the cryptography to create the backdoor. That's actually the easy part. The hard part is ensuring that it's only used by the good guys, and there's nothing in Ozzie's proposal that addresses any of that.

I worry that this kind of thing is damaging in the long run. There should be some rule that any backdoor or key escrow proposal be a fully specified proposal, not just some cryptography and hand-waving notions about how it will be used in practice. And before it is analyzed and debated, it should have to satisfy some sort of basic security analysis. Otherwise, we'll be swatting pseudo-proposals like this one, while those on the other side of this debate become increasingly convinced that it's possible to design one of these things securely.

Already people are using the National Academies report on backdoors for law enforcement as evidence that engineers are developing workable and secure backdoors. Writing in Lawfare, Alan Z. Rozenshtein claims that the report -- and a related New York Times story -- "undermine the argument that secure third-party access systems are so implausible that it's not even worth trying to develop them." Susan Landau effectively corrects this misconception, but the damage is done.

Here's the thing: it's not hard to design and build a backdoor. What's hard is building the systems -- both technical and procedural -- around them. Here's Rob Graham:

He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors, we just don't know how to secure them.

A bunch of us cryptographers have already explained why we don't think this sort of thing will work in the foreseeable future. We write:

Exceptional access would force Internet system developers to reverse "forward secrecy" design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today's Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

Finally, Matthew Green:

The reason so few of us are willing to bet on massive-scale key escrow systems is that we've thought about it and we don't think it will work. We've looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there's no detection system for key theft, there's no renewability system, HSMs are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We're not going to put the data of a few billion people on the line in an environment where we believe with high probability that the system will fail.

Sociological Images: Pocket-sized Politics

Major policy issues like gun control often require massive social and institutional changes, but many of these issues also have underlying cultural assumptions that make the status quo seem normal. By following smaller changes in the way people think about issues, we can see gradual adjustments in our culture that ultimately make the big changes more plausible.

Photo Credit: Emojipedia

For example, today’s gun debate even drills down to the little cartoons on your phone. There’s a whole process for proposing and reviewing new emoji, but different platforms have their own control over how they design the cartoons in coordination with the formal standards. Last week, Twitter pointed me to a recent report from Emojipedia about platform updates to the contested “pistol” emoji, moving from a cartoon revolver to a water pistol:

In an update to the original post, all major vendors have committed to this design change for “cross-platform compatibility.”

There are a couple ways to look at this change from a sociological angle. You could tell a story about change from the bottom-up, through social movements like the March For Our Lives, calling for gun reform in the wake of mass shootings. These movements are drawing attention to the way guns permeate American culture, and their public visibility makes smaller choices about the representation of guns more contentious. Apple didn’t comment directly on the intentions behind the redesign when it came out, but it has weighed in on the politics of emoji design in the past.

You could also tell a story about change from the top-down, where large tech companies have looked to copy Apple’s innovation for consistency in a contentious and uncertain political climate (sociologists call this “institutional isomorphism”). In the diagram, you can see how Apple’s early redesign provided an alternative framework for other companies to take up later on, just like Google and Microsoft adopted the dominant pistol design in earlier years.

Either way, if you favor common sense gun reform, redesigning emojis is obviously not enough. But cases like this help us understand how larger shifts in social norms are made up of many smaller changes that challenge the status quo.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet Debian: Norbert Preining: Gaming: The House of Da Vinci

Recently my gaming time has dropped close to zero, but The House of Da Vinci was perfectly suited to my short bursts of play. A game in the style of the Room series, available on all kinds of platforms (PC, Mac, Android, iOS or Amazon Kindle – who wants to play on Kindle?), it took a few hours of my spare time, mostly while soaking in the bath tub.

The game is set at the height of the Renaissance, at the beginning of the 16th century, and the player is invited by Leonardo Da Vinci to see his most recent spectacular invention. But it turns out that Da Vinci isn't there, nor is it that easy to enter or progress through the various levels. A typical puzzle game with a big dose of point-click-search repetition, it still kept me interested throughout the whole game.

I played the game on a tablet (Amazon Fire 10) and that was definitely better than on my mobile, and much more fun. The graphics are very well done, which helped to overcome the fatigue I normally get from games that require the player to tap on every single pixel in the hope of finding a hidden door, hidden key, hidden foobar. I like the riddles part, but not the searching part.

Fortunately, to keep the player's frustration at bay, there is an excellent and well-thought-out hint system. Increasingly detailed hints, from rather general suggestions to explicit instructions, help the player when stuck, something that happened to me quite too often (see above).

The game controls, at least on the tablet, worked very nicely, no problems whatsoever. I guess playing it on a small-screen device (like my mobile) would make it even harder to find some of the special spots, though. But on a somewhat bigger device it is great fun and a nice getaway from the normal workload.

During the game one can also collect some non-essential items and later on (after finishing) visit Da Vinci's garden with his inventions (or rather, those one has found) on display. Other than these gimmicks I'm not aware of any further hidden Easter eggs or similar, but one never knows.

All in all a very nice and enjoyable game for a few hours of relaxation. After having started the game on my mobile I switched to the tablet and restarted there, and I realized that the game actually also has replay value. Guess I need to go through it once again. So all in all, recommended if you like puzzles!

Planet Debian: Dirk Eddelbuettel: RcppGSL 0.3.4

A minor update, version 0.3.4 of RcppGSL, is now on CRAN. It contains an improved Windows build system (thanks, Jeroen!) and updates the C++ headers by removing dynamic exception specifications, which C++11 frowns upon, and the compiler lets us know that in no uncertain terms. Builds using RcppGSL will now be quieter. And as always, an extra treat for Solaris.

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

No user-facing new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.4 (2018-05-06)

  • Windows builds were updated (Jeroen Ooms in #16).

  • Remove dynamic exception specifications which are deprecated with C++11 or later (Dirk in #17).

  • Accomodate Solaris by being more explicit about sqrt.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Timo Jyrinki: Converting an existing installation to LUKS using luksipc - 2018 notes

Time for a laptop upgrade. Encryption was still not the default for the new Dell XPS 13 Developer Edition (9370) that shipped with Ubuntu 16.04 LTS, so I followed my own notes from 3 years ago together with the official documentation to convert the unencrypted OEM Ubuntu installation to LUKS during the weekend. This only took under 1h altogether.

On this new laptop model, EFI boot was already in use, Secure Boot was enabled and the SSD had GPT from the beginning. The only thing I wanted to change thus was the / to be encrypted.

Some notes for 2018 to clarify what is needed and what is not needed:
  • Before luksipc, remember to resize existing partitions to have 10 MB of free space at the end of the / partition, and also create a new partition of e.g. 1 GB for /boot.
  • To get the code and compile luksipc on an Ubuntu 16.04.4 LTS live USB, just apt install git build-essential is needed. The cryptsetup package is already installed.
  • After luksipc finishes and you've added your own passphrase and removed the initial key (slot 0), it's useful to cryptsetup luksOpen it and mount it still under the live session - however, when using ext4, the mounting fails due to a size mismatch in ext4 metadata! This is simple to correct: sudo resize2fs /dev/mapper/root. Nothing else is needed.
  • I mounted both the newly encrypted volume (to /mnt) and the new /boot volume (to /mnt2 which I created), and moved /boot/* from the former to the latter.
  • I edited /etc/fstab of the encrypted volume to add the /boot partition
  • Mounted as following in /mnt:
    • mount -o bind /dev dev
    • mount -o bind /sys sys
    • mount -t proc proc proc
  • Then:
    • chroot /mnt
    • mount -a # (to mount /boot and /boot/efi)
    • Edited files /etc/crypttab (added one line: root UUID none luks) and /etc/default/grub (I copied over my overkill configuration that specifies all of cryptopts and cryptdevice, some of which may be obsolete, but at least one of them and root=/dev/mapper/root is probably needed; an illustrative sketch of both edits is included below).
    • Ran grub-install ; update-grub ; mkinitramfs -k all -c (notably no other parameters were needed)
    • Rebooted.
  • What I did not need to do:
    • Modify anything in /etc/initramfs-tools.
If the passphrase input shows on your next boot, but your correct passphrase isn't accepted, it's likely that the initramfs wasn't properly updated yet. I first forgot to run the mkinitramfs command and faced this.
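For reference, here is a rough sketch of the two edits mentioned in the list above. The UUID is a placeholder (use the value blkid reports for the LUKS partition), and the grub line only shows the single option that is probably needed rather than the full overkill command line:

# /etc/crypttab - the one added line
root UUID=<uuid-of-luks-partition> none luks

# /etc/default/grub - illustrative only
GRUB_CMDLINE_LINUX="root=/dev/mapper/root"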

Worse Than Failure: CodeSOD: CHashMap

There’s a phenomenon I think of as the “evolution of objects” and it impacts novice programmers. They start by having piles of variables named things like userName0, userName1, accountNum0, accountNum1, etc. This is awkward and cumbersome, and then they discover arrays. string* userNames, int[] accountNums. This is also awkward and cumbersome, and then they discover hash maps, and can do something like Map<string, string>* users. Most programmers go on to discover “wait, objects do that!”

Not so Brian’s co-worker, Dagny. Dagny wanted to write some C++, but didn’t want to learn that pesky STL or have to master templates. Dagny also considered themselves a “performance junkie”, so they didn’t want to bloat their codebase with peer-reviewed and optimized code, and instead decided to invent that wheel themselves.

Thus was born CHashMap. Now, Brian didn’t do us the favor of including any of the implementation of CHashMap, claiming he doesn’t want to “subject the readers to the nightmares that would inevitably arise from viewing this horror directly”. Important note for submitters: we want those nightmares.

Instead, Brian shares with us how the CHashMap is used, and from that we can infer a great deal about how it was implemented. First, let’s simply look at some declarations:

    CHashMap bills;
    CHashMap billcols;
    CHashMap summary;
    CHashMap billinfo;

Note that CHashMap does not take any type parameters. This is because it’s “type ignorant”, which is like being type agnostic, but with more char*. For example, if you want to get, say, the “amount_due” field, you might write code like this:

    double amount = 0;
    amount = Atof(bills.Item("amount_due"));

Yes, everything, keys and values, is simply a char*. And, as a bonus, in the interests of code clarity, we can see that Dagny didn’t do anything dangerous, like overload the [] operator. It would certainly be confusing to be able to index the hash map like it were any other collection type.
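For contrast, here is a minimal sketch (mine, not from the submission) of what the same lookup looks like with the standard container Dagny avoided; the key name is just borrowed from the example above:

    #include <string>
    #include <unordered_map>

    int main() {
        // The typed STL equivalent: values stay doubles, so there is no
        // Atof() round-trip through char*, and operator[] indexes the map
        // like any other collection -- exactly the behavior joked about above.
        std::unordered_map<std::string, double> bill;
        bill["amount_due"] = 42.50;
        double amount = bill["amount_due"];
        (void)amount;  // use the typed value directly, no conversion needed
        return 0;
    }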

Now, since everything is stored as a char*, it's on you to convert it back to the right type, but since chars are just bytes if you don't look too closely… you can store anything at that pointer. So, for example, if you wanted to get all of a user's billing history, you might do something like this…

    CHashMap bills;
    CHashMap billcols;
    CHashMap summary;
    CHashMap billinfo;

    int nbills = dbQuery (query, bills, billcols);
    if (nbills > 0) {
        // scan the bills for amounts and/or issues
        double amount;
        double amountDue = 0;
        int unresolved = 0;

        for (int r=0; r<nbills; r++) {
            if (Strcmp(bills.Item("payment_status",r),BILL_STATUS_REMITTED) != 0) {
                billinfo.Clear();
                amount = Atof(bills.Item("amount_due",r));
                if (amount >= 0) {
                    amountDue += amount;
                    if (Strcmp(bills.Item("status",r),BILL_STATUS_WAITING) == 0) {
                        unresolved += 1;
                        billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))));
                        billinfo.AddItem ("biller", bills.Item("account_display_name",r));
                        billinfo.AddItem ("account", bills.Item("account_number",r));
                        billinfo.AddItem ("amount", amount);
                    }
                }
                else {
                    amountDue += 0;
                    unresolved += 1;
                    billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))));
                    billinfo.AddItem ("biller", bills.Item("account_display_name",r));
                    billinfo.AddItem ("account", bills.Item("account_number",r));
                    billinfo.AddItem ("amount", "???");
                }
                summary.AddItem ("", &billinfo);
            }
        }
    }

Look at that summary.AddItem ("", &billinfo) line. Yes, that is an empty key. Yes, they’re pointing it at the address of billinfo (which also gets Clear()ed a few lines earlier, so I have no idea what’s happening there). And yes, they’re doing this assignment in a loop, but don’t worry! CHashMap allows multiple values per key! That "" key will hold everything.

So, you have multi-value keys which can themselves point to nested CHashMaps, which means you don’t need any complicated JSON or XML classes, you can just use CHashMap as your hammer/foot-gun.

    //code which fetches account details from JSON
    CHashMap accounts;
    CHashMap details;
    CHashMap keys;

    rc = getAccounts (userToken, accounts);
    if (rc == 0) {
        for (int a=1; a<=accounts.Count(); a++) {
            cvt.jsonToKeys (accounts.Item(a), keys);
            rc = getAccountDetails (userToken, keys.Item("accountId"), details);
        }
    }
    // Details of getAccounts
    int getAccounts (const char * user, CHashMap& rows) {
      // <snip>
      AccountClass account;
      for (int a=1; a<=count; a++) {
        // Populate the account class
        // <snip>
        rows.AddItem ("", account.jsonify(t));
      }

With this kind of versatility, is it any surprise that pretty much every function in the application depends on a CHashMap somehow? If that doesn’t prove its utility, I don’t know what will. How could you do anything better? Use classes? Don’t make me laugh!
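
For contrast (the names below are invented for illustration, not taken from the submission), roughly the same bookkeeping with a plain struct and the standard containers Dagny was avoiding might look like this:

    #include <string>
    #include <vector>

    // One typed row per bill: no Atof(), no char* guessing games.
    struct Bill {
        std::string payment_status;
        double      amount_due;
    };

    double totalDue(const std::vector<Bill>& bills) {
        double total = 0;
        for (const Bill& b : bills)
            if (b.payment_status != "REMITTED")
                total += b.amount_due;
        return total;
    }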

As a bonus, remember this line above? billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))))? Well, Brian has this to add:

it’s worth mentioning that our DB stores dates in the typical format: “YYYY-MM-DD hh:mm:ss”. cvtUTC is a function that converts a date-time string to a time_t value, and FormatTime converts a time_t to a date-time string.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet Debian: Daniel Pocock: Powering a ham radio transmitter

Last week I announced the crowdfunding campaign to help run a ham radio station at OSCAL. Thanks to all those people who already donated or expressed interest in volunteering.

Modern electronics are very compact and most of what I need to run the station can be transported in my hand luggage. The two big challenges are power supplies and antenna masts. In this blog post there are more details about the former.

Here is a picture of all the equipment I hope to use:

The laptop is able to detect incoming signals using the RTL-SDR dongle and up-converter. After finding a signal, we can then put the frequency into the radio transmitter (in the middle of the table), switch the antenna from the SDR to the radio and talk to the other station.

The RTL-SDR and up-converter run on USB power and a phone charger. The transmitter, however, needs about 22A at 12V DC. This typically means getting a large linear power supply or a large battery.
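
To put rough numbers on that (the runtime estimate assumes the nominal 60 Ah rating of the AGM battery pictured below and ignores the fact that full current is only drawn while transmitting):

    power draw:  22 A × 12 V ≈ 264 W on transmit
    runtime:     60 Ah ÷ 22 A ≈ 2.7 hours of continuous transmit,
                 less in practice because AGM batteries shouldn't be run flat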

In the photo, I've got a Varta LA60 AGM battery, here is a close up:

There are many ways to connect to a large battery. For example, it is possible to use terminals like these with holes in them for the 8 AWG wire or to crimp ring terminals onto a wire and screw the ring onto any regular battery terminal. The type of terminal with these extra holes in it is typically sold for car audio purposes. In the photo, the wire is 10 AWG superflex. There is a blade fuse along the wire and the other end has a PowerPole 45 plug. You can easily make cables like this yourself with a PowerPole crimping tool; everything can be purchased online from sites like eBay.

The wire from the battery goes into a fused power distributor with six PowerPole outlets for connecting the transmitter and other small devices, for example, the lamp in the ATU or charging the handheld:

The AGM battery in the photo weighs about 18kg and is unlikely to be accepted in my luggage, hence the crowdfunding campaign to help buy one for the local community. For many of the young people and students in the Balkans, the price of one of the larger AGM batteries is equivalent to about one month of their income, so nobody there is going to buy one on their own. Please consider making a small donation if you would like to help, as it won't be possible to run demonstrations like this without power.

Planet Debian: NOKUBI Takatsugu: mikutter is not working now

mikutter (deb), a Twitter client written in Ruby, isn’t working now because its Consumer Key has been suspended (Japanese blog).

Planet Debian: Thorsten Glaser: mksh bugfix — thank you for the music

I’m currently working on an mksh(1) and bc(1) script that takes a pitch standard (e.g. “A₄ = 440 Hz” or “C₄ = 256 Hz”) and a config file describing a temperament (e.g. the usual equal temperament, or Pythagorean untempered pure fifths (with the wolf), or “just” intonation, Werckmeister Ⅲ, Vallotti or Bach/Lehman 1722 (to name a few; these are all temperaments that handle enharmonics the same or, for Pythagorean in our case, ignore the fact they’re unplayable)). Temperaments are rule-based, like in ttuner. Well, I’m not quite there yet, but I’m already able to display the value for MuseScore to adjust its pitch standard (it can only take A₄-based values), a frequency table, and a list and table of cent deltas (useful for using or comparing with other tuners). Of course, right now, the cent deltas are all 0 because, well, they are equal temperament against equal temperament (as baseline), but I can calculate that with arbitrary and very high precision!

For outputting, I wanted to make the tables align nicely; column(1), which I normally use, was out because it always left-aligns, so I used string padding in Korn Shell — except I’m also a Unicode BMP fan, so I had F♯ and B♭ in my table headings, which were for some reason correctly right-aligned (for when the table values were integers) but not padded right when aligning with the decimal dot. So I worked around it, but also investigated.

Turns out that the desired length was used as the second snprintf(3) argument, instead of, as in the right-align case, the buffer size. This worked only until multibyte characters happened. A fun bug, which only took about three minutes to find, and is covered by a new check in the testsuite even. Thought I’d share.
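
Here is a small reconstruction of that class of bug (illustrative C, not the actual mksh source):

#include <stdio.h>

int main(void) {
    char buf[64];
    const char *heading = "F♯";   /* 2 characters, but 4 bytes in UTF-8 */
    int width = 3;                /* desired display width of the column */

    /* Buggy: the desired width is used as the size limit. Fine for ASCII
       headings, but a multibyte heading needs more bytes than its width,
       so it gets chopped mid-character. */
    snprintf(buf, width + 1, "%*s", width, heading);
    printf("truncated: %s\n", buf);

    /* Fixed: the size argument is the buffer size; the width only pads. */
    snprintf(buf, sizeof(buf), "%*s", width, heading);
    printf("intact:    %s\n", buf);
    return 0;
}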

Feedback on and improvements for the tuner, once it’ll be done, are, of course, also welcome. I plan to port the algorithm (once I’ve got it down in a programming language I know well) to QML for inclusion in the tuner MuseScore plugin, even. Check here, for now, for my work in progress… it’s quite big already despite doing basically nothing. Foundation laid (or so…).

,

Planet Debian: Dirk Eddelbuettel: RcppMsgPack 0.2.2

A maintenance release of RcppMsgPack got onto CRAN this afternoon. It contains a single static_cast fix to address a warning which g++-8.1 shows whereas older compilers remained silent, and which CRAN asked us to address.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves. RcppMsgPack brings both the C++ headers of MessagePack and the clever code (in both R and C++) that Travers wrote to access MsgPack-encoded objects directly from R.

Changes in version 0.2.2 (2018-05-06)

  • Apply a static_cast from upstream to suppress a warning from g++-8.1 as requested by CRAN.

Courtesy of CRANberries, there is also a diffstat report for this release.

More information is on the RcppMsgPack page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Russ Allbery: rra-c-util 7.1

This is my collection of portability libraries and test suite support for my C projects.

The big news in this release was already discussed in the corresponding C TAP Harness release: much better support for valgrind checking. Not only has valgrind been enabled in this package to check the portability and utility layers, but the TAP helper library has been updated to support spawning remctld with valgrind for remctl tests. (The primary consumer of this support will be the remctl package, of course.) This release also includes a valgrind/logs check that is meant to run last in a test list and scans a directory of valgrind logs, reporting test failures for any that contain valgrind errors. If the test suite isn't run under valgrind, this test is just skipped.

Some other fixes in this release:

  • Fix misplaced va_end in putil_log_failure.

  • Avoid testing the client IP address in network_bind_all tests. Some hosts in the compiler farm for open source testing have a weird network configuration where localhost packets appear to come from an IP address other than 127.0.0.1, resulting in spurious failures.

  • Define UINT32_MAX on systems that don't have it.

  • Skip more Autoconf and Automake files in the all_files function of Test::RRA::Automake.

This release also contains a few other, more minor fixes, and switches the license of all of my public domain source files to the FSF all-permissive license in the name of standardizing on licenses registered with the SPDX project and vetted by actual lawyers.

You can get the latest release from the rra-c-util distribution page.

Planet Debian: Russ Allbery: C TAP Harness 4.3

This is my test harness and C library for supporting TAP (Test Anything Protocol, the same used by Perl) tests for C projects.

The big change in this release is test options: a test list passed to runtests can now include space-separated options after the name of the test to run. The two supported options are valgrind, which controls whether the test can be run under valgrind, and libtool, which further controls how the test is run under valgrind.
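
For illustration, a test list using the new options might look like this (the test names are made up; the option keywords are the ones described above):

docs/pod
portable/asprintf valgrind
client/api valgrind libtool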

The background on this is a bit interesting.

I've had Makefile targets to run test suites under valgrind for various packages for a while, using valgrind's --trace-children=yes option and running runtests under valgrind. However, this has a serious problem: valgrind tries to trace into every child, including shell scripts or Perl scripts used in the test suite. This results in valgrind logs that are basically always unuseful: lots of false positives or uninteresting small memory leaks in standard system tools.

For projects that use Libtool, the situation becomes far worse, since Libtool wraps all the executables in the build tree with shell scripts that set up an appropriate library path. valgrind traces into those shell scripts and all the various programs they run. As a result, running valgrind with --trace-children=yes on a non-trivial test suite such as remctl's could easily result in 10,000 or more valgrind log files, most of which were just noise.

As a result, while the valgrind target was there, I rarely ran it or tried to dig through the results. And that, in turn, led to security vulnerabilities like the recent one in remctl.

Test options are a more comprehensive fix. When the valgrind test option is set, C TAP Harness will look at the C_TAP_VALGRIND environment variable. If it is set, that test will be run, directly by the harness, using the command given in that environment variable. A typical setting might be:

VALGRIND_COMMAND = $(PATH_VALGRIND) --leak-check=full             \
        --log-file=$(abs_top_builddir)/tests/tmp/valgrind/log.%p  \
        --suppressions=$(abs_top_srcdir)/tests/data/valgrind.supp

and then a Makefile target like:

# Used by maintainers to run the main test suite under valgrind.
check-valgrind: $(check_PROGRAMS)
        rm -rf $(abs_top_builddir)/tests/tmp
        mkdir $(abs_top_builddir)/tests/tmp
        mkdir $(abs_top_builddir)/tests/tmp/valgrind
        C_TAP_VALGRIND="$(VALGRIND_COMMAND)"                    \
            C_TAP_LIBTOOL="$(top_builddir)/libtool"             \
            tests/runtests -l '$(abs_top_srcdir)/tests/TESTS'

(this is taken from the current version of remctl). Note that --trace-children=yes can be omitted, which avoids recursing into any small helper programs that tests run, and only tests with the valgrind option set will be run under valgrind (the rest will be run as normal). This cuts down massively on the noise. Tests that explicitly want to invoke valgrind on some other program can use the presence of C_TAP_VALGRIND in the environment to decide whether to do that and what command to use.

The C_TAP_LIBTOOL setting above is the other half of the fix. Packages that use Libtool may have tests that are Libtool-generated shell wrappers, so just using the above would still run valgrind on a shell script. But if the libtool test option is also set, C TAP Harness now knows to invoke the test with libtool --mode=execute and then the valgrind command, which causes Libtool to expand all of the shell wrappers to actual binaries first and then run valgrind only on the actual binary. The path to libtool is taken from the C_TAP_LIBTOOL environment variable.

The final piece is an additional test that scans the generated valgrind log files. That piece is part of rra-c-util, so I'll talk about it with that release.

There are two more changes in this release. First, borrowing an idea from Rust, diagnostics from test failures are now reported as the left value and the right value, instead of the seen and wanted values. This allows one to write tests without worrying about the order of arguments to the is_* functions, which turned out to be hard to remember and mostly a waste of energy to keep consistent. And second, I fixed an old bug with NULL handling in is_string that meant a NULL might compare as equal to a literal string "(null)". This test is now done properly.

The relationship between C TAP Harness and rra-c-util got a bit fuzzier in this release since I wanted to import more standard tests from rra-c-util and ended up importing the Perl test framework. This increases the amount of stuff that's in the C TAP Harness distribution that's canonically maintained in rra-c-util. I considered moving the point of canonical maintenance to C TAP Harness, but it felt rather out of scope for this package, so decided to live with it. Maybe I should just merge the two packages, but I know a lot of people who use pieces of C TAP Harness but don't care about rra-c-util, so it seems worthwhile to keep them separate.

I did replace the tests/docs/pod-t and tests/docs/pod-spelling-t tests in C TAP Harness with the rra-c-util versions in this release since I imported the Perl test framework anyway. I think most users of the old C TAP Harness versions will be able to use the rra-c-util versions without too much difficulty, although it does require importing the Perl modules from rra-c-util into a tests/tap/perl directory.

You can get the latest release from the C TAP Harness distribution page.

Planet Debian: Thorsten Alteholz: My Debian Activities in April 2018

FTP master

This month I accepted 145 packages and rejected 5 uploads. The overall number of packages that got accepted this month was 260.

Debian LTS

This was my forty-sixth month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 16.25 hours. During that time I did LTS uploads of:

    [DLA 1353-1] wireshark security update for 12 CVEs
    [DLA 1364-1] openslp-dfsg security update for one CVE
    [DLA 1367-1] slurm-llnl security update for one CVE

I also started to work on the next bunch of wireshark CVEs and I intend to upload packages for Jessie and Stretch as well.
Other packages I started are krb5 and cups.

Last but not least I did a week of frontdesk duties, where I checked lots of CVEs for their impact on Wheezy.

Other stuff

During April I did uploads of …

  • pescetti to fix a FTBFS with Java 9 due to -source/-target only
  • salliere to fix a FTBFS with Java 9 due to -source/-target only
  • libb64 to fix a FTCBFS
  • chktex to fix a FTBFS with TeX Live 2018

Thanks to all the people who sent patches!

I also finished the libosmocore transition this month by uploading the following osmocom packages to unstable, alongside fixing some bugs that our tireless QA tools detected:

Further, I uploaded osmo-fl2k just two days after its release. It is a nice piece of software that enables a USB-VGA converter to be used as a transmitter for all kinds of signals. Of course, only use it in a shielded room!

As Nicolas Mora, the upstream author of the oauth2 server glewlwyd, wanted to be more involved in Debian packaging, I sponsored some of his first packages. They are all new versions of his software:

I also uploaded a new upstream version of dateutils.

Last but not least I worked on some apcupsd bugs and I am down to 16 bugs now.

Planet Linux Australia: David Rowe: FreeDV 700D and SSB Comparison

Mark, VK5QI, has just performed an SSB versus FreeDV 700D comparison between his home in Adelaide and the Manly Warringah Radio Society WebSDR in Sydney, about 1200 km away. The band was 40m, and the channel very poor, with some slow fading. Mark used SVN revision 3581, built himself on Ubuntu, with an interleaver setting (Tools-Options menu) of 1 frame. Transmit power for SSB and FreeDV 700D was about the same.

I’m still finishing off FreeDV 700D integration and tuning the mode – but this is a very encouraging start. Thanks Mark!

Don Marti: Unlocking the hidden European mode in web ads

It would make me really happy to be able to yellow-list Google web ads in Privacy Badger. (Yellow-listed domains are not blocked, but have their cookies restricted in order to cut back on cross-site tracking.) That's because a lot of news and cultural sites use DoubleClick for Publishers and other Google services to deliver legit, context-based advertising. Unfortunately, as far as I can tell, Google mixes in-context ads with crappy, spam-like, targeted stuff. What I want is something like Doc Searls style ads: Just give me ads not based on tracking me.

Until now, there has been no such setting. There could have been, if Do Not Track (DNT) had turned out to be a thing, but no. But there is some good news. Instead of one easy-to-use DNT, sites are starting to give us harder-to-find, but still usable, settings, in order to enable GDPR-compliant ads for Europe. Here's Google's: Ads personalization settings in Google’s publisher ad tags - DoubleClick for Publishers Help.

Wait a minute? Google respects DNT now?

Sort of. GDPR-compliant terms written by Google aren't exactly the same as EFF's privacy-friendly Do Not Track (DNT) Policy, but close enough. (All these different tracking policies are reminding me of open source licenses for some reason.) The catch is that as an end user, you can't just turn on Google's European mode. You have to do some JavaScript. I think I figured out how to do this in a simple browser extension to unlock secret European status.
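
For reference, the request-level setting that help page describes boils down to something like the following (a reconstruction based on Google's GPT documentation, not code from the extension):

googletag.cmd.push(function() {
  // Ask DFP/GPT to serve non-personalized ads for this page's ad requests.
  googletag.pubads().setRequestNonPersonalizedAds(1);
});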

Google doesn't appear to have their European mode activated yet, so I added a do-nothing "European mode" to the Aloodo project, for testing. I'm not able to yellow-list Google yet, but when GDPR takes effect later this month I'll test it some more.

In the meantime, I'll keep looking for other examples of hidden European mode, and see if I can figure out how to activate them.

Planet Debian: Russ Allbery: DocKnot 1.05

DocKnot is the program that generates the top-level web pages and README files for my software packages (and eventually should take over generation of the LICENSE files, doing release management, and may eventually incorporate the tools for the macro language I use to generate my web pages).

For this release, I did a bit more research about the little badges that people have started displaying in their README files on GitHub and in a few other places (Rust's crates.io, for instance). DocKnot can now generate the Travis-CI badges, and also uses Shields.io for CPAN badges for things I upload to CPAN. I will probably add more as I work on other packages that have meaningful badges.
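
For anyone who hasn't run into these badges, they're just images linked from the README. A typical Travis-CI badge in Markdown looks something like this (OWNER/REPO is a placeholder, not necessarily the exact URL DocKnot emits):

[![Build status](https://travis-ci.org/OWNER/REPO.svg?branch=master)](https://travis-ci.org/OWNER/REPO)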

This release also moves the documentation of environment variables to set to enable more parts of the test suite to the testing section of README files. I have no idea why I put that in the requirements section originally; this should make more sense.

There are a few additions to support generating web pages for remctl, which previously wasn't using DocKnot, and a bug fix for README generation when there are additional bootstrapping instructions. (Previous versions miswrapped the resulting paragraph.)

DocKnot now requires Perl 5.24 or later, since right now I'm the only user and don't need to support anything older than Debian stable, and that way I can start using the neat new $ref->@* Perl syntax instead of the much noisier @{$ref} syntax.

You can get the latest release from the DocKnot distribution page.

Planet Debian: Norbert Preining: Debian/TeX Live 2018.20180505-1

The first big bunch of updates of TeX Live 2018. During the freeze for DVD production, several bugs have been found and fixed. In particular, compatibility of csquotes with the shiny new LaTeX release, as well as some other related fixes. That hopefully will fix most if not all build failures that were introduced with the TL2018 upload.

I guess there will be some bad bugs still lurking around, but now that regular updates of TeX Live are done again on a daily basis, they should be fixed rather soon.

Enjoy.

New packages

bath-bst, bezierplot, cascade, citeref, clrdblpg, colophon, competences, dsserif, fduthesis, includernw, kurdishlipsum, latex-via-exemplos, libertinus-otf, manyind, milsymb, modulus, musikui, stix2-otf, stix2-type1.

Updated packages

academicons, acmart, adjustbox, aleph, apnum, archaeologie, asymptote, asymptote.x86_64-linux, babel, babel-french, babel-ukrainian, beamerposter, bib2gls, biblatex-gb7714-2015, biblatex-publist, bibtexperllibs, bxjscls, caption, chngcntr, cleveref, colortbl, context-filter, context-handlecsv, context-vim, cooking-units, cslatex, csquotes, ctex, datatool, datetime2-estonian, datetime2-greek, datetime2-hebrew, datetime2-icelandic, datetime2-ukrainian, dccpaper, doclicense, exercisebank, factura, findhyph, fithesis, fonts-tlwg, gbt7714, genealogytree, glossaries-extra, graph35, gzt, handin, hecthese, hyph-utf8, hyphen-bulgarian, hyphen-german, hyphen-latin, hyphen-thai, ifmtarg, jlreq, ketcindy, koma-script-examples, l3kernel, l3packages, langsci, latex, latex-bin, latex2e-help-texinfo, latexindent, latexpand, libertinus, libertinust1math, lshort-german, ltximg, lua-check-hyphen, luamplib, luaotfload, luatexko, lwarp, lyluatex, m-tx, make4ht, marginnote, markdown, mathalfa, newtx, newunicodechar, nicematrix, novel, nwejm, pgfornament-han, pgfplots, pkgloader, platex, plex-otf, polexpr, polyglossia, pst-func, pstricks, regexpatch, reledmac, roboto, robustindex, siunitx, siunitx, skdoc, stix, t2, tetex, tex4ebook, tex4ht, texdef, texdoc, texlive-cz, texlive-sr, thuthesis, tikzsymbols, tools, univie-ling, updmap-map, uplatex, xcharter, xecjk, xellipsis, xetex, xetexko, yathesis, zhlipsum, zxjafont, zxjatype.

Planet Debian: Joey Hess: more fun with reactive-banana-automation

My house knows when people are using the wifi, and keeps the inverter on so that the satellite internet is powered up, unless the battery is too low. When nobody is using the wifi, the inverter turns off, except when it's needed to power the fridge.

Sounds a little complicated, doesn't it? The code to automate that using my reactive-banana-automation library is almost shorter than the English description, and certainly clearer.

inverterPowerChange :: Sensors t -> MomentAutomation (Behavior (Maybe PowerChange))
inverterPowerChange sensors = do
    lowpower <- lowpowerMode sensors
    fridgepowerchange <- fridgePowerChange sensors
    wifiusers <- numWifiUsers sensors
    return $ react <$> lowpower <*> fridgepowerchange <*> wifiusers
  where
    react lowpower fridgepowerchange wifiusers
            | lowpower = Just PowerOff
            | wifiusers > 0 = Just PowerOn
            | otherwise = fridgepowerchange

Of course, there are complexities under the hood, like where does numWifiUsers come from? (It uses inotify to detect changes to the DHCP leases file, and tracks when leases expire.) I'm up to 1200 lines of custom code for my house, only 170 lines of which are control code like the above.

But that code is the core, and it's where most of the bugs would be. The goal is to avoid most of the bugs by using FRP and Haskell the way I have, and the rest by testing.

For testing, I'm using doctest to embed test cases along with the FRP code. I designed reactive-banana-automation to work well with this style of testing. For example, here's how it determines when the house needs to be in low power mode, including the tests:

-- | Start by assuming we're not in low power mode, to avoid
-- entering it before batteryVoltage is available.
--
-- If batteryVoltage continues to be unavailable, enter low power mode for
-- safety.
-- 
-- >>> runner <- observeAutomation (runInverterUnless lowpowerMode) (mkSensors (pure ()))
-- >>> runner $ \sensors -> gotEvent (dhcpClients sensors) []
-- []
-- >>> runner $ \sensors -> sensorUnavailable (batteryVoltage sensors)
-- [InverterPower PowerOff]
-- >>> runner $ \sensors -> batteryVoltage sensors =: Volts 25
-- [InverterPower PowerOn]
-- >>> runner $ \sensors -> batteryVoltage sensors =: Volts 20
-- [InverterPower PowerOff]
-- >>> runner $ \sensors -> batteryVoltage sensors =: Volts 25
-- [InverterPower PowerOn]
-- >>> runner $ \sensors -> sensorUnavailable (batteryVoltage sensors)
-- [InverterPower PowerOff]
lowpowerMode :: Sensors t -> MomentAutomation (Behavior Bool)
lowpowerMode sensors = automationStepper False
    =<< fmap calc <$> getEventFrom (batteryVoltage sensors)
  where
    -- Below 24.0 (really 23.5 or so) is danger zone for lead acid.
    calc (Sensed v) = v < Volts 24.1
    calc SensorUnavailable = True

The sensor data is available over http, so I can run this controller code in test mode, on my laptop, and observe how it reacts to real-world circumstances.

joey@darkstar:~/src/homepower>./controller test
InverterPower PowerOn
FridgeRelay PowerOff

Previously: my haskell controlled offgrid fridge

,

Planet Debian: Russ Allbery: remctl 3.15

As promised in the security advisory accompanying remctl 3.14, this release of remctl (and upcoming releases of C TAP Harness and rra-c-util) implements proper valgrind testing. In the process, I found and fixed a potential data loss bug: a server-side command accepting input on standard input that exited before consuming all of its input could have its output truncated due to a logic bug in the remctld server.

This release also adds some more protocol verification (I didn't see any bugs; this just tightens things out of paranoia) and much-improved maintainer support for static analysis and compiler warning checks.

You can get the latest release from the remctl distribution page.

Planet Debian: Sune Vuorela: Modern C++ and Qt

layout->addWidget(std::make_unique<QLabel>().release());

– ’cause raw new’s are bad.
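
The release() is the point: addWidget() hands the QLabel over to the layout, which reparents it, so Qt's parent-child ownership takes care of deleting it and the unique_ptr has to give up the pointer rather than delete it a second time. The same call with the intermediate step spelled out (a sketch, not from the original post):

auto label = std::make_unique<QLabel>(QStringLiteral("hello"));
layout->addWidget(label.release());  // the layout's widget now owns the QLabel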

Rondam Ramblings: In your face, liberal haters!

The New York Times reports that California is now the world's 5th largest economy.  Only the U.S. as a whole, China, Japan and Germany are bigger.  On top of that the vast majority of that growth came from the coastal areas, where the liberals live. Meanwhile, in Kansas, the Republican experiment in stimulating economic growth by cutting taxes has gone down in screaming flames: The experiment with

,

Cryptogram: Friday Squid Blogging: US Army Developing 3D-Printable Battlefield Robot Squid

The next major war will be super weird.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Linux Australia: David Rowe: FreeDV 1600 Sample Clock Offset Bug

So I’m busy integrating FreeDV 700D into the FreeDV GUI program. The 700D modem works on larger frames (160ms) than the previous modes (e.g. 20ms for FreeDV 1600) so I need to adjust FIFO sizes.

As a reference I tried FreeDV 1600 between two laptops (one tx, one rx) and noticed it was occasionally losing frame sync, generating bit errors, and producing the occasional bloop in the audio. After a little head scratching I discovered a bug in the FreeDV 1600 FDMDV modem! Boy, is my face red.

The FDMDV modem was struggling with sample clock differences between the mod and demod. I think the bug was introduced when I did some (too) clever refactoring to reduce FDMDV memory consumption while developing the SM1000 back in 2014!

Fortunately I have a trail of unit test programs, leading back from FreeDV GUI, to the FreeDV API (freedv_tx and freedv_rx), then individual unit tests for each modem (fdmdv_mod/fdmdv_demod), and finally Octave simulation code (fdmdv.m, fdmdv_demod.m and friends) for the modem.

Octave (or an equivalent vector based scripting language like Python/numpy) is much easier to work with than C for complex DSP problems. So after a little work I reproduced the problem using the Octave version of the FDMDV modem – bit errors happening every time there was a timing jump.

The modulator sends parallel streams of symbols at about 50 baud. These symbols are output at a sample rate of 8000 Hz. Part of the demodulator's job is to estimate the best place to sample each received modem symbol; this is called timing estimation. When the tx and rx are separate, the two sample clocks are slightly different – your 8000 Hz clock will be a few Hz different to mine. This means the timing estimate is a moving target, and occasionally we need to compensate by taking a few more or a few fewer samples from the 8000 Hz sample stream.

In the plot below the Octave demodulator was fed with a signal that is transmitted at 8010 Hz instead of the nominal 8000 Hz. So the tx is sampling faster than the rx. The y axis is the timing estimate in samples, x axis time in seconds. For FreeDV 1600 there are 160 samples per symbol (50 baud at 8 kHz). The timing estimate at the rx drifts forwards until we hit a threshold, set at +/- 40 samples (quarter of a symbol). To avoid the timing estimate drifting too far, we take a one-off larger block of samples from the input, the timing takes a step backwards, then starts drifting up again.
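
As a back-of-the-envelope check on that drift rate (this arithmetic is mine, not a measurement from the plot):

    clock offset:   (8010 - 8000) / 8000 = 0.125%
    timing drift:   0.00125 × 8000 samples/s = 10 samples per second
    threshold hit:  40 samples ÷ 10 samples/s ≈ a correction every 4 seconds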

Back to the bug. After some head scratching, messing with buffer shifts, and rolling back phases I eventually fixed the problem in the Octave code. Next step is to port the code to C. I used my test framework that automatically compares a bunch of vectors (states) in the Octave code to the equivalent C code:

octave:8> system("../build_linux/unittest/tfdmdv")
sizeof FDMDV states: 40032 bytes
ans = 0
octave:9> tfdmdv
tx_bits..................: OK
tx_symbols...............: OK
tx_fdm...................: OK
pilot_lut................: OK
pilot_coeff..............: OK
pilot lpf1...............: OK
pilot lpf2...............: OK
S1.......................: OK
S2.......................: OK
foff_coarse..............: OK
foff_fine................: OK
foff.....................: OK
rxdec filter.............: OK
rx filt..................: OK
env......................: OK
rx_timing................: OK
rx_symbols...............: OK
rx bits..................: OK
sync bit.................: OK
sync.....................: OK
nin......................: OK
sig_est..................: OK
noise_est................: OK

passes: 46 fails: 0

Great! This system really lets me move fast once the Octave code is written and tested. Next step is to test the C version of the FDMDV modem using the command line. Note how I used sox to insert a sample rate offset by changing the sample rate of the raw sample stream:

build_linux/src$ ./fdmdv_get_test_bits - 30000 | ./fdmdv_mod - - | sox -t raw -r 8000 -s -2 - -t raw -r 7990 - | ./fdmdv_demod - - 14 demod_dump.txt | ./fdmdv_put_test_bits -
-----------------+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
bits 29568  errors 0  BER 0.0000

Zero errors, despite 10Hz sample clock offset. Yayyyyy. The C demodulator outputs a bunch of vectors that can be plotted with an Octave helper program:

octave:6> fdmdv_demod_c("../build_linux/src/demod_dump.txt",28000)

The FDMDV modem is integrated with Codec 2 in the FreeDV API. This can be tested using the freedv_tx/freedv_rx programs. For convenience, I generated some 60 second test files at different sample rates. Here is how I test using the freedv_rx program:

./freedv_rx 1600 ~/Desktop/ve9qrp_1600_8010.raw - | aplay -f S16

The output audio sounds good, no bloops, and by examining the freedv_rx_log.txt file I can see the demodulator didn’t lose sync. Cool.

Here is a table of the samples I used for testing:

  • No clock offset
  • Simulates Tx sample rate 10 Hz slower than Rx
  • Simulates Tx sampling 10 Hz faster than Rx

Finally, the FreeDV API is linked with the FreeDV GUI program. Here is a video of me testing different sample clock offsets using the raw files in the table above. Note there is no audio in this video as my screen recorder fights with FreeDV for use of sound cards. However the decoded FreeDV audio should be uninterrupted, there should be no re-syncs, and zero bit errors:

The fix has been checked into codec2-dev SVN rev 3556, and will make its way into FreeDV GUI 1.3, to be released in late May 2018.

Reading Further

FDMDV modem
README_fdmdv.txt
Steve Ports an OFDM modem from Octave to C, some more on the Octave/C automated test framework and porting complex DSP algorithms.
Testing a FDMDV Modem. Early blog post on the FDMDV modem with some more discussion on sample clock offsets
Timing Estimation for PSK modems, talks a little about how we generate a timing estimate

Planet Debian: Jonathan McDowell: The excellent selection of Belfast Tech conferences

Yesterday I was lucky enough to get to speak at BelTech, giving what I like to think of as a light-hearted rant entitled “10 Stupid Reasons You’re Not Using Free Software”. It’s based on various arguments I’ve heard throughout my career about why companies shouldn’t use or contribute to Free Software, and, as I’m sure you can guess, I think they’re mostly bad arguments. I only had a 20 minute slot for it, which was probably about right, and it seemed to go down fairly well. Normally the conferences I would pitch talks to would end up with me preaching to the converted, but I felt this one provided an audience who were probably already using Free software but hadn’t thought about it that much.

And that got me to thinking “Isn’t it fantastic that we have this range of events and tech groups in Belfast?”. I remember the days when the only game in town was the Belfast LUG (now on something like its 5th revival and still going strong), but these days you could spend every night of the month at a different tech event covering anything from IoT to Women Who Code to DevOps to FinTech. There’s a good tech community that’s built up, with plenty of cross over between the different groups.

An indicator of that is the number of conferences happening in the city, with many of them now regular fixtures in the annual calendar. In addition to BelTech I’ve already attended BelFOSS and Women Techmakers this year. Product Camp Belfast is happening today. NIDevConf is just over a month away (I’ll miss this year due to another commitment, but thoroughly enjoyed last year). WordCamp Belfast isn’t the sort of thing I’d normally notice, but the opportunity to see Heather Burns speak on the GDPR is really tempting. Asking around about what else is happening turned up B-Sides, Big Data Belfast and DigitalDNA.

How did we end up with such a vibrant mix of events (and no doubt many more I haven’t noticed)? They might not be major conferences that pull in an international audience, but in some ways I find that more heartening - there’s enough activity in the local tech scene to make this number of events make sense. And I think that’s pretty cool.

Cryptogram: Detecting Laptop Tampering

Micah Lee ran a two-year experiment designed to detect whether or not his laptop was ever tampered with. The results are inconclusive, but demonstrate how difficult it can be to detect laptop tampering.

Worse Than Failure: Error'd: Version-itis

"No thanks, I'm holding out for version greater than or equal to 3.6 before upgrading," writes Geoff G.

 

"Looks like Twilio sent me John Doe's receipt by mistake," wrote Charles L.

 

"Little do they know that I went back in time and submitted my resume via punch card!" Jim M. writes.

 

Richard S. wrote, "I went to request a password reset from an old site that is sending me birthday emails, but it looks like the reCAPTCHA is no longer available and the site maintainers have yet to notice."

 

"It's nice to see that this new Ultra Speed Plus™ CD burner lives up to its name, but honestly, I'm a bit scared to try some of these," April K. writes.

 

"Sometimes, like Samsung's website, you have to accept that it's just ok to fail sometimes," writes Alessandro L.

 

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet Debian: Daniel Pocock: GoFundMe: errors and bait-and-switch

Yesterday I set up a crowdfunding campaign to purchase some equipment for the ham radio demo at OSCAL.

It was the first time I tried crowdfunding and the financial goal didn't seem very big (a good quality AGM battery might only need EUR 250) so I only spent a little time looking at some of the common crowdfunding sites and decided to try GoFundMe.

While the campaign setup process initially appeared quite easy, it quickly ran into trouble after the first donation came in. As I started setting up bank account details to receive the money, errors started appearing:

I tried to contact support and filled in the form, typing a message about the problem. Instead of sending my message to support, it started trying to show me long lists of useless documents. Finally, after clicking through several screens of unrelated nonsense, another contact form appeared and the message I had originally typed had been lost in their broken help system and I had to type another one. It makes you wonder, if you can't even rely on a message you type in the contact form being transmitted accurately, how can you rely on them to forward the money accurately?

When I finally got a reply from their support department, it smelled more like a phishing attack, asking me to give them more personal information and email them a high resolution image of my passport.

If that was really necessary, why didn't they ask for it before the campaign went live? I felt like they were sucking people in to get money from their friends and then, after the campaign gains momentum, holding those beneficiaries to ransom and expecting them to grovel for the money.

When a business plays bait-and-switch like this and when their web site appears to be broken in more ways than one (both the errors and the broken contact form), I want nothing to do with them. I removed the GoFundMe links from my blog post and replaced them with direct links to Paypal. Not only does this mean I avoid the absurdity of emailing copies of my passport, but it also cuts out the five percent fee charged by GoFundMe, so more money reaches the intended purpose.

Another observation about this experience is the way GoFundMe encourages people to share the link to their own page about the campaign and not the link to the blog post. Fortunately in most communication I had with people about the campaign I gave them a direct link to my blog post and this makes it easier for me to change the provider handling the money by simply removing links from my blog to GoFundMe.

While the funding goal hasn't been reached yet, my other goal, learning a little bit about the workings of crowdfunding sites, has been helped along by this experience. Before trying to run something like this again I'll look a little harder for a self-hosted solution that I can fully run through my blog.

I've told GoFundMe to immediately refund all money collected through their site so donors can send money directly through the Paypal donate link on my blog. If you would like to see the ham radio station go ahead at OSCAL, please donate; I can't take my own batteries with me by air.

Planet Linux Australia: Simon Lyall: Audiobooks – April 2018

Viking Britain: An Exploration by Thomas Williams

Pretty straightforward. Tells us the up-to-date research (no Winged Helmets 😢) and is easy to follow (easier if you have a map of the UK). 7/10

Contact by Carl Sagan

I’d forgotten how different it was from the movie in places. A few extra characters and plot twists. Many more details and explanations of the science. 8/10

The Path Between the Seas: The Creation of the Panama Canal, 1870-1914 by David McCullough

My monthly McCullough book. Great as usual. Good picture of the project and people. 8/10

Winter World: The Ingenuity of Animal Survival by Bernd Heinrich

As per the title this spends much of the time on [varied strategies for] Winter adaptation vs Summer World’s more general coverage. A great listen 8/10

A Man on the Moon: The Voyages of the Apollo Astronauts by Andrew Chaikin

Great overview of the Apollo missions. The Author interviewed almost all the astronauts. Lots of details about the missions. Excellent 9/10

Walkaway by Cory Doctorow

Near future Sci Fi. Similar feel to some of his other books like Makers. Switches between characters & audiobook switches narrators to match. Fastforward the Sex Scenes 💤. Mostly works 7/10

The Neanderthals Rediscovered: How Modern Science Is Rewriting Their Story by Michael A. Morse

Pretty much what the subtitle advertises. Covers discoveries from the last 20 years which make other books out of date. Tries to be Neanderthals-only. 7/10

The Great Quake: How the Biggest Earthquake in North America Changed Our Understanding of the Planet by Henry Fountain

Straightforward story of the 1964 Alaska Earthquake. Follows half a dozen characters & concentrates on worst damaged areas. 7/10

Share

Rondam Ramblings: I don't know where I'm a gonna go when the volcano blows

Hawaii's Kilauea volcano is erupting.  So is one in Vanuatu.  And there is increased activity in Yellowstone.  Hang on to your hats, folks, Jesus's return must be imminent. (In case you didn't know, the title is a line from a Jimmy Buffet song.)

Planet Debian: Norbert Preining: Onyx Boox Note 10.3 – first impressions

I recently got my hands on a new gadget, the Onyx Boox Note. I have now owned a Kindle Paperwhite (2nd gen), a Kobo Glo, a Kobo GloHD, and now the Onyx Boox Note. There were two prime reasons for me getting this device: (i) the ability to mark up pdfs and epubs with comments (something I need for research, review, checking, …), and (ii) the great pdf readability (automatic crop support), which is of course also related to the bigger screen.

The Note main screen gives the last read book and some others from the library, plus direct access to some apps. One can scroll through the most recently read books at the top by swiping right. I would have preferred a bit smaller icons on the big screen to see more of the books, maybe in a future firmware version.

Not too many applications are available, but the Play Store is there and one can get most programs. Unfortunately it seems that k9-mail – my main mail program on Android – does not support the Note.

Reading epubs is quite normal an experience. Nothing to complain here. Usual settings etc.

Where the Note is great is PDFs, which are a huge pain on my smaller devices. Neither the Kindle nor the Kobo have decent PDF support in my opinion, while the Note allows for auto-cropping (as seen in the image below), as well as manual cropping and several other features. Simply great.

Another wonderful feature is that one can scribble directly in the pdf or epub, and the notes will be saved. In addition to that, there is also a commenting mode in landscape format with the document on the left and the notes on the right, see below. Very useful, both of the modes.

Besides adding notes to pdfs and epubs, one can also have a notebook. Here is the Notes main interface screen, which allows selecting previous notes, adding new, and some more operations (I still don’t know the function of most icons).

Here is a simple example of scribbling around. Surprisingly good. I will see how much my normal paper note taking will be replaced by this.

Note taking and markup can of course be done with the finger, but the pen that comes with the device is much better. The sleeve that comes with the device has a holder for the pen, so it could be around wherever you go.

Finally some hardware specs from one of the hardware info programs.


I have used the Note now for two weeks for reading, pdf markup, and a bit of note taking. For now I have a very good impression, good battery run time, durable feeling. What I am missing is a backlight for reading in the night. I guess with more usage time I will find more points to criticize, but for now I think it was an excellent purchase.

Planet Debian: Junichi Uekawa: Seems like my raspberry pi root filesystems break after about 2 years.

Seems like my raspberry pi root filesystems break after about 2 years. Presumably because I have everything including /var/log there. Fails to write. Is there a good way to monitor wear like SMART for HDD ? Quick search didn't give me much.

,

Planet Linux Australia: Michael Still: How to make a privileged call with oslo privsep

Share

Once you’ve added oslo privsep to your project, how do you make a privileged call? It’s actually really easy to do. In this post I will assume you already have privsep running for your project, which at the time of writing limits you to OpenStack Nova in the OpenStack universe.

The first step is to write the code that will run with escalated permissions. In Nova, we have chosen to only have one set of escalated permissions, so it’s easy to decide which set to use. I’ll document how we reached that decision and alternative approaches in another post.

In Nova, all code that runs with escalated permissions is in the nova/privsep directory, which is a pattern I’d like to see repeated in other projects. This is partially because privsep maintains a whitelist of methods that are allowed to be run this way, but it’s also because it makes it very obvious to callers that the code being called is special in some way.

Let’s assume that we’re going to add a simple method which manipulates the filesystem of a hypervisor node as root. We’d write a method like this in a file inside nova/privsep:

import nova.privsep

...

@nova.privsep.sys_admin_pctxt.entrypoint
def update_motd(message):
    with open('/etc/motd', 'w') as f:
        f.write(message)

This method updates /etc/motd, which is the text which is displayed when a user interactively logs into the hypervisor node. “motd” stands for “message of the day” by the way. Here we just pass a new message of the day which clobbers the old value in the file.

The important thing is that entrypoint decorator at the start of the method. That’s how privsep decides to run this method with escalated permissions, and decides what permissions to use. In Nova at the moment we only have one set of escalated permissions, which we called sys_admin_pctxt because we’re artists. I’ll discuss in a later post how we came to that decision and what the other options were.

We can then call this method from anywhere else in Nova like this:

import nova.privsep.motd

...

nova.privsep.motd.update_motd('This node is currently idle')

Note that we do imports for privsep code slightly differently. We always import the entire path, instead of creating a shortcut to just the module we’re using. In other words, we don’t do:

from nova.privsep import motd

...

motd.update_motd('This node is a banana')

The above code would work, but is frowned on because it is less obvious here that the update_motd() method runs with escalated permissions — you’d have to go and read the imports to tell that.

That’s really all there is to it. The only other thing to mention is that there is a bit of a wart — code with escalated permissions can only use Nova code that is within the privsep directory. That’s been a problem when we’ve wanted to use a utility method from outside that path inside escalated code. The restriction happens for good reasons, so instead what we do in this case is move the utility into the privsep directory and fix up all the other callers to call the new location. It’s not perfect, but it’s what we have for now.

There are some simple review criteria that should be used to assess a patch which implements new code that uses privsep in OpenStack Nova. They are:

  • Don’t use imports which create aliases. Use the “import nova.privsep.motd” form instead.
  • Keep methods with escalated permissions as simple as possible. Remember that these things are dangerous and should be as easy to understand as possible.
  • Calculate paths to manipulate inside the escalated method — so, don’t let someone pass in a full path and the contents to write to that file as root; instead let them pass in the name of the network interface or whatever it is you are manipulating and then calculate the path from there. That will make it harder for callers to use your code to clobber random files on the system (see the sketch after this list).
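
Here is a purely hypothetical sketch of that last point (none of this is code from Nova):

import os

import nova.privsep


# Risky: the caller picks an arbitrary path and arbitrary contents to write
# as root, so any bug in a caller becomes a way to clobber files on the host.
@nova.privsep.sys_admin_pctxt.entrypoint
def write_file(path, contents):
    with open(path, 'w') as f:
        f.write(contents)


# Better: the caller only names the interface; the escalated code builds the
# path itself and sanity checks the value before writing it.
@nova.privsep.sys_admin_pctxt.entrypoint
def set_interface_mtu(interface, mtu):
    path = '/sys/class/net/%s/mtu' % os.path.basename(interface)
    with open(path, 'w') as f:
        f.write(str(int(mtu)))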

Adding new code with escalated permissions is really easy in Nova now, and much more secure and faster than it was when we only had sudo and root command lines to do these sorts of things. Let me know if you have any questions.

Share

The post How to make a privileged call with oslo privsep appeared first on Made by Mikal.

Krebs on Security: Twitter to All Users: Change Your Password Now!

Twitter just asked all 300+ million users to reset their passwords, citing the exposure of user passwords via a bug that stored passwords in plain text — without protecting them with any sort of encryption technology that would mask a Twitter user’s true password. The social media giant says it has fixed the bug and that so far its investigation hasn’t turned up any signs of a breach or that anyone misused the information. But if you have a Twitter account, please change your account password now.

Or if you don’t trust links in blogs like this (I get it) go to Twitter.com and change it from there. And then come back and read the rest of this. We’ll wait.

In a post to its company blog this afternoon, Twitter CTO Parag Agrawal wrote:

“When you set a password for your Twitter account, we use technology that masks it so no one at the company can see it. We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone.

A message posted this afternoon (and still present as a pop-up) warns all users to change their passwords.

“Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password. You can change your Twitter password anytime by going to the password settings page.”

Agrawal explains that Twitter normally masks user passwords through a state-of-the-art password hashing function called “bcrypt,” which replaces the user’s password with a random set of numbers and letters that are stored in Twitter’s system.

“This allows our systems to validate your account credentials without revealing your password,” said Agrawal, who says the technology they’re using to mask user passwords is the industry standard.

“Due to a bug, passwords were written to an internal log before completing the hashing process,” he continued. “We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”
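
To make “before completing the hashing process” concrete, here is a minimal sketch using the Python bcrypt library (illustrative only; this is not Twitter’s actual code or stack):

import bcrypt

password = b"correct horse battery staple"

# What should reach storage and logs: only the salted bcrypt hash.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())

# The bug described above is the equivalent of logging `password` itself
# before this step runs -- hashing afterwards can't protect what was
# already written to the log.
assert bcrypt.checkpw(password, hashed)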

Agrawal wrote that while Twitter has no reason to believe password information ever left Twitter’s systems or was misused by anyone, the company is still urging all Twitter users to reset their passwords NOW.

A letter to all Twitter users posted by Twitter CTO Parag Agrawal

Twitter advises:
-Change your password on Twitter and on any other service where you may have used the same password.
-Use a strong password that you don’t reuse on other websites.
-Enable login verification, also known as two factor authentication. This is the single best action you can take to increase your account security.
-Use a password manager to make sure you’re using strong, unique passwords everywhere.

This may be much ado about nothing disclosed out of an abundance of caution, or further investigation may reveal different findings. It doesn’t matter for right now: If you’re a Twitter user and if you didn’t take my advice to go change your password yet, go do it now! That is, if you can.

Twitter.com seems responsive now, but for some period of time Thursday afternoon Twitter had problems displaying many Twitter profiles, or even its homepage. Just a few moments ago, I tried to visit the Twitter CTO’s profile page and got this (ditto for Twitter.com):

What KrebsOnSecurity and other Twitter users got when we tried to visit twitter.com and the Twitter CTO’s profile page late in the afternoon ET on May 3, 2018.

If for some reason you can’t reach Twitter.com, try again soon. Put it on your to-do list or calendar for an hour from now. Seriously, do it now or very soon.

And please don’t use a password that you have used for any other account you use online, either in the past or in the present. A non-comprehensive list (note to self) of some password tips are here.

I have sent some more specific questions about this incident in to Twitter. More updates as available.

Update, 8:04 p.m. ET: Went to reset my password at Twitter and it said my new password was strong, but when I submitted it I was led to a dead page. But after logging in again at twitter.com the new password worked (and the old didn’t anymore). Then it prompted me to enter one-time code from app (you do have 2-factor set up on Twitter, right?) Password successfully changed!

Planet Debian: Benjamin Mako Hill: Climbing Mount Rainier

Mount Rainier is an enormous glaciated volcano in Washington state. It’s 4,392 meters tall (14,410 ft) and extraordinarily prominent. The mountain is 87 km (54 mi) away from Seattle. On clear days, it dominates the skyline.

Drumheller Fountain and Mt. Rainier on the University of Washington Campus (Photo by Frank Fujimoto)

Rainier’s presence has shaped the layout and structure of Seattle. Important roads are built to line up with it. The buildings on the University of Washington’s campus, where I work, are laid out to frame it along the central promenade. People in Seattle typically refer to Rainier simply as “the mountain.” It is common to hear Seattleites ask “is the mountain out?”

Having grown up in Seattle, I have a deep emotional connection to the mountain that’s difficult to explain to people who aren’t from here. I’ve seen Rainier thousands of times and every single time it takes my breath away. Every single day when I bike to work, I stop along UW’s “Rainier Vista” and look back to see if the mountain is out. If it is, I always—even if I’m running late for a meeting—stop for a moment to look at it. When I lived elsewhere and would fly to visit Seattle, seeing Rainier above the clouds from the plane was the moment that I felt like I was home.

Given this connection, I’ve always been interested in climbing Mt. Rainier. Doing so typically takes at least a couple of days and is difficult. About half of the people who attempt it fail to reach the top. For me, climbing Rainier required an enormous amount of training and gear because, until recently, I had no experience with mountaineering. I’m not particularly interested in climbing mountains in general. I am interested in Rainier.

On Tuesday, Mika and I made our first climbing attempt and we both successfully made it to the summit. Due to the -15°C (5°F) temperatures and 88 kph (55 mph) winds at the top, I couldn’t get a picture there. But I feel like I’ve built a deeper connection with an old friend.


Other than the picture from UW campus, photos were all from my climb and taken by (in order): Jennifer Marie, Jonathan Neubauer, Mika Matsuzaki, Jonathan Neubauer, Jonathan Neubauer, Mika Matsuzaki, and Jake Holthaus.

Rondam RamblingsA quantum mechanics puzzle

Time to take a break from politics and sociology and geek out about quantum mechanics for a while. Consider a riff on a Michelson-style interferometer that looks like this: A source of laser light shines on a half-silvered mirror angled at 45 degrees (the grey rectangle).  This splits the beam in two.  The two beams are in actuality the same color as the original, but I've drawn them in

Planet DebianSilva Arapi: Digital Born Media Carnival July 2017

As described on their website, the Digital Born Media Carnival was a gathering of hundreds of online media representatives, information explorers and digital rights enthusiasts. The event took place on 14 – 18 July in Kotor, Montenegro. I found out about it when one of the members of Open Labs Hackerspace shared the news on our forum. Though I struggled over whether to attend because of a very busy period at work and at the university, the whole thing sounded very interesting and intriguing, so I decided to join the group of people who were planning to go and to apply with a workshop session too. No regrets at all! This turned out to be one of the greatest events I’ve attended so far and had a great impact on what I decided to do next regarding my work as a hacktivist and as a digital rights enthusiast.


The organizers of the Carnival had announced on the website that they were looking for online media representatives, journalists, bloggers, content creators, human rights defenders, hacktivists, new media startups and so on. As a hacktivist, I found myself willing to join and learn more about some topics which had been very intriguing to me for a while, and I also saw this as an opportunity to meet other people with interests in common with mine.

I applied with a workshop where I was going to introduce some simple tools for people to better preserve their privacy online. The session was accepted and I was invited to lead, together with Andrej Petrovski, the sessions of the Digital Security track, located in the sailing club “Lahor”. I held my workshop there on Saturday late in the morning and I really enjoyed it. Most of the attendees were journalists or people without a technical background, and they showed a lot of interest, asked me many questions and shared some stories. I also received very good feedback on the workshop, which gave me some really good vibes since this was the first time I had spoken on cyber security at an important event of this kind, as DBMC’17 was.

I spent the other days of the Carnival attending different workshops and talks, meeting new people, discussing with friends and enjoying the sun. We would go to the beach in the afternoon and had a very cool drone photo shoot 😉

DBMC drone photo shooting – Kotor, Montenegro

This was great work by the SHARE Foundation, and hopefully there will be other events like it in the near future; I would totally recommend that people attend! If you are new to the topics discussed there, this is a great way to start. If you have been in the field for a while, this is the place to meet other professionals like you. If you are looking for an event which you can also combine with some days of vacation while staying in touch with causes you care about, this would once again be the place to go.

Planet DebianDaniel Pocock: Turning a dictator's pyramid into a ham radio station

(Update: due to concerns about GoFundMe, I changed the links in this blog post so people can donate directly through PayPal. Anybody who tried to donate through GoFundMe should be receiving a refund.)

I've launched a crowdfunding campaign to help get more equipment for a bigger and better ham radio demo at OSCAL (19-20 May, Tirana). Please donate if you would like to see this go ahead. Just EUR 250 would help buy a nice AGM battery - if 25 people donate EUR 10 each, we can buy one of those.

You can help turn the pyramid of Albania's former communist dictator into a ham radio station for OSCAL 2018 on 19-20 May 2018. This will be a prominent demonstration of ham radio in the city center of Tirana, Albania.

Under the rule of Enver Hoxha, Albanians were isolated from the outside world and used secret antennas to receive banned television transmissions from Italy. Now we have the opportunity to run a ham station and communicate with the whole world from the very pyramid where Hoxha intended to be buried after his death.

Donations will help buy ham and SDR equipment for communities in Albania and Kosovo and assist hams from neighbouring countries to visit the conference. We would like to purchase deep-cycle batteries, 3-stage chargers, 50 ohm coaxial cable, QSL cards, PowerPole connectors, RTL-SDR dongles, up-convertors (Ham-it-up), baluns, egg insulators and portable masts for mounting antennas at OSCAL and future events.

The station is co-ordinated by Daniel Pocock VK3TQR from the Debian Project's ham radio team.

Donations of equipment and volunteers are also very welcome. Please contact Daniel directly if you would like to participate.

Any donations in excess of requirements will be transferred to one or more of the hackerspaces, radio clubs and non-profit organizations supporting education and leadership opportunities for young people in the Balkans. Any equipment purchased will also remain in the region for community use.

Please click here to donate if you would like to help this project go ahead. Without your contribution we are not sure that we will have essential items like the deep-cycle batteries we need to run ham radio transmitters.

Google AdsenseUpdated impressions metrics for AdSense

Reporting plays a key role in the AdSense experience. In fact, two-thirds of our partners consider it the most important feature within AdSense. Helping partners understand and improve their performance metrics is a top priority. Today, we’re announcing updates to how we report AdSense impressions.

The Interactive Advertising Bureau (IAB) and the Media Rating Council (MRC), in partnership with other industry bodies such as the Mobile Marketing Association (MMA), periodically review and update industry standards for impression measurement. They recommend guidelines to standardize how impressions are counted across formats and platforms.

What’s changing? 
Over the last year, to remain consistent with these standards, we have transitioned our ad serving platforms from served impressions to downloaded impressions. Served impressions are counted at the point that we find an ad for an ad request on the server. Downloaded impressions are counted for each ad request once at least one of the returned ads has begun to load on the user’s device. Starting today, publishers will see updated metrics in AdSense that reflect this change.
How will it impact partners?
Switching to counting impressions on download helps to improve viewability rates and consistency across measurement providers, and aligns the impression metric better with advertiser value. For example, if an ad failed to download or if the user closed the tab before it arrived, it will no longer be counted as an impression. As a result, some AdSense users might see decreases in their impression counts, and corresponding improvements in their impression RPM, CTR, and other impression-based metrics. Your earnings, however, should not be impacted.

We will continue to review and update our impression counting technology as new industry standards are defined and implemented.

For more information please visit our Help Center.

Posted by:
Andrew Gildfind, Product Manager

Cory DoctorowAnnouncing “Petard,” a new science fiction story reading on my podcast

Here’s the first part of my reading (MP3) of Petard, a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Planet DebianJulien Danjou: A simple filtering syntax tree in Python

A simple filtering syntax tree in Python

Working on various pieces of software these last years, I noticed that there's always a feature that requires implementing some DSL (domain-specific language).

The problem with a DSL is that it is never the road you want to go down. I remember how fascinating creating my first DSL was: after using programming languages for years, I was finally designing my own tiny language!

A new language that my users would have to learn and master. Oh, it had nothing new; it was a subset of something, inspired by my years of C, Perl or Python, who knows. And that's the terrible part about DSLs: they are a marvelous tradeoff between the power that they give to users, allowing them to define their needs precisely, and the cumbersomeness of learning a language that will be useful in only one specific situation.

In this blog post, I would like to introduce a very unsophisticated way of implementing the syntax tree that could be used as a basis for a DSL. The goal of that syntax tree will be filtering. The problem it will solve is the following: having a piece of data, we want the user to tell us if the data matches their conditions or not.

To give a concrete example: a machine wants to grant the user the ability to filter the beans that it should keep. What the machine passes to the filter is the size of the current bean, and the filter should return either true or false, based on the condition defined by the user: for example, only keep beans that are between 1 and 2 centimeters or between 4 and 6 centimeters.

The number of conditions that the users can define could be quite considerable, and we want to provide at least a basic set of predicate operators: equal, greater than and less than. We also want the user to be able to combine those, so we'll add the logical operators or and and.

A set of conditions can be seen as a tree, where the leaves are predicates (and therefore have no children) and the inner nodes are logical operators (which do have children). For example, the propositional logic formula φ1 ∨ (φ2 ∨ φ3) can be represented as a tree like this:

A simple filtering syntax tree in Python

Starting with this in mind, it appears that the natural solution is going to be recursive: handle the predicates as terminals, and if the node is a logical operator, recurse over its children. Since we are working in Python, we're going to use Python itself to evaluate our syntax tree.

The simplest way to write a tree in Python is going to be using dictionaries. A dictionary will represent one node and will have only one key and one value: the key will be the name of the operator (equal, greater than, or, and…) and the value will be the argument of this operator if it is a predicate, or a list of children (as dictionaries) if it is a logical operator.

For example, to filter our bean, we would create a tree such as:

{"or": [
  {"and": [
    {"ge": 1},
    {"le": 2},
  ]},
  {"and": [
    {"ge": 4},
    {"le": 6},
  ]},
]}

The goal here is to walk through the tree, evaluate each of its leaves, and return the final result: if we passed 5 to this filter, it would return True, and if we passed 10, it would return False.

Here's how we could implement a very shallow filter that only handles predicates (for now):

import operator

class InvalidQuery(Exception):
    pass

class Filter(object):
    binary_operators = {
        "eq": operator.eq,
        "gt": operator.gt,
        "ge": operator.ge,
        "lt": operator.lt,
        "le": operator.le,
    }

    def __init__(self, tree):
        # Parse the tree and store the evaluator
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        # Call the evaluator with the value
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            # Pick the first item of the dictionary.
            # If the dictionary has multiple keys/values
            # the first one (= random) will be picked.
            # The key is the operator name (e.g. "eq")
            # and the value is the argument for it
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            # Lookup the operator name
            op = self.binary_operators[operator]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % operator)
        # Return a function (lambda) that takes
        # the filtered value as argument and returns
        # the result of the predicate evaluation
        return lambda value: op(value, nodes)

You can use this Filter class by passing a predicate such as {"eq": 4}:

>>> f = Filter({"eq": 4})
>>> f(2)
False
>>> f(4)
True

This Filter class works but is quite limited, as we did not provide any logical operators. Here's a more complete implementation that adds support for the logical operators and and or:


import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        u"=": operator.eq,
        u"==": operator.eq,
        u"eq": operator.eq,

        u"<": operator.lt,
        u"lt": operator.lt,

        u">": operator.gt,
        u"gt": operator.gt,

        u"<=": operator.le,
        u"≤": operator.le,
        u"le": operator.le,

        u">=": operator.ge,
        u"≥": operator.ge,
        u"ge": operator.ge,

        u"!=": operator.ne,
        u"≠": operator.ne,
        u"ne": operator.ne,
    }

    multiple_operators = {
        u"or": any,
        u"∨": any,
        u"and": all,
        u"∧": all,
    }

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            op = self.multiple_operators[operator]
        except KeyError:
            try:
                op = self.binary_operators[operator]
            except KeyError:
                raise InvalidQuery("Unknown operator %s" % operator)
            return lambda value: op(value, nodes)
        # Iterate over every item in the list of the value linked
        # to the logical operator, and compile it down to its own
        # evaluator.
        elements = [self.build_evaluator(node) for node in nodes]
        return lambda value: op((e(value) for e in elements))

To support the and and or operators, we leverage the all and any built-in Python functions. They are called with a generator argument that evaluates each one of the sub-evaluators, which does the trick.
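
For instance, here is a quick interactive illustration of the pattern (my own, not from the implementation above): all and any each receive a single iterable, in this case a generator expression that runs every compiled sub-evaluator against the value:

>>> checks = [lambda v: v >= 1, lambda v: v <= 2]
>>> all(check(5) for check in checks)
False
>>> any(check(5) for check in checks)
True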

Unicode is the new sexy, so I've also added support for Unicode symbols.

And it is now possible to implement our full example:

>>> f = Filter(
...     {"∨": [
...         {"∧": [
...             {"≥": 1},
...             {"≤": 2},
...         ]},
...         {"∧": [
...             {"≥": 4},
...             {"≤": 6},
...         ]},
...     ]})
>>> f(5)
True
>>> f(8)
False
>>> f(1)
True

As an exercise, you could try to add the not operator, which deserves its own category as it is a unary operator!
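
If you want a hint, here is one rough sketch of a possible approach (my own illustration, not part of the original post; the NotFilter name is made up). It builds on the Filter class above: keep a separate table of unary operators, compile their single sub-tree, and negate its result:

import operator

class NotFilter(Filter):
    # Unary operators take a single sub-tree as their value,
    # rather than a literal argument or a list of children.
    unary_operators = {
        u"not": operator.not_,
        u"¬": operator.not_,
    }

    def build_evaluator(self, tree):
        try:
            op_name, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        if op_name in self.unary_operators:
            op = self.unary_operators[op_name]
            # Compile the single sub-tree and negate whatever it returns.
            sub = self.build_evaluator(nodes)
            return lambda value: op(sub(value))
        # Everything else is handled by the Filter class above.
        return super(NotFilter, self).build_evaluator(tree)

With that in place:

>>> f = NotFilter({"not": {"eq": 4}})
>>> f(4)
False
>>> f(2)
True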

In the next blog post, we will see how to improve that filter with more features, and how to implement a domain-specific language on top of it, to make humans happy when writing the filter!


Hole and Henni – François Charlier, 2018
In this drawing, the artist represents the deepness of functional programming and how its horse power can help you escape many dark situations.

Mark ShuttleworthScam alert

Am writing briefly to say that I believe a scam or pyramid scheme is currently using my name fraudulently in South Africa. I am not going to link to the websites in question here, but if you are being pitched a make-money-fast story that refers to me and crypto-currency, you are most likely being targeted by fraudsters.

CryptogramLC4: Another Pen-and-Paper Cipher

Interesting symmetric cipher: LC4:

Abstract: ElsieFour (LC4) is a low-tech cipher that can be computed by hand; but unlike many historical ciphers, LC4 is designed to be hard to break. LC4 is intended for encrypted communication between humans only, and therefore it encrypts and decrypts plaintexts and ciphertexts consisting only of the English letters A through Z plus a few other characters. LC4 uses a nonce in addition to the secret key, and requires that different messages use unique nonces. LC4 performs authenticated encryption, and optional header data can be included in the authentication. This paper defines the LC4 encryption and decryption algorithms, analyzes LC4's security, and describes a simple appliance for computing LC4 by hand.

Almost two decades ago I designed Solitaire, a pen-and-paper cipher that uses a deck of playing cards to store the cipher's state. LC4, by contrast, uses specialized tiles. This gives the cipher designer more options, but it can be incriminating in a way that regular playing cards are not.

Still, I like seeing more designs like this.

Hacker News thread.

Worse Than FailureCodeSOD: The Same Date

Oh, dates. Oh, dates in Java. They’re a bit of a dangerous mess, at least prior to Java 8. That’s why Java 8 created its own date-time libraries, and why JodaTime was the gold standard in Java date handling for many years.

But it doesn’t really matter what date handling you do if you’re TRWTF. An Anonymous submitter passed along this method, which is meant to set the start and end date of a search range, based on a number of days:

private void setRange(int days){
        DateFormat df = new SimpleDateFormat("yyyy-MM-dd")
        Date d = new Date();
        Calendar c = Calendar.getInstance()
        c.setTime(d);

        Date start =  c.getTime();

        if(days==-1){
                c.add(Calendar.DAY_OF_MONTH, -1);
                assertThat(c.getTime()).isNotEqualTo(start)
        }
        else if(days==-7){
                c.add(Calendar.DAY_OF_MONTH, -7)
                assertThat(c.getTime()).isNotEqualTo(start)
        }
        else if (days==-30){
                c.add(Calendar.DAY_OF_MONTH, -30)
                assertThat(c.getTime()).isNotEqualTo(start)
        }
        else if (days==-365){
                c.add(Calendar.DAY_OF_MONTH, -365)
                assertThat(c.getTime()).isNotEqualTo(start)
        }

        from = df.format(start).toString()+"T07:00:00.000Z"
        to = df.format(d).toString()+"T07:00:00.000Z"
}

Right off the bat, days only has a handful of valid values: a day, a week, a month(ish) or a year(ish). I’m sure passing it as an int would never cause any confusion. The fact that they don’t quite grasp what variables are for is a nice touch. I’m also quite fond of how they declare a date format at the top, but then also want to append a hard-coded timezone to the format, which again, I’m sure will never cause any confusion or issues. The assertThat calls check that the Calendar.add method does what it’s documented to do, making those both pointless and stupid.

But that’s all small stuff. The real magic is that they never actually use the calendar after adding/subtracting dates. They obviously meant to include d = c.getTime() someplace, but forgot. Then, without actually testing the code (they have so many asserts, why would they need to test?) they checked it in. It wasn’t until QA was checking the prerelease build that anyone noticed, “Hey, filtering by dates doesn’t work,” and an investigation revealed that from and to always had the same value.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianNeil Williams: Upgrading the home server rack

My original home server rack is being upgraded to use more ARM machines as the infrastructure of the lab itself. I've also moved house, so there is more room for stuff and kit. This has allowed space for a genuine machine room. I will be using that to host test devices which do not need manual intervention despite repeated testing. (I'll also have the more noisy / brightly illuminated devices in the machine room.) The more complex devices will sit on shelves in the office upstairs. (The work to put the office upstairs was a major undertaking involving my friends Steve and Andy - embedding ethernet cables into the walls of four rooms in the new house. Once that was done, the existing ethernet cable into the kitchen could be fixed (Steve) and then connected to my new Ubiquiti AP (a present from Steve and Andy).)

Before I moved house, I found that the wall mounted 9U communications rack was too confined once there were a few devices in use. A lot of test devices now need many cables each. (Power, ethernet, serial, second serial, USB OTG, and then add a relay board with its own power and cables onto the DUT....)

Devices like beaglebone-black, cubietruck and other U-Boot devices will go downstairs, albeit in a larger Dell 24U rack purchased from Vince who has moved to a larger rack in his garage. Vince also had a gigabit 16 port switch available which will replace the Netgear GS108 8-port Gigabit Ethernet Unmanaged Switch downstairs.

I am currently still using the same microserver to run various other services around the house (firewall, file server etc.): HP 704941-421 ProLiant Micro Server

I've now repurposed a reconditioned Dell Compact Form Factor desktop box to be my main desktop machine in my office. This was formerly my main development dispatcher, and the desktop box was chosen explicitly to get more USB host controllers on the motherboard than are typically available with an x86 server. There have been concerns that this could be causing bottlenecks when running multiple test jobs which all try to transfer several hundred megabytes of files over USB-OTG at the same time.

I've now added a SynQuacer Edge ARM64 Server to run a LAVA dispatcher in the office, controlling several of the more complex devices to test in LAVA - Hikey 620, HiKey 960 and Dragonboard 410c via a Cambrionix PP15s to provide switchable USB support to enable USB network dongles attached to the USB OTG port which is also used for file deployment during test jobs. There have been no signs of USB bottlenecks at this stage.

This arm64 machine then supports running test jobs on the development server used by the LAVA software team as azrael.codehelp. It runs headless from the supplied desktop tower case. I needed to use a PCIe network card from TPlink to get the device operating but this limitation should be fixed with new firmware. (I haven't had time to upgrade the firmware on that machine yet, still got the rest of the office to kit out and the rack to build.) The development server itself is an ARM64 virtual machine, provided by the Linaro developer cloud and is used with a range of other machines to test the LAVA codebase, doing functional testing.

The new dispatcher is working fine, I've not had any issues with running test jobs on some of the most complex devices used in LAVA. I haven't needed to extend the RAM from the initial 4G and the 24 cores are sufficient for the work I've done using the machine so far.

The rack was moved into place yesterday (thanks to Vince & Steve) but the patch panel which Andy carefully wired up is not yet installed and there are cables everywhere, so a photo will have to wait. The plan now is to purchase new UPS batteries and put each of the rack, the office and the ISP modem onto dedicated UPS. The objective is not to keep the lab running in the event of a complete power cut lasting hours, just to survive brown-outs and power cuts lasting a minute or two, e.g. when I finally get around to labelling up the RCD downstairs. (The new house was extended a few years before I bought it and the organisation of the circuits is a little unexpected in some parts of the house.)

Once the UPS batteries are in, the microserver, a PDU, the network switch and patch panel, as well as the test devices, will go into the rack in the machine room. I've recently arranged to add a second SynQuacer server into the rack - this time fitted into a 1U server case. (Definite advantage of the new full depth rack over the previous half-depth comms box.) I expect this second SynQuacer to have a range of test devices to complement our existing development staging instance which runs the nightly builds which are available for both amd64 and arm64.

I'll post again once I've got the rest of the rack built and the second SynQuacer installed. The hardest work, by far, has been fitting out the house for the cabling. Setting up the machines, installing and running LAVA has been trivial in comparison. Thanks to Martin Stadler for the two SynQuacer machines and the rest of the team in Linaro Enterprise Group (LEG) for getting this ARM64 hardware into useful roles to support wider development. With the support from Debian for building the arm64 packages, the new machine simply sits on the network and does "TheRightThing" without fuss or intervention. I can concentrate on the test devices and get on with things. The fact that the majority of my infrastructure now runs on ARM64 servers is completely invisible to my development work.

Planet DebianSean Whitton: git-annex-export and reprepro

Introduction

I wanted to set up my own apt repository to distribute packages to my own computers. This repository must be PGP-signed, but I want to use my regular PGP key rather than a PGP key stored on the server, because I don’t want to trust my server with root access to my laptop.

Further, I want to be able to add to my repo while offline, rather than dputting .changes files to my server.

The standard tools, mini-dinstall and reprepro, are designed to be executed on the same machine that will serve the apt repository. To satisfy the above, though, I need to be able to execute the repository generator offline, on my laptop.

Two new features of git-annex, git-annex-export and v6 repositories, can allow us to execute the repository generator offline and then copy the contents of the repository to the server in an efficient way.

(v6 repositories are not production-ready but the data in this repo is replaceable: I backup the reprepro config files, and the packages can be regenerated from the (d)git repositories containing the source packages.)

Schematic instructions

This should be enough to get you going if you have some experience with git-annex and reprepro.

In the following, athena is a host I can ssh to. On that host, I assume that Apache is set up to serve /srv/debian as the apt repository, with .htaccess rules to deny access to the conf/ and db/ subdirectories and to enable the following of symlinks.

  1. apt-get install git-annex reprepro
  2. git init a new git repository on laptop.
  3. Create conf/distributions, conf/options, conf/do-sync.sh and .gitattributes per below.
  4. Create other files such as README, sample foo.list, etc. if desired.
  5. git add the various plain text files we just created and commit.
  6. git annex init --version=6.
  7. Add an origin remote, git config remote.origin.annex-ignore true and git push -u origin master git-annex. I.e. store repository metadata somewhere.
  8. git config --local annex.thin true to save disc space.
  9. git config --local annex.addunlocked true so that reprepro can modify files.
  10. Tell git-annex about the /srv/debian directory on athena: git annex initremote athena type=rsync rsyncurl=athena:/srv/debian autoenable=true exporttree=yes encryption=none
  11. Tell git-annex that the /srv/debian directory on athena should track the contents of the master branch: git annex export --fast --to=athena --tracking master
  12. Now you can reprepro include foo.changes, reprepro export and git-annex should do the rest: the do-sync.sh script calls git annex sync, and git-annex knows that it should export the repo to /srv/debian on athena when told to sync.

Files

conf/distributions is an exercise for the reader – this is standard reprepro stuff.

conf/options:

endhook do-sync.sh

conf/do-sync.sh:

#!/bin/sh

git annex add
git annex sync --content

.gitattributes:

* annex.largefiles=anything
conf/* annex.largefiles=nothing
README annex.largefiles=nothing
.gitattributes annex.largefiles=nothing

These git attributes tell git-annex to annex all files except the plain text config files, which are just added to git.

Bugs

I’m not sure whether these are fixable in git-annex-export, or not. Both can be worked around with hacks/scripts on the server.

  • reprepro exportsymlinks won’t work to create suite symlinks: git-annex-export will create plain files instead of symlinks.

  • git-annex-export exports non-annexed files in git, such as README, as readable only by their owner.

Planet DebianThorsten Glaser: Happy Birthday, GPS Stash Hunt!

GPS Stash Hunt, also commercially known as “Geocaching”, “Terracaching”, or non-commercially (but also nōn-free) as “Opencaching”, is 18 years old today! Time for celebration or something!

(read more…)

,

Planet DebianNorbert Preining: Docker, cron, mail and logs

If one searches for “docker cron“, there are a lot of hits but not really good solutions or explanations of how to get a running system. In particular, what I needed/wanted was (i) that stdout/stderr of the cron script is sent to some user, and (ii) that in case of errors the error output also appears in the docker logs. Well, that was not that easy …

There are three components here that need to be tuned:

  • getting cron running
  • getting mail delivery running
  • redirecting some error message to docker

Let us go through the list. The full Dockerfile and support scripts can be found below.

Getting cron running

Many of the lean images do not contain cron, much less run it (this is due to the philosophy of one process per container). So the usual incantations to install cron are necessary:

RUN apt-get install -y cron

After that, one can use as entry point the cron daemon running in foreground:

CMD cron -f

Of course you have to install a crontab file somehow; in my case I did:

ADD crontab /etc/cron.d/mypackage
RUN chmod 0644 /etc/cron.d/mypackage

This all works nice and well, but if there are errors or problems with the crontab file, you will not see any error message, because (at least on Debian and Ubuntu) cron logs to syslog, but there is no syslog daemon available to show you the output, and cron has no option to log to a file (how stupid!). Furthermore, other cron daemon options (bcron, cronie) are not available in Debian/stable, for example 🙁

In my case the crontab file had a syntax error, and thus no actual program was ever run.

Getting mail delivery running

Assuming you have these hurdles settled, one would like to get the output of the cron scripts mailed. For this, a sendmail-compliant program needs to be available. One could set up a full blown system (exim, postfix), but this is overkill as one only wants to send out messages. I opted for ssmtp, which is a single program with straightforward configuration and operation, and it provides a sendmail program.

RUN apt-get install -y ssmtp
ADD ssmtp.conf /etc/ssmtp/ssmtp.conf
RUN chown root.mail /etc/ssmtp/ssmtp.conf
RUN chmod 0640 /etc/ssmtp/ssmtp.conf

The configuration file can be rather minimal, here is an example:

Root=destuser@destination
mailhub=mail-server-name-ip

Of course, the mail-server-name-ip must accept emails for destuser@destination. There are several more options for SSL support, rewriting of domains, etc.; see for example here.

Having this in place, cron will now duly send emails with the output of the cron jobs.

Redirecting some error message to docker

This leaves us with the last task, namely getting error messages into the docker logs. Since cron captures the stdout and stderr of the cron jobs for mail sending, one can either redirect these outputs to docker, in which case one will not get emails, or wrap the cron jobs. I used the following wrapper to output a warning to the docker logs:

#!/bin/bash
#
if [ -z "$1" ] ; then
  echo "need name of cron job as first argument" >&2
  exit 1
fi

if [ ! -x "$1" ] ; then
  echo "cron job file $1 not executable, exiting" >&2
  exit 1
fi

if "$1"
then
  exit 0
else
  echo "cron job $1 failed!" 2>/proc/1/fd/2 >&2
  exit 1
fi

together with entries in the crontab like:

m h d m w root /app/run-cronjob /app/your-script

The magic trick here is the 2>/proc/1/fd/2 >&2, which first redirects stderr to the stderr of the process with id 1 (the entry point of the container, which is watched by docker), and then echoes the message to stderr. One could also redirect stdout in the same way to /proc/1/fd/1 if necessary or preferred.


The above pieces combined give a nice setup: emails with the output of the cron jobs, as well as entries in the docker logs if something broke and, for example, no output was created.

Let us finish with a minimal Dockerfile doing these kind of things:

FROM debian:stretch-slim
RUN apt-get -y update
RUN apt-get install -y cron ssmtp
ADD . /app
ADD crontab /etc/cron.d/mypackage
RUN chmod 0644 /etc/cron.d/mypackage
ADD ssmtp.conf /etc/ssmtp/ssmtp.conf
RUN chown root.mail /etc/ssmtp/ssmtp.conf
RUN chmod 0640 /etc/ssmtp/ssmtp.conf
CMD cron -f

Planet DebianBits from Debian: New Debian Developers and Maintainers (March and April 2018)

The following contributors got their Debian Developer accounts in the last two months:

  • Andreas Boll (aboll)
  • Dominik George (natureshadow)
  • Julien Puydt (jpuydt)
  • Sergio Durigan Junior (sergiodj)
  • Robie Basak (rbasak)
  • Elena Grandi (valhalla)
  • Peter Pentchev (roam)
  • Samuel Henrique (samueloph)

The following contributors were added as Debian Maintainers in the last two months:

  • Andy Li
  • Alexandre Rossi
  • David Mohammed
  • Tim Lunn
  • Rebecca Natalie Palmer
  • Andrea Bolognani
  • Toke Høiland-Jørgensen
  • Gabriel F. T. Gomes
  • Bjorn Anders Dolk
  • Geoffroy Youri Berret
  • Dmitry Eremin-Solenikov

Congratulations!

Krebs on SecurityWhen Your Employees Post Passwords Online

Storing passwords in plaintext online is never a good idea, but it’s remarkable how many companies have employees who are doing just that using online collaboration tools like Trello.com. Last week, KrebsOnSecurity notified a host of companies that employees were using Trello to share passwords for sensitive internal resources. Among those put at risk by such activity were an insurance firm, a state government agency and ride-hailing service Uber.

By default, Trello boards for both enterprise and personal use are set to either private (requires a password to view the content) or team-visible only (approved members of the collaboration team can view).

But that doesn’t stop individual Trello users from manually sharing personal boards that include proprietary employer data, information that may be indexed by search engines and available to anyone with a Web browser. And unfortunately for organizations, far too many employees are posting sensitive internal passwords and other resources on their own personal Trello boards that are left open and exposed online.

A personal Trello board created by an Uber employee included passwords that might have exposed sensitive internal company operations.

KrebsOnSecurity spent the past week using Google to discover unprotected personal Trello boards that listed employer passwords and other sensitive data. Pictured above was a personal board set up by some Uber developers in the company’s Asia-Pacific region, which included passwords needed to view a host of internal Google Documents and images.

Uber spokesperson Melanie Ensign said the Trello board in question was made private shortly after being notified by this publication, among others. Ensign said Uber found the unauthorized Trello board exposed information related to two users in South America who have since been notified.

“We had a handful of members in random parts of the world who didn’t realize they were openly sharing this information,” Ensign said. “We’ve reached out to these teams to remind people that these things need to happen behind internal resources. Employee awareness is an ongoing challenge. We may have dodged a bullet here, and it definitely could have been worse.”

Ensign said the initial report about the exposed board came through the company’s bug bounty program, and that the person who reported it would receive at least the minimum bounty amount — $500 — for reporting the incident (Uber hasn’t yet decided whether the award should be higher for this incident).

The Uber employees who created the board “used their work email to open a public board that they weren’t supposed to,” Ensign said. “They didn’t go through our enterprise account to create that. We first found out about it through our bug bounty program, and while it’s not technically a vulnerability in our products, it’s certainly something that we would pay for anyway. In this case, we got multiple reports about the same thing, but we always pay the first report we get.”

Of course, not every company has a bug bounty program to incentivize the discovery and private reporting of internal resources that may be inadvertently exposed online.

Screenshots that KrebsOnSecurity took of many far more shocking examples of employees posting dozens of passwords for sensitive internal resources are not pictured here because the affected parties still have not responded to alerts provided by this author.

Trello is one of many online collaboration tools made by Atlassian Corporation PLC, a technology company based in Sydney, Australia. Trello co-founder Michael Pryor said Trello boards are set to private by default and must be manually changed to public by the user.

“We strive to make sure public boards are being created intentionally and have built in safeguards to confirm the intention of a user before they make a board publicly visible,” Pryor said. “Additionally, visibility settings are displayed persistently on the top of every board.”

If a board is Team Visible it means any members of that team can view, join, and edit cards. If a board is Private, only members of that specific board can see it. If a board is Public, anyone with the link to the board can see it.

Interestingly, updates made to Trello’s privacy policy over the past weekend may make it easier for companies to locate personal boards created by employees and pull them behind company resources.

A Trello spokesperson said the privacy changes were made to bring the company’s policies in line with new EU privacy laws that come into enforcement later this month. But they also clarify that Trello’s enterprise features allow the enterprise admins to control the security and permissions around a work account an employee may have created before the enterprise product was purchased.

Uber spokesperson Ensign called the changes welcome.

“As a result companies will have more security control over Trello boards created by current/former employees and contractors, so we’re happy to see the change,” she said.

Update: Added information from Ensign about two users who had their data exposed from the credentials in the unauthorized Trello page published by Uber employees.

Rondam RamblingsThis is inspiring

The Washington Post reports that: Two African American men arrested at a Philadelphia Starbucks last month have reached a settlement with the city and secured its commitment to a pilot program for young entrepreneurs.  Rashon Nelson and Donte Robinson chose not to pursue a lawsuit against the city, Mike Dunn, a spokesman from the Mayor’s Office, told The Washington Post. Instead, they agreed to

TEDCalling all social entrepreneurs + nonprofit leaders: Apply for The Audacious Project

Our first collection of Audacious Project winners takes the stage after a stellar session at TED2018, in which each winner made a big, big wish to move their organization’s vision to the next level with help from a new consortium of nonprofits. As a bonus during the Audacious Project session, we watched an astonishing performance of “New Second Line” from Camille A. Brown and Dancers. From left: The Bail Project’s Robin Steinberg; Heidi M. Sosik of the Woods Hole Oceanographic Institute; Caroline Harper of Sight Savers; Vanessa Garrison and T. Morgan Dixon of GirlTrek; Fred Krupp from the Environmental Defense Fund; Chloe Davis and Maleek Washington of Camille A. Brown and Dancers; pianist Scott Patterson; Andrew Youn of the One Acre Fund; and Catherine Foster, Camille A. Brown, Timothy Edwards, Juel D. Lane from Camille A. Brown and Dancers. Obscured behind Catherine Foster is Raj Panjabi of Last Mile Health (and dancer Mayte Natalio is offstage). Photo: Ryan Lash / TED

Creating wide-scale change isn’t easy. It takes incredible passion around an issue, and smart ideas on how to move the needle and, hopefully, improve people’s lives. It requires bottomless energy, a dedicated team, an extraordinary amount of hope. And, of course, it demands real resources.

TED would like to help, on the last part at least. This is an open invitation to all social entrepreneurs and nonprofit leaders: apply to be a part of The Audacious Project in 2019. We’re looking for big, bold, unique ideas that are capable of affecting more than a million people or driving transformational change on a key issue. We’re looking for unexplored plans that have a real, credible path to execution and that can inspire people around the world to come together to act.

Applications for The Audacious Project are open now through June 10. And here’s the best part — this isn’t a long, detailed grant application that will take hours to complete. We’ve boiled it down to the essential questions that can be answered swiftly. So apply as soon as you can. If your idea feels like a good fit, we’ll be in touch with an extended application that you’ll have four weeks to complete.

The Audacious Project process is rigorous — if selected as a Finalist, you’ll participate in an ideation workshop to help clarify your approach and work with us and our partners on a detailed project proposal spanning three to five years. But the work will be worth it, as it can turbocharge your drive toward change.

More than $406 million has already been committed to the first ideas in The Audacious Project. And further support is coming in following the simultaneous launch of the project at both TED2018 and the annual Skoll World Forum last week. Watch the full session from TED, or the highlight reel above that screened the next day at Skoll. And who knows? Perhaps you’ll be a part of the program in 2019.

CryptogramNIST Issues Call for "Lightweight Cryptography" Algorithms

This is interesting:

Creating these defenses is the goal of NIST's lightweight cryptography initiative, which aims to develop cryptographic algorithm standards that can work within the confines of a simple electronic device. Many of the sensors, actuators and other micromachines that will function as eyes, ears and hands in IoT networks will work on scant electrical power and use circuitry far more limited than the chips found in even the simplest cell phone. Similar small electronics exist in the keyless entry fobs to newer-model cars and the Radio Frequency Identification (RFID) tags used to locate boxes in vast warehouses.

All of these gadgets are inexpensive to make and will fit nearly anywhere, but common encryption methods may demand more electronic resources than they possess.

The NSA's SIMON and SPECK would certainly qualify.

Worse Than FailureCodeSOD: A Password Generator

Every programming language has a bias which informs its solutions. Object-oriented languages are biased towards objects, and all the things which follow on. Clojure is all about function application. Haskell is all about type algebra. Ruby is all about monkey-patching existing objects.

In any language, these things can be taken too far. Java's infamous Spring framework leaps to mind. Perl, being biased towards regular expressions, has earned its reputation as being "write only" thanks to regex abuse.

Gert sent us along some Perl code, and I was expecting to see regexes taken too far. To my shock, there weren't any regexes.

Gert's co-worker needed to generate a random 6-digit PIN for a voicemail system. It didn't need to be cryptographically secure; repeats and zeros are allowed (they exist on a keypad, after all!). The Perl approach for doing this would normally be something like:


sub randomPIN {
  return sprintf("%06u",int(rand(1000000)));
}

Gert's co-worker had a different plan in mind, though.


sub randomPIN {
my $password;
my @num = (1..9);
my @char = ('@','#','$','%','^','&','*','(',')');
my @alph = ('a'..'z');
my @alph_up = ('A'..'Z');

my $rand_num1 = $num[int rand @num];
my $rand_num2 = $num[int rand @num];
my $rand_num3 = $num[int rand @num];
my $rand_num4 = $num[int rand @num];
my $rand_num5 = $num[int rand @num];
my $rand_num6 = $num[int rand @num];

$password = "$rand_num1"."$rand_num2"."$rand_num3"."$rand_num4"."$rand_num5"."$rand_num6";

return $password;
}

This code starts by creating a set of arrays, @num, @char, etc. The only one that matters is @num, though, since this generates a PIN to be entered on a phone keypad and touchtone signals are numeric and also there is no "(" key on a telephone keypad. Obviously, the developer copied this code from a random password function somewhere, which is its own special kind of awful.

Now, what's fascinating is that they initialize @num with the numbers 1 through 9, and then use the rand function to generate a random number from 0 through 8, so that they can select an item from the array. So they understood how the rand function worked, but couldn't make the leap to eliminate the array with something like rand(9).

For now, replacing this function is simply on Gert's todo list.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianNorbert Preining: TeX Live 2018 released

I guess everyone knows it already: Karl has released TeX Live 2018 officially, just when I was off for some mountaineering, but perfectly in time for the current BachoTeX meeting.

The DVDs are already being burnt and will soon be sent to the various TeX User groups who ordered. The .iso image is available on CTAN, and the net installer will pull all the newest stuff. Currently Karl is working on getting those packages updated during the freeze to the newest level in TeX Live.

In addition to the changes I mentioned in the post about TL2018 in Debian, the news taken from the TeX Live documentation:

  • MacTeX: See version support changes below. In addition, the files installed in /Applications/TeX/ by MacTeX have been reorganized for greater clarity; now this location contains four GUI programs (BibDesk, LaTeXiT, TeX Live Utility, and TeXShop) at the top level and folders with additional utilities and documentation.
  • tlmgr: new front-ends tlshell (Tcl/Tk) and tlcockpit (Java); JSON output; uninstall now a synonym for remove; new action/option print-platform-info.
  • Platforms:
    • New: x86_64-linuxmusl and aarch64-linux. Removed: armel-linux, powerpc-linux.
    • x86_64-darwin supports 10.10–10.13 (Yosemite, El Capitan, Sierra, and High Sierra).
    • x86_64-darwinlegacy supports 10.6–10.10 (though x86_64-darwin is preferred for 10.10). All support for 10.5 (Leopard) is gone, that is, both the powerpc-darwin and i386-darwin platforms have been removed.
    • Windows: XP is no longer supported.

That’s all, let the fun begin!

Planet DebianSean Whitton: Debian Policy call for participation -- May 2018

We had a release of Debian Policy near the beginning of last month but there hasn’t been much activity since then. Please consider writing or reviewing patches for some of these bugs.

Consensus has been reached and help is needed to write a patch

#273093 document interactions of multiple clashing package diversions

#314808 Web applications should use /usr/share/package, not /usr/share/doc/…

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#556015 Clarify requirements for linked doc directories

#578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.

#582109 document triggers where appropriate

#685506 copyright-format: new Files-Excluded field

#757760 please document build profiles

#759316 Document the use of /etc/default for cron jobs

#761219 document versioned Provides

Wording proposed, awaiting review from anyone and/or seconds by DDs

#582109 document triggers where appropriate

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

#897217 Vcs-Hg should support -b too

Merged for the next release

#896749 footnote of 3.3 lists deprecated alioth mailinglist URL

,

Planet DebianBen Hutchings: Debian LTS work, April 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from March. I worked all 17 hours.

In support of the "retpoline" mitigation for Spectre variant 2, I added a backport of gcc-4.9 to wheezy (as gcc-4.9-backport), based on work by Roberto Sánchez and on the existing gcc-4.8 backport (gcc-mozilla). I also updated the linux-tools package to support building external modules with retpolines enabled. Finally, I completed an update to the linux package, but delayed uploading it until 1st May due to an embargoed issue.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #157

This week’s report represents the three-year anniversary of the Reproducible Builds project reporting on its activities. We would like to thank all those who have contributed over the years, in particular thanking Jérémy Bobbio for starting this.

Here’s what happened in the Reproducible Builds effort between Sunday April 22 and Saturday April 28 2018:

Packages reviewed and fixed, and bugs filed

139 package categorisations were added, 71 were updated and 38 were removed this week.

In addition, build failure bugs were reported by Adrian Bunk (111), Hilmar Preuße (1), Niko Tyni (1) & Sebastian Ramacher (4).

Misc.

This week’s edition was written by Chris Lamb, Holger Levsen and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianIngo Juergensmann: #DeleteFacebook and alternative Social Networks

Some weeks ago a more or less large scandal popped up in the media: Cambridge Analytica misused data from Facebook. Many users of Facebook were unhappy about the abuse of their personal data and deleted their Facebook accounts - or at least tried some alternatives like Friendica, Hubzilla, Diaspora, Mastodon and others.

There has been a significant increase in user count since then and this gave a general boost for the networks.

Apropos networks... basically there are two large networks: The Federation (Diaspora, Socialhome) and The Fediverse (GNU Social, Mastodon, postActiv, Pleroma). Within each of the two networks all solutions can exchange information like posts, comments and user information, or in other words: they federate with each other. So, when you use Mastodon your posts won't be available to Diaspora users and vice versa, as the two networks use different protocols for federation.

And here Friendica and Hubzilla do have their advantage: both are able to federate with both networks. Sean Tilley has some more information in his article "A quick guide to The Free Network" on medium.com.

Another great resource you can use to find more about alternatives to Facebook is the great new Fediverse Wiki.

From my point of view I would recommend either Friendica or Hubzilla, depending on what you want:

  • Friendica is in my opinion the best solution to get the best of both worlds, i.e. Fediverse and Federation. It has active developers and a good and helpful community. It concentrates on the social network topic.
  • Hubzilla is a more complete approach: you can add addons to get cloud space, a wiki or the ability to create web pages.

Both offer the option to create multiple profiles with one account (Friendica: profiles, Hubzilla: channels) and of course you have fine-grained control over your content. There is also a fresh YouTube channel describing some Friendica features. Although it is in German, others might get some helpful hints from those videos as well.

Which alternative will be the best for you is up to you to decide. All alternatives have their pros and cons. If you don't already have a website, cloud space or such, Hubzilla might be the best choice for you. If you don't need such additional functions, you might be best suited with Friendica. If you like to install docker images, Mastodon will make you happy.

In the end, it's all about choice! You will have better control over your own data in all cases. You can run your own instance, make it private only, or you can join one of the available servers and try out what best suits you. For Friendica you can find some public servers on https://dir.friendica.social.

I'm running a Friendica node on https://nerdica.net/ as well as a Hubzilla hub on https://silverhaze.eu/ - feel invited to try both and register on either one to have a look at some alternatives to Facebook.

After all: Decentralize and spread the word! Use the alternatives you have and don't sell your privacy to the big players like Facebook and Google if you don't need to. :-)

PS: If you are already on one of those alternative networks, please feel free to connect with me on my Friendica node or Hubzilla hub!

Planet DebianJeremy Bicha: Congratulations Ubuntu and Fedora

Congratulations to Ubuntu and Fedora on their latest releases.

This Fedora 28 release is special because it is believed to be the first release in their long history to ship exactly when it was originally scheduled.

The Ubuntu 18.04 LTS release is the biggest release for the Ubuntu Desktop in 5 years as it returns to a lightly customized GNOME desktop. For reference, the biggest changes from vanilla GNOME are the custom Ambiance theme and the inclusion of the popular AppIndicator and Dock extensions (the Dock extension being a simplified version of the famous Dash to Dock). Maybe someday I could do a post about the smaller changes.

I think one of the more interesting occurrences for fans of Linux desktops is that these releases of two of the biggest Linux distributions occurred within days of each other. I expect this alignment to continue (although maybe not quite as dramatically as this time) since the Fedora and Ubuntu beta releases will happen at similar times and I expect Fedora won’t slip far from its intended release dates again.

CryptogramIoT Inspector Tool from Princeton

Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They've already used the tool to study a bunch of different IoT devices. From their blog post:

Finding #3: Many IoT Devices Contact a Large and Diverse Set of Third Parties

In many cases, consumers expect that their devices contact manufacturers' servers, but communication with other third-party destinations may not be a behavior that consumers expect.

We have found that many IoT devices communicate with third-party services, of which consumers are typically unaware. We have found many instances of third-party communications in our analyses of IoT device network traffic. Some examples include:

  • Samsung Smart TV. During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook, even though we did not sign in or create accounts with any of them.

  • Amcrest WiFi Security Camera. The camera actively communicates with cellphonepush.quickddns.com using HTTPS. QuickDDNS is a Dynamic DNS service provider operated by Dahua. Dahua is also a security camera manufacturer, although Amcrest's website makes no references to Dahua. Amcrest customer service informed us that Dahua was the original equipment manufacturer.

  • Halo Smoke Detector. The smart smoke detector communicates with broker.xively.com. Xively offers an MQTT service, which allows manufacturers to communicate with their devices.

  • Geeni Light Bulb. The Geeni smart bulb communicates with gw.tuyaus.com, which is operated by TuYa, a China-based company that also offers an MQTT service.

We also looked at a number of other devices, such as Samsung Smart Camera and TP-Link Smart Plug, and found communications with third parties ranging from NTP pools (time servers) to video storage services.

Their first two findings are that "Many IoT devices lack basic encryption and authentication" and that "User behavior can be inferred from encrypted IoT device traffic." No surprises there.

Boingboing post.

Related: IoT Hall of Shame.

Worse Than FailureAn Obvious Requirement

Requirements. That magical set of instructions that tell you specifically what you need to build and test. Users can't be bothered to write them, and even if they could, they have no idea how to tell you what they want. It doesn't help that many developers are incapable of following instructions since they rarely exist, and when they do, they usually aren't worth the coffee-stained napkin upon which they're scribbled.

A sign warning that a footpath containing stairs isn't suitable for wheelchairs

That said, we try our best to build what we think our users need. We attempt to make it fairly straightforward to use what we build. The button marked Reports most likely leads to something to do with generating/reading/whatever-ing reports. Of course, sometimes a particular feature is buried several layers deep and requires multiple levels of ribbons, menus, sub-menus, dialogs, sub-dialogs and tabs before you find the checkbox you want. Since we developers as a group are, by nature, somewhat anal retentive, we try to keep related features grouped so that you can generally guess which path to try when looking for something. And we often supply a Help feature to tell you how to find it when you can't.

Of course, some people simply cannot figure out how to use the software we build, no matter how sensibly it's laid out and organized, or how many hints and help features we provide. And there is nothing in the history of computer science that addresses how to change this. Nothing!

Dimitri C. had a user who wanted a screen that performed several actions. The user provided requirements in the form of a printout of a similar dialog he had used in a previous application, along with a list of changes/colors/etc. They also provided some "helpful" suggestions, along the lines of, "It should be totally different, but exactly the same as the current application." Dimitri took pains to organize the actions and information in appropriate hierarchical groups. He laid out appropriate controls in a sensible way on the screen. He provided a tooltip for each control and a Help button.

Shortly after delivery, a user called to complain that he couldn't find a particular feature. Dimitri asked, "Have you tried using the Help button?" The user said, "I can't be bothered to read the instructions in the help tool because accessing this function should be obvious."

Dimitri asked him, "Have you looked on the screen for a control with the relevant name?" The user complained, "There are too many controls, and this function should be obvious." Dimitri asked, "Did you try hovering your mouse over the controls to read the tooltips?" The user complained, "I don't have the time to do that because it would take too long!" (yet he had the time to complain).

Frustrated, Dimitri replied, "To make that more obvious, should I make these things less obvious?" The user complained, "Everything should be obvious." Dimitri asked how that could possibly be done, to which the user replied, "I don't know, that's your job."

When Dimitri realized that this user had no clue how to ask for what he wanted, he asked how this feature had worked in previous programs, to which the user replied, "I clicked this, then this, then this, then this, then this, then restarted the program."

Dimitri responded, "That's six steps instead of the two in my program, and it would require you to reenter some of the data."

The user responded, "Yes, but it's obvious."

So is the need to introduce that type of user to the business end of a clue-bat.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Planet DebianDaniel Leidert: Re-enabling right click functionality for my Thinkpad touchpad

I have a Lenovo Thinkpad Yoga 11e running Debian Sid. The touchpad has a left and a right click area at the bottom. For some reason, the right click ability recently stopped working. I have not yet found the reason, but I was able to fix it by adding the last two Option lines shown below to /usr/share/X11/xorg.conf.d/70-synaptics.conf.


Section "InputClass"
Identifier "Default clickpad buttons"
MatchDriver "synaptics"
Option "ClickPad" "true"
Option "EmulateMidButtonTime" "0"

Option "SoftButtonAreas" "50% 0 82% 0 0 0 0 0"
Option "SecondarySoftButtonAreas" "58% 0 0 15% 42% 58% 0 15%"
EndSection

Edit

Nope. Stopped working again and both bottom areas act as a left click. As it is working as a left click, I guess the touchpad is physically OK. So I have no idea what's going on :(

Edit 2

Thanks to Cole Robinson and the link he provided I found the reason and a fix. GNOME 3.28 now uses the clickfinger behaviour by default. By setting the click method from 'fingers' (clickfinger) back to 'areas', either using gsettings or gnome-tweaks, the right-click ability is back after rebooting the system.
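
For reference, here is a minimal sketch of flipping the setting back from a script. The schema and key names are the GNOME 3.28 ones and are an assumption on my part; adjust them if your version differs, or just use gnome-tweaks.

  # Equivalent to running:
  #   gsettings set org.gnome.desktop.peripherals.touchpad click-method areas
  # The schema/key names are assumed from GNOME 3.28 and may differ elsewhere.
  import subprocess

  subprocess.run(
      ["gsettings", "set",
       "org.gnome.desktop.peripherals.touchpad",  # touchpad settings schema
       "click-method", "areas"],                  # 'areas' = soft buttons, 'fingers' = clickfinger
      check=True,
  )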

PS: With the new default clickfinger method, the right and middle clicks are emulated using two- and three-finger taps, respectively.

Planet DebianRuss Allbery: Review: Vallista

Review: Vallista, by Steven Brust

Series: Vlad Taltos #15
Publisher: Tor
Copyright: October 2017
ISBN: 1-4299-4699-7
Format: Kindle
Pages: 334

This is the fifteenth book in the Vlad Taltos series, and, following the pattern, goes back to fill in a story from earlier in the series. This time, though, it doesn't go back far: Vallista takes place immediately before Hawk (at least according to the always-helpful Lyorn Records; it was not immediately obvious to me since it had been several years since I read Hawk). That means we have to wait at least one more book before Vlad is (hopefully) more free to act, but we get a bit more world-building and a few more clues about the broader arc of this series.

As is hopefully obvious, this is not at all the place to start with this series.

Vallista opens with Devera finding Vlad and asking him for help. Readers of the series will recognize Devera as a regular and mysterious feature, but this is one of the most active roles she's played in a story. Following her, Vlad finds himself at a mysterious seaside house that he's sure wasn't there the last time he went by that area. When he steps inside, Devera vanishes and the door locks behind him.

The rest of the book is Vlad working out the mystery of what this house is, why it was constructed, and the nature of the people who occupy it. This is explicitly an homage to Gothic romances. The dead daughter Vlad encounters isn't exactly a ghost, but she's close, and there's a locked-up monster, family secrets, star-crossed lovers, and ulterior motives everywhere. There's also a great deal of bizarre geometry, since this book is as detailed of an exploration of necromancy as we've gotten in the series to date.

Like many words in Dragaera, necromancy doesn't mean what one expects from the normal English definition, although there's a tricky similarity. In this world it's more about planes of existence than death in particular, and since one of those planes of existence for Dragaerans is the Paths of the Dead and their strange connections across time and space, necromancy is also the magic of spatial and temporal connections. The mansion Vlad has to find his way out of is a creation of necromancy, as becomes clear early in the book, and there is death involved, but there are also a lot of mirrors, discussion of dimensional linkage and shifts, and as detailed of an explanation as we've gotten yet of Devera's unique abilities.

Vlad seems less devious in his attempts to solve mysteries than he is in his heist setups. A lot of Vallista involves Vlad wandering around, asking questions, complaining about his head hurting, and threatening people until he has enough information to understand the nature of the house. Perhaps a careful reader armed with a good memory of series details would be able to guess the mystery before Vlad lays it out for the reader. I'm not that reader and spent most of the book worrying that I was missing things I was supposed to be following. Thankfully, Brust isn't too coy about the ending. Vlad lays out most of the details in the final chapter for those who weren't following the specifics, although I admit I visited the Lyorn Records wiki afterwards to pick up more of the details. The story apart from the mystery is a very typical iteration of Vlad being snarky, kind-hearted, slightly impatient, and intent on minding his own business except when other people force him to get involved in theirs.

As is typical for the series entries that go back and fill in side stories, we don't get a lot of advancement of the main storyline. There is an intriguing scene in the Paths of the Dead with Vlad's memories, and a conversation between Vlad and Verra that provides one of the clearest indications of the overall arc of the series yet, but most of the story is concerned only with the puzzle of this mansion and its builder. I found that enjoyable but not exceptional. If you like Vlad (and if you're still reading this series, I assume you do), this is more of Vlad doing Vlad things, but I doubt it will stand out as anyone's favorite book in the series. But the series remains satisfying and worth reading even fifteen books in, which is a significant accomplishment.

I eagerly await the next book, which will hopefully be the direct sequel to Hawk and an advancement of the main plot.

Followed by (rumored, not yet confirmed) Tsalmoth.

Rating: 7 out of 10

Planet DebianJoachim Breitner: Avoid the dilemma of the trailing comma

The Haskell syntax uses comma-separated lists in various places and, in contrast to other programming languages, does not allow a trailing comma. If everything goes on one line you write

  (foo, bar, baz)

and everything is nice.

Lining up

But if you want to have one entry on each line, then the obvious plan

  (foo,
   bar,
   baz
  )

is aesthetically unpleasing and, moreover, extending the list by one to

  (foo,
   bar,
   baz,
   quux
  )

modifies two lines, which produces less pretty diffs.

Because it is much more common to append to lists rather than to prepend, Haskellers have developed the idiom of leading comma:

  ( foo
  , bar
  , baz
  , quux
  )

which looks strange until you are used to it, but solves the problem of appending to a list. And we see this idiom in many places:

  • In Cabal files:

      build-depends: base >= 4.3 && < 5
                   , array
                   , deepseq >= 1.2 && < 1.5
  • In module headers:

    {-# LANGUAGE DefaultSignatures
               , EmptyCase
               , ExistentialQuantification
               , FlexibleContexts
               , FlexibleInstances
               , GADTs
               , InstanceSigs
               , KindSignatures
               , RankNTypes
               , ScopedTypeVariables
               , TemplateHaskell
               , TypeFamilies
               , TypeInType
               , TypeOperators
               , UndecidableInstances #-}

Think outside the list!

I started to avoid this pattern where possible. And it is possible everywhere: instead of having one declaration with a list, you can just have multiple declarations. I.e.:

  • In Cabal files:

      build-depends: base >= 4.3 && < 5
      build-depends: array
      build-depends: deepseq >= 1.2 && < 1.5
  • In module headers:

    {-# LANGUAGE DefaultSignatures #-}
    {-# LANGUAGE EmptyCase #-}
    {-# LANGUAGE ExistentialQuantification #-}
    {-# LANGUAGE FlexibleContexts #-}
    {-# LANGUAGE FlexibleInstances #-}
    {-# LANGUAGE GADTs #-}
    {-# LANGUAGE InstanceSigs #-}
    {-# LANGUAGE KindSignatures #-}
    {-# LANGUAGE RankNTypes #-}
    {-# LANGUAGE ScopedTypeVariables #-}
    {-# LANGUAGE TemplateHaskell #-}
    {-# LANGUAGE TypeFamilies #-}
    {-# LANGUAGE TypeInType #-}
    {-# LANGUAGE TypeOperators #-}
    {-# LANGUAGE UndecidableInstances #-}

It is a bit heavier, but it has a number of advantages:

  1. Both appending and prepending works without touching other lines.
  2. It is visually more homogeneous, which – despite the extra words – makes it easier to spot mistakes.
  3. You can easily sort the declarations alphabetically with your editor.
  4. Especially in Cabal files: If you have a syntax error in your dependency specification (which I always have, writing << instead of < due to my Debian background), cabal will actually give you a helpful error location – it only ever tells you which build-depends stanza was wrong, so if you have only one, then that’s not helpful.

What when it does not work?

Unfortunately, not every list in Haskell can have that treatment, and that’s why the recent GHC proposal on ExtraCommas wants to lift the restriction. In particular, it wants to allow trailing commas in subexport lists:

module Foo
    ( Foo(
        A,
        B,
      ),
      fromFoo,
    )

(Weirdly, export lists already allow trailing commas). An alternative here might be to write

module Foo
    ( Foo(A),
      Foo(B),
      fromFoo,
    )

and teach the compiler to not warn about the duplicate export of the Foo type.

For plain lists, this idiom can be useful:

list :: [Int]
list = let (>>) = (++) in do
   [ 1 ]
   [ 2 ]
   [ 3 ]
   [ 4 ]

It requires RebindableSyntax, so I do not recommend it for regular code, but it can be useful in a module that is dedicated to holding some generated data or configuration. And of course it works with any binary operator, not just (++).

Planet DebianPaul Wise: FLOSS Activities April 2018

Changes

Issues

Review

Administration

  • whowatch: release, contact downstreams
  • Debian: redirect support request, investigate GDPR issues, investigate buildd kernel issue
  • Debian wiki: investigate signup errors, whitelist email addresses, whitelist email domain

Sponsors

The purple-discord work, the sysstat backport and the libipc-run-perl patch backports were sponsored by my employer. All other work was done on a volunteer basis.

Cory DoctorowBoston, Chicago and Waterloo, I’m heading your way!

This Wednesday at 11:45am, I’ll be giving the IDE Lunch Seminar at MIT’s Sloan School of Management, 100 Main Street.


From there, I head to Chicago to keynote Thotcon on Friday at 11am.

My final stop on this trip is Waterloo’s Perimeter Institute for Theoretical Physics, May 9 at 2PM.

I hope to see you! I’ve got plenty more appearances planned this season, including Santa Fe, Phoenix, San Jose, Boston, San Diego and Pasadena!

,

Planet DebianNorbert Preining: Analysing Debian packages with Neo4j – Part 3 Getting data from UDD into Neo4j

In the first part and the second part of this series of articles on analyzing Debian packages with Neo4j we gave a short introduction to Debian and the lifetime and structure of Debian packages, as well as developed the database scheme, that is, the set of nodes and relations and their attributes used in the representation of Debian packages.

The current third (and for now last) part deals with the technical process of actually pulling the information from the Ultimate Debian Database UDD and getting it into Neo4j. We will close with a few sample queries and discuss possible future work.

The process of pulling the data and entering into Neo4j consisted of three steps:

  • Dumping the relevant information from the UDD,
  • Parsing the output of the previous step and building up a list of nodes and relations,
  • Creating a Neo4j database from the set of nodes and relations.

Pulling data from the Ultimate Debian Database UDD

The UDD has a public mirror at https://udd-mirror.debian.net/ which is accessible via a PostgreSQL client.

For the current status we only dumped information from two tables, namely the tables sources and packages, from which we dumped the relevant information discussed in the previous blog post.

The complete SQL query for the sources table was

SELECT source, version, maintainer_email, maintainer_name, release,
 uploaders, bin, architecture, build_depends, build_depends_indep,
 build_conflicts, build_conflicts_indep from sources ;

while the one for the packages table was

SELECT package, version, maintainer_email, maintainer_name, release,
 description, depends, recommends, suggests, conflicts, breaks,
 provides, replaces, pre_depends, enhances from packages ;

We first tried to use the command line client psql, but due to the limited output format options of the client we decided to switch to a Perl script that uses the database access modules and dumps the output directly into a Perl-readable format for the next step.

The complete script can be accessed at the Github page of the project: pull-udd.pl.
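
For illustration, here is a rough sketch of the same dump in Python (not the pull-udd.pl script itself; the host name is the one above, while the database name, user and password are placeholders for the read-only credentials of the public mirror, and only a few of the columns are listed):

  # Hedged sketch: dump two UDD tables to tab-separated files.
  # dbname/user/password are placeholders, and the column lists are shortened.
  import csv
  import psycopg2

  conn = psycopg2.connect(host="udd-mirror.debian.net",
                          dbname="udd", user="USER", password="PASSWORD")

  queries = {
      "sources.tsv":  "SELECT source, version, maintainer_email, release, "
                      "build_depends, build_depends_indep FROM sources",
      "packages.tsv": "SELECT package, version, maintainer_email, release, "
                      "depends, recommends, provides FROM packages",
  }

  with conn, conn.cursor() as cur:
      for filename, sql in queries.items():
          cur.execute(sql)
          with open(filename, "w", newline="") as fh:
              writer = csv.writer(fh, delimiter="\t")
              writer.writerow([col[0] for col in cur.description])  # column names
              writer.writerows(cur)  # the cursor yields one tuple per row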

Generating the list of nodes and relations

The complicated part of the process lies in the conversion from database tables to a graph with nodes and relations. We developed a Perl script that reads in the two tables dumped in the first step, and generates for each node and relation type a CSV file consisting of a unique ID and the attributes listed at the end of the last blog post.

The Perl script generate-graph operates in two steps: first it reads in the dump files and creates a unique structure (hash of hashes) that collects all the information necessary. This first step is necessary due to the heavy duplication of information in the dumps (maintainers, packages, etc, all appear several times but need to be merged into a single entity).

We also generate for each node entity (each source package, binary package, etc.) a unique id (UUID) so that we can later reference these nodes when building up the relations.

The final step consists of computing the relations from the data parsed, creating additional nodes on the way (in particular for alternative dependencies), and writing out all the CSV files.
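
To make the UUID-plus-CSV idea concrete, here is a small, purely illustrative Python sketch (not the generate-graph script); the column headers follow the neo4j-import conventions shown further down:

  # Toy sketch of the node/relation CSV generation. Node files carry a
  # uuid:ID column; relation files reference those ids via :START_ID/:END_ID.
  import csv
  import uuid

  source_packages = ["libtext-ngrams-perl", "yocto-reader", "python-mhash"]

  node_id = {}  # package name -> generated UUID
  with open("node-sp.csv", "w", newline="") as fh:
      writer = csv.writer(fh)
      writer.writerow(["uuid:ID", "name"])
      for name in source_packages:
          node_id[name] = "id" + uuid.uuid4().hex
          writer.writerow([node_id[name], name])

  # A relation file is just pairs of those ids (plus optional attributes);
  # the pair below is made up purely for illustration.
  with open("edge-example.csv", "w", newline="") as fh:
      writer = csv.writer(fh)
      writer.writerow([":START_ID", ":END_ID"])
      writer.writerow([node_id["libtext-ngrams-perl"], node_id["python-mhash"]])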

Complications encountered

As expected, there were a lot of steps from the first version to the current working version, in particular due to the following reasons (to name a few):

  • Maintainers are identified by email, but they sometimes use different names
  • Ordering of version numbers is non-trivial due to binNMU uploads and other version string tricks
  • Different version of packages in different architectures
  • udeb (installer) packages

Creating a Neo4j Database

The last step was creating a Neo4j database from the CSV files of nodes and relations. Our first approach was not via CSV files but via Cypher statements to generate nodes and relationships. This turned out to be completely infeasible, since each Cypher statement requires updates in the database. I estimated the time for a complete (automated) data import at several weeks.

Neo4j recommends using the neo4j-import tool, which creates a new Neo4j database from data in CSV files. The required format is rather simple: one CSV file for each node and relation type containing a unique id plus other attributes in columns. The CSV files for relations then link the unique ids, and can also add further attributes.

To give an example, let us look at the head of the CSV for source packages sp, which, besides the name, has no further attributes:

uuid:ID,name
id0005a566e2cc46f295636dee7d504e82,libtext-ngrams-perl
id00067d4a790c429b9428b565b6bddae2,yocto-reader
id000876d57c85440e899cb93db27c835e,python-mhash

We see the unique id, which is tagged with :ID (see the neo4j-import tool manual for details), and the name of the source package.

The CSV files defining relations all look similar:

:START_ID,:END_ID
id000072be5fd749328d0ec4c0ecc875f9,ide234044ae378493ab0af0151f775b8fe
id000082711b4b4076922b5982d09b611b,ida404df8388a149479b130d6692c60f5e
id000082711b4b4076922b5982d09b611b,idb5c2195d5b8f42bfbb9baa5fad6a066e

That is a list of start and end ids. In the case of additional attributes, as with the depends relation, we have

:START_ID,reltype,relversion,:END_ID
id00001319368f4e32993d49a8b1e61673,none,1,idcd55489608944012a02eadde55cbfa9e
id0000143632404ad386e4564b3917a27c,>=,0.8-0,id156130a5c32a47d3918d3a4f4faff16f
id0000143632404ad386e4564b3917a27c,>=,2.14.2-3,id1acca938752543efa4de87c60ff7b279

After having prepared these CSV files, a simple call to neo4j-import generates the Neo4j database in a new directory. Since we changed the set of nodes and relations several times, we named the generated files node-XXX.csv and edge-XXX.csv and generated the neo4j-import call automatically, see build-db script. This call takes, in contrast to the execution of the Cypher statements, a few seconds (!) to create the whole database:

$ neo4j-import ...
...
IMPORT DONE in 10s 608ms.
Imported:
528073 nodes
4539206 relationships
7540954 properties
Peak memory usage: 521.28 MB

Looking at the script build-all that glues everything together we see another step (sort-uniq) which makes sure that the same UUID is not listed multiple times.
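
To give an idea of how such a call can be assembled from the node-XXX.csv and edge-XXX.csv files, here is a hedged sketch (the option names are those of the Neo4j 3.x neo4j-import tool and are not taken from the build-db script itself):

  # Sketch of assembling the neo4j-import call from the generated CSV files.
  # Option names follow the Neo4j 3.x neo4j-import tool; other versions differ.
  import glob
  import subprocess

  cmd = ["neo4j-import", "--into", "graph.db"]
  for f in sorted(glob.glob("node-*.csv")):
      label = f[len("node-"):-len(".csv")]        # e.g. node-sp.csv -> sp
      cmd += ["--nodes:%s" % label, f]
  for f in sorted(glob.glob("edge-*.csv")):
      reltype = f[len("edge-"):-len(".csv")]      # e.g. edge-builds.csv -> builds
      cmd += ["--relationships:%s" % reltype, f]

  subprocess.run(cmd, check=True)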

Sample queries

Let us conclude with a few sample queries: First we want to find all packages in Jessie that build-depend on tex-common. For this we use the Neo4j visualization kit and write Cypher statements to query and return the relevant nodes and relations:

match (BP:bp)<-[:build_depends]-(VSP:vsp)-[:builds]->(VBP:vbp)<-[:contains]-(S:suite)
  where BP.name="tex-common" and S.name="jessie"
  return BP, VSP, VBP, S

This query would give us more or less the following graph:

Another query is to find out which packages are most often build-depended upon, a typical query that is expensive for relational databases. In Cypher we can express this in very concise notation:

match (S:suite)-[:contains]->(VBP:vbp)-[:builds]-(VSP:vsp)-[:build_depends]-(X:bp)
  where S.name = "sid"
  with X.name as pkg,count(VSP) as cntr
  return pkg,cntr order by -cntr

which returns the top package debhelper with 55438 packages build-depending on it, followed by dh-python (9289) and pkg-config (9102).

Conclusions and future work

We have shown that a considerable part of the UDD can be transformed and represented in a graph database, which allows complex queries about the relations between packages to be expressed efficiently and easily.

We have seen how conversion of an old and naturally grown RDB is a laborious job that requires lots of work - in particular domain knowledge is necessary to deal with subtle inconsistencies and corner cases.

Lessons we have learned are

  • Finding a good representation is not a one-shot thing, but needs several iterations and lots of domain knowledge;
  • using Cypher is only reasonable for querying, but not for importing huge amounts of data
  • visualization in Chrome or Firefox is very dependent on the version of the browser, the OS, and probably the current moon phase.

There are also many things one could (I might?) do in the future:

Bug database: Including all the bugs reported including the versions in which they have been fixed would greatly improve the usefulness of the database.

Intermediate releases: Including intermediate releases of packages that never made it into a release of Debian, by parsing the UDD table for uploads, would give a better view onto the history of package evolution.

Dependency management: Currently we carry the version and relation type as attributes of the dependency, which points to the unversioned package. Since we already have a tree of versioned packages, we could point into this tree and only carry the relation type.

UDD dashboard: As a real-world challenge one could rewrite the UDD dashboard web interface using the graph database and compare the queries necessary to gather the data from the UDD and the graph database.

Graph theoretic issues: Find dependency cycles, connected components, etc.


This concludes the series of blog entries on representing Debian packages in a graph database.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.8.500.0

RcppArmadillo release 0.8.500.0, originally prepared and uploaded on April 21, has hit CRAN today (after having already been available via the RcppCore drat repo). A corresponding Debian release will be prepared as well. This RcppArmadillo release contains Armadillo release 8.500.0 with a number of rather nice changes (see below for details), and continues our normal bi-monthly CRAN release cycle.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 472 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.500.0 (2018-04-21)

  • Upgraded to Armadillo release 8.500 (Caffeine Raider)

    • faster handling of sparse matrices by kron() and repmat()

    • faster transpose of sparse matrices

    • faster element access in sparse matrices

    • faster row iterators for sparse matrices

    • faster handling of compound expressions by trace()

    • more efficient handling of aliasing in submatrix views

    • expanded normalise() to handle sparse matrices

    • expanded .transform() and .for_each() to handle sparse matrices

    • added reverse() for reversing order of elements

    • added repelem() for replicating elements

    • added roots() for finding the roots of a polynomial

  • Fewer LAPACK compile-time guards are used, new unit tests for underlying features have been added (Keith O'Hara in #211 addressing #207).

  • The configure check for LAPACK features has been updated accordingly (Keith O'Hara in #214 addressing #213).

  • The compile-time check for g++ is now more robust to minimal shell versions (#217 addressing #216).

  • Compiler tests were added for macOS (Keith O'Hara in #219).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaDavid Rowe: FreeDV 700D Part 3

After a one-year hiatus, I am back into FreeDV 700D development, working to get the OFDM modem, LDPC FEC, and interleaver algorithms developed last year into real-time operation. The aim is to get improved performance on HF channels over FreeDV 700C.

I’ve been doing lots of refactoring, algorithm development, fixing bugs, tuning, and building up layers of C code so we can get 700D on the air.

Steve ported the OFDM modem to C – thanks Steve!

I’m building up the software in the form of command line utilities, some notes, examples and specifications in Codec 2 README_ofdm.txt.

Last week I stayed at the shack of Chris, VK5CP, in a quiet rural location at Younghusband on the river Murray. As well as testing my Solar Boat, Mark (VK5QI) helped me test FreeDV 700D. This was the first time the C code software has been tested over a real HF radio channel.

We transmitted signals from Younghusband, and received them at a remote SDR in Sydney (about 1300km away), downloading wave files of the received signal for off-line analysis.

After some tweaking, it worked! The frequency offset was a bit off, so I used the cohpsk_ch utility to shift it within the +/- 25Hz acquisition range of the FreeDV 700D demodulator. I also found some level sensitivity issues with the LDPC decoder. After implementing a form of AGC, the number of bit errors dropped by a factor of 10.

The channel had nasty fading of around 1Hz, here is a video of the “sample #32” spectrum bouncing around. This rapid fading is a huge challenge for modems. Note also the spurious birdie off to the left, and the effect of receiver AGC – the noise level rises during fades.

Here is a spectrogram of the same sample 33. The x axis is time in seconds. It’s like a “waterfall” SDR plot on its side. Note the heavy “barber pole” fading, which corresponds to the fades sweeping across the spectrum in the video above.

Here is the smoothed SNR estimate. The SNR is a moving target for real-world HF channels; here it moves between 2 and 6dB.

FreeDV 700D was designed to work down to 2dB on HF fading channels so pat on the back for me! Hundreds of hours of careful development and testing meant this thing actually worked when it went on air….

Sample 32 is a longer file that contains test frames instead of coded voice. The QPSK scatter diagram is a messy cross, typical of fading channels, as the amplitude of the signal moves in and out:

The LDPC FEC does a good job. Here are plots of the uncoded (raw) bit errors, and the bit errors after LDPC decoding, with the SNR estimates below:

Here are some wave and raw (headerless) audio files. The off air audio is error free, albeit at the low quality of Codec 2 at 700 bits/s. The goal of this work is to get intelligible speech through HF channels at low SNRs. We’ll look at improving the speech quality as a future step.

Still, error free digital voice on a heavily faded HF channel at 2dB SNR is pretty cool.

See below for how to use the last two raw file samples.

  • Sample 33 off air modem signal
  • Sample 33 decoded voice
  • Sample 32 off air test frames raw file
  • Sample 33 off air voice raw file

SNR estimation

After I sampled the files I had a problem – I needed to know the SNR. You see in my development I use simulated channels where I know exactly what the SNR is. I need to compare the performance of the real world, off-air signals to my expected results at a given SNR.

Unfortunately SNR on a fading channel is a moving target. In simulation I measure the total power and noise over the entire run, and the simulated fading channel is consistent. Real world channels jump all over the place as the ionosphere bounces around. Oh well, knowing we are in the ball park is probably good enough. We just need to know if FreeDV 700D is hanging onto real world HF channels at roughly the SNRs it was designed for.

I came up with a way of measuring SNR, and tested it with a range of simulated AWGN (just noise) and fading channels. The fading bandwidth is the speed at which the fading channel evolves. Slow fading channels might change at 0.2Hz, faster channels, like samples #32 and #33, at about 1Hz.

The blue line is the ideal, and on AWGN and slowly fading channels my SNR estimator does OK. It reads a dB low as the fading bandwidth increases to 1Hz. We are interested in the -2 to 4dB SNR range.

Command Lines

With the samples listed above and codec2-dev SVN rev 3465, you can repeat some of my decodes using Octave and C:

octave:42> ofdm_ldpc_rx("32.raw")
EsNo fixed at 3.000000 - need to est from channel
Coded BER: 0.0010 Tbits: 54992 Terrs:    55
Codec PER: 0.0097 Tpkts:  1964 Terrs:    19
Raw BER..: 0.0275 Tbits: 109984 Terrs:  3021

david@penetrator:~/codec2-dev/build_linux/src$ ./ofdm_demod ../../octave/32.raw /dev/null -t --ldpc
Warning EsNo: 3.000000 hard coded
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/32.raw /dev/null --testframes
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/33.raw  - | aplay -f S16

Next Steps

I’m working steadily towards integrating FreeDV 700D into the FreeDV GUI program so anyone can try it. This will be released in May 2018.

Reading Further

Towards FreeDV 700D
FreeDV 700D – First Over The Air Tests
Steve Ports an OFDM modem from Octave to C
Codec 2 README_ofdm.txt

Planet DebianEugene V. Lyubimkin: Copying pictures from Jolla phone to Debian desktop in 30 seconds via jmtpfs/libmtp

Verified on Debian 9, amd64.

Unlike what some scary instructions may suggest (try searching for 'jolla phone linux connect' or 'jolla phone linux file transfer'), just transferring the pictures taken with the Jolla phone camera to your Debian machine can be done more simply and quickly than enabling developer mode on the phone and doing SSH-over-USB. In this case, there happens to be a nice Debian package for the task which requires zero configuration:


  1. connect the Jolla phone via USB, click on 'MTP transfer' in the phone pop-up;
  2. install jmtpfs via your favourite package manager, e.g.:
    $ sudo cupt install jmtpfs
    
    The following packages will be installed:
    
    jmtpfs [0.5-2+b2(stretch)]
    libmtp-common [1.1.13-1(stretch)]
    libmtp-runtime [1.1.13-1(stretch)]
    libmtp9 [1.1.13-1(stretch)]
    
    Need to get 40,1KiB/355KiB of archives. After unpacking 2699KiB will be used.
    Do you want to continue? [y/N/q/a/rc/?] y
    ...
    

  3. mount the data to some user directory, e.g.:
    $ mkdir jolla
    $ jmtpfs jolla
    Device 0 (...) is a Jolla Sailfish (...).
    

  4. the pictures are ready to be processed by any file manager or picture organiser:
    $ cd jolla/
    $ ls
    Mass storage
    

  5. after we're done, the directory can be unmounted:
    $ fusermount -u jolla
    

Planet DebianChris Lamb: Free software activities in April 2018

Here is my monthly update covering what I have been doing in the free software world during April 2018 (previous month).


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

  • Gave the keynote presentation at FLOSSUK's Spring Conference in Edinburgh, Scotland on reproducible builds and how it can prevent individual developers & centralised infrastructure from becoming targets from malicious actors.
  • Presented at foss-north 2018 in Gothenburg, Sweden to speak about diffoscope, our in-depth tool to analyse reproducibility issues in packages and how it can be used in quality-assurance efforts more generally.
  • Filed 10 upstream patches to make their build or output reproducible for the Sphinx documentation generator [...], the Lexicon DNS manager [...], the Dashell C++ stream library [...], Pylint static analysis tool [...], the vcr.py HTTP-interaction testing tool [...], the Click Python command-line parser [...], the ASDF interchange format [...], the strace system call tracer [...], the libdazzle Glib component library [...] and the Corosync Cluster Engine [...].
  • Updated the diffoscope tests to prevent a build failure under file 5.33. (#897099)
  • Dropped support for stripping text from PHP Pear registry file in our strip-nondeterminism tool to remove specific non-deterministic results from a completed build as we can fix this in the toolchain instead. [...]
  • Added an example of how to unmount disorderfs to its manpage; disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues. [...]
  • Migrated our weekly blog reports and related machinery from the deprecated Alioth and Debian-branded service to the more-generic reproducible-builds.org domain as well as made some cosmetic changes to the site itself. [...]
  • In Debian, I:
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Worked on publishing our weekly reports. (#154, #155 & #156).

Debian

My activities as the Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.

I contributed the following patches for Debian:

  • debhelper: Does not accept zero as a valid version number in debian/changelog. (#894895)
  • python-colormap: Please drop override of new-package-should-not-package-python2-module. (#896662)
  • whatthepatch: Please drop override of new-package-should-not-package-python2-module. (#896664)
  • libdazzle: Incorrect Homepage: field. (#896065)
  • python-click: Please correct Homepage: field. (#895277)
  • libmypaint: Incorrect Homepage: field. (#895402)
  • figlet: Add missing space in figlet(6) manpage. (#894541)

Debian LTS


This month I have been paid to work 16¼ hours on the Debian Long Term Support (LTS) project. In that time I did the following:


Uploads

  • sphinx (1.7.2-1) — New upstream release, apply my upstream patch to make the set output reproducible (#895553), don't use Google Fonts to avoid local privacy breaches, fix testsuite to not rely on specific return types, etc.
  • python-django (1:1.11.12-1 & 2.0.4-1) — New upstream bugfix releases.
  • installation-birthday (9) — Also use /var/lib/vim to determine installation date. (#895686)
  • ruby-rjb (1.5.5-2) — Fix FTBFS under Java 9. (#874146)
  • redisearch:
    • 1.0.10-1 — New upstream release.
    • 1.0.10-2 — Drop -mpopcnt from CFLAGS. (#896593)
    • 1.0.10-3 — Use upstream's patch for removing -mpopcnt.
    • 1.1.0-1 — New upstream release.
  • libfiu (0.96-2) — Apply patch from upstream to make the build reproducible. (#894776)
  • redis (5:4.0.9-1) — New upstream release.
  • python-redis (2.10.6-3) — Fix tests when performed against an i386 Redis server. (#896864)

I also performed six sponsored uploads: connman-ui (0~20150623-0.1), playerctl (0.5.0-1), yasnippet-snippets (0.2-1), nose2 (0.7.4-2), pytest-tornado (0.5.0-1) & django-ipware (2.0.2-1).


FTP Team


As a Debian FTP assistant I ACCEPTed 108 packages: appstream, ayatana-indicator-messages, ayatana-indicator-notifications, buildbot, ccextractor, ccnet, cfitsio, cloudcompare, cpprest, cross-toolchain-base, cross-toolchain-base-ports, dde-qt5integration, deepin-deb-installer, deepin-voice-recorder, desktop-autoloader, dh-r, dlib, falkon, flask-babelex, flif, fscrypt, gcc-7-cross, gcc-7-cross-ports, gcc-8-cross, gcc-8-cross-ports, gegl, gimp, golang-github-dropbox-dropbox-sdk-go-unofficial, golang-github-jedisct1-dlog, golang-github-jedisct1-xsecretbox, golang-github-sanity-io-litter, googleplay-api, haskell-bsb-http-chunked, haskell-cmark-gfm, haskell-config-ini, haskell-genvalidity, haskell-genvalidity-property, haskell-hedgehog, haskell-hslua-module-text, haskell-ini, haskell-microstache, haskell-multimap, haskell-onetuple, haskell-only, haskell-parser-combinators, haskell-product-isomorphic, haskell-rate-limit, haskell-singleton-bool, haskell-tasty-expected-failure, haskell-validity, haskell-vector-builder, haskell-wl-pprint-annotated, haskell-word-wrap, haskell-yi-keymap-emacs, haskell-yi-keymap-vim, horizon-eda, iwd, jquery-i18n.js, kitty, kiwisolver, knot-resolver, libbpp-phyl, libdeclare-constraints-simple-perl, libgpg-error, libkf5incidenceeditor, libmypaint, linux, linux-latest, maildir-utils, moment-timezone.js, musescore-general-soundfont, mypaint-brushes, nmap, node-tar, openstack-meta-packages, orcania, osmo-fl2k, osmo-iuh, peek, peruse, protobuf, python-async-generator, python-cerberus, python-datrie, python-h11, python-libevdev, python-panwid, python-plaster, python-plaster-pastedeploy, python-readme-renderer, qpid-proton, razercfg, rebound, ruby-iso8601, ruby-tomlrb, ruby-xmlrpc, rustc, sent, shoogle, snapcast, texext, texlive-bin, u-boot, virtualbox-guest-additions-iso, vnlog, vtk7, xorgxrdp & zodbpickle.

I additionally filed 10 RC bugs against packages that had incomplete debian/copyright files against: appstream, cfitsio, dlib, falkon, horizon-eda, maildir-utils, osmo-fl2k, peruse, python-h11 & qpid-proton.

Planet DebianJamie McClelland: The pain of installing custom ROMs on Android phones

A while back I bought a Nexus 5x. During a three-day ordeal I finally got Omnirom installed - with full disk encryption, root access and some stitched together fake Google Play code that allowed me to run Signal without actually letting Google into my computer.

A short while later, Open Whisper Systems released a version of Signal that uses Web Sockets when Google Play services is not installed (and allows for installation via a web page without the need for the Google Play store). Dang. Should have waited.

Now, post Meltdown/Spectre, I worked up the courage to go through this process again. In the comments of my Omnirom post, I received a few suggestions about not really needing root. Hm - why didn't I think of that? Who needs root anyway? Combining root with full disk encryption was the real pain point in my previous install, so perhaps I can make things much easier. Also, not needing any of the fake Google software would be a definite plus.

This time around I decided to go with LineageOS since it seems to be the most mainstream of the custom ROMs. I found perfectly reasonable sounding instructions.

My first mistake was skipping the initial steps (since I already had TWRP recovery installed I didn't think I needed to follow them). I went straight to the step of installing LineageOS (including wiping the Cache, System and Data partitions). Unfortunately, when it came time to flash the ROM, I got the error that the ROM is for bullhead, but the hardware I was using is "" (yes, empty string there).

After some Internet searching I learned that the problem is an out-dated version of TWRP. Great, let's upgrade TWRP. I went back and started from the beginning of the LineageOS install instructions. But when it came to the fastboot flashing unlock step, I got the message explaining that my phone did not have OEM unlocking enabled. There are plenty of posts on the Internet demonstrating how to boot into your phone and flip the switch to allow OEM unlocking from the Developer section of your System tools. Great, except that I could no longer boot into my phone thanks to the various deletion steps I had already taken. Dang. Did I just brick my phone?

I started thinking through how to buy a new phone.

Then, I did more searching on the Internet and learned that I can flash a new version of TWRP the same way you flash anything else. Phew! New TWRP flashed and new LineageOS ROM installed! And, my first question: what is the point of locking your phone if you can still flash recovery images and even new ROMs?

However, when I booted, I got an "OS vendor mismatch error". WTF. Ok, now my phone is really bricked.

Fortunately, someone not only identified this problem but contributed an exceptionally well-written step-by-step set of directions to fix the problem. The post, in combination with some comments on it, explains that you have to download the Google firmware that corresponds to the error code in your message (in case that post ever goes away: unzip the file you download, then cd into the directory created and unzip the file that starts with image-bullhead. Then, minimally flash the vendor.img to the vendor partition in TWRP).

In other words, the LineageOS ROM depends on having the right Google firmware installed.

All of these steps were possible without unlocking the phone. However, when I tried to update the Radio and Bootloader using:

fastboot flash bootloader bootloader-bullhead-bhz11l.img
fastboot flash radio radio-bullhead-m8994f-2.6.37.2.21.img

It failed, so I booted into my now working install, enabled OEM unlock, unlocked the phone (which wiped everything so I had to start over) and then it all worked.

And, kudos to LineageOS for the simple setup process and ease of getting full disk encryption.

Now that I'm done, I am asking myself a few questions:

I have my own custom ROM and I am not trusting Google with everything anymore. Hooray! So ... who am I trusting? This question I know the answer to (I think...):

  • Team Win, which provides the TWRP recovery software, has total control of everything. Geez, I hope these people aren't assholes.
  • Google, since I blindly install their firmware vendor image, bootloader image and radio image. I guess they still can control my phone.
  • Fdroid, I hope they vet their packages, because I blindly install them from their default archives.
  • Guardian Project, since I enable their fdroid repo too - but hey at least I have met a few of these people and they are May First/People Link members.
  • Firefox, I download firefox directly from Mozilla since fdroid doesn't seem to really support them.
  • Signal, since I download that APK directly as well.
  • And the https certificate system (which pretty much means Let's Encrypt nowadays), since nearly everything depends on the integrity of the packages I am downloading over https.

But I'm still not sure about one more question:

Should I lock my phone? Given what I just accomplished without locking it, it seems that locking the phone could make my life harder the next time I upgrade and doesn't really stop someone else from replacing key components of my operating system without me knowing it.

CryptogramSecurity Vulnerabilities in VingCard Electronic Locks

Researchers have disclosed a massive vulnerability in the VingCard electronic lock system, used in hotel rooms around the world:

With a $300 Proxmark RFID card reading and writing tool, any expired keycard pulled from the trash of a target hotel, and a set of cryptographic tricks developed over close to 15 years of on-and-off analysis of the codes Vingcard electronically writes to its keycards, they found a method to vastly narrow down a hotel's possible master key code. They can use that handheld Proxmark device to cycle through all the remaining possible codes on any lock at the hotel, identify the correct one in about 20 tries, and then write that master code to a card that gives the hacker free reign to roam any room in the building. The whole process takes about a minute.

[...]

The two researchers say that their attack works only on Vingcard's previous-generation Vision locks, not the company's newer Visionline product. But they estimate that it nonetheless affects 140,000 hotels in more than 160 countries around the world; the researchers say that Vingcard's Swedish parent company, Assa Abloy, admitted to them that the problem affects millions of locks in total. When WIRED reached out to Assa Abloy, however, the company put the total number of vulnerable locks somewhat lower, between 500,000 and a million.

Patching is a nightmare. It requires updating the firmware on every lock individually.

And the researchers speculate whether or not others knew of this hack:

The F-Secure researchers admit they don't know if their Vinguard attack has occurred in the real world. But the American firm LSI, which trains law enforcement agencies in bypassing locks, advertises Vingcard's products among those it promises to teach students to unlock. And the F-Secure researchers point to a 2010 assassination of a Palestinian Hamas official in a Dubai hotel, widely believed to have been carried out by the Israeli intelligence agency Mossad. The assassins in that case seemingly used a vulnerability in Vingcard locks to enter their target's room, albeit one that required re-programming the lock. "Most probably Mossad has a capability to do something like this," Tuominen says.

Slashdot post.

Worse Than FailureCodeSOD: Philegex

Last week, I was doing some graphics programming without a graphics card. It was low resolution, so I went ahead and re-implemented a few key methods from the OpenGL Shading Language in a fashion compatible with NumPy arrays. Lucky for me, I was able to draw on many years of experience, I understood both technologies, and they both have excellent documentation which made it easy. After dozens of lines of code, I was able to whip up some pretty flexible image generator functions. I knew the tools I needed, I understood how they worked, and while I was reinventing a wheel, I had a very specific reason.
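
To give a flavor of what I mean (a throwaway sketch, not the actual functions I wrote), GLSL built-ins like mix() and clamp() map almost one-to-one onto NumPy:

  # Throwaway sketch: NumPy stand-ins for two GLSL built-ins.
  import numpy as np

  def mix(x, y, a):
      """GLSL mix(): linear interpolation between x and y by factor a."""
      return x * (1.0 - a) + y * a

  def clamp(x, lo, hi):
      """GLSL clamp(): constrain x to the range [lo, hi]."""
      return np.minimum(np.maximum(x, lo), hi)

  # e.g. fade a horizontal gradient a quarter of the way toward white
  gradient = np.linspace(0.0, 1.0, 256)
  faded = clamp(mix(gradient, 1.0, 0.25), 0.0, 1.0)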

Philemon Eichin sends us some code from a point in his career where none of these things were true.

Philemon was building a changelog editor. As such, he wanted an easy, flexible way to identify patterns in the text. Philemon knew that there was something that could do that job, but he didn’t know what it was called or how it was supposed to work. So, like all good programmers, Philemon went ahead and coded up what he needed- he invented his own regular expression language, and built his own parser for it.

Thus was born Philegex. Philemon knew that regexes involved slashes, so in his language you needed to put a slash in front of every character you wanted to match exactly. He knew that it involved question marks, so he used the question mark as a wildcard which could match any character. That left the '|' character to mark a token as optional.

So, for example: /P/H/I/L/E/G/E/X|??? would match “PHILEGEX!!!” or “PHILEGEWTF”. A date could be described as: nnnn/.nn/.nn. (YYYY.MM.DD or YYYY.DD.MM)
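
For contrast, here is roughly what those two patterns look like in an ordinary regex engine (standard regular expressions, not Philegex):

  # The same two patterns in ordinary regular expressions, for contrast.
  import re

  # /P/H/I/L/E/G/E/X|???  ->  literal "PHILEGE", optional "X", any three characters
  philegex = re.compile(r"PHILEGEX?...")
  assert philegex.match("PHILEGEX!!!")
  assert philegex.match("PHILEGEWTF")

  # nnnn/.nn/.nn  ->  four digits, a dot, two digits, a dot, two digits
  date = re.compile(r"\d{4}\.\d{2}\.\d{2}")
  assert date.match("2018.05.02")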

Living on his own isolated island without access to the Internet to attempt to google up “How to match patterns in text”, Philemon invented his own language for describing parts of a regular expression. This will be useful to interpret the code below.

Philegex     Regex
Maskable     Matches
p1           Pattern / Regex
Block(s)     Token(s)
CT           CharType
SplitLine    ParseRegex
CC           currentChar
auf_zu       openParenthesis
Chars        CharClassification

With the preamble out of the way, enjoy Philemon’s approach to regular expressions, implemented elegantly in VB.Net.

Public Class Textmarker
    Const Datum As String = "nn/.nn/.nnnn"

    Private Structure Blocks
        Dim Type As Chars
        Dim Multi As Boolean
        Dim Mode As Char_Mode
        Dim Subblocks() As Blocks
        Dim passed As Boolean
        Dim _Optional As Boolean
    End Structure



    Public Shared Function IsMaskable(p1 As String, Content As String) As Boolean
        Dim ID As Integer = 0
        Dim p2 As Chars
        Dim _Blocks() As Blocks = SplitLine(p1)
        For i As Integer = 0 To Content.Length - 1
            p2 = GetCT(Content(i))
START_CASE:
            '#If CONFIG = "Debug" Then
            '            If ID = 2 Then
            '                Stop
            '            End If

            '#End If
            If ID > _Blocks.Length - 1 Then
                Return False
            End If
            Select Case _Blocks(ID).Mode
                Case Char_Mode._Char
                    If p2.Char_V = _Blocks(ID).Type.Char_V Then
                        _Blocks(ID).passed = True
                        If Not _Blocks(ID).Multi = True Then ID += 1
                        Exit Select
                    Else
                        If _Blocks(ID).passed = True And _Blocks(ID).Multi = True Then
                            ID += 1
                            GoTo START_CASE
                        Else
                            If Not _Blocks(ID)._Optional Then Return False

                        End If
                    End If
                Case Char_Mode.Type
                    If _Blocks(ID).Type.Type = Chartypes.any Then
                        _Blocks(ID).passed = True
                        If Not _Blocks(ID).Multi = True Then ID += 1
                        Exit Select
                    Else


                        If p2.Type = _Blocks(ID).Type.Type Then
                            _Blocks(ID).passed = True
                            If Not _Blocks(ID).Multi = True Then ID += 1
                            Exit Select
                        Else
                            If _Blocks(ID).passed = True And _Blocks(ID).Multi = True Then
                                ID += 1
                                GoTo START_CASE
                            Else
                                If _Blocks(ID)._Optional Then
                                    ID += 1
                                    _Blocks(ID - 1).passed = True
                                Else
                                    Return False

                                End If

                            End If
                        End If

                    End If


            End Select


        Next
        For i = ID To _Blocks.Length - 1
            If _Blocks(ID)._Optional = True Then
                _Blocks(ID).passed = True
            Else
                Exit For
            End If
        Next
        If _Blocks(_Blocks.Length - 1).passed Then
            Return True
        Else
            Return False
        End If

    End Function

    Private Shared Function GetCT(Char_ As String) As Chars

        If "0123456789".Contains(Char_) Then Return New Chars(Char_, 2)
        If "qwertzuiopüasdfghjklöäyxcvbnmß".Contains((Char.ToLower(Char_))) Then Return New Chars(Char_, 1)
        Return New Chars(Char_, 4)
    End Function


    Private Shared Function SplitLine(ByVal Line As String) As Blocks()
        Dim ret(0) As Blocks
        Dim retID As Integer = -1
        Dim CC As Char
        For i = 0 To Line.Length - 1
            CC = Line(i)
            Select Case CC
                Case "("
                    ReDim Preserve ret(retID + 1)
                    retID += 1
                    Dim ii As Integer = i + 1
                    Dim auf_zu As Integer = 1
                    Do
                        Select Case Line(ii)
                            Case "("
                                auf_zu += 1
                            Case ")"
                                auf_zu -= 1
                            Case "/"
                                ii += 1
                        End Select
                        ii += 1
                    Loop Until auf_zu = 0
                    ret(retID).Subblocks = SplitLine(Line.Substring(i + 1, ii - 1))
                    ret(retID).Mode = Char_Mode.subitems
                    ret(retID).passed = False

                Case "*"
                    ret(retID).Multi = True
                    ret(retID).passed = False
                Case "|"
                    ret(retID)._Optional = True

                Case "/"
                    ReDim Preserve ret(retID + 1)
                    retID += 1
                    ret(retID).Mode = Char_Mode._Char
                    ret(retID).Type = New Chars(Line(i + 1), Chartypes.other)
                    i += 1
                    ret(retID).passed = False

                Case Else

                    ReDim Preserve ret(retID + 1)
                    retID += 1
                    ret(retID).Mode = Char_Mode.Type
                    ret(retID).Type = New Chars(Line(i), TocType(CC))
                    ret(retID).passed = False
            End Select
        Next
        Return ret
    End Function
    Private Shared Function TocType(p1 As Char) As Chartypes
        Select Case p1
            Case "c"
                Return Chartypes._Char
            Case "n"
                Return Chartypes.Number
            Case "?"
                Return Chartypes.any
            Case Else
                Return Chartypes.other
        End Select
    End Function

    Public Enum Char_Mode As Integer
        Type = 1
        _Char = 2
        subitems = 3
    End Enum
    Public Enum Chartypes As Integer
        _Char = 1
        Number = 2
        other = 4
        any
    End Enum
    Structure Chars
        Dim Char_V As Char
        Dim Type As Chartypes
        Sub New(Char_ As Char, typ As Chartypes)
            Char_V = Char_
            Type = typ
        End Sub
    End Structure
End Class

I’ll say this: building a finite state machine, which is what the core of a regex engine is, is perhaps the only case where using a GoTo could be considered acceptable. So this code has that going for it. Philemon was kind enough to share this code with us, so we know he knows it’s bad.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet Linux AustraliaOpenSTEM: Be Gonski Ready!

Gonski is in the news again with the release of the Gonski 2.0 report. This is most likely to impact schools and teachers in a range of ways, from funding to curriculum. Here at OpenSTEM we can help you to be ahead of the game by using our materials, which are already Gonski-ready! The […]

Planet DebianRuss Allbery: Review: Full of Briars

Review: Full of Briars, by Seanan McGuire

Series: October Daye #7.1
Publisher: DAW
Copyright: August 2016
ISBN: 0-7564-1222-6
Format: Kindle
Pages: 44

"Full of Briars" is a novella set in the October Daye series, between Chimes at Midnight and The Winter Long, although published four years later. It was published independently, so it gets a full review here, but it's $2 on Amazon and primarily fills in some background for series readers.

It's also extremely hard to review without spoilers, since it deals with the direct consequences of a major plot revelation at the end of Chimes at Midnight, which would spoil a chunk of that story and some of the series leading up to it. So I'm going to have to be horribly vague and fairly brief.

"Full of Briars" is, unlike most of the series and all of the novels, told from Quentin's perspective rather than Toby's. The vague thing that I can say about the plot is that this is the story of Toby finally meeting Quentin's parents. Since Quentin is supposed to be in a blind fosterage and his parentage kept secret, this is a bit of a problem. It might be enough of a problem to end the fosterage and call him home.

That is very much not something Quentin wants. Or Toby, or any of the rest of the crew Toby has gathered around her in the course of the series.

The rest of the story is mostly talking, about that decision and its aftermath and then some other developments in Quentin's life. It lacks a bit of the drama of the novels of the series, but one of the reasons why I'm still reading this series is that I like these characters and their dialogue. They're all very much themselves here: Toby being blunt, May being random, and Quentin being honorable and determined and young. Tybalt is particularly good here, doing his own version of Toby's tendency to speak truth to power and strongly asserting the independence of the Court of Cats.

The ending didn't have much impact for me, and I don't think the scene worked quite as well as McGuire intended, but it's another bit of background that's useful for series readers to be aware of.

This is missable, but it's cheap enough and fast enough to read that I wouldn't miss it if you're otherwise reading the series. The core plot outcome is predictable, as is much of what happens in the process. But I liked Quentin's parents, I liked how they interact with McGuire's regular cast, and it's nice to know exactly what happened in this interlude.

Followed by The Winter Long (and also see the Toby Short Stories page for a list of all the short fiction in this universe and where it falls in series order).

Rating: 6 out of 10

,

Harald WelteOsmoDevCon 2018 retrospective

One week ago, the annual Osmocom developer meeting (OsmoDevCon 2018) concluded after four long and intense days with old and new friends (the schedule can be seen here).

It was already the 7th incarnation of OsmoDevCon, and I have to say that it's really great to see the core Osmocom community come together every year, to share their work and experience with their fellow hackers.

Ever since the beginning we've had the tradition that we look beyond our own projects. In 2012, David Burgess was presenting on OpenBTS. In 2016, Ismael Gomez presented about srsUE + srsLTE, and this year we've had the pleasure of having Sukchan Kim coming all the way from Korea to talk to us about his nextepc project (a FOSS implementation of the Evolved Packet Core, the 4G core network).

Another regular "entertainment" part of recent years has been the field trip reports to various [former] satellite/SIGINT/... sites by Dimitri Stolnikov.

All in all, the event has become at least as much about the people as about the technology. It's a community of like-minded people who in part are still working on joint projects, but often work independently and scratch their own itch - whether open source mobile comms related or not.

After some criticism last year, the so-called "unstructured" part of OsmoDevCon has received more time again this year, allowing for exchange among the participants irrespective of any formal / scheduled talk or discussion topic.

In 2018, with the help of c3voc, for the first time ever, we've recorded most of the presentations on video. The results are still in the process of being cut, but are starting to appear at https://media.ccc.de/c/osmodevcon2018

If you want to join a future OsmoDevCon in person: Make sure you start contributing to any of the many Osmocom member projects now to become eligible. We need you!

Now the sad part is that it will take an entire year until we reconvene. May the Osmocom Developer community live long and prosper. I want to meet you guys for many more years at OsmoDevCon!

There is of course the user-oriented OsmoCon 2018 in October, but that's a much larger event with a different audience.

Nevertheless, I'm very much looking forward to that, too.

The OsmoCon 2018 Call for Participation is still running. Please consider submitting talks if you have anything open source mobile communications related to share!

Sociological ImagesBouncers and Bias

Originally Posted at TSP Discoveries

Whether we wear stilettos or flats, jeans or dress clothes, our clothing can allow or deny us access to certain social spaces, like a nightclub. Yet, institutional dress codes that dictate who can and cannot wear certain items of clothing target some marginalized communities more than others. For example, recent reports of bouncers denying Black patrons entry to nightclubs prompted Reuben A Buford May and Pat Rubio Goldsmith to test whether urban nightclubs in Texas deny entrance to Black and Latino men through discriminatory dress code policies.

Photo Credit: Bruce Turner, Flickr CC

For the study, recently published in Sociology of Race and Ethnicity, the authors recruited six men between the ages of 21 and 23. They selected three pairs of men by race — White, Black, and Latino — to attend 53 urban nightclubs in Dallas, Houston, and Austin. Each pair shared similar racial, socioeconomic, and physical characteristics. One individual from each pair dressed as a “conformist,” wearing Ralph Lauren polos, casual shoes, and nice jeans that adhered to the club’s dress code. The other individual dressed in stereotypically urban dress, wearing “sneakers, blue jean pants, colored T-shirt, hoodie, and a long necklace with a medallion.” The authors categorized an interaction as discrimination if a bouncer denied a patron entrance based on his dress or if the bouncer enforced particular dress code rules, such as telling a patron to tuck in their necklace. Each pair attended the same nightclub at peak hours three to ten minutes apart. The researchers exchanged text messages with each pair to document any denials or accommodations.

Black men were denied entrance into nightclubs 11.3 percent of the time (six times), while White and Latino men were both denied entry 5.7 percent of the time (three times). Bouncers claimed the Black patrons were denied entry because of their clothing, despite allowing similarly dressed White and Latino men to enter. Even when bouncers did not deny entrance, they demanded that patrons tuck in their necklaces to accommodate nightclub policy. This occurred two times for Black men, three times for Latino men, and one time for White men. Overall, Black men encountered more discriminatory experiences from nightclub bouncers, highlighting how institutions continue to police Black bodies through seemingly race-neutral rules and regulations.

Amber Joy is a PhD student in sociology at the University of Minnesota. Her current research interests include punishment, sexual violence and the intersections of race, gender, age, and sexuality. Her work examines how state institutions construct youth victimization.

Planet Linux AustraliaDavid Rowe: Solar Boat

Two years ago when I bought my Hartley TS16 sail boat I dreamed of converting it to solar power. In January I installed a Torqueedo electric outboard and a 24V, 100AH Lithium battery pack. That’s working really well. The next step was to work out a way to mount some surplus 200W solar panels on the boat. The idea is to (temporarily) detach the mast, and use the boat on the river Murray, a major river that passes within 100km of where I live in Adelaide, South Australia.

Over the last few weeks I worked with my friend Gary (VK5FGRY) to mount solar panels on the TS16. Gary designed and fabricated some legs from 40mm square aluminium:

With a matching rubber foot on each leg, the panels sit firmly on the gel coat of the boat, and are held down by ropes or octopus straps.

The panels’ maximum power point is at 28.5V (and 7.5A), which is close to the battery pack under charge (3.3*8 = 26.4V), so I decided to try a direct DC connection – no inverter or charger. I ran some tests in the back yard: each panel was delivering about 4A into the battery pack, and two in parallel delivered about 8A. I didn’t know solar panels could be connected in parallel, but happily this means I can keep my direct DC connection. Horizontal panels cost a few amps – a good example of why solar panels are usually angled at the sun. However the azimuth of the boat will always be changing, so horizontal is the only choice. The panels are very sensitive to shadowing; a hand placed on a panel, or a small shadow, is enough to drop the current to 0A. OK, so now I had a figure for panel output – about 4A from each panel.

This didn’t look promising. Based on my sea voyages with the Torqueedo, I estimated I would need 800W (about 30A) to maintain my target houseboat speed of 4 knots (7 km/hr); that’s 8 panels, which won’t fit on my boat! However the current draw on the river might be different without tides and waves, and I wasn’t sure exactly how many AH I would get over a day from the sun. Would trees on the river bank shadow the panels?

So it was off to Younghusband on the Murray, where our friend Chris (VK5CP) was hosting a bunch of Ham Radio guys for an extended Anzac day/holiday weekend. It’s Autumn here, with generally sunny days of about 23C. The sun is up from 6:30am to 6pm.

Turns out that even with two panels – the solar boat was really practical! Over three days we made three trips of 2 hours each, at speeds of 3 to 4 knots, using only the panels for charging. Each day I took friends out, and they really loved it – so quiet and peaceful, and the river scenery is really nice.

After an afternoon cruise I would park the boat on the South side of the river to catch the morning sun, which in Autumn appears to the North here in Australia. I measured the panel current as 2A at 7am, 6A at 9am, 9A at 10am, and much to my surprise the pack was charged by 11am! In fact I had to disconnect the panels as the cell voltage was pushing over 4V.

On a typical run upriver we measured 700W = 4kt, 300W = 3.1kt, 150W = 2.5kt, and 8A into the panels in full sun. Panel current dropped to 2A with cloud which was a nasty surprise. We experienced no shadowing issues from trees. The best current we saw at about noon was 10A. We could boost the current by 2A by putting three guys on one side of the boat and tipping the entire boat (and solar panels) towards the sun!

Even partial input from solar can have a big impact. Let’s say at 4 knots (30A) I can drive for 2 hours using 60% of my 100AH pack. If I back off the speed a little, so I’m drawing 20A, then 10A from the panels will extend my driving time to 6 hours.
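
As a back-of-the-envelope sketch of that arithmetic (in Go, purely illustrative, using only the figures quoted above):

package main

import "fmt"

// cruiseHours estimates how long the pack lasts for a given motor draw and
// solar input, using the rough figures from the post (100AH pack, 60% usable).
func cruiseHours(packAH, usableFraction, motorAmps, solarAmps float64) float64 {
    net := motorAmps - solarAmps // net current drawn from the battery pack
    if net <= 0 {
        return -1 // panels cover the whole load; daylight is the limit, not the pack
    }
    return packAH * usableFraction / net
}

func main() {
    fmt.Printf("30A motor, no sun:  %.1f h\n", cruiseHours(100, 0.6, 30, 0))  // ~2 h
    fmt.Printf("20A motor, 10A sun: %.1f h\n", cruiseHours(100, 0.6, 20, 10)) // ~6 h
}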

I slept on the boat, and one night I found a paddle steamer (the Murray Princess) parked across the river from me, all lit up with fairy lights:

On our final adventure, my friend Darin (VK5IX) and I were entering Lake Carlet, when suddenly the prop hit something very hard, “crack crack crack”. My poor prop shaft was bent and my propeller is wobbling from side to side:

We gently e-motored back and actually recorded our best results – 3 knots on 300W, 10A from the panels, 10A to the motor.

With 4 panels I would have a very practical solar boat, capable of 4-6 hours cruising a day just on solar power. The 2 extra panels could be mounted as a canopy over the rear of the boat. I have an idea about an extended solar adventure of several days, for example 150km from Younghusband to Goolwa.

Reading Further

Engage the Silent Drive
Lithium Cell Amp Hour Tester and Electric Sailing

,

Planet DebianSteinar H. Gunderson: Nageru 1.7.2 released

I've released version 1.7.2 of Nageru, my free video mixer.

The main new feature this time round is the ability to use sound from video inputs. This was originally intended for IP camera inputs from Android, but I suppose you could also use it for playout if you're brave. :-) A/V sync keeps being a hard problem (it's easy to make something that feels reasonable and works 99% of the time, but fails miserably the last percent), so I don't recommend running your cameras over IP if you can avoid it, but sometimes lugging SDI around really is too inconvenient.

Apart from that, the git log this time is dominated by a lot of small tweaks and bugfixes; things are getting increasingly refined as we get more experience with the larger setups. I wondered for a bit whether I should give it a version bump to 1.8.0, but in the end, I didn't consider IP inputs (nor the support for assisting Cubemap with HLS output) important enough. So 1.7.2 it is.

The full changelog (with lots of things hidden under the last point :-) ) follows. As usual, new packages are also on their way up to Debian, although unfortunately, my CEF-enabling patch to Chromium still hasn't received any love, so if you want CEF support, you'll have to compile it yourself.

Nageru 1.7.2, April 28th, 2018

  - Several improvements to video (FFmpeg) inputs: You can now use
    them as audio sources, you can right-click on video channels
    to change URL/filename on-the-fly, themes can ask for forced
    disconnection (useful for network sources that are hanging),
    and various other improvements. Be aware that the audio support
    may still be somewhat rough, as A/V sync of arbitrary video
    playout is a hard problem.

  - The included themes have been fixed to properly make the returned
    chain preparation functions independent of global state (e.g. if
    the white balance for a channel was changed before the frame was
    actually rendered). If you are using a custom theme, you may want
    to apply similar fixes to it.

  - In Metacube stream output, mark each keyframe with a pts metadata
    block. This allows Cubemap 1.4.0 or newer to serve fMP4 fragments
    for HLS from Nageru's output, without any further remuxing or
    transcoding.

  - If needed, Nageru will now automatically try to autodetect a
    usable --va-display parameter by probing all DRM nodes for H.264
    encoders. This removes the need to set --va-display in almost all
    cases, and also removes the dependency on libpci.

  - For GPUs that support querying available memory (in practice only
    NVIDIA GPUs at the current time), expose the amount of used/total
    GPU memory both on standard output and in the Prometheus metrics
    (as well as included Grafana dashboard).

  - The Grafana dashboard now supports heatmaps for the chosen x264
    speedcontrol preset (requires Grafana 5.1 or newer). (There used to
    be a heatmap earlier, but it was all broken.)

  - Various bugfixes.

Planet Linux AustraliaJulien Goodwin: PoE termination board

For my next big project I'm planning on making it run using power over ethernet. Back in March I designed a quick circuit using the TI TPS2376-H PoE termination chip, and an LMR16020 switching regulator to drop the ~48v coming in down to 5v. There's also a second stage low-noise linear regulator (ST LDL1117S33R) to further drop it down to 3.3v, but as it turns out the main chip I'm using does its own 5->3.3v conversion already.

Because I was lazy and the pricing was reasonable, I got these boards manufactured by pcb.ng, who I'd used for the USB-C termination boards I did a while back.

Here's the board running a Raspberry Pi 3B+; as it turns out I got lucky, and my board is set up for the same input as the 3B+ supplies.



One really big warning: this is a non-isolated supply, which, in general, is a bad idea for PoE. For my specific use case there'll be no exposed connectors or metal, so this should be safe, but if you want to use PoE in general I'd suggest using some of the isolated convertors that are available with integrated PoE termination.

For this series I'm also going to try to make some notes on the mistakes I've made with these boards to help others. For this board:
  • I failed to add any test pins; given this was the first try I really should have. Being able to inject power just before the switching convertor was helpful while debugging, but I had to solder wires to the input cap to do that.
  • Similarly, I should have had a 5v output pin; for now I've just been shorting the two diodes I had near the output, which were intended to let me switch input power between two feeds.
  • The last, and the only actual problem with the circuit was that when selecting which exact parts to use I optimised by choosing the same diode for both input protection & switching, however this was a mistake, as the switcher needed a Schottky diode, and one with better ratings in other ways than the input diode. With the incorrect diode the board actually worked fine under low loads, but would quickly go into thermal shutdown if asked to supply more than about 1W. With the diode swapped to a correctly rated one it now supplies 10W just fine.
  • While debugging the previous I also noticed that the thermal pads on both main chips weren't well connected through. It seems the combination of via-in-thermal-pad (even tented), along with Kicad's normal reduction in paste in those large pads, plus my manufacturer's use of a fairly thin application of paste all contributed to this. Next time I'll probably avoid via-in-pad.


Coming soon will be a post about the GPS board, but I'm still testing bits of that board out, plus waiting for some missing parts (somehow not only did I fail to order 10k resistors, I didn't already have some in stock).

Planet Linux AustraliaChris Smart: Fedora on ODROID-HC1 mini NAS (ARMv7)

EDIT: I am having a problem where the Fedora kernel does not always detect the disk drive (whether cold, warm or hotplugged). I’ve built an upstream 4.16 kernel and it works perfectly every time. It doesn’t seem to be uas related; disabling that on the usb-storage module doesn’t make any difference. I’m looking into it…

Hardkernel is a Korean company that makes various embedded ARM based systems, which it calls ODROID.

One of their products is the ODROID-HC1, a mini NAS designed to take a single 2.5″ SATA drive (HC stands for “Home Cloud”) which comes with 2GB RAM and a Gigabit Ethernet port. There is also a 3.5″ model called the HC2. Both of these are based on the ODROID-XU4, which itself is based on the previous iteration ODROID-XU3. All of these are based on the Samsung Exynos5422 SOC and should work with the following steps.

The Exynos SOC needs proprietary first stage bootloaders which are embedded in the first 1.4MB or so at the beginning of the SD card in order to load U-Boot. As these binary blobs are not re-distributable, Fedora cannot support these devices out of the box; however, all the other bits are available, including the kernel, device tree and U-Boot. So, we just need to piece it all together and the result is a stock Fedora system!

To do this you’ll need the ODROID device, a power supply (5V/4A for HC1, 12V/2A for HC2), one of their UART adapters, an SD card (UHS-I) and probably a hard drive if you want to use it as a NAS (you may also want a battery for the RTC and a case).

ODROID-HC1 with UART, RTC battery, SD card and 2.5″ drive.

Note that the default Fedora 27 ARM image does not support the Realtek RTL8153 Ethernet adapter out of the box (it does after a kernel upgrade) so if you don’t have a USB Ethernet dongle handy we’ll download the kernel packages on our host, save them to the SD card and install them on first boot. The Fedora 28 image works out of the box, so if you’re installing 28 you can skip that step.

Download the Fedora Minimal ARM server image and save it in your home dir.

Install the Fedora ARM installer and U-Boot bootloader files for the device on your host PC.

sudo dnf install fedora-arm-installer uboot-images-armv7

Insert your SD card into your computer and note the device (mine is /dev/mmcblk0) using dmesg or df commands. Once you know that, open a terminal and let’s write the Fedora image to the SD card! Note that we are using none as the target because it’s not a supported board and we will configure the bootloader manually.

sudo fedora-arm-image-installer \
--target=none \
--image=Fedora-Minimal-armhfp-27-1.6-sda.raw.xz \
--resizefs \
--norootpass \
--media=/dev/mmcblk0

First things first, we need to enable the serial console and turn off cpuidle else it won’t boot. We do this by mounting the boot partition on the SD card and modifying the extlinux bootloader configuration.

sudo mount /dev/mmcblk0p2 /mnt
 
sudo sed -i "s|append|& cpuidle.off=1 \
console=tty1 console=ttySAC2,115200n8|" \
/mnt/extlinux/extlinux.conf

As mentioned, the kernel that comes with Fedora 27 image doesn’t support the Ethernet adapter, so if you don’t have a spare USB Ethernet dongle, let’s download the updates now. If you’re using Fedora 28 this is not necessary.

cd /mnt
 
sudo wget http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-4.16.3-200.fc27.armv7hl.rpm \
http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-core-4.16.3-200.fc27.armv7hl.rpm \
http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-modules-4.16.3-200.fc27.armv7hl.rpm
 
cd ~/

Unmount the boot partition.

sudo umount /mnt

Now, we can embed U-Boot and the required bootloaders into the SD card. To do this we need to download the files from Hardkernel along with their script which writes the blobs (note that we are downloading the files for the XU4, not HC1, as they are compatible). We will tell the script to use the U-Boot image we installed earlier, this way we are using Fedora’s U-Boot not the one from Hardkernel.

Download the required files from Hardkernel.

mkdir hardkernel ; cd hardkernel
 
wget https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/sd_fusing.sh \
https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/bl1.bin.hardkernel \
https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/bl2.bin.hardkernel.720k_uboot \
https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/tzsw.bin.hardkernel
 
chmod a+x sd_fusing.sh

Copy the Fedora U-Boot files into the local dir.

cp /usr/share/uboot/odroid-xu3/u-boot.bin .

Finally, run the fusing script to embed the files onto the SD card, passing in the device for your SD card.
sudo ./sd_fusing.sh /dev/mmcblk0

That’s it! Remove your SD card and insert it into your ODROID, then plug the UART adapter into a USB port on your computer and connect to it with screen (check dmesg for the port number, generally ttyUSB0).

sudo screen /dev/ttyUSB0

Now power on your ODROID. If all goes well you should see the SOC initialise, load Fedora’s U-Boot and boot Fedora to the welcome setup screen. Complete this and then log in as root or as the user you have just set up.

Welcome configuration screen for Fedora ARM.

If you’re running the Fedora 27 image, install the kernel updates, remove the RPMs and reboot the device (skip this if you’re running Fedora 28).
sudo dnf install --disablerepo=* /boot/*rpm
 
sudo rm /boot/*rpm
 
sudo reboot

Fedora login over serial connection.

Once you have rebooted, the Ethernet adapter should work and you can do your regular updates

sudo dnf update

You can find your SATA drive at /dev/sda where you should be able to partition, format, mount it, share it and well, do whatever you want with the box.

You may wish to take note of the IP address and/or configure static networking so that you can SSH in once you unplug the UART.

Enjoy your native Fedora embedded ARM Mini NAS 🙂

Planet DebianMichal Čihař: What's being cooked for Weblate 3.0

The next release on the Weblate roadmap is called 3.0 and will bring some important changes. Some of these are already present in the Git repository and deployed on Hosted Weblate, but more will follow.

Component discovery

Component discovery is a useful feature if you have several translation components in one repository. Previously the import_project management command was the only tool to help you, but it had to be executed manually on any change. Now you can use the Component discovery addon, which does a quite similar thing but is triggered on VCS update, so it can be used to follow the structure of your project without any manual interaction. This feature is already available in Git and on Hosted Weblate, though you have to ask for initial setup there.

Code cleanups

Over the years (the first Weblate release was more than six years ago) the code structure has drifted far from optimal. There are several code cleanups scheduled, and some of them are already present in the Git repository. This will make Weblate easier to maintain and extend (e.g. with third-party file format drivers).

User management

Since the beginning, Weblate has relied on the user management and permissions provided by Django. This is really not a good fit for the language / project matrix permissions which most people need, so we've come up with Group ACL to extend it. This worked quite well for some use cases, but it turned out to be problematic for others. It is also quite hard to set up properly. For Weblate 3.0 this will be dropped and replaced by access control which fits more use cases. This is still being finalized in our issue tracker, so if you have any comments on this, please share them.

Migration path

Due to the massive changes mentioned above, migrations across 3.0 will not be supported. You will always have to upgrade to 3.0 first and then upgrade to further versions. The code cleanups will also lead to some changes in the configuration, so take care when upgrading and follow the upgrading instructions.

Filed under: Debian English SUSE Weblate

,

CryptogramFriday Squid Blogging: Bizarre Contorted Squid

This bizarre contorted squid might be a new species, or a previously known species exhibiting a new behavior. No one knows.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecuritySecurity Trade-Offs in the New EU Privacy Law

On two occasions this past year I’ve published stories here warning about the prospect that new European privacy regulations could result in more spams and scams ending up in your inbox. This post explains in a question and answer format some of the reasoning that went into that prediction, and responds to many of the criticisms leveled against it.

Before we get to the Q&A, a bit of background is in order. On May 25, 2018 the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free.

But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups.

The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally.

ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse.

But at an ICANN meeting in San Juan, Puerto Rico last month, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures.

In a series of posts on Twitter, I predicted that the WHOIS changes coming with GDPR will likely result in a noticeable increase in cybercrime — particularly in the form of phishing and other types of spam. In response to those tweets, several authors on Wednesday published an article for Georgia Tech’s Internet Governance Project titled, “WHOIS afraid of the dark? Truth or illusion, let’s know the difference when it comes to WHOIS.”

The following Q&A is intended to address many of the more misleading claims and assertions made in that article.

Cyber criminals don’t use their real information in WHOIS registrations, so what’s the big deal if the data currently available in WHOIS records is no longer in the public domain after May 25?

I can point to dozens of stories printed here — and probably hundreds elsewhere — that clearly demonstrate otherwise. Whether or not cyber crooks do provide their real information is beside the point. ANY information they provide — and especially information that they re-use across multiple domains and cybercrime campaigns — is invaluable to both grouping cybercriminal operations and in ultimately identifying who’s responsible for these activities.

To understand why data reuse in WHOIS records is so common among crooks, put yourself in the shoes of your average scammer or spammer — someone who has to register dozens or even hundreds or thousands of domains a week to ply their trade. Are you going to create hundreds or thousands of email addresses and fabricate as many personal details to make your WHOIS listings that much harder for researchers to track? The answer is that those who take this extraordinary step are by far and away the exception rather than the rule. Most simply reuse the same email address and phony address/phone/contact information across many domains as long as it remains profitable for them to do so.

This pattern of WHOIS data reuse doesn’t just extend across a few weeks or months. Very often, if a spammer, phisher or scammer can get away with re-using the same WHOIS details over many years without any deleterious effects to their operations, they will happily do so. Why they may do this is their own business, but nevertheless it makes WHOIS an incredibly powerful tool for tracking threat actors across multiple networks, registrars and Internet epochs.
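
To make that concrete, here is a minimal sketch (in Go, with invented records rather than real WHOIS data) of why re-used registrant details are so useful: once any field repeats, clustering otherwise unrelated domains into a single campaign is trivial.

package main

import "fmt"

// whoisRecord holds just the two fields this example needs; real historic
// WHOIS datasets carry many more. The records below are made up.
type whoisRecord struct {
    Domain          string
    RegistrantEmail string
}

// clusterByEmail groups domains by the registrant email they share.
func clusterByEmail(records []whoisRecord) map[string][]string {
    clusters := make(map[string][]string)
    for _, r := range records {
        clusters[r.RegistrantEmail] = append(clusters[r.RegistrantEmail], r.Domain)
    }
    return clusters
}

func main() {
    records := []whoisRecord{
        {"cheap-pills-example.test", "bulkreg@example.invalid"},
        {"bank-login-example.test", "bulkreg@example.invalid"},
        {"harmless-blog-example.test", "someone-else@example.invalid"},
    }
    for email, domains := range clusterByEmail(records) {
        fmt.Println(email, "->", domains)
    }
}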

All domain registrars offer free or a-la-carte privacy protection services that mask the personal information provided by the domain registrant. Most cybercriminals — unless they are dumb or lazy — are already taking advantage of these anyway, so it’s not clear why masking domain registration for everyone is going to change the status quo by much. 

It is true that some domain registrants do take advantage of WHOIS privacy services, but based on countless investigations I have conducted using WHOIS to uncover cybercrime businesses and operators, I’d wager that cybercrooks more often do not use these services. Not infrequently, when they do use WHOIS privacy options there are still gaps in coverage at some point in the domain’s history (such as when a registrant switches hosting providers) which are indexed by historic WHOIS records and that offer a brief window of visibility into the details behind the registration.

This is demonstrably true even for organized cybercrime groups and for nation state actors, and these are arguably some of the most sophisticated and savvy cybercriminals out there.

It’s worth adding that if so many cybercrooks seem nonchalant about adopting WHOIS privacy services it may well be because they reside in countries where the rule of law is not well-established, or their host country doesn’t particularly discourage their activities so long as they’re not violating the golden rule — namely, targeting people in their own backyard. And so they may not particularly care about covering their tracks. Or in other cases they do care, but nevertheless make mistakes or get sloppy at some point, as most cybercriminals do.

The GDPR does not apply to businesses — only to individuals — so there is no reason researchers or anyone else should be unable to find domain registration details for organizations and companies in the WHOIS database after May 25, right?

It is true that the European privacy regulations as they relate to WHOIS records do not apply to businesses registering domain names. However, the domain registrar industry — which operates on razor-thin profit margins and which has long sought to be free from any WHOIS requirements or accountability whatsoever — won’t exactly be tripping over themselves to add more complexity to their WHOIS efforts just to make a distinction between businesses and individuals.

As a result, registrars simply won’t make that distinction because there is no mandate that they must. They’ll just adopt the same WHOIS data collection and display policies across the board, regardless of whether the WHOIS details for a given domain suggest that the registrant is a business or an individual.

But the GDPR only applies to data collected about people in Europe, so why should this impact WHOIS registration details collected on people who are outside of Europe?

Again, domain registrars are the ones collecting WHOIS data, and they are most unlikely to develop WHOIS record collection and dissemination policies that seek to differentiate between entities covered by GDPR and those that may not be. Such an attempt would be fraught with legal and monetary complications that they simply will not take on voluntarily.

What’s more, the domain registrar community tends to view the public display of WHOIS data as a nuisance and a cost center. They have mainly only allowed public access to WHOIS data because ICANN’s contracts state that they should. So, from registrar community’s point of view, the less information they must make available to the public, the better.

Like it or not, the job of tracking down and bringing cybercriminals to justice falls to law enforcement agencies — not security researchers. Law enforcement agencies will still have unfettered access to full WHOIS records.

As it relates to inter-state crimes (i.e, the bulk of all Internet abuse), law enforcement — at least in the United States — is divided into two main components: The investigative side (i.e., the FBI and Secret Service) and the prosecutorial side (the state and district attorneys who actually initiate court proceedings intended to bring an accused person to justice).

Much of the legwork done to provide the evidence needed to convince prosecutors that there is even a case worth prosecuting is performed by security researchers. The reasons why this is true are too numerous to delve into here, but the safe answer is that law enforcement investigators typically are more motivated to focus on crimes for which they can readily envision someone getting prosecuted — and because very often their plate is full with far more pressing, immediate and local (physical) crimes.

Admittedly, this is a bit of a blanket statement because in many cases local, state and federal law enforcement agencies will do this often tedious legwork of cybercrime investigations on their own — provided it involves or impacts someone in their jurisdiction. But due in large part to these jurisdictional issues, politics and the need to build prosecutions around a specific locality when it comes to cybercrime cases, very often law enforcement agencies tend to miss the forest for the trees.

Who cares if security researchers will lose access to WHOIS data, anyway? To borrow an assertion from the Internet Governance article, “maybe it’s high time for security researchers and businesses that harvest personal information from WHOIS on an industrial scale to refine and remodel their research methods and business models.”

This is an alluring argument. After all, the technology and security industries claim to be based on innovation. But consider carefully how anti-virus, anti-spam or firewall technologies currently work. The unfortunate reality is that these technologies are still mostly powered by humans, and those humans rely heavily on access to key details about domain reputation and ownership history.

Those metrics for reputation weigh a host of different qualities, but a huge component of that reputation score is determining whether a given domain or Internet address has been connected to any other previous scams, spams, attacks or other badness. We can argue about whether this is the best way to measure reputation, but it doesn’t change the prospect that many of these technologies will in all likelihood perform less effectively after WHOIS records start being heavily redacted.

Don’t advances in artificial intelligence and machine learning obviate the need for researchers to have access to WHOIS data?

This sounds like a nice idea, but again it is far removed from current practice. Ask anyone who regularly uses WHOIS data to determine reputation or to track and block malicious online threats and I’ll wager you will find the answer is that these analyses are still mostly based on manual lookups and often thankless legwork. Perhaps such trendy technological buzzwords will indeed describe the standard practice of the security industry at some point in the future, but in my experience this does not accurately depict the reality today.

Okay, but Internet addresses are pretty useful tools for determining reputation. The sharing of IP addresses tied to cybercriminal operations isn’t going to be impacted by the GDPR, is it? 

That depends on the organization doing the sharing. I’ve encountered at least two cases in the past few months wherein European-based security firms have been reluctant to share Internet address information at all in response to the GDPR — based on a perceived (if not overly legalistic) interpretation that somehow this information also might be considered personally identifying data. This reluctance to share such information out of a concern that doing so might land the sharer in legal hot water can indeed have a chilling effect on the important sharing of threat intelligence across borders.

According to the Internet Governance article, “If you need to get in touch with a website’s administrator, you will be able to do so in what is a less intrusive manner of achieving this purpose: by using an anonymized email address, or webform, to reach them (The exact implementation will depend on the registry). If this change is inadequate for your ‘private detective’ activities and you require full WHOIS records, including the personal information, then you will need to declare to a domain name registry your specific need for and use of this personal information. Nominet, for instance, has said that interested parties may request the full WHOIS record (including historical data) for a specific domain and get a response within one business day for no charge.”

I’m sure this will go over tremendously with both the hacked sites used to host phishing and/or malware download pages and those phished by or served with malware, given the added time it will take to relay and approve said requests.

According to a Q3 2017 study (PDF) by security firm Webroot, the average lifespan of a phishing site is between four and eight hours. How is waiting 24 hours before being able to determine who owns the offending domain going to be helpful to either the hacked site or its victims? It also doesn’t seem likely that many other registrars will volunteer for this 24-hour turnaround duty — and indeed no others have publicly demonstrated any willingness to take on this added cost and hassle.

I’ve heard that ICANN is pushing for a delay in the GDPR as it relates to WHOIS records, to give the registrar community time to come up with an accreditation system that would grant vetted researchers access to WHOIS records. Why isn’t that a good middle ground?

It might be if ICANN hadn’t dragged its heels in taking GDPR seriously until perhaps the past few months. As it stands, the experts I’ve interviewed see little prospect for such a system being ironed out or in gaining necessary traction among the registrar community to accomplish this anytime soon. And most experts I’ve interviewed predict it is likely that the Internet community will still be debating about how to create such an accreditation system a year from now.

Hence, it’s not likely that WHOIS records will continue to be anywhere near as useful to researchers in a month or so as they were previously. And this reality will continue for many months to come — if indeed some kind of vetted WHOIS access system is ever envisioned and put into place.

After I registered a domain name using my real email address, I noticed that address started receiving more spam emails. Won’t hiding email addresses in WHOIS records reduce the overall amount of spam I can expect when registering a domain under my real email address?

That depends on whether you believe any of the responses to the bolded questions above. Will that address be spammed by people who try to lure you into paying them to register variations on that domain, or to entice you into purchasing low-cost Web hosting services from some random or shady company? Probably. That’s exactly what happens to almost anyone who registers a domain name that is publicly indexed in WHOIS records.

The real question is whether redacting all email addresses from WHOIS will result in overall more bad stuff entering your inbox and littering the Web, thanks to reputation-based anti-spam and anti-abuse systems failing to work as well as they did before GDPR kicks in.

It’s worth noting that ICANN created a working group to study this exact issue, which noted that “the appearance of email addresses in response to WHOIS queries is indeed a contributor to the receipt of spam, albeit just one of many.” However, the report concluded that “the Committee members involved in the WHOIS study do not believe that the WHOIS service is the dominant source of spam.”

Do you have something against people not getting spammed, or against better privacy in general? 

To the contrary, I have worked the majority of my professional career to expose those who are doing the spamming and scamming. And I can say without hesitation that an overwhelming percentage of that research has been possible thanks to data included in public WHOIS registration records.

Is the current WHOIS system outdated, antiquated and in need of an update? Perhaps. But scrapping the current system without establishing anything in between while laboring under the largely untested belief that in doing so we will achieve some kind of privacy utopia seems myopic.

If opponents of the current WHOIS system are being intellectually honest, they will make the following argument and stick to it: By restricting access to information currently available in the WHOIS system, whatever losses or negative consequences on security we may suffer as a result will be worth the cost in terms of added privacy. That’s an argument I can respect, if not agree with.

But for the most part that’s not the refrain I’m hearing. Instead, what this camp seems to be saying is that if you’re not on board with the WHOIS changes that will be brought about by the GDPR, then there must be something wrong with you, and in any case here are a bunch of thinly-sourced reasons why the coming changes might not be that bad.

Rondam RamblingsCredit where it's due

Richard Nixon is rightfully remembered as one of the great villains of American democracy.  But he wasn't all bad.  He opened relations with China, appointed four mostly sane Supreme Court justices, and oversaw the establishment of the EPA among many other accomplishments.  Likewise, I believe that Donald Trump will eventually go down in history as one of the worst (if not the worst) president

Rondam RamblingsAn open letter to Jack Phillips

[Jack Phillips is the owner of the Masterpiece Cake Shop in Lakewood, Colorado.  Mr. Phillips is being sued by the Colorado Civil Rights Commission for refusing to make a wedding cake for a gay couple.  His case is currently before the U.S. Supreme Court.  Yesterday Mr. Phillips published an op-ed in The Washington Post to which this letter is a response.] Dear Mr. Phillips: Imagine your child

Rondam RamblingsPaul Ryan forces out House chaplain

Just in case there was the slightest ember of hope in your mind that Republicans actually care about religious freedom and are not just odious hypocritical power-grubbing opportunists, this should extinguish it once and for all: House Chaplain Patrick Conroy’s sudden resignation has sparked a furor on Capitol Hill, with sources in both parties saying he was pushed out by Speaker Paul Ryan (R-Wis

CryptogramTSB Bank Disaster

This seems like an absolute disaster:

The very short version is that a UK bank, TSB, which had been merged into and then many years later was spun out of Lloyds Bank, was bought by the Spanish bank Banco Sabadell in 2015. Lloyds had continued to run the TSB systems and was to transfer them over to Sabadell over the weekend. It's turned out to be an epic failure, and it's not clear if and when this can be straightened out.

It is bad enough that the bank IT problem has been so severe and protracted that a major newspaper, The Guardian, created a live blog for it that has now been running for two days.

The more serious issue is the fact that customers still can't access online accounts and, even more disconcerting, are sometimes being allowed into other people's accounts; that says there are massive problems with data integrity. That's a nightmare to sort out.

Even worse, the fact that this situation has persisted strongly suggests that Lloyds went ahead with the migration without allowing for a rollback.

This seems to be a mistake, and not enemy action.

Worse Than FailureError'd: Billboards Show Obvious Disasters

"Actually, this board is outside a casino in Sheffield which is next to the church, but we won't go there," writes Simon.

 

Wouter wrote, "The local gas station is now running ads for Windows 10 updates."

 

"If I were to legally change my name to a GUID, this is exactly what I'd pick," Lincoln K. wrote.

 

Robert F. writes, "Copy/Paste? Bah! If you really want to know how many log files are being generated per server, every minute, you're going to have to earn it by typing out this 'easy' command."

 

"I imagine someone pushed the '10 items or fewer' rule on the self-checkout kiosk just a little too far," Michael writes.

 

Wojciech wrote, "To think - someone actually doodled this JavaScript code!"

 

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Cory DoctorowRaleigh-Durham, I’m headed your way! CORRECTED!

CORRECTION! The Flyleaf event is at 6PM, not 7!

I’m delivering the annual Kilgour lecture tomorrow morning at 10AM at UNC, and I’ll be speaking at Flyleaf Books at 6PM — be there or be oblong!

Also, if you’re in Boston, Waterloo or Chicago, you can catch me in the coming weeks!

Abstract: For decades, regulators and corporations have viewed the internet and the computer as versatile material from which special-purpose tools can be fashioned: pornography distribution systems, jihadi recruiting networks, video-on-demand services, and so on.

But the computer is an unprecedented general purpose device capable of running every program we can express in symbolic language, and the internet is the nervous system of the 21st century, webbing these pluripotent computers together.

For decades, activists have been warning regulators and corporations about the peril in getting it wrong when we make policies for these devices, and now the chickens have come home to roost. Frivolous, dangerous and poorly thought-through choices have brought us to the brink of electronic ruin.

We are balanced on the knife-edge of peak indifference — the moment at which people start to care and clamor for action — and the point of no return, the moment at which it’s too late for action to make a difference. There was never a more urgent moment to fight for a free, fair and open internet — and there was never an internet more capable of coordinating that fight.

Cory DoctorowLittle Brother is 10 years old today: I reveal the secret of writing future-proof science fiction


It’s been ten years since the publication of my bestselling novel Little Brother; though the novel was written more than a decade ago, and though it deals with networked computers and mobile devices, it remains relevant, widely read, and widely cited even today.

In an essay for Tor.com, I write about my formula for creating fiction about technology that stays relevant — the secret is basically to assume that people will be really stupid about technology for the foreseeable future.

And now we come to how to write fiction about networked computers that stays relevant for 12 years and 22 years and 50 years: just write stories in which computers can run all the programs, and almost no one understands that fact. Just write stories in which authority figures, and mass movements, and well-meaning people, and unethical businesses, all insist that because they have a *really good reason* to want to stop some program from running or some message from being received, it *must* be possible.

Write those stories, and just remember that because computers can run every program and the internet can carry any message, every device will someday be a general-purpose computer in a fancy box (office towers, cars, pacemakers, voting machines, toasters, mixer-taps on faucets) and every message will someday be carried on the public internet. Just remember that the internet makes it easier for people of like mind to find each other and organize to work together for whatever purpose stirs them to action, including terrible ones and noble ones. Just remember that cryptography works, that your pocket distraction rectangle can scramble messages so thoroughly that they can never, ever be descrambled, not in a trillion years, without your revealing the passphrase used to protect them. Just remember that swords have two edges, that the universe doesn’t care how badly you want something, and that every time we make a computer a little better for one purpose, we improve it for every purpose a computer can be put to, and that is all purposes.


Just remember that declaring war on general purpose computing is a fool’s errand, and that that never stopped anyone.


Ten Years of Cory Doctorow’s Little Brother [Cory Doctorow/Tor.com]


(Image: Missy Ward, CC-BY)

TEDTED hosts first-ever TED en Español Spanish-language speaker event at NYC headquarters

Thursday, April 26, 2018 – Today marks the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event, held in TED’s theater in Manhattan, will feature eight speakers, a musical performance, five short films and fifteen 1-minute talks given by members of the audience. 150 people are expected to attend from around the world, and about 20 TEDx events in 10 countries plan to tune in to watch remotely.

The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event – TEDxRiodelaPlata in Argentina – TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and – as of earlier this month – an original podcast created in partnership with Univision Communications.

“As part of our nonprofit mission at TED, we work to find and spread the best ideas no matter where the speakers live or what language they speak,” said Gerry. “We want everyone to have access to ideas in their own language. Given the massive global Hispanic population, we’ve begun the work to bring Spanish-language ideas directly to Spanish-speaking audiences, and today’s event is a major step in solidifying our commitment to that effort.”

Today’s speakers include chef Gastón Acurio, futurist Juan Enriquez, entrepreneur Leticia Gasca, data scientist César A. Hidalgo, founder and funder Rebeca Hwang, ocean expert Enric Sala, assistant conductor of the LA Philharmonic Paolo Bortolameolli, and psychologist and dancer César Silveyra. Musical group LADAMA will perform.

 

Planet Linux AustraliaMichael Still: A first program in golang, with a short aside about Google

Share

I have reached the point in my life where I needed to write my first program in golang. I pondered for a disturbingly long time what exactly to write, but then it came to me…

Back in the day Google had an internal short URL service (think bit.ly, but for internal things). It was called “go” and lived at http://go. So what should I write as my first golang program? go of course.

The implementation is on github, and I am sure it isn’t perfect. Remember, it was a learning exercise. I mostly learned that golang syntax is a bit bonkers, and that etcd hates me.

This code stores short URLs in etcd, and redirects you to the right place if it knows about the short code you used. If you just ask for the root URL, you get a list of the currently defined short codes, as well as a form to create new ones. Not bad for a few hours hacking I think.
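If you have never read any Go, the shape of such a service is reassuringly small. Below is a minimal sketch of the core idea, not the code on GitHub: the map, the sample entry and the port are all placeholders, and the real project keeps its short codes in etcd rather than in memory.

// A minimal sketch of a "go/" style short-URL redirector, not the actual
// implementation: short codes live in an in-memory map here, where the
// real project persists them to etcd.
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
)

var (
	mu     sync.RWMutex
	shorts = map[string]string{"wiki": "https://en.wikipedia.org/"} // placeholder data
)

func handler(w http.ResponseWriter, r *http.Request) {
	code := r.URL.Path[1:] // strip the leading "/"
	if code == "" {
		// Root URL: list the known short codes.
		mu.RLock()
		defer mu.RUnlock()
		for k, v := range shorts {
			fmt.Fprintf(w, "%s -> %s\n", k, v)
		}
		return
	}
	mu.RLock()
	target, ok := shorts[code]
	mu.RUnlock()
	if !ok {
		http.NotFound(w, r)
		return
	}
	http.Redirect(w, r, target, http.StatusFound)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Swapping that map out for etcd reads and writes is where the real learning (and the fights with etcd) happened.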



Worse Than FailureCodeSOD: If Not Null…

Robert needed to fetch some details about pump configurations from the backend. The API was poorly documented, but there were other places in the code which did that, so a quick search found this block:

var getConfiguration = function(){
    ....
    var result = null;
    result = getPumpConfiguration (areaID,subStationID,mngmtUnitID,lastServiceDate,service,format,result);
    result = getPumpConfiguration (areaID,subStationID,null,lastServiceDate,null,format,result);
    result = getPumpConfiguration (areaID,subStationID,null,lastServiceDate,service,null,result);
    result = getPumpConfiguration (areaID,subStationID,mngmtUnitID,lastServiceDate,null,null,result);
    result = getPumpConfiguration (areaID,subStationID,null,lastServiceDate,null,null,result);
    return result;
}

This collection of lines lurked at the end of a 100+ line function, which did a dozen other things. At a glance, it’s mildly perplexing. I can see that result gets passed into the function multiple times, so perhaps this is an attempt at a fluent API? So this series of calls awkwardly fetches the data that’s required? The parameters vary a little with every call, so that must be it, right?

Let’s check the implementation of getPumpConfiguration

function getPumpConfiguration (areaID,subStationID,mngmtUnitID,lastServiceDate,service,format,result) {
    if (result==null) {
    ...
    result = queryResult;
    ...
    }
    return result;
}

Oh, no. If the result parameter has a value… we just return it. Otherwise, we attempt to fetch data. This isn’t a fluent API which loads multiple pieces of data with separate requests, it’s an attempt at implementing retries. Hopefully one of those calls works.
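For contrast, a retry written as a retry is just an explicit loop: call once, and call again only if the previous attempt failed. Here is a rough sketch of that shape, in Go rather than the original NodeJS, where Config and fetchPumpConfiguration are hypothetical stand-ins for the real backend call:

// Illustrative only: Config and fetchPumpConfiguration are made-up
// placeholders; the point is the explicit retry loop.
package main

import (
	"errors"
	"fmt"
	"time"
)

type Config struct{ Raw string }

func fetchPumpConfiguration(areaID, subStationID string) (Config, error) {
	return Config{}, errors.New("backend unavailable") // placeholder failure
}

func getConfigurationWithRetry(areaID, subStationID string, attempts int) (Config, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cfg, err := fetchPumpConfiguration(areaID, subStationID)
		if err == nil {
			return cfg, nil // first success wins; no further calls are made
		}
		lastErr = err
		time.Sleep(time.Second) // crude fixed delay between attempts
	}
	return Config{}, lastErr
}

func main() {
	_, err := getConfigurationWithRetry("A1", "S7", 3)
	fmt.Println("final error:", err)
}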

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV May 2018 Workshop

May 19 2018 12:30
May 19 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Topic to be announced

There will also be the usual casual hands-on workshop: Linux installation, configuration, assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks, from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV May 2018 Main Meeting: "Share" with FOSS Software

May 1 2018 18:30
May 1 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE NEW LOCATION

6:30 PM to 8:30 PM Tuesday, May 1, 2018
Meeting Room 3, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Linux Users of Victoria is a subcommittee of Linux Australia.



Planet Linux AustraliaMichael Still: etcd v2 and v3 data stores are separate


Just noting this because it wasted way more of my time than it should have…

So you write an etcd app in a different language from your previous apps and it can’t see the data that the other apps wrote? Check the versions of your client libraries. The v2 and v3 data stores in etcd are different, and cannot be seen by each other. You need to convert your v2 data to the v3 data store before it will be visible there.

You’re welcome.
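To make the gotcha concrete, here is a rough sketch of tripping over it from Go. The endpoints and key are placeholders, the import paths are the github.com/coreos/etcd ones in use around this time, and the point is simply that a key written through the v2 KeysAPI never shows up in a v3 Get:

// A sketch of the v2/v3 split; run against a local etcd for illustration.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	etcdv2 "github.com/coreos/etcd/client"
	etcdv3 "github.com/coreos/etcd/clientv3"
)

func main() {
	ctx := context.Background()

	// Write a key using the v2 API (the old hierarchical key-space).
	v2c, err := etcdv2.New(etcdv2.Config{
		Endpoints: []string{"http://127.0.0.1:2379"},
		Transport: etcdv2.DefaultTransport,
	})
	if err != nil {
		log.Fatal(err)
	}
	kapi := etcdv2.NewKeysAPI(v2c)
	if _, err := kapi.Set(ctx, "/shorturls/demo", "http://example.com", nil); err != nil {
		log.Fatal(err)
	}

	// Read it back with the v3 client: the key simply is not there,
	// because the v2 and v3 data stores are entirely separate.
	v3c, err := etcdv3.New(etcdv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer v3c.Close()
	resp, err := v3c.Get(ctx, "/shorturls/demo")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("keys visible to the v3 client:", resp.Count) // prints 0
}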



,

Rondam RamblingsSupport Josh Harder for Congress

I've been quiet lately in part because I'm sinking back into the pit of despair when I think about politics.  The spinelessness and hypocrisy of the Republican party, the insidious and corrosive effects of corporate "free speech" embodied in soulless monsters like Sinclair and Fox News, and the fact that ultimately all this insanity has its foundation in the will of the people (or at least a

TEDIn Case You Missed It: The dawn of “The Age of Amazement” at TED2018

In Case You Missed It TED2018More than 100 speakers — activists, scientists, adventurers, change-makers and more — took the stage to give the talk of their lives this week in Vancouver at TED2018. One blog post could never hope to hold all of the extraordinary wisdom they shared. Here’s a (shamelessly inexhaustive) list of the themes and highlights we heard throughout the week — and be sure to check out full recaps of day 1, day 2, day 3 and day 4.

Discomfort is a proxy for progress. If we hope to break out of the filter bubbles that are defining this generation, we have to talk to and connect with people we disagree with. This message resonated across the week at TED, with talks from Zachary R. Wood and Dylan Marron showing us the power of reaching out, even when it’s uncomfortable. As Wood, a college student who books “uncomfortable speakers,” says: “Tuning out opposing viewpoints doesn’t make them go away.” To understand how society can progress forward, he says, “we need to understand the counterforces.” Marron’s podcast “Conversations With People Who Hate Me” showcases him engaging with people who have attacked him on the internet. While it hasn’t led to world peace, it has helped him develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

The Audacious Project, a new initiative for launching big ideas, seeks to create lasting change at scale. (Photo: Ryan Lash / TED)

Audacious ideas for big impact. The Audacious Project, TED’s newest initiative, aims to be the nonprofit version of an IPO. Housed at TED, it’s a collaboration among some of the biggest names in philanthropy that asks for nonprofit groups’ most audacious dreams; each year, five will be presented at TED with an invitation for the audience and world to get involved. The inaugural Audacious group includes public defender Robin Steinberg, who’s working to end the injustice of bail; oceanographer Heidi M. Sosik, who wants to explore the ocean’s twilight zone; Caroline Harper from Sight Savers, who’s working to end the scourge of trachoma; conservationist Fred Krupp, who wants to use the power of satellites and data to track methane emissions in unprecedented detail; and T. Morgan Dixon and Vanessa Garrison, who are inspiring a nationwide movement for Black women’s health. Find out more (and how you can get involved) at AudaciousProject.org.

Living means acknowledging death. Philosopher-comedian Emily Levine has stage IV lung cancer — but she says there’s no need to “oy” or “ohhh” over her: she’s OK with it. Life and death go hand in hand, she says; you can’t have one without the other. Therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Jason Rosenthal’s journey of loss and grief began when his wife, Amy Krouse Rosenthal, wrote about their lives in an article read by millions of people: “You May Want to Marry My Husband” — a meditation on dying disguised as a personal ad for her soon-to-be-solitary spouse. By writing their story, Amy made Jason’s grief public — and challenged him to begin anew. He speaks to others who may be grieving: “I would like to offer you what I was given: a blank sheet of paper. What will you do with your intentional empty space, with your fresh start?”

“It’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” says Yuval Noah Harari. (Photo: Ryan Lash / TED)

Can we rediscover the humanity in our tech?  In a visionary talk about a “globally tragic, astoundingly ridiculous mistake” companies like Google and Facebook made at the foundation of digital culture, Jaron Lanier suggested a way we can fix the internet for good: pay for it. “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them,” he says. Historian Yuval Noah Harari, appearing onstage as a hologram live from Tel Aviv, warns that with consolidation of data comes consolidation of power. Fascists and dictators, he says, have a lot to gain in our new digital age; and “it’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” he says. Gizmodo writers Kashmir Hill and Surya Mattu survey the world of “smart devices” — the gadgets that “sit in the middle of our home with a microphone on, constantly listening,” and gathering data — to discover just what they’re up to. Hill turned her family’s apartment into a smart home, loading up on 18 internet-connected appliances; her colleague Mattu built a router that tracked how often the devices connected, who they were transmitting to, what they were transmitting. Through the data, he could decipher the Hill family’s sleep schedules, TV binges, even their tooth-brushing habits. And a lot of this data can be sold, including deeply intimate details. “Who is the true beneficiary of your smart home?” he asks. “You, or the company mining you?”

An invitation to build a better world. Actor and activist Tracee Ellis Ross came to TED with a message: the global collection of women’s experiences will not be ignored, and women will no longer be held responsible for the behaviors of men. Ross believes it is past time that men take responsibility to change men’s bad behavior — and she offers an invitation to men, calling them in as allies with the hope they will “be accountable and self-reflective.” She offers a different invitation to women: Acknowledge your fury. “Your fury is not something to be afraid of,” she says. “It holds lifetimes of wisdom. Let it breathe, and listen.”

Wow! discoveries. Among the TED Fellows, explorer and conservationist Steve Boyes’ efforts to chart Africa’s Okavango Delta have led scientists to identify more than 25 new species; University of Arizona astrophysicist Burçin Mutlu-Pakdil discovered a galaxy with an outer ring and a reddish inner ring that was unlike any ever seen before (her reward: it’s now called Burçin’s Galaxy). Another astronomer, University of Hawaii’s Karen Meech, saw — and studied for an exhilarating few days — ‘Oumuamua, the first interstellar comet observed from Earth. Meanwhile, engineer Aaswath Raman is harnessing the cold of deep space to invent new ways to keep us cooler and more energy-efficient. Going from the sublime to the ridiculous, roboticist Simone Giertz showed just how much there is to be discovered from the process of inventing useless things.

Walter Hood shares his work creating public spaces that illuminate shared memories without glossing over past — and present — injustices. (Photo: Ryan Lash / TED)

Language is more than words. Even though the stage program of TED2018 consisted primarily of talks, many went beyond words. Architects Renzo Piano, Vishaan Chakrabarti, Ian Firth and Walter Hood showed how our built structures, while still being functional, can lift spirits, enrich lives, and pay homage to memories. Smithsonian Museum craft curator Nora Atkinson shared images from Burning Man and explained how, in the desert, she found a spirit of freedom, creativity and collaboration not often found in the commercial art world. Designer Ingrid Fetell Lee uncovered the qualities that make everyday objects a joy to behold. Illustrator Christoph Niemann reminded us how eloquent and hilarious sketches can be; in her portraits of older individuals, photographer Isadora Kosofsky showed us that visuals can be poignant too. Paul Rucker discussed his painful collection of artifacts from America’s racial past and how the artistic act of making scores of Ku Klux Klan robes has brought him some catharsis. Our physical movements are another way we speak — for choreographer Elizabeth Streb, it’s expressing the very human dream to fly. For climber Alex Honnold, it was attaining a sense of mastery when he scaled El Capitan alone without ropes. Dolby Laboratories chief scientist Poppy Crum demonstrated the emotions that can be read through physical tells like body temperature and exhalations, and analytical chemist Simone Francese revealed the stories told through the molecules in our fingerprints.

Kate Raworth presents her vision for what a sustainable, universally beneficial economy could look like. (Photo: Bret Hartman / TED)

Is human growth exponential or limited? There will be almost ten billion people on earth by 2050. How are we going to feed everybody, provide water for everybody and get power to everybody? Science journalist Charles C. Mann has spent years asking these questions to researchers, and he’s found that their answers fall into two broad categories: wizards and prophets. Wizards believe that science and technology will let us produce our way out of our dilemmas — think: hyper-efficient megacities and robots tending genetically modified crops. Prophets believe close to the opposite; they see the world as governed by fundamental ecological processes with limits that we transgress to our peril. As he says: “The history of the coming century will be the choice we make as a species between these two paths.” Taking up the cause of the prophets is Oxford economist Kate Raworth, who says that our economies have become “financially, politically and socially addicted” to relentless GDP growth, and too many people (and the planet) are being pummeled in the process. What would a sustainable, universally beneficial economy look like? A doughnut, says Raworth. She says we should strive to move countries out of the hole — “the place where people are falling short on life’s essentials” like food, water, healthcare and housing — and onto the doughnut itself. But we shouldn’t move too far lest we end up on the doughnut’s outside and bust through the planet’s ecological limits.

Seeing opportunity in adversity. “I’m basically nuts and bolts from the knee down,” says MIT professor Hugh Herr, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He dreams of a future where humans have augmented their bodies in a way that redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. In a beautiful, touching talk in the closing session of TED2018, Mark Pollock and Simone George take us inside their relationship — detailing how Pollock became paralyzed and the experimental work they’ve undertaken to help him regain motion. In collaboration with a team of engineers who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test — proving that progress is definitely still possible.

TED Fellow and anesthesiologist Rola Hallam started the world’s first crowdfunded hospital in Syria. (Photo: Ryan Lash / TED)

Spotting the chance to make a difference. The TED Fellows program was full of researchers, activists and advocates capitalizing on the spaces that go unnoticed. Psychiatrist Essam Daod found a “golden hour” in refugees’ treks when their narratives can sometimes be reframed into heroes’ journeys; landscape architect Kotchakorn Voraakhom realized that a park could be designed to allow her flood-prone city of Bangkok to mitigate the impact of climate change; pediatrician Lucy Marcil seized on the countless hours that parents spend in doctors’ waiting rooms to offer tax assistance; sustainability expert DeAndrea Salvador realized the profound difference to be made by helping low-income North Carolina residents with their energy bills; and anesthesiologist Rola Hallam is addressing aid shortfalls for local nonprofits, resulting in the world’s first crowdfunded hospital in Syria.

Catch up on previous In Case You Missed It posts from April 10 (Day 1), April 11 (Day 2), April 12 (Day 3), and yesterday, April 13 (Day 4).

Krebs on SecurityDDoS-for-Hire Service Webstresser Dismantled

Authorities in the U.S., U.K. and the Netherlands on Tuesday took down popular online attack-for-hire service WebStresser.org and arrested its alleged administrators. Investigators say that prior to the takedown, the service had more than 136,000 registered users and was responsible for launching somewhere between four and six million attacks over the past three years.

The action, dubbed “Operation Power Off,” targeted WebStresser.org (previously Webstresser.co), one of the most active services for launching point-and-click distributed denial-of-service (DDoS) attacks. WebStresser was one of many so-called “booter” or “stresser” services — virtual hired muscle that anyone can rent to knock nearly any website or Internet user offline.

Webstresser.org (formerly Webstresser.co), as it appeared in 2017.

“The damage of these attacks is substantial,” reads a statement from the Dutch National Police in a Reddit thread about the takedown. “Victims are out of business for a period of time, and spend money on mitigation and on (other) security measures.”

In a separate statement released this morning, Europol — the law enforcement agency of the European Union — said “further measures were taken against the top users of this marketplace in the Netherlands, Italy, Spain, Croatia, the United Kingdom, Australia, Canada and Hong Kong.” The servers powering WebStresser were located in Germany, the Netherlands and the United States, according to Europol.

The U.K.’s National Crime Agency said WebStresser could be rented for as little as $14.99, and that the service allowed people with little or no technical knowledge to launch crippling DDoS attacks around the world.

Neither the Dutch nor U.K. authorities would say who was arrested in connection with this takedown. But according to information obtained by KrebsOnSecurity, the administrator of WebStresser allegedly was a 19-year-old from Prokuplje, Serbia named Jovan Mirkovic.

Mirkovic, who went by the hacker nickname “m1rk,” also used the alias “Mirkovik Babs” on Facebook where for years he openly discussed his role in programming and ultimately running WebStresser. The last post on Mirkovic’s Facebook page, dated April 3 (the day before the takedown), shows the young hacker sipping what appears to be liquor while bathing. Below that image are dozens of comments left in the past few hours, most of them simply, “RIP.”

A story on the Serbian daily news site Blic.rs notes that two men from Serbia were arrested in conjunction with the WebStresser takedown; they are named only as “MJ” (Jovan Mirkovik) and D.V., aged 19 from Ruma.

Mirkovik’s fake Facebook page (Mirkovik Babs) includes countless mentions of another Webstresser administrator named “Kris” and includes a photograph of a tattoo that Kris got in 2015. That same tattoo is shown on the Facebook profile of a Kristian Razum from Zapresic, Croatia. According to the press releases published today, one of the administrators arrested was from Croatia.

Multiple sources are now pointing to other booter businesses that were reselling WebStresser’s service but which are no longer functional as a result of the takedown, including powerboot[dot]net, defcon[dot]pro, ampnode[dot]com, ripstresser[dot]com, fruitstresser[dot]com, topbooter[dot]com, freebooter[dot]co and rackstress[dot]pw.

Tuesday’s action against WebStresser is the latest such takedown to target both owners and customers of booter services. Many booter service operators apparently believe (or at least hide behind) a wordy “terms of service” agreement that all customers must acknowledge, under the assumption that somehow this absolves them of any sort of liability for how their customers use the service — regardless of how much hand-holding and technical support booter service administrators offer customers.

In October the FBI released an advisory warning that the use of booter services is punishable under the Computer Fraud and Abuse Act, and may result in arrest and criminal prosecution.

In 2016, authorities in Israel arrested two 18-year-old men accused of running vDOS, until then the most popular and powerful booter service on the market. Their arrests came within hours of a story at KrebsOnSecurity that named the men and detailed how their service had been hacked.

Many in the hacker community have criticized authorities for targeting booter service administrators and users and for not pursuing what they perceive as more serious cybercriminals, noting that the vast majority of both groups are young men under the age of 21. In its Reddit thread, the Dutch Police addressed this criticism head-on, saying Dutch authorities are working on a new legal intervention called “Hack_Right,” a diversion program intended for first-time cyber offenders.

“Prevention of re-offending by offering a combination of restorative justice, training, coaching and positive alternatives is the main aim of this project,” the Dutch Police wrote. “See page 24 of the 5th European Cyber Security Perspectives and stay tuned on our THTC twitter account #HackRight! AND we are working on a media campaign to prevent youngsters from starting to commit cyber crimes in the first place. Expect a launch soon.”

In the meantime, it’s likely we’ll sooner see the launch of yet more booter services. According to reviews and sales threads at stresserforums[dot]net — a marketplace for booter buyers and sellers — there are dozens of other booter services in operation, with new ones coming online almost every month.

Sociological ImagesBoozy Milkshakes and Sordid Spirits

The first nice weekend after a long, cold winter in the Twin Cities is serious business. A few years ago some local diners joined the celebration with a serious indulgence: the boozy milkshake.

When talking with a friend of mine from the Deep South about these milkshakes, she replied, “oh, a bushwhacker! We had those all the time in college.” This wasn’t the first time she had dropped southern slang that was new to me, so off to Google I went.

According to Merriam-Webster, “to bushwhack” means to attack suddenly and unexpectedly, as one would expect the alcohol in a milkshake to sneak up on you. The cocktail is a Nashville staple, but the origins trace back to the Virgin Islands in the 1970s.

Photo Credit: Beebe Bourque, Flickr CC
Photo Credit: Like_the_Grand_Canyon, Flickr CC

Here’s the part where the history takes a sordid turn: “Bushwhacker” was apparently also the nickname for guerrilla fighters in the Confederacy during the Civil War who would carry out attacks in rural areas (see, for example, the Lawrence Massacre). To be clear, I don’t know and don’t mean to suggest this had a direct influence in the naming of the cocktail. Still, the coincidence reminded me of the famous, and famously offensive, drinking reference to conflict in Northern Ireland.

Battle of Lawrence, Wikimedia Commons

When sociologists talk about concepts like “cultural appropriation,” we often jump to clear examples with a direct connection to inequality and oppression like racist halloween costumes or ripoff products—cases where it is pretty easy to look at the object in question and ask, “didn’t they think about this for more than thirty seconds?”

Cases like the bushwhacker raise different, more complicated questions about how societies remember history. Even if the cocktail today had nothing to do with the Confederacy, the weight of that history starts to haunt the name once you know it. I think many people would be put off by such playful references to modern insurgent groups like ISIS. Then again, as Joseph Gusfield shows, drinking is a morally charged activity in American society. It is interesting to see how the deviance of drinking dovetails with bawdy, irreverent, or offensive references to other historical and social events. Can you think of other drinks with similar sordid references? It’s not all sex on the beach!

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramTwo NSA Algorithms Rejected by the ISO

The ISO has rejected two symmetric encryption algorithms: SIMON and SPECK. These algorithms were both designed by the NSA and made public in 2013. They are optimized for small and low-cost processors like IoT devices.

The risk of using NSA-designed ciphers, of course, is that they include NSA-designed backdoors. Personally, I doubt that they're backdoored. And I always like seeing NSA-designed cryptography (particularly its key schedules). It's like examining alien technology.

Worse Than FailureThe Search for Truth

Every time you change existing code, you break some other part of the system. You may not realize it, but you do. It may show up in the form of a broken unit test, but that presumes that a) said unit test exists, and b) it properly tests the aspect of the code you are changing. Sadly, more often than not, there is either no test to cover your change, or any test that does exist doesn't handle the case you are changing.

Nicolai Abildgaard - Diogenes der lyser i mørket med en lygte.jpg

This is especially true if the thing you are changing is simple. It is even more true when changing something as complex as working with a boolean.

Mr A. was working at a large logistics firm that had an unusual error where a large online retailer was accidentally overcharged by millions of dollars. When large companies send packages to logistics hubs for shipment, they often send hundreds or thousands of them at a time on the same pallet, van or container (think about companies like Amazon). The more packages you send in these batches the less you pay (a single lorry is cheaper than a fleet of vans). These packages are lumped together and billed at a much lower rate than you or I would get.

One day, a particular developer saw something untidy in the code - an uninitialized Boolean variable in one of the APIs. The entire code change was from this:

    parcel.consolidated;

to this:

    parcel.consolidated = false;

There are some important things to note: the code was written in NodeJS and didn't really have the concept of Boolean, the developers did not believe in Unit Testing, and in a forgotten corner of the codebase was a little routine that examined each parcel to see if the discount applied.

The routine to see if the discount should be applied ran every few minutes. It looked at each package; if the flag was already set (consolidated or not), it moved on to the next parcel. If the flag was NULL, it applied the rules to see if the parcel was part of a shipment and set the flag to either True or False.

That variable was not Boolean but rather tri-state (though thankfully didn't involve FILE_NOT_FOUND). By assuming it was Boolean and initializing it, NO packages had a discount applied. Oopsie!
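The tri-state nature is easier to see with explicit types. Here is an illustrative sketch, in Go rather than the original NodeJS and with made-up names: a *bool that is nil means "not evaluated yet", the discount pass only does work in that case, and pre-setting the flag to false silently opts a parcel out of the check:

// Illustrative only: the original system was NodeJS, but a *bool makes
// the tri-state explicit. nil = not decided yet; true/false = final.
package main

import "fmt"

type Parcel struct {
	ID           string
	Consolidated *bool
}

// applyDiscountPass mirrors the routine described above: parcels whose
// flag is already set are skipped; only nil flags get evaluated.
func applyDiscountPass(parcels []*Parcel, isPartOfShipment func(*Parcel) bool) {
	for _, p := range parcels {
		if p.Consolidated != nil {
			continue // already decided, move on to the next parcel
		}
		v := isPartOfShipment(p)
		p.Consolidated = &v
	}
}

func main() {
	f := false
	parcels := []*Parcel{
		{ID: "undecided"},                 // nil flag: will be evaluated
		{ID: "pre-set", Consolidated: &f}, // the "tidied" code pre-sets false
	}
	applyDiscountPass(parcels, func(*Parcel) bool { return true })
	for _, p := range parcels {
		fmt.Println(p.ID, *p.Consolidated) // "pre-set" never gets its discount check
	}
}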

It took more than a month before anyone noticed and complained. And since it was a multi-million dollar mistake, they complained loudly!

Even after this event, Unit Testing was still not accepted as a useful practice. To this day Release Management, Unit Testing, Automated Testing and Source Code Management remain stubbornly absent...

Not long after this, Mr. A. continued his search for truth elsewhere.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

CryptogramComputer Alarm that Triggers When Lid Is Opened

"Do Not Disturb" is a Macintosh app that send an alert when the lid is opened. The idea is to detect computer tampering.

Wired article:

Do Not Disturb goes a step further than just the push notification. Using the Do Not Disturb iOS app, a notified user can send themselves a picture snapped with the laptop's webcam to catch the perpetrator in the act, or they can shut down the computer remotely. The app can also be configured to take more custom actions like sending an email, recording screen activity, and keeping logs of commands executed on the machine.

Can someone please make one of these for Windows?

,

CryptogramBaseball Code

Info on the coded signals used by the Colorado Rockies.

TEDMore TED2018 conference shorts to amuse and amaze

Even in the Age of Amazement, sometimes you need a break between talks packed with fascinating science, tech, art and so much more. That’s where interstitials come in: short videos that entertain and intrigue, while allowing the brain a moment to reset and ready itself to absorb more information.

For this year’s conference, TED commissioned and premiered four short films made just for the conference. Check out those films here!

Mixed in with our originals, curators Anyssa Samari and Jonathan Wells hand-picked even more videos — animations, music, even cool ads — to play throughout the week. Here’s the program of shorts they found, from creative people all around the world:

The short: Jane Zhang: “Dust My Shoulders Off.” A woman having a bad day is transported to a world of famous paintings where she has a fantastic adventure.

The creator: Outerspace Leo

Shown during: Session 2, After the end of history …

The short: “zoom(art).” A kaleidoscopic, visually compelling journey of artificial intelligence creating beautiful works of art.

The creator: Directed and programmed by Alexander Mordvintsev, Google Research

Shown during: Session 2, After the end of history …

The short: “20syl – Kodama.” A music video of several hands playing multiple instruments (and drawing a picture) simultaneously to create a truly delicious electronic beat.

The creators: Mathieu Le Dude & 20syl

Shown during: Session 3, Nerdish Delight

The short: “If HAL-9000 was Alexa.” 2001: A Space Odyssey seems a lot less sinister (and a lot funnier) when Alexa can’t quite figure out what Dave is saying.

The creator: ScreenJunkies

Shown during: Session 3, Nerdish Delight

The short: “Maxine the Fluffy Corgi.” A narrated day in the life of an adorable pup named Maxine who knows what she wants.

The creator: Bryan Reisberg

Shown during: Session 3, Nerdish Delight

The short: “RGB FOREST.” An imaginative, colorful and geometric jaunt through the woods set to jazzy electronic music.

The creator: LOROCROM

Shown during: Session 6, What on earth do we do?

The short: “High Speed Hummingbirds.” Here’s your chance to watch the beauty and grace of hummingbirds in breathtaking slow motion.

The creator: Anand Varma

Shown during: Session 6, What on earth do we do?

The short: “Cassius ft. Cat Power & Pharrell Williams | Go Up.” A split screen music video that cleverly subverts and combines versions of reality.

The creator: Alex Courtès

Shown during: Session 7, Wow. Just wow.

The short: “Blobby.” A stop motion film about a man and a blob and the peculiar relationship they share.

The creator: Laura Stewart

Shown during: Session 7, Wow. Just wow.

The short: “WHO.” David Byrne and St. Vincent dance and sing in this black-and-white music video about accidents and consequences.

The creator: Martin de Thurah

Shown during: Session 8, Insanity. Humanity.

The short: “MAKIN’ MOVES.” When music makes the body move in unnatural, impossible ways.

The creator: Kouhei Nakama

Shown during: Session 9, Body electric

The short: “The Art of Flying.” The beautiful displays the Common Starling performs in nature.

The creator: Jan van IJken

Shown during: Session 9, Body electric

The short: “Kiss & Cry.” The heart-rending story of Giselle, a woman who lives and loves and wants to be loved. (You’ll never guess who plays the heroine.)

The creators: Jaco Van Dormael and choreographer Michèle Anne De Mey

Shown during: Session 10, Personally speaking

The short: “Becoming Violet.” The power of the human body, in colors and dance.

The creator: Steven Weinzierl

Shown during: Session 10, Personally speaking

The short: “Golden Castle Town.” A woman is transported to another world and learns to appreciate life anew.

The creator: Andrew Benincasa

Shown during: Session 10, Personally speaking

The short: “Tom Rosenthal | Cos Love.” A love letter to love that is grand and a bit melancholic.

The creator: Kathrin Steinbacher

Shown during: Session 11, What matters

TEDInsanity. Humanity. Notes from Session 8 at TED2018

“If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. Photo: Ryan Lash / TED

The seven speakers lived up to the two words in the title of the session. Their talks showcased both our collective insanity — the algorithmically-assembled extremes of the Internet — and our humanity — the values and desires that extremists astutely tap into — along with some speakers combining the two into a glorious salad. Let’s dig in.

Artificial Intelligence = artificial stupidity. How does a sweetly-narrated video of hands unwrapping Kinder eggs garner 30 million views and spawn more than 10 million imitators? Welcome to the weird world of YouTube children’s videos, where an army of content creators use YouTube “to hack the brains of very small children, in return for advertising revenue,” as artist and technology critic James Bridle describes. Marketing ethics aside, this world seems innocuous on the surface but go a few clicks deeper and you’ll find a surreal and sinister landscape of algorithmically-assembled cartoons, nursery rhymes built from keyword combos, and animated characters and human actors being tortured, assaulted and killed. Automated copycats mimic trusted content providers “using the same mechanisms that power Facebook and Google to create ‘fake news’ for kids,” says Bridle. He adds that feeding the situation is the fact “we’re training them from birth to click on the very first link that comes along, regardless of where the source is.” As technology companies ignore these problems in their quest for ad dollars, the rest of us are stuck in a system in which children are sent down auto-playing rabbit holes where they see disturbing videos filled with very real violence and very real trauma — and get traumatized as a result. Algorithms are touted as the fix, but Bridle declares, “Machine learning, as any expert on it will tell you, is what we call software that does stuff we don’t really understand, and I think we have enough of that already,” he says. Instead, “we need to think of technology not as a solution to all our problems but as a guide to what they are.” After his talk, TED Head of Curation Helen Walters has a blunt question for Bridle: “So are we doomed?” His realistic but ungrim answer: “We’ve got a hell of a long way to go, but talking is the beginning of that process.”

Technology that fights extremism and online abuse. Over the last few years, we’ve seen geopolitical forces wreak havoc with their use of the Internet. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. “Radicalization isn’t a yes or no choice,” she says. “It’s a process, during which people have questions about ideology, religion — and they’re searching online for answers which is an opportunity to reach them.” In 2016, Green collaborated with Moonshot CVE to pilot a new approach called the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups and used that information to create a campaign that deployed targeted advertising to reach people susceptible to ISIS’s recruiting and show them videos to counter those messages. Available in English and Arabic, the eight-week pilot program reached more than 300,000 people. In another project, she and her team looked for a way to combat online abuse. Partnering across Google with Wikipedia and the New York Times, the team trained machine-learning models to understand the emotional impact of language — specifically, to predict comments that were likely to make someone leave a conversation and to give commenters real-time feedback about how their words might land. Due to the onslaught of online vitriol, the Times had previously enabled commenting on only 10 percent of homepage stories, but this strategy led it to open up all homepage stories to comments. “If we ever thought we could build technology insulated from the dark side of humanity, we were wrong,” Green says. “If technology has any hope of overcoming today’s challenges, we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” In a post-talk Q & A, Green adds that banning certain keywords isn’t enough of a solution: “We need to combine human insight with innovation.”

Living life means acknowledging death. Philosopher-comedian Emily Levine starts her talk with some bad news — she’s got stage 4 lung cancer — but says there’s no need to “oy” or “ohhh” over her: she’s okay with it. After all, explains Levine, life and death go hand in hand; you can’t have one without the other. In fact, therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Levine muses about the scientists who are attempting to thwart death — she dubs them the Anti-Life Brigade — and calls them ungrateful and disrespectful in their efforts to wrest control from nature. “We don’t live in the clockwork universe,” she says wryly. “We live in a banana peel universe,” where our attempts at mastery will always come up short against mystery. She has come to view life as a “gift that you enrich as best you can and then give back.” And just as we should appreciate that life’s boundary line stops abruptly at death, we should accept our own intellectual and physical limits. “We won’t ever be able to know everything or control everything or predict everything,” says Levine. “Nature is like a self-driving car.” We may have some control, but we’re not at the wheel.

A high-schooler working on the future of AI. Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years — he’s now just 18 — he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale — machines try every possible solution, even ones too absurd for a human to imagine, until they find the thing that works best to solve a single discrete problem. That can create computers that are champions at Go or Q-Bert, but it really doesn’t create general intelligence. So Frans is conceptualizing instead a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives and think with these machines. What can he and these new brains accomplish together?

Come fly with her. From a young age, action and hardware engineer Elizabeth Streb wanted to fly like, well, a fly or a bird. It took her years of painful experimentation to realize that humans can’t swoop and veer like them, but perhaps she could discover how humans could fly. Naturally, it involves more falling than staying airborne. She has jumped through broken glass and toppled from great heights in order to push the bounds of her vertical comfort zone. With her Streb Extreme Action company, she’s toured the world, bringing the delight and wonder of human flight to audiences. Along the way, she realized, “If we wanted to go higher, faster, sooner, harder and make new discoveries, it was necessary to create our very own space-ships,” so she’s also built hardware to provide a boost. More recently, she opened Brooklyn’s Streb Lab for Action Mechanics (SLAM) to instruct others. “As it turns out, people don’t just want to dream about flying, nor do they want to watch people like us fly; they want to do it, too, and they can,” she says. In teaching, she sees “smiles become more common, self-esteem blossom, and people get just a little bit braver. People do learn to fly, as only humans can.”

Calling all haters. “You’re everything I hate in a human being” — that’s just one of the scores of nasty messages that digital creator Dylan Marron receives every day. While his various video series such as “Every Single Word” and “Sitting in Bathrooms With Trans People” have racked up millions of views, they’ve also sent a slew of Internet poison in his direction. “At first, I would screenshot their comments and make fun of their typos but this felt elitist and unhelpful,” recalls Marron. Over time, he developed an unexpected coping mechanism: he calls the people responsible for leaving hateful remarks on his social media, opening their chats with a simple question: “Why did you write that?” These exchanges have been captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace — you would have noticed — he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.” And he stresses that his solution is not right for everyone. In a Q & A afterward, he says that some people have told him that his podcast just gives a platform to those espousing harmful ideologies. Marron emphasizes, “Empathy is not endorsement.” His conversations represent his own way of responding to online hate, and he says, “I see myself as a little tile in the mosaic of activism.”

Rebuilding trust at work. Trust is the foundation for everything we humans do, but what do we do when it is broken? It’s a problem that fascinates Frances Frei, a professor at Harvard Business School who recently spent six months trying to restore trust at Uber. According to Frei, trust is a three-legged stool that rests on authenticity, logic, and empathy. “If any one of these three gets shaky, if any one of these three wobbles, trust is threatened,” she explains. So which wobbles did Uber have? All of them, according to Frei. Authenticity was the hardest to fix – but that’s not uncommon. “It is still much easier to coach people to fit in; it is still much easier to reward people when they say something that you were going to say,” Frei says, “but when we figure out how to celebrate difference and how to let people bring the best version of themselves forward, well, holy cow, is that the world I want my sons to grow up in.” You can read more about her talk here.

TEDWhat matters: Notes from Session 11 of TED2018

Reed Hastings, the head of Netflix, listens to a question from Chris Anderson during a sparky onstage Q&A on the final morning of TED2018, April 14, 2018. Photo: Ryan Lash / TED

What a week. We’ve heard so much, from dystopian warnings to bold visions for change. Our brains are full. Almost. In this session we pull back to the human stories that underpin everything we are, everything we want. From new ways to set goals and move business forward, to unabashed visions for joy and community, it’s time to explore what matters.

The original people of this land. One important thing to know: TED’s conference home of Vancouver is built on un-ceded land that once belonged to First Nations people. So this morning, two DJs from A Tribe Called Red start this session by remembering and honoring them, telling First Nations stories in beats and images in a set that expands on the concept of Halluci Nation, inspired by the poet, musician and activist John Trudell. In Trudell’s words: “We are the Halluci Nation / Our DNA is of earth and sky / Our DNA is of past and future.”

The power of why, what and how. Our leaders and our institutions are failing us, and it’s not always because they’re bad or unethical. Sometimes, it’s simply because they’re leading us toward the wrong objectives, says venture capitalist John Doerr. How can we get back on track? The trick may be a system called OKR, developed by legendary management thinker Andy Grove. Doerr explains that OKR stands for ‘objectives and key results’ – and setting the right ones can be the difference between success and failure. However, before you set your objective (your what) and your key results (your how), you need to understand your why. “A compelling sense of why can be the launch pad for our objectives,” he says. He illustrates the power of OKRs by sharing the stories of individuals and organizations who’ve put them into practice, including Google’s Larry Page and Sergey Brin. “OKRs are not a silver bullet. They’re not going to be a substitute for a strong culture or for stronger leadership, but when those fundamentals are in place, they can take you to the mountaintop,” he says. He encourages all of us to take the time to write down our values, our objectives, and our key results – and to do it today. “Let’s fight for what it is that really matters, because we can take OKRs beyond our businesses. We can take them to our families, to our schools, even to our government. We can hold those governments accountable,” he says. “We can get back on the right track if we can and do measure what really matters.”

What’s powering China’s tech innovation? The largest mass migration in the world occurs every year around the Chinese Spring Festival. Over 40 days, travelers — including 290 million migrant workers — take 3 billion trips all over China. Few can afford to fly, so railways strained to keep up, with crowding, fraud and drama. So the Chinese technology sector has been building everything from apps to AI to ease not only this process, but other pain points throughout society. But unlike the US, where innovation is often fueled by academia and enterprise, China’s tech innovation is powered by “an overwhelming need economy that is serving an underprivileged populace, which has been separated for 30 years from China’s economic boom.” The CEO of the South China Morning Post, Gary Liu, has a front-row seat to this transformation. As China’s introduction of a “social credit rating” system suggests, a technology boom in an authoritarian society hides a significant dark side. But the Chinese internet hugely benefits its 772 million users. It has spread deeply into rural regions, revitalizing education and creating jobs. There’s a long way to go to bring the internet to everyone in China — more than 600 million people remain offline. But wherever the internet is fueling prosperity, “we should endeavor to follow it with capital and with effort, driving both economic and societal impact all over the world. Just imagine for a minute what more could be possible if the global needs of the underserved become the primary focus of our inventions.”

Netflix and chill, the interview. The humble beginnings of Netflix paved the way to transforming how we consume content today. Reed Hastings — who started out as a high school math teacher — admits that making the shift from DVDs to streaming was a big leap. “We weren’t confident,” he admits in his interview with TED Curator Chris Anderson. “It was scary.” Obviously, it paid off over time, with 117 million subscribers (and growing), more than $11 billion in revenue (so far) and a slew of popular original content (Black Mirror, anyone?) fueled by curated algorithmic recommendations. The offerings of Netflix, Hastings says, are a mixture of candy and broccoli — and that mix allows people to decide what a proper “diet” is for them. “We get a lot of joy from making people happy,” he says. The external culture of the streaming platform reflects its internal culture as well: they’re super focused on how to run with no process, but without chaos. There’s an emphasis on freedom, responsibility and honesty (as he puts it, “disagreeing silently is disloyal”). And though Hastings loves business — competing against the likes of HBO and Disney — he also enjoys his philanthropic pursuits supporting innovative education, such as the KIPP charter schools, and advocates for more variety in educational content. For now, he says, it’s the perfect job.

“E. Pluribus Unum” — ”Out of many, one.” It’s the motto of the United States, yet few citizens understand its meaning. Artist and designer Walter Hood calls for national landscapes that preserve the distinct identities of peoples and cultures, while still forging unity. Hood believes spaces should illuminate shared memories without glossing over past — and present — injustices. To guide his projects, Hood follows five simple guidelines. The first — “Great things happen when we exist in each other’s world” — helped fire up a Queens community garden initiative in collaboration with Bette Midler and hip-hop legend 50 Cent. “Two-ness” — or the sense of double identity faced by those who are “othered,” like women and African-Americans — lies behind a “shadow sculpture” at the University of Virginia that commemorates a forgotten, buried servant household uncovered during the school’s expansion. “Empathy” inspired the construction of a park in downtown Oakland that serves office workers and the homeless community, side-by-side. “The traditional belongs to all of us” — and to the San Francisco neighborhood of Bayview-Hunter’s Point, where Hood restored a Victorian opera house to serve the local community. And “Memory” lies at the core of a future shorefront park in Charleston, which will rest on top of Gadsden Wharf — an entry point for 40% of the United States’ slaves, where they were then “stored” in chains — that forces visitors to confront the still-resonating cruelty of our past.

The tension between acceptance and hope. When Simone George met Mark Pollock, it was eight years after he’d lost his sight. Pollock was rebuilding his identity — living a high-octane life of running marathons and racing across Antarctica to reach the South Pole. But a year after he returned from Antarctica, Pollock fell from a third-story window; he woke up paralyzed from the waist down. Pollock shares how being a realist — inspired by the writings of Admiral James Stockdale, a Vietnam POW — helped him through bleak days after this accident, when even hope seemed dangerous. George explains how she helped Pollock navigate months in the hospital; told that any sensation Pollock didn’t regain in the weeks immediately after the fall would likely never come back, the two looked to stories of others, like Christopher Reeve, who had pushed beyond what was understood as possible for those who are paralyzed. “History is filled with the kinds of impossible made possible through human endeavor,” Pollock says. So he started asking: Why can’t human endeavor cure paralysis in his lifetime? In collaboration with a team of engineers in San Francisco, who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who had developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test, proving that progress is definitely still possible. For now, “I accept the wheelchair, it’s almost impossible not to,” says Pollock. “We also hope for another life — a life where we have created a cure through collaboration, a cure that we’re actively working to release from university labs around the world and share with everyone who needs it.”

The pursuit of joy, not happiness. “How do tangible things make us feel intangible joy?” asks designer Ingrid Fetell Lee. She pursued this question for ten years to understand how the physical world relates to the mysterious, quixotic emotion of joy. It turns out, the physical can be a remarkable, renewable resource for fostering a happier, healthier life. There isn’t just one type of joy, and its definition morphs from person to person — but psychologists, broadly speaking, describe joy as an intense, momentary experience of positive emotion (or, simply, as something that makes you want to jump up and down). However, joy shouldn’t be conflated with happiness, which measures how good we feel over time. So, Lee asked around about what brings people joy and eventually had a notebook filled with things like beach balls, treehouses, fireworks, googly eyes and ice cream cones with rainbow sprinkles, and realized something significant: the patterns of joy have roots in evolutionary history. Things like symmetrical shapes, bright colors, an attraction to abundance and multiplicity, a feeling of lightness or elevation — this is what’s universally appealing. Joy lowers blood pressure, improves our immune system and even increases productivity. She began to wonder: should we use these aesthetics to help us find more opportunities for joy in the world around us? “Joy begins with the senses,” she says. “What we should be doing is embracing joy, and finding ways to put ourselves in the path of it more often.”

And that’s a wrap. Speaking of joy, Baratunde Thurston steps out to close this conference with a wrap that shouts out the diversity of this year’s audience but also nudges the un-diverse selection of topics: next year, he asks, instead of putting an African child on a slide, can we put her onstage to speak for herself? He winds together the themes of the week, from the terrifying — killer robots, octopus robots, genetically modified piglets — to the badass, the inspiring and the mind-opening. Are you not amazed?

Worse Than FailureThe Big Balls of…

The dependency graph of your application can provide a lot of insight into how objects call each other. In a well designed application, this is probably mostly acyclic and no one node on the graph has more than a handful of edges coming off of it. The kinds of applications we talk about here, on the other hand, we have a name for their graphs: the Enterprise Dependency and the Big Ball of Yarn.

Thomas K introduces us to an entirely new iteration: The Big Ball of Mandelbrot

A dependency graph which exhibits self-similarity at different scales

This gives new meaning to points “on a complex plane”.

What you’re seeing here is the relationship between stored procedures and tables. Circa 1995, when this application shambled into something resembling life, the thinking was, “If we put all the business logic in stored procedures, it’ll be easy to slap new GUIs on there as technology changes!”

Of course, the relationship between what the user sees on the screen and the underlying logic which drives that display means that as they changed the GUI, they also needed to change the database. Over the course of 15 years, the single cohesive data model ubercomplexificaticfied itself as each customer needed a unique GUI with a unique feature set which mandated unique tables and stored procedures.

By the time Thomas came along to start a pseudo-greenfield GUI in ASP.Net, the first and simplest feature he needed to implement involved calling a 3,000 line stored procedure which required over 100 parameters.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Krebs on SecurityTranscription Service Leaked Medical Records

MEDantex, a Kansas-based company that provides medical transcription services for hospitals, clinics and private physicians, took down its customer Web portal last week after being notified by KrebsOnSecurity that it was leaking sensitive patient medical records — apparently for thousands of physicians.

On Friday, KrebsOnSecurity learned that the portion of MEDantex’s site which was supposed to be a password-protected portal physicians could use to upload audio-recorded notes about their patients was instead completely open to the Internet.

What’s more, numerous online tools intended for use by MEDantex employees were exposed to anyone with a Web browser, including pages that allowed visitors to add or delete users, and to search for patient records by physician or patient name. No authentication was required to access any of these pages.

This exposed administrative page from MEDantex’s site granted anyone complete access to physician files, as well as the ability to add and delete authorized users.

Several MEDantex portal pages left exposed to the Web suggest that the company recently was the victim of WhiteRose, a strain of ransomware that encrypts a victim’s files unless and until a ransom demand is paid — usually in the form of some virtual currency such as bitcoin.

Contacted by KrebsOnSecurity, MEDantex founder and chief executive Sreeram Pydah confirmed that the Wichita, Kansas based transcription firm recently rebuilt its online servers after suffering a ransomware infestation. Pydah said the MEDantex portal was taken down for nearly two weeks, and that it appears the glitch exposing patient records to the Web was somehow incorporated into that rebuild.

“There was some ransomware injection [into the site], and we rebuilt it,” Pydah said, just minutes before disabling the portal (which remains down as of this publication). “I don’t know how they left the documents in the open like that. We’re going to take the site down and try to figure out how this happened.”

It’s unclear exactly how many patient records were left exposed on MEDantex’s site. But one of the main exposed directories was named “/documents/userdoc,” and it included more than 2,300 physicians listed alphabetically by first initial and last name. Drilling down into each of these directories revealed a varying number of patient records — displayed and downloadable as Microsoft Word documents and/or raw audio files.

Although many of the exposed documents appear to be quite recent, some of the records dated as far back as 2007. It’s also unclear how long the data was accessible, but this Google cache of the MEDantex physician portal seems to indicate it was wide open on April 10, 2018.

Among the clients listed on MEDantex’s site are New York University Medical Center; San Francisco Multi-Specialty Medical Group; Jackson Hospital in Montgomery, Ala.; Allen County Hospital in Iola, Kan.; Green Clinic Surgical Hospital in Ruston, La.; Trillium Specialty Hospital in Mesa and Sun City, Ariz.; Cooper University Hospital in Camden, N.J.; Sunrise Medical Group in Miami; the Wichita Clinic in Wichita, Kan.; the Kansas Spine Center; the Kansas Orthopedic Center; and Foundation Surgical Hospitals nationwide. MEDantex’s site states these are just some of the healthcare organizations partnering with the company for transcription services.

Unfortunately, the incident at MEDantex is far from an anomaly. A study of data breaches released this month by Verizon Enterprise found that nearly a quarter of all breaches documented by the company in 2017 involved healthcare organizations.

Verizon says ransomware attacks account for 85 percent of all malware in healthcare breaches last year, and that healthcare is the only industry in which the threat from the inside is greater than that from outside.

“Human error is a major contributor to those stats,” the report concluded.

Source: Verizon Business 2018 Data Breach Investigations Report.

According to a story at BleepingComputer, a security news and help forum that specializes in covering ransomware outbreaks, WhiteRose was first spotted about a month ago. BleepingComputer founder Lawrence Abrams says it’s not clear how this ransomware is being distributed, but that reports indicate it is being manually installed by hacking into Remote Desktop services.

Fortunately for WhiteRose victims, this particular strain of ransomware is decryptable without the need to pay the ransom.

“The good news is this ransomware appears to be decryptable by Michael Gillespie,” Abrams wrote. “So if you become infected with WhiteRose, do not pay the ransom, and instead post a request for help in our WhiteRose Support & Help topic.”

Ransomware victims may also be able to find assistance in unlocking data without paying from nomoreransom.org.

KrebsOnSecurity would like to thank India-based cybersecurity startup Banbreach for the heads up about this incident.

Google AdsenseLet’s talk about Policy! Partnership with Industry Associations, and Better Ads Standards

Thanks to your feedback around wanting more discussion of AdSense policies, we have created some new content on our AdSense YouTube channel in a series called “Let’s talk about Policy!”.

This month, we're talking about the Coalition for Better Ads and its Better Ads Standards.

These standards were created by the Coalition for Better Ads to improve ad experiences for users across the web. Google is a member of the Coalition alongside other stakeholders within the industry. The Better Ads Standards were created with extensive consumer research to minimize annoying ad experiences across devices. The Standards are currently in place for users within North America and Europe, but we are advising all our publishers to abide by the standards to provide a quality user experience for your visitors and minimize the need to make changes in the future.

Check out our video discussing the Standards to learn more:



Be sure to subscribe to the channel to ensure you don’t miss an episode.

CryptogramRussia is Banning Telegram

Russia has banned the secure messaging app Telegram. It's making an absolute mess of the ban -- blocking 16 million IP addresses, many belonging to the Amazon and Google clouds -- and it's not even clear that it's working. But, more importantly, I'm not convinced Telegram is secure in the first place.

Such a weird story. If you want secure messaging, use Signal. If you're concerned that having Signal on your phone will itself arouse suspicion, use WhatsApp.

CryptogramYet Another Biometric: Ear Shape

This acoustic technology identifies individuals by their ear shapes. No information about either false positives or false negatives.

Worse Than FailureRepresentative Line: The Truth About Comparisons

We often point to dates as an example of a data type so complicated that most developers can’t understand it. This is unfair, as pretty much every data type has weird quirks and edge cases which make for unexpected behaviors. Floating point rounding, integer overflows and underflows, various types of string representation…

But file-not-founds excepted, people have to understand Booleans, right?

Of course not. We’ve all seen code like:

if (booleanFunction() == true) …

Or:

if (booleanValue == true) {
    return true;
} else {
    return false;
}

Someone doesn’t understand what booleans are for, or perhaps what the return statement is for. But Paul T sends us a representative line which constitutes a new twist on an old chestnut.

if ( Boolean.TRUE.equals(functionReturningBoolean(pa, isUpdateNetRevenue)) ) …

This is the second most Peak Java Way to test if a value is true. The Peak Java version, of course, would be to use an AbstractComparatorFactoryFactory<Boolean> to construct a Comparator instance with an injected EqualityComparison object. But this is pretty close: take the Boolean.TRUE constant, use the inherited equals method on all objects, which means transparently boxing the boolean returned from the function into an object type, and then executing the comparison.
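For contrast, here is a minimal sketch of the idiomatic alternatives (the method names below are hypothetical stand-ins, not Paul's actual code), including the one case where Boolean.TRUE.equals() genuinely earns its keep: a boxed Boolean that might be null.

public class BooleanChecks {
    // Hypothetical stand-ins for the functions in the submitted code.
    static boolean functionReturningBoolean(Object pa, boolean isUpdateNetRevenue) {
        return isUpdateNetRevenue;
    }
    static Boolean lookupOptionalFlag() {
        return null; // a boxed Boolean that may legitimately be null
    }

    static boolean idiomatic(Object pa, boolean isUpdateNetRevenue) {
        // The primitive already is the answer: no "== true", no Boolean.TRUE.equals().
        return functionReturningBoolean(pa, isUpdateNetRevenue);
    }

    static boolean nullSafe() {
        // The one defensible use: comparing against a boxed Boolean that might be
        // null, where plain unboxing in an if-condition would throw an NPE.
        Boolean maybeNull = lookupOptionalFlag();
        return Boolean.TRUE.equals(maybeNull); // null is simply treated as false
    }

    public static void main(String[] args) {
        System.out.println(idiomatic(null, true)); // true
        System.out.println(nullSafe());            // false, and no exception
    }
}

Since the function in Paul's line returns a primitive boolean, even that defence doesn't apply: auto-boxing it just to call equals() is pure ceremony.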

The if (boolean == true) return true; pattern is my personal nails-on-the-chalkboard code block. It’s not awful, it just makes me cringe. Paul’s submission is like an angle-grinder on a chalkboard.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Linux AustraliaMichael Still: pyconau 2018 call for proposals now open

Share

The pyconau call for proposals is now open, and runs until 28 May. I took my teenagers to pyconau last year and they greatly enjoyed it. I hadn’t been to a pyconau in ages, and ended up really enjoying thinking about things from topic areas I don’t normally need to think about. I think expanding one’s horizons is generally a good idea.

Should I propose something for this year? I am unsure. Some random ideas that immediately spring to mind:

  • something about privsep: I think a generalised way to make privileged calls in unprivileged code is quite interesting, especially in a language which is often used for systems management and integration tasks. That said, perhaps it's too OpenStacky given how uninterested in OpenStack talks most python people seem to be.
  • nova-warts: for a long time my hobby has been cleaning up historical mistakes made in OpenStack Nova that won't ever rate as a major feature change. What lessons can other projects learn from a well funded and heavily staffed project that still thought that exec() was a great way to do important work? There’s definitely an overlap with the privsep talk above, but this would be more general.
  • a talk about how I had to manage some code which only worked in python2, and some other code that only worked in python3 and in the end gave up on venvs and decided that Docker containers are like the ultimate venvs. That said, I suspect this is old hat and was obvious to everyone except me.
  • something else I haven’t thought of.

Anyways, I’m undecided. Comments welcome.

Also, here’s an image for this post. It’s the stone henge we found at Guerilla Bay last weekend. I assume it’s in frequent use for tiny tiny druids.

Share

The post pyconau 2018 call for proposals now open appeared first on Made by Mikal.

,

Harald Welteosmo-fl2k - Using USB-VGA dongles as SDR transmitter

Yesterday, during OsmoDevCon 2018, Steve Markgraf released osmo-fl2k, a new Osmocom member project which enables the use of FL2000 USB-VGA adapters as ultra-low-cost SDR transmitters.

How does it work?

A major part of any VGA card has always been a rather fast DAC which generates the 8-bit analog values for (each) red, green and blue at the pixel clock. Given that fast DACs were very rare/expensive (and still are to some extent), the idea of (ab)using the VGA DAC to transmit radio has been followed by many earlier, mostly proof-of-concept projects, such as Tempest for Eliza in 2001.

However, with osmo-fl2k, for the first time it was possible to completely disable the horizontal and vertical blanking, resulting in a continuous stream of pixels (samples). Furthermore, as the supported devices have no frame buffer memory, the samples are streamed directly from host RAM.

As most USB-VGA adapters appear to have no low-pass filters on their DAC outputs, it is possible to use any of the harmonics to transmit signals at much higher frequencies than would normally be possible within the baseband of the (max) 157 Mega-Samples per second that can be achieved.
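To make the image-frequency arithmetic concrete, here is a small sketch of my own (not part of osmo-fl2k; the 433.92 MHz target is just an example): a DAC clocked at a sample rate fs produces images of a baseband tone f at n*fs - f and n*fs + f, so you synthesize the baseband frequency whose image lands on the carrier you actually want. Keep in mind that the higher images are progressively attenuated by the DAC's sinc roll-off.

public class Fl2kImageFrequencies {
    public static void main(String[] args) {
        double fs = 157e6;        // maximum sample rate quoted above
        double target = 433.92e6; // hypothetical target carrier (70 cm ISM band)

        // Images of a baseband tone f sit at n*fs - f and n*fs + f, so solve
        // for the baseband frequency (0 < f <= fs/2) that lands on the target.
        for (int n = 1; n * fs - fs / 2 <= target; n++) {
            double below = n * fs - target; // target = n*fs - f
            double above = target - n * fs; // target = n*fs + f
            if (below > 0 && below <= fs / 2) {
                System.out.printf("synthesize %.2f MHz, image at %d*fs - f%n", below / 1e6, n);
            }
            if (above > 0 && above <= fs / 2) {
                System.out.printf("synthesize %.2f MHz, image at %d*fs + f%n", above / 1e6, n);
            }
        }
    }
}

For this example the sketch settles on synthesizing roughly 37 MHz and letting the third image (3*fs - f) land on 433.92 MHz.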

osmo-fl2k and rtl-sdr

Steve is the creator of the earlier, complementary rtl-sdr software, which since 2012 has transformed USB DVB adapters into general-purpose SDR receivers.

Today, six years later, it is hard to think of where SDR would be without rtl-sdr. Reducing the entry cost of SDR receivers nearly down to zero has done a lot for democratization of SDR technology.

There is hence a big chance that his osmo-fl2k project will attain a similar popularity. Having an SDR transmitter for as little as USD 5 is an amazing proposition.

free riders

Please keep in mind that Steve has done rtl-sdr just for fun, to scratch his own itch and for the "hack value". He chose to share his work with the wider public, in source code, under a free software license. He's a very humble person, he doesn't need to stand in the limelight.

Many other people since have built a business around rtl-sdr. They have grabbed domains with his project name, etc. They are now earning money based on what he has done and shared selflessly, without ever contributing back to the pioneering developers who brought this to all of us in the first place.

So, do we want to bet if history repeats itself? How long will it take for vendors showing up online advertising the USB VGA dongles as "SDR transmitter", possibly even with a surcharge? How long will it take for them to include Steve's software without giving proper attribution? How long until they will violate the GNU GPL by not providing the complete corresponding source code to derivative versions they create?

If you want to thank Steve for his amazing work

  • reach out to him personally
  • contribute to his work, e.g.
  • help to maintain it
  • package it for distributions
  • send patches (via osmocom-sdr mailing list)
  • register an osmocom.org account and update the wiki with more information

And last, but not least, carry on the spirit of "hack value" and democratization of software defined radio.

Thank you, Steve! After rtl-sdr and osmo-fl2k, it's hard to guess what will come next :)

Planet Linux AustraliaMichael Still: Caliban’s War

Share

This is the second book in the Leviathan Wakes series by James SA Corey. Just as good as the first, this is a story about how much a father loves his daughter, moral choices, and politics — just as much as it is the continuation of the story arc around the alien visitor. I haven’t seen this far in the Netflix series, but I sure hope they get this right, because it’s a very good story so far.

Caliban's War
Author: James S. A. Corey
Genre: Fiction
Publisher: Orbit Books
Published: April 30, 2013
Pages: 624

For someone who didn't intend to wreck the solar system's fragile balance of power, Jim Holden did a pretty good job of it. While Earth and Mars have stopped shooting each other, the core alliance is shattered. The outer planets and the Belt are uncertain in their new - possibly temporary - autonomy. Then, on one of Jupiter's moons, a single super-soldier attacks, slaughtering soldiers of Earth and Mars indiscriminately and reigniting the war. The race is on to discover whether this is the vanguard of an alien army, or if the danger lies closer to home.

Share

The post Caliban’s War appeared first on Made by Mikal.

,

Planet Linux AustraliaPia Waugh: Exploring change and how to scale it

Over the past decade I have been involved in several efforts trying to make governments better. A key challenge I repeatedly see is people trying to change things without an idea of what they are trying to change to, trying to fix individual problems (a deficit view) rather than recognising and fixing the systems that created the problems in the first place. So you end up getting a lot of symptomatic relief and iterative improvements of antiquated paradigms without necessarily getting transformation of the systems that generated the problems. A lot of the effort is put into applying traditional models of working which often result in the same old results, so we also need to consider new ways to work, not just what needs to be done.

With life getting faster and (arguably) exponentially more complicated, we need to take a whole of system view if we are to improve ‘the system’ for people. People sometimes balk when I say this thinking it too hard, too big or too embedded. But we made this, we can remake it, and if it isn’t working for us, we need to adapt like we always have.

I also see a lot of slogans used without the nuanced discussion they invite. Such (often ideological) assumptions can subtly play out without evidence, discussion or agreement on common purpose. For instance, whenever people say smaller or bigger government I try to ask what they think the role of government is, to have a discussion. Size is assumed to correlate to services, productivity, or waste depending on your view, but shouldn’t we talk about what the public service should do, and then the size is whatever is appropriate to do what is needed? People don’t talk about a bigger or smaller jacket or shoes, they get the right one for their needs and the size can change over time as the need changes. Indeed, perhaps the public service of the future could be a dramatically different workforce comprised of a smaller group of professional public servants complemented with a large demographically representative group of part time citizens doing their self nominated and paid “civic duty year of service” as a form of participatory democracy, which would bring new skills and perspectives into governance, policy and programs.

We need urgently to think about the big picture, to collectively talk about the 50 or 100 year view for society, and only then can we confidently plan and transform the structures, roles, programs and approaches around us. This doesn’t mean we have to all agree to all things, but we do need to identify the common scaffolding upon which we can all build.

This blog post challenges you to think systemically, critically and practically about five things:

    • What future do you want? Not what could be a bit better, or what the next few years might hold, or how that shiny new toy you have could solve the world’s problems (policy innovation, data, blockchain, genomics or any tool or method). What is the future you want to work towards, and what does good look like? Forget about your particular passion or area of interest for a moment. What does your better life look like for all people, not just people like you?
    • What do we need to get there? What concepts, cultural values, paradigm, assumptions should we take with us and what should we leave behind? What new tools do we need and how do we collectively design where we are going?
    • What is the role of gov, academia, other sectors and people in that future? If we could create a better collective understanding of our roles in society and some of the future ideals we are heading towards, then we would see a natural convergence of effort, goals and strategy across the community.
    • What will you do today? Seriously. Are you differentiating between symptomatic relief and causal factors? Are you perpetuating the status quo or challenging it? Are you being critically aware of your bias, of the system around you, of the people affected by your work? Are you reaching out to collaborate with others outside your team, outside your organisation and outside your comfort zone? Are you finding natural partners in what you are doing, and are you differentiating between activities worthy of collaboration versus activities only of value to you (the former being ripe for collaboration and the latter less so).
    • How do we scale change? I believe we need to consider how to best scale “innovation” and “transformation”. Scaling innovation is about scaling how we do things differently, such as the ability to take a more agile, experimental, evidence based, creative and collaborative approach to the design, delivery and continuous improvement of stuff, be it policy, legislation or services. Scaling transformation is about how we create systemic and structural change that naturally drives and motivates better societal outcomes. Each without the other is not sustainable or practical.

How to scale innovation and transformation?

I’ll focus the rest of this post on the question of scaling. I wrote this in the context of scaling innovation and transformation in government, but it applies to any large system. I also believe that empowering people is the greatest way to scale anything.

  • I’ll firstly say that openness is key to scaling everything. It is how we influence the system, how we inspire and enable people to individually engage with and take responsibility for better outcomes and innovate at a grassroots level. It is how we ensure our work is evidence based, better informed and better tested, through public peer review. Being open not only influences the entire public service, but the rest of the economy and society. It is how we build trust, improve collaboration, send indicators to vendors and influence academics. Working openly, open sourcing our research and code, being public about projects that would benefit from collaboration, and sharing most of what we do (because most of the work of the public service is not secretive by any stretch) is one of the greatest tools in trying to scale our work, our influence and our impact. Openness is also the best way to ensure both a better supply chain and a better demand for things that are demonstrably better.

A quick side note to those who argue that transparency isn’t an answer because all people don’t have the tools to understand data/information/etc to hold others accountable: it doesn’t mean you don’t do transparency at all. There will always be groups or people naturally motivated to hold you to account, whether it is your competitors, clients, the media, citizens or even your own staff. Transparency is partly about accountability and partly about reinforcing a natural motivation to do the right thing.

Scaling innovation – some ideas:

  • The necessity of neutral, safe, well resourced and collaborative sandpits is critical for agencies to quickly test and experiment outside the limitations of their agencies (technical, structural, political, functional and procurement). Such places should be engaged with the sectors around them. Neutral spaces that take a systems view also start to normalise a systems view across agencies in their other work, which has huge ramifications for transformation as well as innovation.
  • Seeking and sharing – sharing knowledge, reusable systems/code, research, infrastructure and basically making it easier for people to build on the shoulders of each other rather than every single team starting from scratch every single time. We already have some communities of practice but we need to prioritise sharing things people can actually use and apply in their work. We also need to extend this approach across sectors to raise all boats. Imagine if there was a broad commons across all society to share and benefit from each others efforts. We’ve seen the success and benefits of Open Source Software, of Wikipedia, of the Data Commons project in New Zealand, and yet we keep building sector or organisational silos for things that could be public assets for public good.
  • Require user research in budget bids – this would require agencies to do user research before bidding for money, which would create an incentive to build things people actually need which would drive both a user centred approach to programs and would also drive innovation as necessary to shift from current practices :) Treasury would require user research experts and a user research hub to contrast and compare over time.
  • Staff mobility – people should be supported to move around departments and business units to get different experiences and to share and learn. Not everyone will want to, but when people stay in the same job for 20 years, it can be harder to engage in new thinking. Exchange programs are good but again, if the outcomes and lessons are not broadly shared, then they are linear in impact (individuals) rather than scalable (beyond the individuals).
  • Support operational leadership – not everyone wants to be a leader, disruptor, maker, innovator or intrapreneur. We need to have a program to support such people in the context of operational leadership that isn’t reliant upon their managers putting them forward or approving. Even just recognising leadership as something that doesn’t happen exclusively in senior management would be a huge cultural shift. Many managers will naturally want to keep great people to themselves which can become stifling and eventually we lose them. When people can work on meaningful great stuff, they stay in the public service.
  • A public ‘Innovation Hub’ – if we had a simple public platform for people to register projects that they want to collaborate on, from any sector, we could surface work that would benefit from collaboration, make it publicly visible, and invite others to engage. That would support and encourage innovation across government, while also providing a good pipeline for investment and a way to stimulate and support real collaboration across sectors, which is substantially lacking at the moment.
  • Emerging tech and big vision guidance - we need a team, I suggest cross agency and cross sector, of operational people who keep their fingers on the pulse of technology to create ongoing guidance for New Zealand on emerging technologies, trends and ideas that anyone can draw from. For government, this would help agencies engage constructively with new opportunities rather than no one ever having time or motivation until emerging technologies come crashing down as urgent change programs. This could be captured on a constantly updating toolkit with distributed authorship to keep it real.

Scaling transformation – some ideas:

  • Convergence of effort across sectors – right now in many countries every organisation and to a lesser degree, many sectors, are diverging on their purpose and efforts because there is no shared vision to converge on. We have myriad strategies, papers, guidance, but no overarching vision. If there were an overarching vision for New Zealand Aotearoa for instance, co-developed with all sectors and the community, one that looks at what sort of society we want into the future and what role different entities have in achieving that ends, then we would have the possibility of natural convergence on effort and strategy.
    • Obviously when you have a cohesive vision, then you can align all your organisational and other strategies to that vision, so our (government) guidance and practices would need to align over time. For the public sector the Digital Service Standard would be a critical thing to get right, as is how we implement the Higher Living Standards Framework, both of which would drive some significant transformation in culture, behaviours, incentives and approaches across government.
  • Funding “Digital Public Infrastructure” – technology is currently funded as projects with start and end dates, and almost all tech projects across government are bespoke to particular agency requirements or motivations, so we build loads of technologies but very little infrastructure that others can rely upon. If we took all the models we have for funding other forms of public infrastructure (roads, health, education) and saw some types of digital infrastructure as public infrastructure, perhaps they could be built and funded in ways that are more beneficial to the entire economy (and society).
  • Agile budgeting – we need to fund small experiments that inform business cases, rather than starting with big business cases. Ideally we need to not have multi 100 million dollar projects at all because technology projects simply don’t cost that anymore, and anyone saying otherwise is trying to sell you something :) If we collectively took an agile budgeting process, it would create a systemic impact on motivations, on design and development, or implementation, on procurement, on myriad things. It would also put more responsibility on agencies for the outcomes of their work in short, sharp cycles, and would create the possibility of pivoting early to avoid throwing bad money after good (as it were). This is key, as no transformative project truly survives the current budgeting model.
  • Gov as a platform/API/enabler (closely related to DPI above) – obviously making all government data, content, business rules (inc but not just legislation) and transactional systems available as APIs for building upon across the economy is key. This is how we scale transformation across the public sector because agencies are naturally motivated to deliver what they need to cheaper, faster and better, so when there are genuinely useful reusable components, agencies will reuse them. Agencies are now more naturally motivated to take an API driven modular architecture which creates the bedrock for government as an API. Digital legislation (which is necessary for service delivery to be integrated across agency boundaries) would also create huge transformation in regulatory and compliance transformation, as well as for government automation and AI.
  • Exchange programs across sectors – to share knowledge but all done openly so as to not create perverse incentives or commercial capture. We need to also consider the fact that large companies can often afford to jump through hoops and provide spare capacity, but small to medium sized companies cannot, so we’d need a pool for funding exchange programs with experts in the large proportion of industry.
  • All of system service delivery evidence base – what you measure drives how you behave. Agencies are motivated to do only what they need to within their mandates and have very few all of system motivations. If we have an all of government anonymised evidence base of user research, service analytics and other service delivery indicators, it would create an accountability to all of system which would drive all of system behaviours. In New Zealand we already have the IDI (an awesome statistical evidence base) but what other evidence do we need? Shared user research, deidentified service analytics, reporting from major projects, etc. And how do we make that evidence more publicly transparent (where possible) and available beyond the walls of government to be used by other sectors?  More broadly, having an all of government evidence base beyond services would help ensure a greater evidence based approach to investment, strategic planning and behaviours.

TEDThe world takes us exactly where we should be: 4 questions with Meagan Fallone

Cartier believes in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with entrepreneur, designer and CEO of Barefoot College International, Meagan Fallone.

TED: Tell us who you are.
Meagan Fallone: I am an entrepreneur, a designer, a passionate mountaineer and a champion of women in the developing world and all women whose voices and potential remain unheard and unrealized. I am a mother and am grounded in the understanding that of all the things I may ever do in my life, it is the only one that truly will define me or endure. I am immovable in my intolerance to injustice in all its forms.

TED: What’s a bold move you’ve made in your career?
MF: I decided to leave the two for-profit companies I started and grow a nonprofit social enterprise.

TED: Tell us about a woman who inspires you.
MF: The women in my family who were risk-takers in their own individual ways: they are always with me and inspire me. My female friends who push me always to dig deeper within myself, to use my power and skills for ever bigger and better impact in the world. I am inspired always by every woman who has ever accepted to come to train with us at Barefoot College. They place their trust in us, leave their community and everyone they love to make an unimaginable journey on every level. It is the bravest thing I have ever seen.

TED: If you could go back in time, what would you tell your 18-year-old self?
MF: I would tell myself not to take myself so seriously. I would tell myself to trust that the world takes us exactly where we should be. It took me far too long to learn to laugh at how ridiculous I am sometimes. It took me even longer to accept that the path that was written for me was not exactly the one I envisaged for myself. Within the things I never imagined lay all the beauty and wonder of my journey so far — and the promise of what I have yet to impact.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

TEDPlaylist: 10 TEDWomen talks for Earth Day

Earlier this week, I had the privilege and honor to plant trees with the daughter and granddaughter of environmentalist Wangari Maathai. In recognition of her life’s work promoting “sustainable development, democracy and peace,” Maathai received the 2004 Nobel Peace Prize. She was a lifelong activist who founded the Green Belt Movement in 1977.

At that time, rural women in Kenya petitioned the government for help. They explained that their streams were drying up, making their food supplies less secure and forcing longer walks to fetch firewood. Maathai established the Green Belt Movement and encouraged the women of Kenya to work together to grow seedlings and plant trees to bind the soil, store rainwater, provide food and firewood, and receive a small monetary token for their work. Through her efforts, over 51 million trees have been planted in Kenya. Although Maathai died in 2011, her daughter Wanjira continues her work improving the livelihoods of the women of Kenya and striving for a “cleaner, greener world.”

This Earth Day, the work of Professor Maathai and the Green Belt Movement is an inspiration and a “testament to the power of grassroots organizing, proof that one person’s simple idea — that a community should come together to plant trees, can make a difference.”

With that in mind, here are 10 TEDWomen talks from over the years that highlight innovative ideas, cutting-edge science, and the power that each of us has to safeguard our planet and make our world better for everyone.

1. Climate change is unfair. While rich countries can fight against rising oceans and dying farm fields, poor people around the world are already having their lives upended — and their human rights threatened — by killer storms, starvation and the loss of their own lands. Mary Robinson asks us to join the movement for worldwide climate justice.

2. Ocean expert Nancy Rabalais tracks the ominously named “dead zone” in the Gulf of Mexico — where there isn’t enough oxygen in the water to support life. The Gulf has the second largest dead zone in the world; on top of killing fish and crustaceans, it’s also killing fisheries in these waters. Rabalais tells us about what’s causing it — and how we can reverse its harmful effects and restore one of America’s natural treasures.

3. Filmmaker Penelope Jagessar Chaffer was curious about the chemicals she was exposed to while pregnant: Could they affect her unborn child? So she asked scientist Tyrone Hayes to brief her on one he studied closely: atrazine, a herbicide used on corn. (Hayes, an expert on amphibians, is a critic of atrazine, which displays a disturbing effect on frog development.) Onstage together at TEDWomen, Hayes and Chaffer tell their story.

4. Deepika Kurup has been determined to solve the global water crisis since she was 14 years old, after she saw kids outside her grandparents’ house in India drinking water that looked too dirty even to touch. Her research began in her family kitchen — and eventually led to a major science prize. Hear how this teenage scientist developed a cost-effective, eco-friendly way to purify water.

5. Days before this talk, journalist Naomi Klein was on a boat in the Gulf of Mexico, looking at the catastrophic results of BP’s risky pursuit of oil. Our societies have become addicted to extreme risk in finding new energy, new financial instruments and more … and too often, we’re left to clean up a mess afterward. Klein’s question: What’s the backup plan?

6. The water hyacinth may look like a harmless, even beautiful flowering plant — but it’s actually an invasive aquatic weed that clogs waterways, stopping trade, interrupting schooling and disrupting everyday life. In this scourge, green entrepreneur Achenyo Idachaba saw opportunity. Follow her journey as she turns weeds into woven wonders.

7. A skyscraper that channels the breeze … a building that creates community around a hearth … Jeanne Gang uses architecture to build relationships. In this engaging tour of her work, Gang invites us into buildings large and small, from a surprising local community center to a landmark Chicago skyscraper. “Through architecture, we can do much more than create buildings,” she says. “We can help steady this planet we all share.”

8. Architect Kate Orff sees the oyster as an agent of urban change. Bundled into beds and sunk into city rivers, oysters slurp up pollution and make legendarily dirty waters clean — thus driving even more innovation in “oyster-tecture.” Orff shares her vision for an urban landscape that links nature and humanity for mutual benefit.

9. Beverly + Dereck Joubert live in the bush, filming and photographing lions and leopards in their natural habitat. With stunning footage (some never before seen), they discuss their personal relationships with these majestic animals — and their quest to save the big cats from human threats.

10. Artist and poet Cleo Wade shares some truths about growing up (and speaking up) and reflects on the wisdom of a life well-lived, leaving us with a simple yet enduring takeaway: be good to yourself, be good to others, be good to the earth. “The world will say to you, ‘Be a better person,'” Wade says. “Do not be afraid to say, ‘Yes.'”

TEDWomen 2018 Updates

If you’re interested in attending TEDWomen later this year in Palm Springs, California, on November 28–30, we encourage you to sign up for our email newsletter now to stay up to date. We will be adding details on venue, sessions themes, guest curators and speakers soon. Don’t miss the news!

Planet Linux AustraliaDavid Rowe: WaveNet and Codec 2

Yesterday my friend and fellow open source speech coder Jean-Marc Valin (of Speex and Opus fame) emailed me with some exciting news. W. Bastiaan Kleijn and friends have published a paper called “Wavenet based low rate speech coding”. Basically they take the bit stream of Codec 2 running at 2400 bit/s, and replace the Codec 2 decoder with the WaveNet deep learning generative model.

What is amazing is the quality – it sounds as good as an 8000 bit/s wideband speech codec! They have generated wideband audio from the narrowband Codec 2 model parameters. Here are the samples – compare “Parametrics WaveNet” to Codec 2!

This is a game changer for low bit rate speech coding.

I’m also happy that Codec 2 has been useful for academic research (Yay open source), and that the MOS scores in the paper show it’s close to MELP at 2400 bit/s. Last year we discovered Codec 2 is better than MELP at 600 bit/s. Not bad for an open source codec written (more or less) by one person.

Now I need to do some reading on Deep Learning!

Reading Further

Wavenet based low rate speech coding
Wavenet Speech Samples
AMBE+2 and MELPe 600 Compared to Codec 2

,

CryptogramFriday Squid Blogging: Squid Prices Rise as Catch Decreases

In Japan:

Last year's haul sank 15% to 53,000 tons, according to the JF Zengyoren national federation of fishing cooperatives. The squid catch has fallen by half in just two years. The previous low was plumbed in 2016.

Lighter catches have been blamed on changing sea temperatures, which impedes the spawning and growth of the squid. Critics have also pointed to overfishing by North Korean and Chinese fishing boats.

Wholesale prices of flying squid have climbed as a result. Last year's average price per kilogram came to 564 yen, a roughly 80% increase from two years earlier, according to JF Zengyoren.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityIs Facebook’s Anti-Abuse System Broken?

Facebook has built some of the most advanced algorithms for tracking users, but when it comes to acting on user abuse reports about Facebook groups and content that clearly violate the company’s “community standards,” the social media giant’s technology appears to be woefully inadequate.

Last week, Facebook deleted almost 120 groups totaling more than 300,000 members. The groups were mostly closed — requiring approval from group administrators before outsiders could view the day-to-day postings of group members.

However, the titles, images and postings available on each group’s front page left little doubt about their true purpose: Selling everything from stolen credit cards, identities and hacked accounts to services that help automate things like spamming, phishing and denial-of-service attacks for hire.

To its credit, Facebook deleted the groups within just a few hours of KrebsOnSecurity sharing via email a spreadsheet detailing each group, which concluded that the average length of time the groups had been active on Facebook was two years. But I suspect that the company took this extraordinary step mainly because I informed them that I intended to write about the proliferation of cybercrime-based groups on Facebook.

That story, Deleted Facebook Cybercrime Groups had 300,000 Members, ended with a statement from Facebook promising to crack down on such activity and instructing users on how to report groups that violate its community standards.

In short order, some of the groups I reported that were removed re-established themselves within hours of Facebook’s action. I decided that instead of contacting Facebook’s public relations arm directly, I would report those resurrected groups and others using Facebook’s stated process. Roughly two days later I received a series of replies saying that Facebook had reviewed my reports but that none of the groups were found to have violated its standards. Here’s a snippet from those replies:

Perhaps I should give Facebook the benefit of the doubt: Maybe my multiple reports one after the other triggered some kind of anti-abuse feature that is designed to throttle those who would seek to abuse it to get otherwise legitimate groups taken offline — much in the way that pools of automated bot accounts have been known to abuse Twitter’s reporting system to successfully sideline accounts of specific targets.

Or it could be that I simply didn’t click the proper sequence of buttons when reporting these groups. The closest match I could find in Facebook’s abuse reporting system were, “Doesn’t belong on Facebook,” and “Purchase or sale of drugs, guns or regulated products.” There was/is no option for “selling hacked accounts, credit cards and identities,” or anything of that sort.

In any case, one thing seems clear: Naming and shaming these shady Facebook groups via Twitter seems to work better right now for getting them removed from Facebook than using Facebook’s own formal abuse reporting process. So that’s what I did on Thursday. Here’s an example:

Within minutes of my tweeting about this, the group was gone. I also tweeted about “Best of the Best,” which was selling accounts from many different e-commerce vendors, including Amazon and eBay:

That group, too, was nixed shortly after my tweet. And so it went for other groups I mentioned in my tweetstorm today. But in response to that flurry of tweets about abusive groups on Facebook, I heard from dozens of other Twitter users who said they’d received the same “does not violate our community standards” reply from Facebook after reporting other groups that clearly flouted the company’s standards.

Pete Voss, Facebook’s communications manager, apologized for the oversight.

“We’re sorry about this mistake,” Voss said. “Not removing this material was an error and we removed it as soon as we investigated. Our team processes millions of reports each week, and sometimes we get things wrong. We are reviewing this case specifically, including the user’s reporting options, and we are taking steps to improve the experience, which could include broadening the scope of categories to choose from.”

Facebook CEO and founder Mark Zuckerberg testified before Congress last week in response to allegations that the company wasn’t doing enough to halt the abuse of its platform for things like fake news, hate speech and terrorist content. It emerged that Facebook already employs 15,000 human moderators to screen and remove offensive content, and that it plans to hire another 5,000 by the end of this year.

“But right now, those moderators can only react to posts Facebook users have flagged,” writes Will Knight, for Technologyreview.com.

Zuckerberg told lawmakers that Facebook hopes expected advances in artificial intelligence or “AI” technology will soon help the social network do a better job self-policing against abusive content. But for the time being, as long as Facebook mainly acts on abuse reports only when it is publicly pressured to do so by lawmakers or people with hundreds of thousands of followers, the company will continue to be dogged by a perception that doing otherwise is simply bad for its business model.

Update, 1:32 p.m. ET: Several readers pointed my attention to a Huffington Post story just three days ago, “Facebook Didn’t Seem To Care I Was Being Sexually Harassed Until I Decided To Write About It,” about a journalist whose reports of extreme personal harassment on Facebook were met with a similar response about not violating the company’s Community Standards. That is, until she told Facebook that she planned to write about it.

CryptogramSecuring Elections

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers' conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It's important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn't be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They're computers -- often ancient computers running operating systems no longer supported by the manufacturers -- and they don't have any magical security technology that the rest of the industry isn't privy to. If anything, they're less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We're not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can't use the security systems available to banking and other high-value applications.

We can securely bank online, but can't securely vote online. If we could do away with anonymity -- if everyone could check that their vote was counted correctly -- then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can't, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let's start with the voter rolls. We know they've already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That's just one possibility. A well-executed attack that deletes, for example, one in five voters at random -- or changes their addresses -- would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD. Copies of that DVD, or -- even better -- a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.
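To illustrate how the margin of victory drives the amount of hand counting, here is a rough sketch of one well-known approach, the BRAVO ballot-polling rule, for a two-candidate contest. This is my own illustration with made-up numbers; the essay does not prescribe a particular audit method.

import java.util.Random;

// Illustrative sketch of a BRAVO-style ballot-polling risk-limiting audit
// for a two-candidate contest. The reported winner share and risk limit
// are example numbers; drawing from Random stands in for pulling real
// paper ballots at random.
public class BallotPollingAuditSketch {
    public static void main(String[] args) {
        double reportedWinnerShare = 0.55; // reported 55/45 result
        double riskLimit = 0.05;           // 5% risk limit
        Random ballots = new Random(42);

        double testStatistic = 1.0;
        int examined = 0;
        // Sample ballots until the evidence is strong enough (the test
        // statistic reaches 1/riskLimit) or we fall back to a full hand count.
        while (testStatistic < 1.0 / riskLimit && examined < 1_000_000) {
            boolean forReportedWinner = ballots.nextDouble() < reportedWinnerShare;
            examined++;
            if (forReportedWinner) {
                testStatistic *= 2.0 * reportedWinnerShare;         // supports the outcome
            } else {
                testStatistic *= 2.0 * (1.0 - reportedWinnerShare); // cuts against it
            }
        }
        System.out.println("Ballots examined before confirming: " + examined);
    }
}

Shrink the reported margin and the number of ballots the audit needs grows sharply, which is exactly the property described above.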

It's vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it's easy to agree on strong security. But after the vote, someone is the presumptive winner -- and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it's too late to agree on anything.

The politicians running in the election shouldn't have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don't do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don't go nearly far enough. The constitution delegates elections to the states but allows Congress to "make or alter such Regulations". In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Worse Than FailureError'd: Placeholders-a-Plenty

"On my admittedly old and cheap phone, Google Maps seems to have confused the definition of the word 'trip'," writes Ivan.

 

"When you're Gas Networks Ireland, and don't have anything nice to say, I guess you just say lorem ipsum," wrote Gabe.

 

Daniel D. writes, "Google may not know how I got 100 GB, but they seem pretty sure that it's expiring soon."

 

Peter S. wrote, "F1 finally did it. The quantum driver Lastname is driving a Ferrari and chasing him- or herself in Red Bull."

 

Hrish B. writes, "I hope my last name is not an example as well."

 

Peter S. wrote, "Not sure what IEEE wants me to vote for. But I would vote for hiring better coders."

 

"Well, at least they got my name right, half of the time," Peter S. writes.

 

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet Linux AustraliaOpenSTEM: NAPLAN and vocabulary

It is the time of year when the thoughts of teachers of students in years 3, 5, 7 and 9 turn (not so) lightly to NAPLAN. I’m sure many of you are aware of the controversial review of NAPLAN by Les Perelman, a retired professor from MIT in the United States. Perelman conducted a similar […]

Planet Linux AustraliaFrancois Marier: Using a Kenwood TH-D72A with Pat on Linux and ax25

Here is how I managed to get my Kenwood TH-D72A radio working with Pat on Linux using the built-in TNC and the AX.25 mode.

Installing Pat

First of all, download and install the latest Pat package from the GitHub project page.

dpkg -i pat_x.y.z_amd64.deb

Then, follow the installation instructions for the AX.25 mode and install the necessary packages:

apt install ax25-tools ax25-apps

along with the systemd script that comes with Pat:

/usr/share/pat/ax25/install-systemd-ax25-unit.bash

Configuration

Once the packages are installed, it's time to configure everything correctly:

  1. Power cycle the radio.
  2. Enable TNC in packet12 mode (band A*).
  3. Tune band A to VECTOR channel 420 (or 421 if you can't reach VA7EOC on simplex).
  4. Put the following in /etc/ax25/axports (replacing CALLSIGN with your own callsign):

     wl2k    CALLSIGN    9600    128    4    Winlink
    
  5. Set HBAUD to 1200 in /etc/default/ax25.

  6. Download and compile the tmd710_tncsetup script mentioned in a comment in /etc/default/ax25:

     gcc -o tmd710_tncsetup tmd710_tncsetup.c
    
  7. Add the tmd710_tncsetup script in /etc/default/ax25 and use these command line parameters (-B 0 specifies band A, use -B 1 for band B):

     tmd710_tncsetup -B 0 -S $DEV -b $HBAUD -s
    
  8. Start ax25 driver:

     systemctl start ax25.service
    

Connecting to a winlink gateway

To monitor what is being received and transmitted:

axlisten -cart

Then create aliases like these in ~/.wl2k/config.json:

{
  "connect_aliases": {
    "ax25-VA7EOC": "ax25://wl2k/VA7EOC-10",
    "ax25-VE7LAN": "ax25://wl2k/VE7LAN-10"
  },
}

and use them to connect to your preferred Winlink gateways.

Troubleshooting

If it doesn't look like ax25 can talk to the radio (i.e. the TX light doesn't turn ON), then it's possible that the tmd710_tncsetup script isn't being run at all, in which case the TNC isn't initialized correctly.

On the other hand, if you can see the radio transmitting but are not seeing any incoming packets in axlisten then double check that the speed is set correctly:

  • HBAUD in /etc/default/ax25 should be set to 1200
  • line speed in /etc/ax25/axports should be set to 9600
  • SERIAL_SPEED in tmd710_tncsetup should be set to 9600
  • radio displays packet12 in the top-left corner, not packet96

If you can establish a connection, but it's very unreliable, make sure that you have enabled software flow control (the -s option in tmd710_tncsetup).

If you can't connect to VA7EOC-10 on UHF, you could also try the VHF BCFM repeater on Mt Seymour, VE7LAN (VECTOR channel 65).

,

CryptogramLifting a Fingerprint from a Photo

Police in the UK were able to read a fingerprint from a photo of a hand:

Staff from the unit's specialist imaging team were able to enhance a picture of a hand holding a number of tablets, which was taken from a mobile phone, before fingerprint experts were able to positively identify that the hand was that of Elliott Morris.

[...]

Speaking about the pioneering techniques used in the case, Dave Thomas, forensic operations manager at the Scientific Support Unit, added: "Specialist staff within the JSIU fully utilised their expert image-enhancing skills which enabled them to provide something that the unit's fingerprint identification experts could work with. Despite being provided with only a very small section of the fingerprint which was visible in the photograph, the team were able to successfully identify the individual."

Worse Than FailureCodeSOD: A Problematic Place

In programming, sometimes the ordering of your data matters. And sometimes the ordering doesn’t matter and it can be completely random. And sometimes… well, El Dorko found a case where it apparently matters that it doesn’t matter:

DirectoryInfo di = new DirectoryInfo(directory);
FileInfo[] files = di.GetFiles();
DirectoryInfo[] subdirs = di.GetDirectories();

// shuffle subdirs to avoid problematic places
Random rnd = new Random();
for( int i = subdirs.Length - 1; i > 0; i-- )
{
    int n = rnd.Next( i + 1 );
    DirectoryInfo tmp = subdirs[i];
    subdirs[i] = subdirs[n];
    subdirs[n] = tmp;
}

foreach (DirectoryInfo dir in subdirs)
{
   // process files in directory
}

This code does some processing on a list of directories. Apparently while coding this, the author found themself in a “problematic place”. We all have our ways of avoiding problematic places, but this programmer decided the best way was to introduce some randomness into the equation. By randomizing the order of the list, they seem to have completely mitigated… well, it’s not entirely clear what they’ve mitigated. And while their choice of shuffling algorithm is commendable, maybe next time they could leave us a comment elaborating on the problematic place they found themself in.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet Linux AustraliaMichael Still: Art with condiments


Mr 15 just made me watch this video, it’s pretty awesome…

You’re welcome.


The post Art with condiments appeared first on Made by Mikal.

,

Sociological ImagesThe Sociology Behind the X-Files

Originally Posted at TSP Clippings

Photo Credit: Val Astraverkhau, Flickr CC

Throughout history, human beings have been enthralled by the idea of the paranormal. While we might think that UFOs and ghosts belong to a distant and obscure dimension, social circumstances help to shape how we envision the supernatural. In a recent interview with New York Magazine, sociologist Joseph O. Baker describes the social aspects of Americans’ beliefs about UFOs.

Baker argues that pop culture shapes our understandings of aliens. In the 1950s and 1960s, pop culture imagined aliens in humanoid form, typically as very attractive Swedish blonde types with shining eyes. By the 1970s and 1980s, the abductor narrative took hold and extraterrestrials were represented as the now iconic image of the little gray abductor — small, gray-skinned life-forms with huge hairless heads and large black eyes. Baker posits that one of the main causes of UFOs’ heightened popularity during this time was the extreme distrust of the government following incidents such as Watergate. Baker elaborates,

“I think there is something to be said for a lack of faith in government and institutions in that era, and that coincided with UFOs’ rise in popularity. The lack of trust in the government, and the idea that the government knows something about this — those two things went together, and you can see it in the public reaction post-Vietnam, to Watergate, all that stuff.”

While the individual characteristics of “believers” are hard to determine, survey evidence suggests that men and people from low-income backgrounds are more likely to believe in the existence of alien life. Baker says that believing is also dependent upon religious participation rather than education or income. In his words,

“One of the other strongest predictors is not participating as strongly in forms of organized religion. In some sense, there’s a bit of a clue there about what’s going on with belief — it’s providing an alternative belief system. If you look at religious-service attendance, there will be a strong negative effect there for belief in UFOs.”

Baker’s research on the paranormal indicates that social circumstances influence belief in extraterrestrial beings. In short, these social factors help to shape whether you are a Mulder or a Scully. Believing in UFOs goes beyond abductions and encounters of the Third Kind. In the absence of trust in government and religious institutions, UFOs represent an appealing and mysterious alternative belief system.

Isabel Arriagada (@arriagadaisabe) is a Ph.D. student in the sociology department at the University of Minnesota. Her research focuses on the development of prison policies in South America and the U.S. and how technology shapes new experiences of imprisonment.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityA Sobering Look at Fake Online Reviews

In 2016, KrebsOnSecurity exposed a network of phony Web sites and fake online reviews that funneled those seeking help for drug and alcohol addiction toward rehab centers that were secretly affiliated with the Church of Scientology. Not long after the story ran, that network of bogus reviews disappeared from the Web. Over the past few months, however, the same prolific purveyor of these phantom sites and reviews appears to be back at it again, enlisting the help of Internet users and paying people $25-$35 for each fake listing.

Sometime in March 2018, ads began appearing on Craigslist promoting part-time “social media assistant” jobs, in which interested applicants are directed to sign up for positions at seorehabs[dot]com. This site promotes itself as “leaders in addiction recovery consulting,” explaining that assistants can earn a minimum of $25 just for creating individual Google for Business listings tied to a few dozen generic-sounding addiction recovery center names, such as “Integra Addiction Center,” and “First Exit Recovery.”

The listing on Craigslist.com advertising jobs for creating fake online businesses tied to addiction rehabilitation centers.

Applicants who sign up are given detailed instructions on how to step through Google’s anti-abuse process for creating listings, which include receiving a postcard via snail mail from Google that contains a PIN which needs to be entered at Google’s site before a listing can be created.

Assistants are cautioned not to create more than two listings per street address, but otherwise to use any U.S.-based street address and to leave blank the phone number and Web site for the new business listing.

A screen shot from Seorehabs’ instructions for those hired to create rehab center listings.

In my story Scientology Seeks Captive Converts Via Google Maps, Drug Rehab Centers, I showed how a labyrinthine network of fake online reviews that steered Internet searches toward rehab centers funded by Scientology adherents was set up by TopSeek Inc., which bills itself as a collection of “local marketing experts.” According to LinkedIn, TopSeek is owned by John Harvey, an individual (or alias) who lists his address variously as Sacramento, Calif. and Hawaii.

Although the current Web site registration records from registrar giant Godaddy obscure the information for the current owner of seorehabs[dot]com, a historic WHOIS search via DomainTools shows the site was also registered by John Harvey and TopSeek in 2015. Mr. Harvey did not respond to requests for comment. [Full disclosure: DomainTools previously was an advertiser on KrebsOnSecurity].

TopSeek’s Web site says it works with several clients, but most especially Narconon International — an organization that promotes the rather unorthodox theories of Scientology founder L. Ron Hubbard regarding substance abuse treatment and addiction.

As described in Narconon’s Wikipedia entry, Narconon facilities are known not only for attempting to win over new converts to Scientology, but also for treating all substance abuse addictions with a rather bizarre cocktail consisting mainly of vitamins and long hours in extremely hot saunas. Their Wiki entry documents multiple cases of accidental deaths at Narconon facilities, where some addicts reportedly died from overdoses of vitamins or neglect.

A LUCRATIVE RACKET

Bryan Seely, a security expert who has written extensively about the use of fake search listings to conduct online bait-and-switch scams, said the purpose of sites like those that Seorehabs pays people to create is to funnel calls to a handful of switchboards that then sell the leads to rehab centers that have agreed to pay for them. Many rehab facilities will pay hundreds of dollars for leads that may ultimately lead to a new patient. After all, Seely said, some facilities can then turn around and bill insurance providers for thousands of dollars per patient.

Perhaps best known for a stunt in which he used fake Google Maps listings to intercept calls destined for the FBI and U.S. Secret Service, Seely has learned a thing or two about this industry: Until 2011, he worked for an SEO firm that helped to develop and spread some of the same fake online reviews that he is now helping to clean up.

“Mr. Harvey and TopSeek are crowdsourcing the data input for these fake rehab centers,” Seely said. “The phone numbers all go to just a few dedicated call centers, and it’s not hard to see why. The money is good in this game. He sells a call for $50-$100 at a minimum, and the call center then tries to sell that lead to a treatment facility that has agreed to buy leads. Each lead can be worth $5,000 to $10,000 for a patient who has good health insurance and signs up.”

This graph illustrates what happens when someone calls one of these Seorehabs listings. Source: Bryan Seely.

Many of the listings created by Seorehab assistants are tied to fake Google Maps entries that include phony reviews for bogus treatment centers. In the event those listings get suspended by Google, Seorehab offers detailed instructions on how assistants can delete and re-submit listings.

Assistants also can earn extra money writing fake, glowing reviews of the treatment centers:

Below are some of the plainly bogus reviews and listings created in the last month that pimp the various treatment center names and Web sites provided by Seorehabs. It is not difficult to find dozens of other examples of people who claim to have been at multiple Seorehab-promoted centers scattered across the country. For example, “Gloria Gonzalez” supposedly has been treated at no fewer than seven Seorehab-marketed detox locations in five states, penning each review just in the last month:

A reviewer using the name “Tedi Spicer” also promoted at least seven separate rehab centers across the United States in the past month. Getting treated at so many far-flung facilities in just the few months that the domains for these supposed rehab centers have been online would be an impressive feat:

Bring up any of the Web sites for these supposed rehab listings and you’ll notice they all include the same boilerplate text and graphic design. Aside from combing listings created by the reviewers paid to promote the sites, we can find other Seorehab listings just by searching the Web for chunks of text on the sites. Doing so reveals a long list (this is likely far from comprehensive) of domain names registered in the past few months that were all created with hidden registration details and registered via Godaddy.

Seely said he spent a few hours this week calling dozens of phone numbers tied to these rehab centers promoted by TopSeek, and created a spreadsheet documenting his work and results here (Google Sheets).

Seely said while he would never advocate such activity, TopSeek’s fake listings could end up costing Mr. Harvey plenty of money if someone figured out a way to either mass-report the listings as fraudulent or automate calls to the handful of hotlines tied to the listings.

“It would kill his business until he changes all the phone numbers tied to these fake listings, but if he had to do that he’d have to pay people to rebuild all the directories that link to these sites,” he said.

WHAT YOU CAN DO ABOUT FAKE ONLINE REVIEWS

Before doing business with a company you found online, don’t just pick the company that comes up at the top of search results on Google or any other search engine. Unfortunately, that generally guarantees little more than the company is good at marketing.

Take the time to research the companies you wish to hire before booking them for jobs or services — especially when it comes to big, expensive, and potentially risky services like drug rehab or moving companies. By the way, if you’re looking for a legitimate rehab facility, you could do worse than to start at rehabs.com, a legitimate rehab search engine.

It’s a good idea to get in the habit of verifying that the organization’s physical address, phone number and Web address shown in the search result match that of the landing page. If the phone numbers are different, use the contact number listed on the linked site.

Take the time to learn about the organization’s reputation online and in social media; if it has none (other than a Google Maps listing with all glowing, 5-star reviews), it’s probably fake. Search the Web for any public records tied to the business’ listed physical address, including articles of incorporation from the local secretary of state office online.

A search of the company’s domain name registration records can give you an idea of how long its Web site has been in business, as well as additional details about the organization (although the ability to do this may soon be a thing of the past).

Seely said one surefire way to avoid these marketing shell games is to ask a simple question of the person who answers the phone in the online listing.

“Ask anyone on the phone what company they’re with,” Seely said. “Have them tell you, take their information and then call them back. If they aren’t forthcoming about who they are, they’re most likely a scam.”

In 2016, Seely published a book on Amazon about the thriving and insanely lucrative underground business of fake online reviews. He’s agreed to let KrebsOnSecurity republish the entire e-book, which is available for free at this link (PDF).

“This is literally the worst book ever written about Google Maps fraud,” Seely said. “It’s also the best. Is it still a niche if I’m the only one here? The more people who read it, the better.”

TEDWhat can your phone do in the next mobile economy? A workshop with Samsung

An attendee plays with an interface for exploring the possibilities of the mobile phone at the Samsung Social Space during TED2018: The Age of Amazement, in Vancouver. Photo: Lawrence Sumulong / TED

What do you imagine your phone doing for you in the future? Sure, you can take calls, send texts, use apps and surf the internet. But according to Samsung, the next corner for mobile engagement could turn your cell phone into a superhero (of sorts) in industries like public safety and healthcare. 5G technology will not only improve a company’s ability to deliver faster, higher quality services, but the “greater connectivity paves the way for data-intensive solutions like self-driving vehicles, Hi-Res streaming VR, and rich real-time communications.” Imagine a world where your Facetime or Skype call doesn’t drop mid-conversation, you never have to wait for a video to buffer, and connecting to Wi-Fi becomes the slower option compared to staying on data.

At their afternoon workshop during TED2018, Samsung provided a short list of real-world issues to guide thoughtful discussion among workshop groups on how the mobile economy can be a part of the big solutions. Scenarios included: data security in an evolving retail world; hurricane preparedness; urban traffic management; and overburdened emergency rooms.

These breakout sessions led to fascinating conversations between those with different perspectives, backgrounds and skill sets. Architects and scientists weighed in with writers and business development professionals to dream up a vision of the future where everything works seamlessly and interacts like a well-conducted symphony.

After intense discussion and swapping of ideas and possibilities, groups were encouraged to synthesize the conversation and share it with the larger room. They didn’t just offer solutions, but posed fascinating questions on how we might unlock answers to the endless possibilities the next mobile economy will bring in the age of amazement.

A view of Samsung’s social space at TED2018, which featured mobile phone activities for exploring the next mobile economy (as well as delicious coffee). Photo: Lawrence Sumulong / TED

CryptogramOblivious DNS

Interesting idea:

...we present Oblivious DNS (ODNS), which is a new design of the DNS ecosystem that allows current DNS servers to remain unchanged and increases privacy for data in motion and at rest. In the ODNS system, both the client is modified with a local resolver, and there is a new authoritative name server for .odns. To prevent an eavesdropper from learning information, the DNS query must be encrypted; the client generates a request for www.foo.com, generates a session key k, encrypts the requested domain, and appends the TLD domain .odns, resulting in {www.foo.com}k.odns. The client forwards this, with the session key encrypted under the .odns authoritative server's public key ({k}PK) in the "Additional Information" record of the DNS query to the recursive resolver, which then forwards it to the authoritative name server for .odns. The authoritative server decrypts the session key with his private key, and then subsequently decrypts the requested domain with the session key. The authoritative server then forwards the DNS request to the appropriate name server, acting as a recursive resolver. While the name servers see incoming DNS requests, they do not know which clients they are coming from; additionally, an eavesdropper cannot connect a client with her corresponding DNS queries.

News article.
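
For readers who want to see the moving parts, here is a minimal, illustrative sketch in Python of the query construction described in the quote above: the client encrypts the requested domain under a fresh session key, encrypts that session key under the .odns server's public key, and the authoritative server reverses both steps. The cipher choices (Fernet for the symmetric layer, RSA-OAEP for the public-key layer) and all names are assumptions for illustration only; the real ODNS design has its own wire format and integrates with actual DNS resolvers.

# Conceptual sketch only -- not the real ODNS implementation or wire format.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for the .odns authoritative server's key pair (assumption).
odns_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
odns_public_key = odns_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def build_odns_query(domain: str):
    """Client side: produce {domain}k.odns and {k}PK."""
    session_key = Fernet.generate_key()                          # k
    encrypted_domain = Fernet(session_key).encrypt(domain.encode())
    qname = encrypted_domain.hex() + ".odns"                     # {domain}k.odns
    encrypted_key = odns_public_key.encrypt(session_key, oaep)   # {k}PK
    return qname, encrypted_key

def resolve_at_odns_server(qname: str, encrypted_key: bytes) -> str:
    """Authoritative .odns server: recover k, then the requested domain."""
    session_key = odns_private_key.decrypt(encrypted_key, oaep)
    encrypted_domain = bytes.fromhex(qname[:-len(".odns")])
    return Fernet(session_key).decrypt(encrypted_domain).decode()

qname, blob = build_odns_query("www.foo.com")
print(resolve_at_odns_server(qname, blob))   # -> www.foo.com

In this scheme the recursive resolver in the middle only ever sees the encrypted name, which is the privacy property the authors describe.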

Worse Than FailureThe Proprietary Format

Have you ever secured something with a lock? The intent is that at some point in the future, you'll use the requisite key to regain access to it. Of course, the underlying assumption is that you actually have the key. How do you open a lock once you've lost the key? That's when you need to get creative. Lock picks. Bolt cutters. Blow torch. GAU-8...

In 2004, Ben S. went on a solo bicycle tour, and for reasons of weight, his only computer was a Handspring Visor Deluxe PDA running Palm OS. He had an external, folding keyboard that he would use to type his notes from each day of the trip. To keep these notes organized by day, he stored them in the Datebook (calendar) app as all-day events. The PDA would sync with a desktop computer using a Handspring-branded fork of the Palm Desktop software. The whole Datebook could then be exported as a text file from there. As such, Ben figured his notes were safe. After the trip ended, he bought a Windows PC that he had until 2010, but he never quite got around to exporting the text file. After he switched to using a Mac, he copied the files to the Mac and gave away the PC.

Handspring Treo 90

Ten years later, he decided to go through all of the old notes, but he couldn't open the files!

Uh oh.

The Handspring company had gone out of business, and the software wouldn't run on the Mac. His parents had the Palm-branded version of the software on one of their older Macs, but Handspring used a different data file format that the Palm software couldn't open. His in-laws had an old Windows PC, and he was able to install the Handspring software, but it wouldn't even open without a physical device to sync with, so the file just couldn't be opened. Ben reluctantly gave up on ever accessing the notes again.

Have you ever looked at something and then turned your head sideways, only to see it in a whole new light?

One day, Ben was going through some old clutter and found a backup DVD-R he had made of the Windows PC before he had wiped its hard disk. He found the datebook.dat file and opened it in SublimeText. There he saw rows and rows of hexadecimal code arranged into tidy columns. However, in this case, the columns between the codes were not just on-screen formatting for readability, they were actual space characters! It was not a data file after all, it was a text file.

The Handspring data file format was a text file containing hexadecimal code with spaces in it! He copied and pasted the entire file into an online hex-to-text converter (which ignored the spaces and line breaks), and voilà, Ben had his notes back!
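
If you ever hit the same kind of file, the conversion is also easy to do locally. Here is a small sketch, with the caveats that "datebook.dat" is just a placeholder path, latin-1 is an assumption about the encoding, and real Palm/Handspring exports may contain sections that do not decode cleanly (hence the forgiving error handling):

# Minimal sketch: turn a text file of space-separated hex codes back into text.
def hex_dump_to_text(path: str, encoding: str = "latin-1") -> str:
    with open(path, "r") as f:
        # Drop the spaces and line breaks, then interpret the hex pairs as bytes.
        hex_digits = "".join(f.read().split())
    return bytes.fromhex(hex_digits).decode(encoding, errors="replace")

if __name__ == "__main__":
    # "datebook.dat" is a placeholder file name for illustration.
    print(hex_dump_to_text("datebook.dat"))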

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet Linux AustraliaMichael Still: City2Surf 2018


I registered for city2surf this morning, which will be the third time I’ve run in the event. In 2016 my employer sponsored a bunch of us to enter, and I ran the course in 86 minutes and 54 seconds. 2017 was a bit more exciting, because in hindsight I did the final part of my training and the race itself with a torn Achilles tendon. Regardless, I finished the course in 79 minutes and 39 seconds — a 7 minute and 15 second improvement despite the injury.

This year I’ve done a few things differently — I’ve started training much earlier, mostly as a side effect of recovering from the Achilles injury; and secondly, I’ve decided to try and raise some money for charity during the run.

Specifically, I’m raising money for the Black Dog Institute. They were selected because I’ve struggled with depression on and off over my adult life, and that’s especially true for the last twelve months or so. I figure that raising money for a resource that I’ve found personally useful makes a lot of sense.

I’d love for you to donate to the Black Dog Institute, but I understand that’s not always possible. Either way, thanks for reading this far!


The post City2Surf 2018 appeared first on Made by Mikal.

,

Planet Linux AustraliaDavid Rowe: Lithium Cell Amp Hour Tester and Electric Sailing

I recently electrocuted my little sail boat. I built the battery pack using some second-hand Lithium cells donated by my EV. However, after 8 years of abuse from my kids and me, those cells are of varying quality. So I set about developing an Amp-Hour tester to determine the capacity of the cells.

The system has a relay that switches a low-value power resistor (OK, some coat hanger wire) across the 3.2V cell terminals, loading it up at about 27A, roughly the cruise current for my e-boat. It’s about 0.12 ohms once it heats up. This gets too hot to touch but not red hot; it’s only 86W being dissipated along about 1m of wire. When I built my EV I used the coat hanger wire load trick to test 3kW loads; that was a bit more exciting!

The empty beer can in the background makes a useful insulated stand off. Might need to make more of those.

When I first installed Lithium cells in my EV I developed a charge controller for my EV. I borrowed a small part of that circuit; a two transistor flip flop and a Battery Management System (BMS) module:

Across the cell under test is a CM090 BMS module from EV Power. That’s the good-looking red PCB in the photos, onto which I have tacked the circuit above. These modules have a switch that opens when the cell voltage drops beneath 2.5V.

Taking the base of either transistor to ground switches on the other transistor. In logic terms, it’s a “not set” and “not reset” operation. When power is applied, the BMS module switch is closed. The 10uF capacitor is discharged, so provides a momentary short to ground, turning Q1 off, and Q2 on. Current flows through the automotive relay, switching on the load to the battery.

After a few hours the cell discharges beneath 2.5V, the BMS switch opens and Q2 is switched off. The collector voltage on Q2 rises, switching on Q1. Due to the latching operation of the flip flop, it stays in this state. This is important, as when the relay opens, the cell will be unloaded and its voltage will rise again, and the BMS module switch will close. In the initial design without a flip flop, this caused the relay to buzz as the cell voltage oscillated about 2.5V as the relay opened and closed! I need the test to stop and stay stopped – it will be operating unattended, so I don’t want to damage the cell by completely discharging it.

The LED was inserted to ensure the base voltage on Q1 was low enough to switch Q1 off when Q2 was on (Vce of Q2 is not zero), and has the neat side effect of lighting the LED when the test is complete!

In operation, I point a cell phone taking time lapse video of the LED and some multi-meters, and start the test:

I wander back after 3 hours and jog-shuttle the time lapse video to determine the time when the LED came on:

The time lapse feature on this phone runs in 1/10 of real time. For example Cell #9 discharged in 12:12 on the time lapse video. So we convert that time to seconds, multiply by 10 to get “seconds of real time”, then divide by 3600 to get the run time in hours. Multiplying by the discharge current of 27(ish) Amps we get the cell capacity:

  12:12 time lapse, 27*(12*60+12)*10/3600 = 55AH

So this cell’s a bit low, and won’t be finding its way onto my boat!
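
If you are working through a pile of cells, the same arithmetic is easy to script. Here is a small sketch, assuming the 1/10-speed time lapse and the roughly constant 27A load described above:

# Sketch: convert a time-lapse run time ("MM:SS") into cell capacity in amp-hours.
# Assumes the time lapse runs at 1/10 of real time and a roughly constant load current.
def capacity_ah(time_lapse: str, current_a: float = 27.0, speedup: int = 10) -> float:
    minutes, seconds = (int(x) for x in time_lapse.split(":"))
    real_seconds = (minutes * 60 + seconds) * speedup
    return current_a * real_seconds / 3600.0

print(round(capacity_ah("12:12")))   # -> 55, matching the worked example above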

Another alternative is a logging multimeter; one could even measure and integrate the discharge current over time. Or I could have just bought or borrowed a proper discharge tester, but where’s the fun in that?

Results

It was fun to develop, a few Saturday afternoons of sitting in the driveway soldering, occasional burns from 86W of hot wire, and a little head scratching while I figured out how to take the design from an expensive buzzer to a working circuit. Nice to do some soldering after months of software based DSP. I’m also happy that I could develop a transistor circuit from first principles.

I’ve now tested 12 cells (I have 40 to work through), and measured capacities of 50 to 75AH (they are rated at 100AH new). Some cells have odd behavior under load, dipping beneath 3V right at the start of the test rather than holding 3.2V for a few hours – indicating high internal resistance.

My beloved sail e-boat is already doing better. Last weekend, using the best cells I had tested at that point, I e-motored all day on varying power levels.

One neat trick, explained to me by Matt, is motor-sailing. Using a little bit of outboard power, the boat overcomes hydrodynamic friction (it gets moving in the water) and the sail is moved out of stall (like an airplane wing moving to just above stall speed). This means the boat moves a lot faster than under motor or sail alone in light winds. For example, the motor was registering just 80W, but we were doing 3 knots in light winds. This same trick can be done with a stink-motor and dinosaur juice, but the e-motor is completely silent; we forgot it was on for hours at a time!

Reading Further

Electric Car BMS Controller
New Lithium Battery Pack for my EV
Engage the Silent Drive
EV Bugs

CryptogramHijacking Emergency Sirens

Turns out it's easy to hijack emergency sirens with a radio transmitter.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV April 2018 Workshop: Linux and Drupal mentoring and troubleshooting

Apr 21 2018 12:00
Apr 21 2018 16:00
Location: 
Room B2:11, State Library of Victoria, 328 Swanston St, Melbourne

As our usual venue at Infoxchange is not available this month due to construction work, we'll be joining forces with DrupalMelbourne at the State Library of Victoria.

Linux Users of Victoria is a subcommittee of Linux Australia.


Worse Than FailureCodeSOD: Breaking Changes

We talk a lot about the sort of wheels one shouldn’t reinvent. Loads of bad code stumbles down that path. Today, Mary sends us some code from their home-grown unit testing framework.

Mary doesn’t have much to say about whatever case of Not Invented Here Syndrome brought things to this point. It’s especially notable that this is Python, which comes, out of the box, with a perfectly serviceable unittest module built in. Apparently not serviceable enough for their team, however, as Burt, the Lead Developer, wrote his own.

His was Object Oriented. Each test case received a TestStepOutcome object as a parameter, and was expected to return that object. This meant you didn’t have to use those pesky, readable, and easily comprehensible assert… methods. Instead, you just did your test steps and called something like:

  outcome.setPassed()

Or

  outcome.setPassed(False)

Now, no one particularly liked the calling convention of setPassed(False), so after some complaints, Burt added a setFailed method. Developers started using it, and everyone’s tests passed. Everyone was happy.

At least, everyone was happy up until Mary wrote a test she expected to see fail. There was a production bug, and she could replicate it, step by step, at the Python REPL. So that she could “catch” the bug and “prove” it was dead, she wanted a unit test.

The unit test passed.

The bug was still there, and she continued to be able to replicate it manually.

She tried outcome.setFailed() and outcome.setFailed(True) and outcome.setFailed("OH FFS THIS SHOULD FAIL"), but the test passed. outcome.setPassed(False), however… worked just like it was supposed to.

Mary checked the implementation of the TestStepOutcome class, and found this:

class TestStepOutcome(object):
  def setPassed(self, flag=True):
    self.result = flag

  def setFailed(self, flag=True):
    self.result = flag

Yes, in Burt’s reluctance to have a setFailed method, he just copy/pasted setPassed, thinking, “They basically do the same thing.” No one checked his work or reviewed the code. They all just started using setFailed, saw their tests pass, which is what they wanted to see, and moved on about their lives.

Fixing Burt’s bug was no small task: changing the setFailed behavior broke a few hundred unit tests, proving that every change breaks someone’s workflow.
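
For what it’s worth, the corrected class is a one-line change; presumably something like this sketch (names as in the original, with the negation being the only difference):

class TestStepOutcome(object):
  def setPassed(self, flag=True):
    self.result = flag

  def setFailed(self, flag=True):
    # A failed step is simply a not-passed step.
    self.result = not flag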

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.