Planet Russell


Cryptogram: E-Mail Tracking

Interesting survey paper on the privacy implications of e-mail tracking:

Abstract: We show that the simple act of viewing emails contains privacy pitfalls for the unwary. We assembled a corpus of commercial mailing-list emails, and find a network of hundreds of third parties that track email recipients via methods such as embedded pixels. About 30% of emails leak the recipient's email address to one or more of these third parties when they are viewed. In the majority of cases, these leaks are intentional on the part of email senders, and further leaks occur if the recipient clicks links in emails. Mail servers and clients may employ a variety of defenses, but we analyze 16 servers and clients and find that they are far from comprehensive. We propose, prototype, and evaluate a new defense, namely stripping tracking tags from emails based on enhanced versions of existing web tracking protection lists.

Blog post on the research.
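The defense the paper proposes, stripping tracking tags against a blocklist, can be approximated in a few lines. A rough sketch (the domain list and regex-based HTML handling here are illustrative, not the authors' implementation, which uses enhanced web tracking protection lists):

```python
import re

# Illustrative blocklist; a real deployment would load an
# EasyPrivacy-style tracking protection list.
TRACKER_HOSTS = {"tracker.example.com", "pixel.example.net"}

IMG_RE = re.compile(r'<img\b[^>]*\bsrc=["\']([^"\']+)["\'][^>]*>', re.IGNORECASE)

def strip_tracking_pixels(html: str) -> str:
    """Remove <img> tags whose host is on the blocklist."""
    def replace(match):
        url = match.group(1)
        # Crude host extraction: drop the scheme, keep up to the first slash.
        host = re.sub(r"^[a-z]+://", "", url).split("/")[0].lower()
        return "" if host in TRACKER_HOSTS else match.group(0)
    return IMG_RE.sub(replace, html)
```

Note how the tracking URL typically carries the recipient's address (or a hash of it) as a query parameter, which is exactly the leak the paper measures.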

Planet Linux Australia: Ben Martin: Ikea wireless charger in CNC mahogany case

I notice that Ikea sell their wireless chargers without a shell for insertion into desks. The "desk" I chose is a curve cut profile in mahogany that just happens to have the same fit as an LG G3/4/5 type phone. The design changed along the way to a more upright one which then required a catch to stop the phone sliding off.

This was done in Fusion360, which allows bringing in STL files of things like phones and cutting those out of another body. It took a while to sort out the ball end toolpath, but I eventually got something that worked reasonably well. The chomps in the side allow fingers to securely lift the phone off the charger.

It will be interesting to play with sliced objects in wood. Layering 3D cuts to build up objects that are 10cm (or about 4 layers) tall.

Worse Than Failure: The Porpoise of Comment Easter Eggs

Today's submitter writes: I wonder how many developers out there have managed, intentionally or otherwise, to have a comment Easter egg go viral within a project.

It seems in the late '90s he was working on a project codenamed "Dolphin." This wasn't the GameCube; it was an ASP/VB6 N-Tier system, also known as "way less fun." One of the first phases of the project involved a few web-based forms. The architects provided them with some simple standard templates to use, such as the method header comment block. This comment block included a Purpose field, which in a moment of self-amusement our submitter changed to Porpoise throughout the VB6 classes and ASP scripts he'd written.

The first phase was released, and after code review, that particular implementation was cited as the paragon that other implementations should follow. Of course, this led to rampant copy-pasta throughout the entire system. By the end of phase 2, the code comments for the Dolphin project were inextricably filled with Porpoises. Being a subtle word change, it largely went unnoticed. Every once in a while, a developer would actually notice and nearly keel over laughing.

Of course, there's also a famous instance of a code comment going properly viral. Deep within the bowels of the Unix kernel, there is a method responsible for saving the CPU context when processes are switched—any time a time slice is used up, an interrupt signal is caught, a system call is made, or a page fault occurs. The code to do this in an efficient manner is horrifically complicated, so it's commented with, You are not expected to understand this. This comment can now be found on buttons, mousepads, t-shirts, hoodies, and tons of other merchandise. It's become a rallying cry of the Unix geeks, a smug way of saying, "I understand where this is from. Do you?"

Have any of you ever written something that went viral, either locally within your company or across the broader Internet community? Let us know in the comments or—if you've got a good one—drop us a submission.


Planet Debian: Iain R. Learmonth: Facebook Lies

In the past, I had a Facebook account. Long ago I “deleted” this account through the procedure outlined on their help pages. In theory, 14 days after I used this process my account would be irrevocably gone. This was all lies.

My account was not deleted and yesterday I received an email:

Screenshot of the email I received from Facebook

It took me a moment to figure it out, but what had happened here is someone had logged into my Facebook account using my email address and password. Facebook simply reactivated the account, which had not had its data deleted, as if I had logged in.

This was possible because:

  1. Facebook was clinging to the hope that I would like to return
  2. The last time I used Facebook I didn’t know what a password manager was and was using the same password for basically everything

When I logged back in, all I needed to provide to prove I was me was my date of birth. Given that old Facebook passwords are readily available from dumps (people think their accounts are gone, so why should they be changing their passwords?) and my date of birth is not secret either, this is not great.

I followed the deletion procedure again and in 2 weeks (you can’t immediately request deletion apparently) I’ll check to see if the account is really gone. I’ve updated the password so at least the deletion process can’t be interrupted by whoever has that password (probably lots of people - it’ll be in a ton of dumps where databases have been hacked).

If it’s still not gone, I hear you can just post obscene and offensive material until Facebook deletes you. I’d rather not have to take that route though.

If you’re interested to see if you’ve turned up in a hacked database dump yourself, I would recommend hibp (Have I Been Pwned).
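Have I Been Pwned also exposes a password-checking API that never sees your actual password: you send only the first five hex characters of its SHA-1 hash and match the returned suffixes locally. A minimal sketch of that k-anonymity scheme against the public Pwned Passwords endpoint:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str):
    # Only the 5-character prefix ever leaves the machine (k-anonymity).
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breach dumps."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        # The API returns one "SUFFIX:COUNT" pair per line for this prefix.
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

A nonzero count for an old password is a strong hint it is sitting in exactly the kind of dump described above.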

Planet Linux Australia: Simon Lyall: DevOps Days Auckland 2017 – Monday Session 3

Mirror, mirror, on the wall: testing Conway’s Law in open source communities – Lindsay Holmwood

  • The mapping between the organisational structure and the technical structure.
  • Easy to find who owns something; you don’t have to keep two maps in your head
  • Needs flexibility in the organisational structure in order to support flexibility in the technical design
  • Conway’s “Law” is really just an adage
  • Complexity frequently takes the form of hierarchy
  • Organisations that mirror perform badly in rapidly changing and innovative environments

Metrics that Matter – Alison Polton-Simon (Thoughtworks)

  • Metrics Mania – Lots of focus on it everywhere ( fitbits, google analytics, etc)
  • How to help teams improve CD process
  • Define CD
    • Software consistently in a deployable state
    • Get fast, automated feedback
    • Do push-button deployments
  • Identifying metrics that mattered
    • Talked to people
    • Contextual observation
    • Rapid prototyping
    • Pilot offering
  • 4 big metrics
    • Deploy ready builds
    • Cycle time
    • Mean time between failures
    • Mean time to recover
  • Number of Deploy-ready builds
    • How many builds are ready for production?
    • Routine commits
    • Testing you can trust
    • Product + Development collaboration
  • Cycle Time
    • Time it takes to go from a commit to a deploy
    • Efficient testing (test subset first, faster testing)
    • Appropriate parallelization (lots of build agents)
    • Optimise build resources
  • Case Study
    • Monolithic Codebase
    • Hand-rolled build system
    • Unreliable environments ( tests and builds fail at random )
    • Validating a Pull Request can take 8 hours
    • Coupled code: isolated teams
    • Wide range of maturity in testing (some no test, some 95% coverage)
    • No understanding of the build system
    • Releases routinely delay (10 months!) or done “under the radar”
  • Focus in case study
    • Reducing cycle time, increasing reliability
    • Extracted services from monolith
    • Pipelines configured as code
    • Build infrastructure provisioned as docker and ansible
    • Results:
      • Cycle time for one team 4-5h -> 1:23
      • Deploy ready builds 1 per 3-8 weeks -> weekly
  • Mean time between failures
    • Quick feedback early on
    • Robust validation
    • Strong local builds
    • Should not be done by reducing number of releases
  • Mean time to recover
    • How long back to green?
    • Monitoring of production
    • Automated rollback process
    • Informative logging
  • Case Study 2
    • 1.27 million lines of code
    • High cyclomatic complexity
    • Tightly coupled
    • Long-running but frequently failing testing
    • Isolated teams
    • Pipeline run duration 10h -> 15m
    • MTTR Never -> 50 hours
    • Cycle time 18d -> 10d
    • Created a dashboard for the metrics
  • Meaningless Metrics
    • The company will build whatever the CEO decides to measure
    • Lines of code produced
    • Number of Bugs resolved. – real life duplicates Dilbert
    • Developers Hours / Story Points
    • Problems
      • Lack of team buy-in
      • Easy to game
      • Unintended consequences
      • Measuring inputs, not impacts
  • Make your own metrics
    • Map your path to production
    • Highlights pain points
    • Collaborate
    • Experiment
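The four big metrics above are all derivable from a simple event log of commits, deploys and incidents. A toy sketch of computing them (the field names are invented for illustration):

```python
from datetime import datetime, timedelta

def cycle_times(deploys):
    # Cycle time: how long each change took from commit to deploy.
    return [d["deployed_at"] - d["committed_at"] for d in deploys]

def mttr(incidents):
    # Mean time to recover: how long until the build/service is back to green.
    total = sum((i["resolved_at"] - i["started_at"] for i in incidents),
                timedelta())
    return total / len(incidents)

def mtbf(failure_times):
    # Mean time between failures: average gap between consecutive failures.
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps, timedelta()) / len(gaps)
```

The point of the talk stands even in this toy form: these measure outcomes (how fast and how reliably value ships), unlike lines of code or story points, which measure inputs.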



Planet Linux Australia: Simon Lyall: DevOps Days Auckland 2017 – Monday Session 2

Using Bots to Scale incident Management – Anthony Angell (Xero)

  • Who we are
    • Single Team
    • Just a platform Operations team
  • SRE team is formed
    • Ops teams plus performance Engineering team
  • Incident Management
    • In the bad old days – 600 people on a single chat channel
    • Created Framework
    • what do incidents look like, post mortems, best practices,
    • How to make incident management easy for others?
  • ChatOps (Based on Hubot)
    • Automated tour guide
    • Multiple integrations – anything with Rest API
    • Reducing time to restore
    • Flexibility
  • Release register – API hook to when changes are made
  • Issue report form
    • Summary
    • URL
    • User-ids
    • how many users & location
    • when started
    • anyone working on it already
    • Anything else to add.
  • Chat Bot for incident
    • Populates form and pushes to production channel, creates PagerDuty alert
    • Creates new slack channel for incident
    • Can automatically update status page from chat and page senior managers
    • Can Create “status updates” which record things (eg “restarted server”), or “yammer updates” which get pushed to social media team
    • Creates a task list automatically for the incident
    • Page people from within chat
    • At the end: Gives time incident lasted, archives channel
    • Post Mortem
  • More integrations
    • Report card
    • Change tracking
    • Incident / Alert portal
  • High Availability – dockerisation
  • Caching
    • PagerDuty
    • AWS
    • Datadog
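The incident flow described in the notes is straightforward to sketch. Everything in this example (the chat and pager objects, their method names, the channel naming) is invented to illustrate the shape of such a bot, not Xero's actual Hubot code:

```python
def handle_incident_report(report, chat, pager):
    """Hypothetical ChatOps handler for a filled-in issue report form."""
    # Spin up a dedicated channel so the main channel stays readable.
    channel = chat.create_channel(f"incident-{report['id']}")
    chat.post("#production", f"Incident opened: {report['summary']} -> {channel}")
    # Page the on-call engineer straight from chat.
    pager.trigger(summary=report["summary"],
                  severity=report.get("severity", "high"))
    # Seed the task list responders will tick off during the incident.
    for task in ("assess impact", "update status page", "post-mortem doc"):
        chat.post(channel, f"TODO: {task}")
    return channel
```

The appeal of the pattern is that every integration hangs off the same chat transcript, so the timeline for the post mortem writes itself.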




Planet Debian: Antoine Beaupré: My free software activities, September 2017

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. I mostly worked on the git, git-annex and ruby packages this month but didn't have time to completely use my allocated hours because I started too late in the month.


I was hoping someone would pick up the Ruby work I submitted in August, but it seems no one wanted to touch that mess, understandably. Since then, new issues came up, and not only did I have to work on the rubygems and ruby1.9 package, but now the ruby1.8 package also had to get security updates. Yes: it's bad enough that the rubygems code is duplicated in one other package, but wheezy had the misfortune of having two Ruby versions supported.

The Ruby 1.9 also failed to build from source because of test suite issues, which I haven't found a clean and easy fix for, so I ended up making test suite failures non-fatal in 1.9, which they were already in 1.8. I did keep a close eye on changes in the test suite output to make sure tests introduced in the security fixes would pass and that I wouldn't introduce new regressions as well.

So I published the following advisories:

  • ruby 1.8: DLA-1113-1, fixing CVE-2017-0898 and CVE-2017-10784. 1.8 doesn't seem affected by CVE-2017-14033 as the provided test does not fail (but it does fail in 1.9.1). test suite was, before patch:

    2199 tests, 1672513 assertions, 18 failures, 51 errors

    and after patch:

    2200 tests, 1672514 assertions, 18 failures, 51 errors
  • rubygems: uploaded the package prepared in August as is in DLA-1112-1, fixing CVE-2017-0899, CVE-2017-0900, CVE-2017-0901. here the test suite passed normally.

  • ruby 1.9: here I used the 2.2.8 release tarball to generate a patch that would cover all issues and published DLA-1114-1 that fixes the CVEs of the two packages above. The test suite was, before patches:

    10179 tests, 2232711 assertions, 26 failures, 23 errors, 51 skips

    and after patches:

    10184 tests, 2232771 assertions, 26 failures, 23 errors, 53 skips


I also quickly issued an advisory (DLA-1120-1) for CVE-2017-14867, an odd issue affecting git in wheezy. The backport was tricky because it wouldn't apply cleanly and the git package had a custom patching system which made it tricky to work on.


I did a quick stint on git-annex as well: I was able to reproduce the issue and confirm an approach to fixing the issue in wheezy, although I didn't have time to complete the work before the end of the month.

Other free software work

New project: feed2exec

I should probably make a separate blog post about this, but ironically, I don't want to spend too much time writing those reports, so this will be quick.

I wrote a new program, called feed2exec. It's basically a combination of feed2imap, rss2email and feed2tweet: it allows you to fetch RSS feeds and send them in a mailbox, but what's special about it, compared to the other programs above, is that it is more generic: you can basically make it do whatever you want on new feed items. I have, for example, replaced my feed2tweet instance with it, using this simple configuration:

url =
output = feed2exec.plugins.exec
args = tweet "%(title)0.70s %(link)0.70s"

The sample configuration file also has examples to talk with Mastodon, and, why not, a torrent server to download torrent files available over RSS feeds. A trivial configuration can also make it work as a crude podcast client. My main motivation to work on this was that it was difficult to extend feed2imap to do what I needed (which was to talk to transmission to download torrent files) and rss2email didn't support my workflow (which is delivering to feed-specific mail folders). Because both projects also seemed abandoned, it seemed like a good idea at the time to start a new one, although the rss2email community has now restarted the project and may produce interesting results.

As an experiment, I tracked my time working on this project. It turns out it took about 45 hours to write that software. Considering feed2exec is about 1400 SLOC, that's 30 lines of code per hour. I don't know if that's slow or fast, but it's an interesting metric for future projects. It sure seems slow to me, but we need to keep in mind those 30 lines of code don't include documentation and repeated head banging on the keyboard. For example, I found two issues with the upstream feedparser package which I use to parse feeds which also seems unmaintained, unfortunately.

Feed2exec is beta software at this point, but it's working well enough for me and the design is much simpler than the other programs of the kind. The main issue people can expect from it at this point is formatting issues or parse errors on exotic feeds, and noisy error messages on network errors, all of which should be fairly easy to fix in the test suite. I hope it will be useful for the community and, as usual, I welcome contributions, help and suggestions on how to improve the software.

More Python templates

As part of the work on feed2exec, I did cleanup a few things in the ecdysis project, mostly to hook tests up in the CI, improve on the advancedConfig logger and cleanup more stuff.

While I was there, it turns out that I built a pretty decent basic CI configuration for Python on GitLab. Whereas the previous templates only had a non-working Django example, you should now be able to choose a Python template when you configure CI on GitLab 10 and above, which should hook you up with normal Python setup procedures like install and test.


I mentioned working on a monitoring tool in my last post, because it was a feature from Workrave missing in SafeEyes. It turns out there is already such a tool called selfspy. I did an extensive review of the software to make sure it wouldn't leak confidential information before using it, and it looks, well... kind of okay. It crashed on me at least once so far, which is too bad because then it loses track of the precious activity. I have used it at least once to figure out what the heck I worked on during the day, so it's pretty useful. I particularly used it to backtrack my work on feed2exec as I didn't originally track my time on the project.

Unfortunately, selfspy seems unmaintained. I have proposed a maintenance team and hopefully the project maintainer will respond and at least share access so we don't end up in a situation like linkchecker. I also sent a bunch of pull requests to fix some issues like being secure by default and fixing the build. Apart from the crash, the main issue I have found with the software is that it doesn't detect idle time, which means certain apps are disproportionately represented in statistics. There are also some weaknesses in the crypto that should be addressed for people that encrypt their database.

Next step is to package selfspy in Debian which should hopefully be simple enough...

Restic documentation security

As part of a documentation patch on the Restic backup software, I have improved on my previous Perl script to snoop on process commandline arguments. A common flaw in shell scripts and cron jobs is to pass secret material in the environment (usually safe) but often through commandline arguments (definitely not safe). The challenge, in this peculiar case, was the env binary, but the last time I encountered such an issue was with the Drush commandline tool, which was passing database credentials in clear to the mysql binary. Using my Perl sniffer, I could get to 60 checks per second (or 60Hz). After reimplementing it in Python, this number went up to 160Hz, which still wasn't enough to catch the elusive env command, which is much faster at hiding arguments than MySQL, in large part because it simply does an execve() once the environment is setup.

Eventually, I just went crazy and rewrote the whole thing in C which was able to get 700-900Hz and did catch the env command about 10-20% of the time. I could probably have rewritten this by simply walking /proc myself (since this is what all those libraries do in the end) to get better result, but then my point was made. I was able to prove to the restic author the security issues that warranted the warning. It's too bad I need to repeat this again and again, but then my tools are getting better at proving that issue... I suspect it's not the last time I have to deal with this issue and I am happy to think that I can come up with an even more efficient proof of concept tool the next time around.
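The core of such a sniffer is just a tight loop over /proc. A minimal Python version of the idea (Linux-only, and nowhere near the C version's poll rate, which is exactly why short-lived commands like env are so hard to catch):

```python
import glob
import time

def snoop(duration=1.0):
    """Poll /proc/*/cmdline as fast as possible, collecting argv tuples.

    Any secret passed as a command-line argument is visible here to
    every local user, which is the point being demonstrated.
    """
    seen = set()
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        for path in glob.glob("/proc/[0-9]*/cmdline"):
            try:
                with open(path, "rb") as f:
                    raw = f.read()
            except OSError:
                continue  # the process exited while we were looking
            # cmdline is NUL-separated; drop empty trailing fields.
            argv = tuple(a.decode(errors="replace")
                         for a in raw.split(b"\0") if a)
            if argv:
                seen.add(argv)
    return seen
```

Grepping the result for strings like "password" or "--secret" is usually enough to make the case to an upstream author.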

Ansible 101

After working on documentation last month, I ended up writing my first Ansible playbook this month, converting my tasksel list to a working Ansible configuration. This was a useful exercise: it allowed me to find a bunch of packages which have been removed from Debian and provides much better usability than tasksel. For example, it provides a --diff argument that shows which packages are missing from a given setup.
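A package list converted this way boils down to a very short playbook. A hypothetical fragment (the package names are placeholders, not my actual list):

```yaml
- hosts: localhost
  become: true
  tasks:
    - name: Install my standard package set
      apt:
        name: [git, vim, tmux]
        state: present
```

Running it with ansible-playbook --check --diff then reports which packages a given machine is missing without installing anything.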

I am still unsure about Ansible. Manifests do seem really verbose and I still can't get used to the YAML DSL. I could probably have done the same thing with Puppet and just run puppet apply on the resulting config. But I must admit my bias towards Python is showing here: I can't help but think Puppet is going to be way less accessible with its rewrite in Clojure and C (!)... But then again, I really like Puppet's approach of having generic types like package or service rather than Ansible's clunky apt/yum/dnf/package/win_package types...

Pat and Ham radio

After responding (too late) to a request for volunteers to help in Puerto Rico, I realized that my amateur radio skills were somewhat lacking in the "packet" (data transmission in ham jargon) domain, as I wasn't used to operating a Winlink node. Such a node can receive and transmit actual emails over the airwaves, for free, without direct access to the internet, which is very useful in disaster relief efforts. Through some quick research, I stumbled upon the new and very promising Pat project which provides one of the first user-friendly Linux-compatible Winlink programs. I provided improvements on the documentation and some questions regarding compatibility issues which are still pending.

But my pet issue is the establishment of pat as a normal internet citizen by using standard protocols for receiving and sending email. Not sure how that can be implemented, but we'll see. I am also hoping to upload an official Debian package and hopefully write more about this soon. Stay tuned!

Random stuff

I ended up fixing my Kodi issue by starting it as a standalone systemd service, instead of gdm3, which is now completely disabled on the media box. I simply used the following /etc/systemd/system/kodi.service file:

[Unit]
Description=Kodi Media Center

[Service]
ExecStart=/usr/bin/xinit /usr/bin/dbus-launch --exit-with-session /usr/bin/kodi-standalone -- :1 -nolisten tcp vt7


The downside of this is that it requires root to run, whereas modern X can run without root. Not sure how to fix this or where...

After fooling around with iPython, I ended up trying the xonsh shell, which is supposed to provide a bash-compatible Python shell environment. Unfortunately, I found it pretty unusable as a shell: it works fine to do Python stuff, but then all my environment and legacy bash configuration files were basically ignored so I couldn't get working quickly. This is too bad because the project looked very promising...

Finally, one of my TLS hosts using a Let's Encrypt certificate wasn't renewing properly, and I figured out why. It turns out the ProxyPass command was passing everything to the backend, including the /.well-known requests, which obviously broke ACME verification. The solution was simple enough, disable the proxy for that directory:

ProxyPass /.well-known/ !

Planet Linux Australia: Simon Lyall: DevOps Days Auckland 2017 – Monday Session 1

DevSecOps – Anthony Rees

“When Anthrax and Public Enemy came together, It was like Developers and Operations coming together”

  • Everybody is trying to get things out fast, sometimes we forget about security
  • Structural efficiency and optimised flow
  • Compliance putting roadblock in flow of pipeline
    • Even worse scanning in production after deployment
  • Compliance guys using Excel, Security using shell scripts, Developers and Operations using code
  • Chef security compliance language – InSpec
    • Insert Sales stuff here
  • Lots of pre-written configs available

Immutable SQL Server Clusters – John Bowker (from Xero)

  • Problem
    • Pet Based infrastructure
    • Not in cloud, weeks to deploy new server
    • Hard to update base infrastructure code
  • 110 Prod Servers (2 regions).
  • 1.9PB of Disk
  • Octopus Deploy: SQL Schemas, Also server configs
  • Half of team in NZ, Half in Denver
    • Data Engineers, Infrastructure Engineers, Team Lead, Product Owner
  • Where we were – The Burning Platform
    • Changed mid-Migration from dedicated instances to dedicated Hosts in AWS
    • Big saving on software licensing
  • Advantages
    • Already had Clustered HA
    • Existing automation
    • 6 day team, 15 hours/day due to multiple locations of team
  • Migration had to have no downtime
    • Went with node swaps in cluster
  • Split team. Half doing migration, half creating code/system for the node swaps
  • We learnt
    • Dedicated hosts are cheap
    • Dedicated host automation not so good for Windows
    • Discovery service not so good.
    • Syncing data took up to 24h due to large dataset
    • PowerShell debugging is hard (moving away from PowerShell a bit, but PowerShell has lots of SQL Server stuff built in)
    • AWS services can timeout, allow for this.
  • Things we Built
    • Lots Step Templates in Octopus Deploy
    • Metadata Store for SQL servers – Dynamite (Python, Lambda, Flask, DynamoDB) – Hope to Open source
    • Lots of PowerShell Modules
  • Node Swaps going forward
    • Working towards making this completely automated
    • New AMI -> Node swap onto that
    • Avoid upgrade in place or running on old version


Krebs on Security: USPS ‘Informed Delivery’ Is Stalker’s Dream

A free new service from the U.S. Postal Service that provides scanned images of incoming mail before it is slated to arrive at its destination address is raising eyebrows among security experts who worry about the service’s potential for misuse by private investigators, identity thieves, stalkers or abusive ex-partners. The USPS says it hopes to have changes in place by early next year that could help blunt some of those concerns.

The service, dubbed “Informed Delivery,” has been available to select addresses in several states since 2014 under a targeted USPS pilot program, but it has since expanded to include many ZIP codes nationwide, according to the Postal Service. U.S. residents can tell if their address is eligible by visiting

Image: USPS

According to the USPS, some 6.3 million accounts have been created via the service so far. The Postal Service says consumer feedback has been overwhelmingly positive, particularly among residents who travel regularly and wish to keep close tabs on any mail being delivered while they’re on the road.

But a review of the methods used by the USPS to validate new account signups suggests the service is wide open to abuse by a range of parties, mainly because of weak authentication and because it is not easy to opt out of the service.

Signing up requires an eligible resident to create a free user account at, which asks for the resident’s name, address and an email address. The final step in validating residents involves answering four so-called “knowledge-based authentication” or KBA questions. KrebsOnSecurity has relentlessly assailed KBA as an unreliable authentication method because so many answers to the multiple-guess questions are available on sites like Spokeo and Zillow, or via social networking profiles.

Once signed up, a resident can view scanned images of the front of each piece of incoming mail in advance of its arrival. Unfortunately, because of the weak KBA questions (provided by recently-breached big-three credit bureau Equifax, no less) stalkers, jilted ex-partners, and private investigators also can see who you’re communicating with via the Postal mail.

Perhaps this wouldn’t be such a big deal if the USPS notified residents by snail mail when someone signs up for the service at their address, but it doesn’t.

Peter Swire, a privacy and security expert at Georgia Tech and a senior counsel at the law firm of Alston & Bird, said strong authentication relies on information collected from multiple channels — such as something you know (a password) and something you have (a mobile phone). In this case, however, the USPS has opted not to leverage a channel that it uniquely controls, namely the U.S. Mail system.

“The whole service is based on a channel they control, and they should use that channel to verify people,” Swire said. “That increases user trust that it’s a good service. Multi-channel authentication is becoming the industry norm, and the U.S. Postal Service should catch up to that.” 

I also wanted to know whether there was any way for households to opt out of having scanned images of their mail sent as part of this offering. The USPS replied that consumers may contact the Informed Delivery help desk to request that the service not be presented to anyone in their household. “Each request is individually reviewed and assessed by members of the Postal Service Informed Delivery, Privacy and Legal teams,” the Postal Service replied.

There does not appear to be any limit on the number of people who can sign up for the service at any one address, except that one needs to know the names and KBA question answers for a valid resident of that address.

“Informed Delivery may be accessed by any adult member of a household,” the USPS wrote in response to questions. “Each member of the household must be able to complete the identity proofing process implemented by the Postal Service.”

The Postal Service said it is not possible for an address occupant to receive emailed, scanned images of incoming mail at more than one email address. In other words, if you wish to prevent others from signing up in your name or in the name of any other adults at the address, the surest way to do that may be to register your own account and then urge all other adult residents at the address to create their own accounts.

A highly positive story about Informed Delivery published by NBC in April 2017 suggests another use for the service: Reducing mail theft. However, without stronger authentication, this service could let local ID thieves determine with pinpoint accuracy exactly when mail worth stealing is set to arrive.

The USPS says businesses are not currently eligible to sign up as recipients of Informed Delivery. However, people running businesses out of their home could also be the target of competitors hoping to steal away customers, or to pose as partner firms in demanding payment for outstanding invoices.

Informed Delivery seems like a useful service for those residents who wish to take advantage of it. But lacking stronger consumer validation the service seems ripe for abuse. The USPS should use its own unique communications channel (snail mail) to alert Americans when their physical address has been signed up for this service.

Bob Dixon, the executive program director for Informed Delivery, said the Postal Service is working on an approach that it hopes to make available to the public in January 2018 which would allow USPS to send written notification to addresses when someone at that residence signs up for Informed Delivery.

Dixon said that capability will build on technology already in place to notify Americans via mail when a change of address is requested. Currently, the USPS allows address changes via the USPS Web site or in-person at any one of more than 3,000 post offices nationwide. When a request is processed, the USPS sends a confirmation letter to both the old address and the new address.

If someone already signed up for Informed Delivery later posts a change of address request, the USPS does not automatically transfer the Informed Delivery service to the new address: Rather, it sends a mailer with a special code tied to the new address and to the username that requested the change. To resume Informed Delivery at the new address, that code needs to be entered online using the account that requested the address change.

“Part of coming up with a mail-based verification system will also let us do some additional notification that, candidly, we just haven’t built yet,” Dixon said. “It is our intent to have this ready by January 2018, and it is one of our higher priorities to get it done by then.”

There is a final precaution that should block anyone from signing up as you: Readers who have taken my advice to freeze their credit files with the four major consumer credit reporting bureaus (Equifax, Experian, Innovis and Trans Union) will find they are not able to sign up for Informed Delivery online. That’s because having a freeze in place should block Equifax from being able to ask you the four KBA questions.

By the way, this same dynamic works with other services that you may not wish to use but which require you otherwise to plant your flag of identity to prevent others from doing so on your behalf, such as managing your relationship to the Internal Revenue Service online and the Social Security Administration. For more information on why you should get a freeze and how to do that, see this piece.

Update, 3:48 p.m. ET: Added bit about how a freeze can block someone from signing up in your name.

Planet DebianJonathan Dowland: PhD

I'm very excited to (finally) announce that I've embarked upon a part-time PhD in Computing Science at Newcastle University!

I'm at the very beginning of a journey that is expected to last about six years. The area I am going to be working in is functional stream processing and distributed systems architecture, in the context of IoT. This means investigating and working with technologies such as Apache Spark; containers (inc. Docker); Kubernetes and OpenShift; but also Haskell. My supervisor is Prof. Paul Watson. This would not be possible without the support of my employer, Red Hat, for which I am extremely grateful.

I hope to write much more about this topic here in the near future, so watch this space!

Planet DebianLars Wirzenius: Attracting contributors to a new project

How do you attract contributors to a new free software project?

I'm in the very early stages of a new personal project. It is irrelevant for this blog post what the new project actually is. Instead, I am thinking about the following question:

Do I want the project to be mainly for myself, and maybe a handful of others, or do I want to try to make it a more generally useful, possibly even a well-known, popular project? In other words, do I want to just solve a specific problem I have or try to solve it for a large group of people?

If it's a personal project, I'm all set. I can just start writing code. (In fact, I have.) If it's the latter, I'll need to attract contributions from others, and how do I do that?

I asked that question on Twitter and Mastodon and got several suggestions. This is a summary of those, with some editorialising from me.

  • The most important thing is probably that the project should aim for something that interests other people. The more people it interests, the easier it will be to attract contributors. This should be written up and displayed prominently: what does (or will) the software do, and what can it be used for?

  • Having something that kind of works, and is easy to improve, seems to also be key. An empty project is daunting to do anything with. Part of this is that the software the project is producing should be easy to install and get running. It doesn't have to be fully featured. It doesn't even have to be alpha level quality. It needs to do something.

    If the project is about producing a spell checker, say, and it doesn't even try to read an input file, it's probably too early for anyone else to contribute. A spell checker that lists every word in the input file as badly spelt is probably more attractive to contribute to.

  • It helps to document where a new contributor should start, and how they would submit their contribution. A list of easy things to work on may also help. Having a roadmap of near-future development steps and a long-term vision will make things easier. Having an architectural document to explain how the system hangs together will help.

  • A welcoming, constructive atmosphere helps. People should get quick feedback on questions, issues, and patches, in order to build momentum. Make it fun for people to contribute, and they'll contribute more.

  • A public source code repository, and a public ticketing system, and public discussion forums (mailing lists, web forums, IRC channels, etc) will help.

  • Share the power in the project. Give others the power to make decisions, or merge things from other contributors. Having a clear, functioning governance structure from the start helps.
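The "spell checker that flags every word" example a couple of points back is worth making concrete. Here is a hedged sketch (hypothetical; Python chosen arbitrarily, and the function name is mine) of that kind of minimal-but-working starting point: it parses real input and produces real output, while leaving obvious, inviting gaps (a dictionary, better tokenization) for contributors to fill.

```python
# Deliberately minimal "spell checker": it reads text and flags every
# word as misspelt. Useless as a tool, but it already parses input and
# produces output, which is exactly the kind of skeleton that invites
# contributions (load a real word list, handle punctuation better, ...).

def check_spelling(text):
    """Return (word, line_number) pairs for every 'misspelt' word."""
    complaints = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for word in line.split():
            token = word.strip('.,;:!?"\'')
            if token:
                complaints.append((token, lineno))
    return complaints

if __name__ == "__main__":
    for word, lineno in check_spelling("Helo wrold\nthis is fine"):
        print(f"line {lineno}: possible misspelling: {word}")
```

A first contribution then writes itself: read a system word list and skip the words that are actually spelt correctly.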

I don't know if these things are all correct, or whether they're enough to grow a successful, popular project.

Karl Fogel's seminal book Producing Open Source Software should also be mentioned.

CryptogramRemote Malware Attacks on ATMs

This report discusses the new trend of remote malware attacks against ATMs.

Worse Than FailureCodeSOD: Dashboard Confessional

Three years ago, this XKCD comic captured a lot of the problems we have with gathering requirements:

A comic where a customer asks a developer to a) Take a photo and determine if it's in a national park (easy says the dev), b) determine if it's of a bird (I need a research team and 5 years)

Our users have no idea which kinds of problems are hard and which kinds are easy. This isn’t just true for advanced machine learning classification projects: I’ve had users who assumed changing the color of an element on a page was hard (it wasn’t), and users who assumed wiring up our in-house ERP to a purchased ERP was the simplest thing ever (it wasn’t).

Which brings us to Christopher Shankland’s contribution. He works for a game company, and while that sometimes means doing game development, it just as often means doing tooling and platform management for the design team, like providing fancy dashboards where the designers can review how users play the game so that they can tweak the play.

That led to this conversation:

Game Designer: I want to see how players progress through the game missions
Christopher: Great. I’ll add a funnel chart to our dashboard app, which can query data from the database!
Game Designer: Also, I need to change the order the missions display in all the time…
Christopher: Okay, that’ll require a data change every time you want to flip the order…
Game Designer: Fine, but I shouldn’t have to ask anyone else to do it…
Christopher: Um… I’d have to bolt a UI onto the database, it’s not really meant-
Game Designer: That sounds time consuming. I need this data YESTERDAY.
Christopher: I could-

So Christopher hacked together a solution. Between fighting with the designer’s fluid, ever-changing demands; the fact that what the designer wanted didn’t mesh well with how the dashboard system assumed analytics would be run; the demand that it be done in the dashboard system anyway; and the unnecessary time pressure, Christopher didn’t do his best work. He sends us this code, as penance. It’s long, it’s convoluted, and it uses lots of string concatenation to generate SQL statements.

As Chris rounded out his message to us: “This is why I drink.”

-- Create syntax for 'chart_first_map_daily'

DROP PROCEDURE IF EXISTS `chart_first_map_daily`;

CREATE DEFINER=`megaforce_stats`@`%` PROCEDURE `chart_first_map_daily`(IN timeline INT)
BEGIN

SET SESSION group_concat_max_len = 1000000;

DROP TABLE IF EXISTS `megaforce_stats`.`chart_first_map_daily`;
CREATE TABLE `megaforce_stats`.`chart_first_map_daily` (
        `absolute_order` INT(11) UNSIGNED NOT NULL,
        `date` DATE NOT NULL,
        `task_id` INT(11) UNSIGNED NOT NULL,
        `number_completed` INT(11) UNSIGNED NOT NULL DEFAULT 0,
        `new_user_completion_percentage` FLOAT(23) NOT NULL DEFAULT 0,
        `segment` VARCHAR(32) DEFAULT "Unknown",
        PRIMARY KEY (`date`, `task_id`, `segment`)
) ENGINE = InnoDB DEFAULT CHARSET=utf8;

SET @last_date = date_sub(curdate(), INTERVAL 1 DAY);
SET @timeline = timeline;
SET @first_date = date_sub(@last_date, INTERVAL @timeline DAY);

SET @first_campaign_id = (SELECT `id` FROM `megaforce_game`.`campaigns` WHERE NOT EXISTS (SELECT * FROM `megaforce_game`.`campaign_dependencies` WHERE `unlocked_campaign_id` = `megaforce_game`.`campaigns`.`id`) AND `active` = 1 AND `type_id` NOT IN (2,3,4));

-- Create a helper table for ordering
DROP TABLE IF EXISTS `megaforce_stats`.`absolute_task_ordering`;
CREATE TABLE `megaforce_stats`.`absolute_task_ordering` (
        `task_id` INT(11) UNSIGNED NOT NULL,
        `absolute_order` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
        PRIMARY KEY (`absolute_order`)
) ENGINE = InnoDB DEFAULT CHARSET=utf8;

SET @current_mission_id = -1;
SET @sort_order = 2;

SELECT
        IF(COUNT(`id`) > 0, `id`, -1) INTO @current_mission_id
FROM `megaforce_game`.`missions`
WHERE
        NOT EXISTS (
                SELECT * FROM `megaforce_game`.`mission_dependencies` WHERE `unlocked_mission_id` = `megaforce_game`.`missions`.`id`
        ) AND active = 1 AND campaign_id = @first_campaign_id AND type_id = 1;

WHILE @current_mission_id > 0 DO
        INSERT INTO
                `megaforce_stats`.`absolute_task_ordering` (`task_id`)
        SELECT `id`
        FROM `megaforce_game`.`tasks`
        WHERE
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY `order`;

        INSERT INTO
                `megaforce_stats`.`chart_first_map_daily` (
                        `absolute_order`,`date`,`task_id`, `number_completed`,`new_user_completion_percentage`, `segment`
                )
        SELECT
                `task_info`.`number_completed` / `sessions`.`new_users`,
        FROM (
                SELECT
                        `date`, SUM(`new_users`) AS `new_users`
                        FROM `megaforce_stats`.`sessions_daily`
                        WHERE DATE(`date`) > @first_date
                        AND DATE(`date`) <= @last_date
                        GROUP BY `date`
                ) AS `sessions`
        LEFT JOIN (
                SELECT
                        `absolute_order`, DATE(`date_completed`) AS `date`, COUNT(DISTINCT(`user_name`)) AS `number_completed`, `megaforce_game`.`tasks`.`id` AS `task_id`
                FROM `megaforce_game`.`track_completed_tasks`
                JOIN `megaforce_stats`.`accounts_real`
                ON `user_name` = `userName`
                JOIN `megaforce_game`.`tasks`
                ON `megaforce_game`.`tasks`.`id` = `megaforce_game`.`track_completed_tasks`.`task_id`
                JOIN `megaforce_stats`.`absolute_task_ordering`
                ON `megaforce_stats`.`absolute_task_ordering`.`task_id` = `megaforce_game`.`tasks`.`id`
                WHERE DATE(`date_completed`) = DATE(`date_created`) AND `mission_id` = @current_mission_id AND `active` = 1
                GROUP BY DATE(`date_completed`), `megaforce_game`.`tasks`.`id`
                ORDER BY `order`
        ) AS `task_info` ON `task_info`.`date` = `sessions`.`date`;

        -- Create our CREATE TABLE statement
        SET @mission_chart_table_name = CONCAT("chart_first_map_daily_", @current_mission_id);
        SELECT
                GROUP_CONCAT(`id` SEPARATOR "_completion` INT(11) UNSIGNED NOT NULL, `task_") INTO @mission_chart_task_columns
        FROM `megaforce_game`.`tasks`
        WHERE
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY `order`;

        SET @drop_mission_chart = CONCAT("DROP TABLE IF EXISTS `megaforce_stats`.`", @mission_chart_table_name, "`");

        PREPARE stmt FROM @drop_mission_chart;
        EXECUTE stmt;

        SET @create_mission_chart = CONCAT("
                CREATE TABLE `megaforce_stats`.`", @mission_chart_table_name, "` (
                        `date` DATE NOT NULL,
                        `task_", @mission_chart_task_columns, "_completion` INT(11) UNSIGNED NOT NULL,
                        `segment` VARCHAR(32) DEFAULT 'Unknown',
                        PRIMARY KEY (`date`,`segment`)
                ) ENGINE = InnoDB DEFAULT CHARSET=utf8
        ");

        PREPARE stmt FROM @create_mission_chart;
        EXECUTE stmt;

        SELECT
                GROUP_CONCAT(`id` SEPARATOR "_completion`.`number_completed` / `sessions`.`new_users` * 100, `task_") INTO @task_list
        FROM `megaforce_game`.`tasks`
        WHERE
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY `order`;

                                CONCAT(`id`, " GROUP BY DATE(`date_completed`), `segment`) AS `task_", `id`, "_completion` ON `task_", `id`, "_completion`.`segment` = `sessions`.`segment` AND `task_", `id`)
                                "_completion`.`date` = `sessions`.`date`
                                LEFT JOIN (
                                        SELECT
                                                DATE(`date_completed`) AS `date`, COUNT(*) AS `number_completed`, `segment`
                                        FROM `megaforce_game`.`track_completed_tasks`
                                        JOIN `megaforce_stats`.`accounts_real`
                                        ON `track_completed_tasks`.`user_name` = `accounts_real`.`userName`
                                        WHERE DATE(`date_created`) = DATE(`date_completed`) AND `task_id` = "
                ) INTO @task_join_tables
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY

        SET @insert_mission_chart = CONCAT("
                INSERT INTO
                        `megaforce_stats`.`", @mission_chart_table_name, "`
                        `sessions`.`date`,`task_", @task_list, "_completion`.`number_completed` / `sessions`.`new_users` * 100, `sessions`.`segment`
                FROM (
                        SELECT
                                `date`, `new_users`, `segment`
                        FROM `megaforce_stats`.`sessions_daily`
                        WHERE DATE(`date`) > @first_date
                        AND DATE(`date`) <= @last_date
                        GROUP BY `date`, `segment`
                ) AS `sessions`
                LEFT JOIN (
                        SELECT
                                DATE(`date_completed`) AS `date`, COUNT(*) AS `number_completed`, `segment`
                        FROM `megaforce_game`.`track_completed_tasks`
                        JOIN `megaforce_stats`.`accounts_real`
                        ON `track_completed_tasks`.`user_name` = `accounts_real`.`userName`
                        WHERE DATE(`date_created`) = DATE(`date_completed`) AND `task_id` = ", @task_join_tables, "_completion`.`date` = `sessions`.`date`
        ");

        PREPARE stmt FROM @insert_mission_chart;
        EXECUTE stmt;

                                CONCAT(`id`, " GROUP BY DATE(`date_completed`)) AS `task_", `id`, "_completion` ON `task_", `id`)
                                "_completion`.`date` = `sessions`.`date`
                                LEFT JOIN (
                                        SELECT
                                                DATE(`date_completed`) AS `date`, COUNT(*) AS `number_completed`
                                        FROM `megaforce_game`.`track_completed_tasks`
                                        JOIN `megaforce_stats`.`accounts_real`
                                        ON `track_completed_tasks`.`user_name` = `accounts_real`.`userName`
                                        WHERE DATE(`date_created`) = DATE(`date_completed`) AND `task_id` = "
                ) INTO @task_join_tables
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY

        SET @insert_mission_chart = CONCAT("
                INSERT INTO
                        `megaforce_stats`.`", @mission_chart_table_name, "`
                        `sessions`.`date`,`task_", @task_list, "_completion`.`number_completed` / `sessions`.`new_users` * 100, -1
                FROM (
                        SELECT
                                `date`, SUM(`new_users`) AS `new_users`
                        FROM `megaforce_stats`.`sessions_daily`
                        WHERE DATE(`date`) > @first_date
                        AND DATE(`date`) <= @last_date
                        GROUP BY `date`
                ) AS `sessions`
                LEFT JOIN (
                        SELECT
                                DATE(`date_completed`) AS `date`, COUNT(*) AS `number_completed`
                        FROM `megaforce_game`.`track_completed_tasks`
                        JOIN `megaforce_stats`.`accounts_real`
                        ON `track_completed_tasks`.`user_name` = `accounts_real`.`userName`
                        WHERE DATE(`date_created`) = DATE(`date_completed`) AND `task_id` = ", @task_join_tables, "_completion`.`date` = `sessions`.`date`
        ");

        PREPARE stmt FROM @insert_mission_chart;
        EXECUTE stmt;

        -- Dynamically create our charts (multiple data by mission)
        DELETE FROM `megaforce_stats`.`gecko_chart_sql` WHERE `sql_key` = CONCAT("CHART_FIRST_MAP_DAILY_", @current_mission_id);
        DELETE FROM `megaforce_stats`.`gecko_chart_info` WHERE `sql_key` = CONCAT("CHART_FIRST_MAP_DAILY_", @current_mission_id);

        INSERT INTO
                `megaforce_stats`.`gecko_chart_sql` (`sql_key`,`sql_query`,`data_field`,`segment_field`)
        VALUES
                (CONCAT("CHART_FIRST_MAP_DAILY_", @current_mission_id), CONCAT("SELECT * FROM `megaforce_stats`.`", @mission_chart_table_name, "`"), "date", "segment");

        INSERT INTO
                `megaforce_stats`.`gecko_chart_info` (`sql_key`,`data_field`,`title`,`category`,`sort_order`,`type`,`data_name`,`chart_type`)
        VALUES
                (CONCAT("CHART_FIRST_MAP_DAILY_", @current_mission_id), "", CONCAT("Mission ", @current_mission_id, " Task Completion"), 10, @sort_order, "spline", "", "hc_line_multiple_segments_date");

        SET @sort_order = @sort_order + 1;

        SELECT
                IF(COUNT(`unlocked_mission_id`) > 0, `unlocked_mission_id`, -1) INTO @current_mission_id
        FROM `megaforce_game`.`mission_dependencies`
        WHERE
                `required_mission_id` = @current_mission_id;

END WHILE;

END
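For contrast, the hazard in all that CONCAT-built SQL is easiest to see next to a parameterized version. The sketch below is hypothetical, not Christopher's code: Python's stdlib sqlite3 stands in for MySQL, and the table and column names are illustrative. Values go through placeholders; only the identifier (the per-mission table name, which SQL placeholders cannot express) still has to be interpolated, so it is forced through int() first.

```python
import sqlite3

def insert_completion(conn, mission_id, task_id, completed):
    # Identifiers (table/column names) cannot be bound as placeholders,
    # so the per-mission table name is built from a validated integer.
    table = "chart_first_map_daily_%d" % int(mission_id)
    # Values go through ? placeholders: no quoting bugs, no injection.
    conn.execute(
        "INSERT INTO %s (task_id, number_completed) VALUES (?, ?)" % table,
        (task_id, completed),
    )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE chart_first_map_daily_7 (task_id INTEGER, number_completed INTEGER)"
)
insert_completion(conn, 7, 42, 13)
row = conn.execute(
    "SELECT task_id, number_completed FROM chart_first_map_daily_7"
).fetchone()
print(row)  # (42, 13)
```

The dynamic identifier is the one thing MySQL's PREPARE cannot parameterize either, which is part of why the stored procedure above ends up building everything with CONCAT.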


Planet DebianUwe Kleine-König: IPv6 in my home network

I am lucky and get both IPv4 (without CGNAT) and IPv6 from my provider. Recently, after upgrading my desk router (a Netgear WNDR3800 that serves the network on my desk) from OpenWRT to the latest LEDE, I looked into what could be improved in the IPv6 setup for both my home network (served by a FRITZ!Box) and my desk network.

Unfortunately I was unable to improve the situation compared to what I already had before.

Things that work

Making IPv6 work in general was easy, just a few clicks in the configuration of the FRITZ!Box and it mostly worked. After that I have:

  • IPv6 connectivity in the home net
  • IPv6 connectivity in the desk net

Things that don't work

There are a few things however that I'd like to have, that are not that easy it seems:

ULA for both nets

I let the two routers announce an ULA prefix each. Unfortunately I was unable to make the LEDE box announce its net on the wan interface for clients in the home net. So the hosts in the desk net know how to reach the hosts in the home net but not the other way round which makes it quite pointless. (It works fine as long as the FRITZ!Box announces a global net, but I'd like to have local communication work independent of the global connectivity.)

To fix this I'd need something like radvd on my LEDE router, but that isn't provided by LEDE (or OpenWRT) any more as odhcpd is supposed to be used which AFAICT is unable to send RAs on the wan interface though. Ok, probably I could install bird, but that seems a bit oversized. I created an entry in the LEDE forum but without any reply up to now.
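For the record, the radvd configuration this would amount to is small (entirely hypothetical: the interface name and ULA prefix below are placeholders, not my actual setup). The router advertises a route to the desk net on its upstream interface without offering itself as a default gateway:

```
# /etc/radvd.conf (hypothetical; interface and prefix are placeholders)
interface eth0.2 {
    AdvSendAdvert on;
    AdvDefaultLifetime 0;        # advertise a route, not a default gateway
    route fd00:1234:abcd::/64 {  # the desk net's ULA prefix
        AdvRouteLifetime 1800;
    };
};
```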

Alternatively (but less pretty) I could setup an IPv6 route in the FRITZ!Box, but that only works with a newer firmware and as this router is owned by my provider I cannot update it.


The FRITZ!Box has a firewall that is not very configurable. I can punch a hole in it for hosts with a given interface-ID, but that only works for hosts in the home net, not the machines in the delegated subnet behind the LEDE router. In fact I think the FRITZ!Box should delegate firewalling for a delegated net also to the router of that subnet.

So having a global address on the machines on my desk doesn't allow me to reach them from the internet.

Update: according to the German changelog firmware 6.83 seems to include that feature. Cheers AVM. Now waiting for my provider to update ...

Planet DebianJunichi Uekawa: Recently I was writing log analysis tools in javascript.

Recently I was writing log analysis tools in JavaScript. The JavaScript part is challenging.

Planet DebianJames McCoy: Monthly FLOSS activity - 2017/09 edition



Before deciding to take an indefinite hiatus from devscripts, I prepared one more upload merging various contributed patches and a bit of last minute cleanup.

  • build-rdeps

    • Updated build-rdeps to work with compressed apt indices. (Debian bug #698240)
    • Added support for Build-Arch-{Conflicts,Depends} to build-rdeps. (adc87981)
    • Merged Andreas Henriksson's patch for setting remote.<name>.push-url when using debcheckout to clone a git repository. (Debian bug #753838)
  • debsign

    • Updated bash completion for gpg keys to use gpg --with-colons, instead of manually parsing gpg -K output. Aside from being the Right Way™ to get machine parseable information out of gpg, it fixed completion when gpg is a 2.x version. (Debian bug #837380)

I also set up integration with Travis CI to hopefully catch issues sooner than "while preparing an upload", as was typically the case before. Anyone with push access to the Debian/devscripts GitHub repo can take advantage of this to test out changes, or keep the development branches up to date. In the process, I was able to make some improvements to it, namely support for DEB_BUILD_PROFILES and using a separate, minimal Docker image for running autopkgtests.


  • Packaged the new upstream release (1.2.1)

  • Basic package maintenance (-dbgsym package, policy update, enabled hardening flags).

  • Uploaded 1.2.1-1


  • Attempted to nudge lua-nvim's builds along on a couple architectures where they were waiting for neovim to be installable

    • x32: Temporarily removed lua-nvim Build-Depends to break the BD-Uninstallable cycle between lua-nvim and neovim. ✓
    • powerpcspe: Temporarily removed luajit Build-Depends, reducing test scope, to fix the build. ❌
      • If memory serves, the test failures are fixed upstream for the next release.
  • Uploaded 0.2.0-4

Oddly, the mips64el builds were in BD-Uninstallable state, even though luajit's buildd status showed it was built. Looking further, I noticed the libluajit-5.1{,-dev} binary packages didn't have the mips64el architecture enabled, so I asked for it to be enabled.


There were a few packages left which would FTBFS if I uploaded msgpack-c 2.x to unstable.

All of the bug reports had either trivial work arounds (i.e., forcing use of the v1 C++ API) or trivial patches. However, I didn't want to continue waiting for the packages to get fixed since I knew other people had expressed interest in the new msgpack-c.

Trying to avoid making other packages insta-buggy, I NMUed autobahn-cpp with the v1 work around. That didn't go over well, partly because I didn't send a finalized "Hey, I'd like to get this done and here's my plan to NMU" email.

Based on that feedback, I decided to bump the remaining bugs to "serious" instead of NMUing and upload msgpack-c. Thanks to Jonas Smedegaard for quickly integrating my proposed fix for libdata-messagepack-perl. Hopefully, upstream has some time to review the PR soon.


  • Used the powerpc porterbox to debug and fix a 32-bit integer overflow that was causing test failures.

  • Asked the vim-perl folks about getting updated runtime files to Bram, after Jakub Wilk filed Debian bug #873755. This had been fixed 4+ years earlier, but not yet merged back into Vim. Thanks to Rob Hoelz for pulling things together and sending the updates to Bram.

  • I've continued to receive feedback from Debian users about their frustration with Vim's new "defaults.vim", both in regards to the actual default settings and its interaction with the system-wide vimrc file. While I still don't intend to deviate from upstream's behavior, I did push back some more on the existing behavior. I appreciate Christian Brabandt's effort, as always, to understand the issue at hand and have constructive discussions. His final suggestion seems like it will resolve the system vimrc interaction, so hopefully Bram is receptive to it.

  • Uploaded 2:8.0.1144-1

  • Thanks to a nudge from Salvatore Bonaccorso and Moritz Mühlenhoff, I uploaded 2:8.0.0197-4+deb9u1 which fixes CVE-2017-11109. I had intended to do this much sooner, but it fell through the cracks. Due to Adam Barratt's quick responses, this should make it into the upcoming Stretch 9.2 release.


  • Started work on updating the packaging
    • Converted to 3.0 (quilt) source format
    • Updated to debhelper 10 compat
    • Initial attempts at converting to a dh rules file
      • Running into various problems here and still trying to figure out whether they're in the upstream build system, Debian's patches, or both.


  • Worked with Niko Dittmann to fix build failures he was experiencing on OpenBSD 6.1 (#7298)

  • Merged upstream Vim patches into neovim from various contributors

  • Discussed focus detection behavior after a recent change in the implementation (#7221)

    • While testing focus detection in various terminal emulators, I noticed pangoterm didn't support this. I submitted a merge request on libvterm to provide an API for reporting focus changes. If that's merged, it will be trivial for pangoterm to notify applications when the terminal has focus.
  • Fixed a bug in our tooling around merging Vim patches, which was causing it to incorrectly drop certain files from the patches. #7328

Planet Linux AustraliaJames Morris: Linux Security Summit 2017 Roundup

The 2017 Linux Security Summit (LSS) was held last month in Los Angeles over the 14th and 15th of September.  It was co-located with Open Source Summit North America (OSSNA) and the Linux Plumbers Conference (LPC).

LSS 2017 sign at conference

LSS 2017

Once again we were fortunate to have general logistics managed by the Linux Foundation, allowing the program committee to focus on organizing technical content.  We had a record number of submissions this year and accepted approximately one third of them.  Attendance was very strong, with ~160 attendees — another record for the event.

LSS 2017 Attendees

LSS 2017 Attendees

On the day prior to LSS, attendees were able to access a day of LPC, which featured two tracks with a security focus:

Many thanks to the LPC organizers for arranging the schedule this way and allowing LSS folk to attend the day!

Realtime notes were made of these microconfs via etherpad:

I was particularly interested in the topic of better integrating LSM with containers, as there is an increasingly common requirement for nesting of security policies, where each container may run its own apparently independent security policy, and also a potentially independent security model.  I proposed the approach of introducing a security namespace, where all security interfaces within the kernel are namespaced, including LSM.  It would potentially solve the container use-cases, and also the full LSM stacking case championed by Casey Schaufler (which would allow entirely arbitrary stacking of security modules).

This would be a very challenging project, to say the least, and one which is further complicated by containers not being a first class citizen of the kernel.   This leads to security policy boundaries clashing with semantic functional boundaries e.g. what does it mean from a security policy POV when you have namespaced filesystems but not networking?

Discussion turned to the idea that it is up to the vendor/user to configure containers in a way which makes sense for them, and similarly, they would also need to ensure that they configure security policy in a manner appropriate to that configuration.  I would say this means that semantic responsibility is pushed to the user with the kernel largely remaining a set of composable mechanisms, in relation to containers and security policy.  This provides a great deal of flexibility, but requires those building systems to take a great deal of care in their design.

There are still many issues to resolve, both upstream and at the distro/user level, and I expect this to be an active area of Linux security development for some time.  There were some excellent followup discussions in this area, including an approach which constrains the problem space. (Stay tuned)!

A highlight of the TPMs session was an update on the TPM 2.0 software stack, by Philip Tricca and Jarkko Sakkinen.  The slides may be downloaded here.  We should see a vastly improved experience over TPM 1.x with v2.0 hardware capabilities, and the new software stack.  I suppose the next challenge will be TPMs in the post-quantum era?

There were further technical discussions on TPMs and container security during subsequent days at LSS.  Bringing the two conference groups together here made for a very productive event overall.

TPMs microconf at LPC with Philip Tricca presenting on the 2.0 software stack.

This year, due to the overlap with LPC, we unfortunately did not have any LWN coverage.  There are, however, excellent writeups available from attendees:

There were many awesome talks.

The CII Best Practices Badge presentation by David Wheeler was an unexpected highlight for me.  CII refers to the Linux Foundation’s Core Infrastructure Initiative, a preemptive security effort for Open Source.  The Best Practices Badge Program is a secure development maturity model designed to allow open source projects to improve their security in an evolving and measurable manner.  There’s been very impressive engagement with the project from across open source, and I believe this is a critically important effort for security.

CII Badge Project adoption (from David Wheeler’s slides).

During Dan Cashman’s talk on SELinux policy modularization in Android O,  an interesting data point came up:

We of course expect to see application vulnerability mitigations arising from Mandatory Access Control (MAC) policies (SELinux, Smack, and AppArmor), but if you look closely this refers to kernel vulnerabilities.   So what is happening here?  It turns out that a side effect of MAC policies, particularly those implemented in tightly-defined environments such as Android, is a reduction in kernel attack surface.  It is generally more difficult to reach such kernel vulnerabilities when you have MAC security policies.  This is a side-effect of MAC, not a primary design goal, but nevertheless appears to be very effective in practice!

Another highlight for me was the update on the Kernel Self Protection Project lead by Kees, which is now approaching its 2nd anniversary, and continues the important work of hardening the mainline Linux kernel itself against attack.  I would like to also acknowledge the essential and original research performed in this area by grsecurity/PaX, from which this mainline work draws.

From a new development point of view, I’m thrilled to see the progress being made by Mickaël Salaün, on Landlock LSM, which provides unprivileged sandboxing via seccomp and LSM.  This is a novel approach which will allow applications to define and propagate their own sandbox policies.  Similar concepts are available in other OSs such as OSX (seatbelt) and BSD (pledge).  The great thing about Landlock is its consolidation of two existing Linux kernel security interfaces: LSM and Seccomp.  This ensures re-use of existing mechanisms, and aids usability by utilizing already familiar concepts for Linux users.

Overall I found it to be an incredibly productive event, with many new and interesting ideas arising and lots of great collaboration in the hallway, lunch, and dinner tracks.

Slides from LSS may be found linked to the schedule abstracts.

We did not have a video sponsor for the event this year, and we’ll work on that again for next year’s summit.  We have discussed holding LSS again next year in conjunction with OSSNA, which is expected to be in Vancouver in August.

We are also investigating a European LSS in addition to the main summit for 2018 and beyond, as a way to help engage more widely with Linux security folk.  Stay tuned for official announcements on these!

Thanks once again to the awesome event staff at LF, especially Jillian Hall, who ensured everything ran smoothly.  Thanks also to the program committee who review, discuss, and vote on every proposal, ensuring that we have the best content for the event, and who work on technical planning for many months prior to the event.  And of course thanks to the presenters and attendees, without whom there would literally and figuratively be no event :)

See you in 2018!


Planet Linux AustraliaOpenSTEM: Stone Axes and Aboriginal Stories from Victoria

In the most recent edition of Australian Archaeology, the journal of the Australian Archaeological Association, there is a paper examining the exchange of stone axes in Victoria and correlating these patterns of exchange with Aboriginal stories in the 19th century. This paper is particularly timely with the passing of legislation in the Victorian Parliament on […]


Planet DebianIain R. Learmonth: Free Software Efforts (2017W39)

Here’s my weekly report for week 39 of 2017. In this week I have travelled to Berlin and caught up on some podcasts in doing so. I’ve also had some trouble with the RSS feeds on my blog but hopefully this is all fixed now.

Thanks to Martin Milbret I now have a replacement for my dead workstation, an HP Z600, and there will be a blog post about this new set up to come next week. Thanks also to Sýlvan and a number of others that made donations towards getting me up and running again. A breakdown of the donations and expenses can be found at the end of this post.


Two of my packages, measurement-kit (from OONI) and python-azure-devtools (used to build the Azure Python SDK, packaged as python-azure), have been accepted by ftp-master into Debian’s unstable suite.

I have also sponsored uploads for comptext, comptty, fllog, flnet and gnustep-make.

I had previously encouraged Eric Heintzmann to become a DM and I have given him DM upload privileges for the gnustep-make package as he has shown to care for the GNUstep packages well.

Bugs closed (fixed/wontfix): #875125 [1], #875126 [1], #861753, #873083

Tor Project

My Tor Project contributions this week were primarily attending the Tor Metrics meeting which I have reported on in a separate blog post.


I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

The replacement workstation arrived on Friday and is now up and running. In total I received £308.73 in donations and spent £36.89 on video adapters and £141.94 on replacement hard drives for my NAS (which includes my local Debian mirror and backups).

For the Tor Metrics meeting in Berlin, Tor Project paid my flights and accommodation and I paid only for ground transport and food myself. The total cost for ground transport during the trip was £45.92 (taxi to airport, 1 Tageskarte) and total cost for food was £23.46.

The current funds I have available for equipment, travel and other free software expenses is now £60.52. I do not believe that any hardware I rely on is looking at imminent failure.

  1. Fixed by a sponsored upload, not by my changes.

Planet DebianThorsten Alteholz: My Debian Activities in September 2017

FTP assistant

This month the statistics show almost the same numbers as last month: I accepted 213 packages and rejected 15 uploads. The overall number of packages that got accepted this month was 425.

Debian LTS

This was my thirty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 15.75h. During that time I did LTS uploads of:

  • [DLA 1109-1] libraw security update for one CVE
  • [DLA 1117-1] opencv security update for 13 CVEs

I also took care of libstruts1.2-java and marked all CVEs as not-affected and I marked all CVEs for jasper as no-dsa. I also started to work on sam2p.

Just as I wanted to upload a new version of libofx, a new CVE was discovered that was not closed in time. I tried to find a patch on my own but had difficulties in reproducing this issue.

Other stuff

This month I made myself familiar with glewlwyd and according to upstream, the Debian packages work out-of-the-box. However, upstream keeps working on that software, so I uploaded new versions of hoel, ulfius and glewlwyd.

As libjwt needs libb64, which was orphaned, I used it as DOPOM and adopted it.

Does anybody still know the Mayhem-bugs? I could close one by uploading an updated version of siggen.

I also went through my packages and looked for patches that piled up in the BTS. As a result I uploaded updated versions of radlib, te923con, node-starttls, harminv and uucp.

New upstream versions of openoverlayrouter and fasttree also made it into the archive.

Last but not least I moved several packages to the debian-mobcom group.

Don MartiThe capital dynamics are all wrong.

Ben Werdmuller, in Why open source software isn’t as ethical as you think it is:

When you release open source software, you have this egalitarian idea that you’re making it available to people who can really use it, who can then build on it to make amazing things....While this is a fine position to take, consider who has the most resources to build on top of a project that requires development. With most licenses, you’re issuing a free pass to corporations and other wealthy organizations, while providing no resources to those needy users. OpenSSL, which every major internet company depends on, was until recently receiving just $2,000 a year in donations, with the principal author in financial difficulty.

This is a good example of one of the really interesting problems of working in an immature industry. We don't have our incentives hooked up right yet.

  • Why does open source have some bugs that stay open longer than careers do?

  • Why do people have the I've been coding to create lots of value for big companies for years and I'm still broke problem?

  • How do millions of dollars of shared vigilance even make the news, when the value extracted is in the billions?

  • Why is the meritocracy of open source even more biased than other technical and collaborative fields? (Are we at the bottom of the standings?) Why are we walking away from that many potential contributors?

Quinn Norton: Software is a Long Con:

It is to the benefit of software companies and programmers to claim that software as we know it is the state of nature. They can do stupid things, things we know will result in software vulnerabilities, and they suffer no consequences because people don’t know that software could be well-written. Often this ignorance includes developers themselves. We’ve also been conditioned to believe that software rots as fast as fruit. That if we waited for something, and paid more, it would still stop working in six months and we’d have to buy something new. The cruel irony of this is that despite being pushed to run out and buy the latest piece of software and the latest hardware to run it, our infrastructure is often running on horribly configured systems with crap code that can’t or won’t ever be updated or made secure.

We have two possible futures.

  • People finally get tired of software's lethal irresponsibility, and impose a regulatory regime. Rent-seekers rejoice. Software innovation as we know it ceases, and we get something like the pre-breakup Bell System—you have to be an insider to build and deploy anything that reaches real people.

  • The software scene outgrows the "disclaimer of implied warranty" level of quality, on its own.

How do we get there? One approach is to use market mechanisms to help quantify software risk, then enable users with a preference for high quality and developers with a preference for high quality to interact directly, not through the filter of software companies that win by releasing early at a low quality level.

There is an opportunity here for the kinds of companies that are now doing open source license analysis. Right now they're analyzing relatively few files in a project—the licenses and copyrights. A tool will go through your software stack, and hooray, you don't have anything that depends on something with an incompatible license, or on a license that would look bad to the people you want to sell your company to.

What if that same tool would give you a better quality number for your stack, based on walking your dependency tree and looking for weak points based on market activity?
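Such a tool might look, very roughly, like the sketch below: walk the dependency tree and let the stack's quality number be dominated by its weakest link. Every package name and risk value here is an invented stand-in for market-derived data (e.g. open bug-futures prices); none of this is a real product.

```python
# Hypothetical sketch: score a software stack by walking its dependency
# tree and taking the worst per-package risk signal along the way.

RISK = {              # invented market-derived risk per package, 0..1
    "webapp": 0.1,
    "framework": 0.2,
    "tls-lib": 0.8,   # thinly maintained, heavily depended upon
    "json-lib": 0.1,
}

DEPS = {              # invented dependency graph
    "webapp": ["framework", "tls-lib"],
    "framework": ["json-lib"],
    "tls-lib": [],
    "json-lib": [],
}

def stack_risk(root):
    """Worst-case risk over the whole dependency tree of `root`."""
    seen, worst = set(), 0.0
    stack = [root]
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        worst = max(worst, RISK.get(pkg, 0.5))  # unknown packages count as risky
        stack.extend(DEPS.get(pkg, []))
    return worst

print(stack_risk("webapp"))  # dominated by the weakest link, tls-lib → 0.8
```

The design choice worth noting: a stack is only as trustworthy as its riskiest transitive dependency, which is exactly the OpenSSL situation described above.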

Why blockchain?

One important reason is that black or gray hat security researchers are likely to have extreme confidentiality requirements, especially when trading on knowledge from a co-conspirator who may not be aware of the trade. (A possible positive externality win from bug futures markets is the potential to reduce the trustworthiness of underground vulnerability markets, driving marginal vuln transactions to the legit market.)

Bug futures series so far

Planet DebianPaul Wise: FLOSS Activities September 2017





  • icns: merged patches
  • Debian: help guest user with access, investigate/escalate broken network, restart broken stunnels, investigate static.d.o storage, investigate weird RAID mails, ask hoster to investigate power issue,
  • Debian mentors: lintian/security updates & reboot
  • Debian wiki: merged & deployed patch, redirect DDTSS translator, redirect user support requests, whitelist email addresses, update email for accounts with bouncing email,
  • Debian derivatives census: merged/deployed patches
  • Debian PTS: debugged cron mails, deployed changes, reran scripts, fixed configuration file
  • Openmoko: debug reboot issue, debug load issues



The samba bug was sponsored by my employer. All other work was done on a volunteer basis.


Planet DebianChris Lamb: Free software activities in September 2017

Here is my monthly update covering what I have been doing in the free software world in September 2017 (previous month):

  • Submitted a pull request to Quadrapassel (the Gnome version of Tetris) to start a new game when the pause button is pressed outside of a game. This means you would no longer have to use the mouse to start a new game. [...]
  • Made a large number of improvements to AptFS — my FUSE-based filesystem that provides a view on unpacked Debian source packages as regular folders — including moving away from manual parsing of package lists [...] and numerous code tidying/refactoring changes.
  • Sent a small patch to django-sitetree, a Django library for menu and breadcrumb navigation elements to not mask test exit codes from the surrounding shell. [...]
  • Updated my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds:
    • Add support for "sloppy" backports. Thanks to Bernd Zeimetz for the idea and ongoing testing. [...]
    • Merged a pull request from James McCoy to pass DEB_BUILD_PROFILES through to the build. [...]
    • Workaround Travis CI's HTTP proxy which does not appear to support SRV records. [...]
    • Run debc from devscripts if the build was successful [...] and output the .buildinfo file if it exists [...].
  • Fixed a few issues in local-debian-mirror, my package to easily maintain and customise a local Debian mirror via the debconf configuration tool:
    • Fix an issue where file permissions from the remote could result in a local archive that was impossible to access. [...]
    • Clear out empty directories on the local repository. [...]
  • Updated django-staticfiles-dotd, my Django staticfiles adaptor to concatenate static media in .d-style directories to support Python 3.x by using bytes objects (commit) and move away from monkeypatch as it does not have a Python 3.x port yet (commit).
  • I also posted a short essay to my blog entitled "Ask the Dumb Questions" as well as provided an update on the latest Lintian release.

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
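The consensus idea can be illustrated with a toy sketch: two independent parties run the same deterministic build and compare checksums. (The build function here is a stand-in; real verification compares actual package artifacts, e.g. with diffoscope.)

```python
# Minimal illustration of the reproducible-builds idea: identical source
# must yield bit-for-bit identical artifacts, so independent builders can
# agree that neither build was tampered with.
import hashlib

def build(source: bytes) -> bytes:
    # A deterministic stand-in "build": same input, same output.
    return hashlib.sha256(source).digest()

source = b"int main(void) { return 0; }"
artifact_party_a = build(source)
artifact_party_b = build(source)

reproducible = artifact_party_a == artifact_party_b
print(reproducible)  # → True: both parties reach consensus
```

If a compromised toolchain produced a different artifact for one party, the checksums would diverge and the compromise would be visible.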

I have been generously awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Published a short blog post about how to determine which packages on your system are reproducible. [...]
  • Submitted a pull request for Numpy to make the generated files reproducible. [...]
  • Provided a patch to GTK upstream to ensure the immodules.cache files are reproducible. [...]
  • Within Debian:
    • Updated, moving it to HTTPS, adding cachebusting as well as keeping the number up-to-date.
    • Submitted the following patches to fix reproducibility-related toolchain issues:
      • gdk-pixbuf: Make the output of gdk-pixbuf-query-loaders reproducible. (#875704)
      • texlive-bin: Make PDF IDs reproducible. (#874102)
    • Submitted a patch to fix a reproducibility issue in doit.
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Chaired our monthly IRC meeting. [...]
  • Worked on publishing our weekly reports. (#123, #124, #125, #126 & #127)

I also made the following changes to our tooling:


reproducible-check is our script to determine which packages actually installed on your system are reproducible or not.

  • Handle multi-architecture systems correctly. (#875887)
  • Use the "restricted" data file to mask transient issues. (#875861)
  • Expire the cache file after one day and base the local cache filename on the remote name. [...] [...]

I also blogged about this utility. [...]
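The cache policy described in the bullets above could be sketched roughly like this. The directory layout and function names are purely illustrative, not reproducible-check's actual code:

```python
# Hypothetical sketch: base the local cache filename on the remote name,
# and treat the cached file as stale once it is older than one day.
import os
import time
from urllib.parse import urlparse

CACHE_DIR = "/tmp/reproducible-cache"   # illustrative location
MAX_AGE = 24 * 60 * 60                  # one day, in seconds

def cache_path(remote_url):
    # Derive the local filename from the remote name.
    return os.path.join(CACHE_DIR, os.path.basename(urlparse(remote_url).path))

def is_stale(path, now=None):
    now = time.time() if now is None else now
    try:
        return now - os.path.getmtime(path) > MAX_AGE
    except OSError:         # a missing cache file counts as stale
        return True

print(cache_path("https://example.org/data/reproducible.json"))
```

A stale or missing cache file would trigger a refetch of the remote data; a fresh one masks transient network issues.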


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Filed an issue attempting to identify the causes behind an increased number of timeouts visible in our CI infrastructure, including running a number of benchmarks of recent versions. (#875324)
  • New features:
    • Add "binwalking" support to analyse concatenated CPIO archives such as initramfs images. (#820631).
    • Print a message if we are reading data from standard input. [...]
  • Bug fixes:
    • Loosen matching of file(1)'s output to ensure we correctly also match TTF files under file version 5.32. [...]
    • Correct references to path_apparent_size in comparators.utils.file and self.buf in diffoscope.diff. [...] [...]
  • Testing:
    • Make failing some critical flake8 tests result in a failed build. [...]
    • Check we identify all CPIO fixtures. [...]
  • Misc:
    • No need for try-assert-except block in [...]
    • Compare types with identity not equality. [...] [...]
    • Use lazy argument interpolation. [...]
    • Remove unused imports. [...]
    • Numerous PEP8, flake8, whitespace, other cosmetic tidy-ups.


strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Log which handler processed a file. (#876140). [...]


disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.


My activities as the current Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.


I made a large number of changes to Lintian, the static analysis tool for Debian packages. It reports on various errors, omissions and general quality-assurance issues to maintainers:

I also blogged specifically about the Lintian 2.5.54 release.

Patches contributed

  • debconf: Please add a context manager to (#877096)
  • Add pronouns to ALL_STATUS_DESC. (#875128)
  • user-setup: Please drop set_special_users hack added for "the convenience of heavy testers". (#875909)
  • postgresql-common: Please update README.Debian for PostgreSQL 10. (#876438)
  • django-sitetree: Should not mask test failures. (#877321)
  • charmtimetracker:
    • Missing binary dependency on libqt5sql5-sqlite. (#873918)
    • Please drop "Cross-Platform" from package description. (#873917)

I also submitted 5 patches for packages with incorrect calls to find(1) in debian/rules against hamster-applet, libkml, pyferret, python-gssapi & roundcube.

Debian LTS

This month I have been paid to work 15¾ hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Documented an example usage of autopkgtests to test security changes.
  • Issued DLA 1084-1 and DLA 1085-1 for libidn and libidn2-0 to fix integer overflow vulnerabilities in Punycode handling.
  • Issued DLA 1091-1 for unrar-free to prevent a directory traversal vulnerability from a specially-crafted .rar archive. This update introduces a regression test.
  • Issued DLA 1092-1 for libarchive to prevent malicious .xar archives causing a denial of service via a heap-based buffer over-read.
  • Issued DLA 1096-1 for wordpress-shibboleth, correcting a cross-site scripting vulnerability in the Shibboleth identity provider module.


  • python-django:
    • 1.11.5-1 — New upstream security release. (#874415)
    • 1.11.5-2 — Apply upstream patch to fix QuerySet.defer() with "super" and "subclass" fields. (#876816)
    • 2.0~alpha1-2 — New upstream alpha release of Django 2.0, dropping support for Python 2.x.
  • redis:
    • 4.0.2-1 — New upstream release.
    • 4.0.2-2 — Update 0004-redis-check-rdb autopkgtest test to ensure that the redis.rdb file exists before testing against it.
    • 4.0.2-2~bpo9+1 — Upload to stretch-backports.
  • aptfs (0.11.0-1) — New upstream release, moving away from using /var/lib/apt/lists internals. Thanks to Julian Andres Klode for a helpful bug report. (#874765)
  • lintian (2.5.53, 2.5.54) — New upstream releases. (Documented in more detail above.)
  • bfs (1.1.2-1) — New upstream release.
  • docbook-to-man (1:2.0.0-39) — Tighten autopkgtests and enable testing via
  • python-daiquiri (1.3.0-1) — New upstream release.

I also made the following non-maintainer uploads (NMUs):

  • vimoutliner (0.3.4+pristine-9.3):
    • Make the build reproducible. (#776369)
    • Expand placeholders in Debian.README. (#575142, #725634)
    • Recommend that the ftplugin is enabled. (#603115)
    • Correct "is not enable" typo.
  • bittornado (0.3.18-10.3):
    • Make the build reproducible. (#796212).
    • Add missing Build-Depends on dh-python.
  • dtc-xen (0.5.17-1.1):
    • Make the build reproducible. (#777322)
    • Add missing Build-Depends on dh-python.
  • dict-gazetteer2k (1.0.0-5.4):
    • Make the build reproducible. (#776376).
    • Override empty-binary-package Lintian warning to avoid dak autoreject.
  • cgilib (0.6-1.1) — Make the build reproducible. (#776935)
  • dhcping (1.2-4.2) — Make the build reproducible. (#777320)
  • dict-moby-thesaurus (1.0-6.4) — Make the build reproducible. (#776375)
  • dtaus (0.9-1.1) — Make the build reproducible. (#777321)
  • fastforward (1:0.51-3.2) — Make the build reproducible. (#776972)
  • wily (0.13.41-7.3) — Make the build reproducible. (#777360)

Debian bugs filed

  • clipit: Please choose a sensible startup default in "live" mode. (#875903)
  • git-buildpackage: Please add a --reset option to gbp pull. (#875852)
  • bluez: Please default Device "friendly name" to hostname without domain. (#874094)
  • Please explicitly link to {packages,tracker} (#876746)
  • Requests for packaging:
    • selfspy — log everything you do on the computer. (#873955)
    • shoogle — use the Google API from the shell. (#873916)

FTP Team

As a Debian FTP assistant I ACCEPTed 86 packages: bgw-replstatus, build-essential, caja-admin, caja-rename, calamares, cdiff, cockpit, colorized-logs, comptext, comptty, copyq, django-allauth, django-paintstore, django-q, django-test-without-migrations, docker-runc, emacs-db, emacs-uuid, esxml, fast5, flake8-docstrings, gcc-6-doc, gcc-7-doc, gcc-8, golang-github-go-logfmt-logfmt, golang-github-google-go-cmp, golang-github-nightlyone-lockfile, golang-github-oklog-ulid, golang-pault-go-macchanger, h2o, inhomog, ip4r, ldc, libayatana-appindicator, libbson-perl, libencoding-fixlatin-perl, libfile-monitor-lite-perl, libhtml-restrict-perl, libmojo-rabbitmq-client-perl, libmoosex-types-laxnum-perl, libparse-mime-perl, libplack-test-agent-perl, libpod-projectdocs-perl, libregexp-pattern-license-perl, libstring-trim-perl, libtext-simpletable-autowidth-perl, libvirt, linux, mac-fdisk, myspell-sq, node-coveralls, node-module-deps, nov-el, owncloud-client, pantomime-clojure, pg-dirtyread, pgfincore, pgpool2, pgsql-asn1oid, phpliteadmin, powerlevel9k, pyjokes, python-evdev, python-oslo.db, python-pygal, python-wsaccel, python3.7, r-cran-bindrcpp, r-cran-dotcall64, r-cran-glue, r-cran-gtable, r-cran-pkgconfig, r-cran-rlang, r-cran-spatstat.utils, resolvconf-admin, retro-gtk, ring-ssl-clojure, robot-detection, rpy2-2.8, ruby-hocon, sass-stylesheets-compass, selinux-dbus, selinux-python, statsmodels, webkit2-sharp & weston.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files against: comptext, comptty, ldc & python-oslo.concurrency.

Rondam RamblingsThe Bitcoin apocalypse is coming in mid-November to a block chain near you

[UPDATE: This post originally said that the SegWit2X fork will happen on November 1.  In fact it is scheduled to occur on block 494,764.  It is impossible to predict exactly when this will happen, but at current hash rates it will probably be some time in mid-to-late November.  The post has been edited to reflect this.] Back in 2004 someone launched a web site called

Planet DebianIain R. Learmonth: Breaking RSS Change in Hugo

My website and blog are managed by the static site generator Hugo. I’ve found this to be a stable and flexible system, but at the last upgrade a breaking change occurred that broke the syndication of my blog on various planets.

At first I thought perhaps with my increased posting rate the planets were truncating my posts but this was not the case. The problem was in Hugo pull request #3129 where for some reason they have changed the RSS feed to contain only a “lead” instead of the full article.

I’ve seen other content management systems offer a similar option but at least they point out that it’s truncated and offer a “read more” link. Here it just looks like I’m publishing truncated, unfinished, really short posts.

If you take a look at the post above, you’ll see that the change is in an embedded template and it took a little reading of the docs to work out how to revert the change. The steps are actually not that difficult, but it’s still annoying that the change occurred.

In a Hugo site, you will have a layouts directory that will contain your overrides from your theme. Create a new file in the path layouts/_default/rss.xml (you may need to create the _default directory) with the following content:

<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>{{ if eq .Title .Site.Title }}{{ .Site.Title }}{{ else }}{{ with .Title }}{{.}} on {{ end }}{{ .Site.Title }}{{ end }}</title>
    <link>{{ .Permalink }}</link>
    <description>Recent content {{ if ne .Title .Site.Title }}{{ with .Title }}in {{.}} {{ end }}{{ end }}on {{ .Site.Title }}</description>
    <generator>Hugo -- gohugo.io</generator>{{ with .Site.LanguageCode }}
    <language>{{.}}</language>{{end}}{{ with .Site.Author.email }}
    <managingEditor>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</managingEditor>{{end}}{{ with .Site.Author.email }}
    <webMaster>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</webMaster>{{end}}{{ with .Site.Copyright }}
    <copyright>{{.}}</copyright>{{end}}{{ if not .Date.IsZero }}
    <lastBuildDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</lastBuildDate>{{ end }}
    {{ with .OutputFormats.Get "RSS" }}
        {{ printf "<atom:link href=%q rel=\"self\" type=%q />" .Permalink .MediaType | safeHTML }}
    {{ end }}
    {{ range .Data.Pages }}
    <item>
      <title>{{ .Title }}</title>
      <link>{{ .Permalink }}</link>
      <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
      {{ with .Site.Author.email }}<author>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</author>{{end}}
      <guid>{{ .Permalink }}</guid>
      <description>{{ .Content | html }}</description>
    </item>
    {{ end }}
  </channel>
</rss>

If you like my new Hugo theme, please let me know and I’ll bump tidying it up and publishing it further up my todo list.

Planet DebianHideki Yamane: MIRROR DISK USAGE: growing

One year later, mirror disk usage is still growing.
I'll prepare to exchange the whole system at the end of this year.

Planet DebianArturo Borrero González: Installing spotify-client in Debian testing (Buster)

debian-spotify logo

Similar to the problem described in the post Google Hangouts in Debian testing (Buster), the Spotify application for Debian (a package called spotify-client) is not ready to run in Debian testing (Buster) as is.

In this particular case, it seems there is only one problem, and is related to openssl/libssl. The spotify-client package requires libssl1.0.0 while in Debian testing (Buster) we have an updated libssl1.1.

Fortunately, this is rather easy to solve, given the few additional dependencies of both spotify-client and libssl1.0.0.

What we will do is to install libssl1.0.0 from jessie-backports, coexisting with libssl1.1.

Simple steps:

  • 1) add jessie-backports repository to your /etc/apt/sources.list file:
    deb jessie-backports main

  • 2) update your repo database:
    user@debian:~ $ sudo aptitude update
  • 3) verify we have both libssl1.1 and libssl1.0.0 ready to install:
    user@debian:~ $ aptitude search libssl
    p   libssl1.0.0       - Secure Sockets Layer toolkit - shared libraries                                       
    i   libssl1.1         - Secure Sockets Layer toolkit - shared libraries
  • 4) Follow steps by Spotify to install the spotify-client package:

  • 5) Run it and enjoy your music!

  • 6) You can cleanup the jessie-backports line from /etc/apt/sources.list.

Bonus point: Why jessie-backports?? Well, according to the openssl package tracker, jessie-backports contains the most recent version of the libssl1.0.0 package.

BTW, thanks to the openssl Debian maintainers, their work is really appreciated :-) And thanks to Spotify for providing a Debian package :-)


Rondam RamblingsA brief history of political discourse in the United States

1776 When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to

Planet DebianEnrico Zini: Systemd socket units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.socket units

Socket units tell systemd to listen on a given IPC, network socket, or file system FIFO, and use another unit to service requests to it.

For example, this creates a network service that listens on port 55555:

# /etc/systemd/system/ddate.socket
[Unit]
Description=ddate service on port 55555

[Socket]
ListenStream=55555
Accept=true

[Install]
WantedBy=sockets.target

# /etc/systemd/system/ddate@.service
[Unit]
Description=Run ddate as a network service

[Service]
ExecStart=/bin/sh -ec 'while true; do /usr/bin/ddate; sleep 1m; done'
StandardOutput=socket

Note that the .service file is called ddate@ instead of ddate: units whose name ends in '@' are template units which can be activated multiple times, by adding any string after the '@' in the unit name.

If I run nc localhost 55555 a couple of times, and then check the list of running units, I see ddate@… instantiated twice, adding the local and remote socket endpoints to the unit name:

$ systemctl list-units 'ddate@*'
  UNIT                                             LOAD   ACTIVE SUB     DESCRIPTION
  ddate@15- loaded active running Run ddate as a network service (
  ddate@16- loaded active running Run ddate as a network service (

This allows me to monitor each running service individually.
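Conceptually, Accept=true makes systemd behave like a classic accepting server: each incoming connection is handed to a freshly spawned service instance. A minimal Python analogue of that pattern (illustrative only; port and message are arbitrary, and systemd of course does the listening itself):

```python
# One handler per accepted connection, like ddate@<local>-<remote>.service.
import socket
import threading

def handle(conn, addr):
    # Each connection gets its own "instance", named after the peer.
    with conn:
        conn.sendall(("hello from instance %s:%d\n" % addr).encode())

def serve_once(listener):
    conn, addr = listener.accept()
    threading.Thread(target=handle, args=(conn, addr)).start()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: ephemeral; systemd would own 55555
listener.listen()
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

# The client side, equivalent to `nc localhost <port>`:
with socket.create_connection(("127.0.0.1", port)) as client:
    reply = client.makefile().readline()
print(reply)
```

With Accept=false instead, systemd would pass the listening socket itself to a single long-running service, which then accepts connections on its own.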

systemd also automatically creates a slice unit called system-ddate.slice grouping all services together:

$ systemctl status system-ddate.slice
   Loaded: loaded
   Active: active since Thu 2017-09-21 14:25:02 CEST; 9min ago
    Tasks: 4
   CGroup: /system.slice/system-ddate.slice
           ├─ddate@….service
           │ ├─18214 /bin/sh -ec while true; do /usr/bin/ddate; sleep 1m; done
           │ └─18661 sleep 1m
           └─ddate@….service
             ├─18228 /bin/sh -ec while true; do /usr/bin/ddate; sleep 1m; done
             └─18670 sleep 1m

This also makes it possible to work with all running instances of this template unit as a whole: one can send a signal to all their processes, or set up resource control features for the group.
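For instance, a drop-in for the automatically created slice could cap all ddate instances together. This is a sketch: the file path follows the usual drop-in convention, and the limit values are arbitrary examples.

```ini
# /etc/systemd/system/system-ddate.slice.d/limits.conf  (hypothetical drop-in)
[Slice]
CPUQuota=20%
MemoryMax=64M
TasksMax=16
```

After creating the drop-in, run `systemctl daemon-reload` for it to take effect.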


CryptogramFriday Squid Blogging: Squid Empire Is a New Book

Regularly I receive mail from people wanting to advertise on, write for, or sponsor posts on my blog. My rule is that I say no to everyone. There is no amount of money or free stuff that will get me to write about your security product or service.

With regard to squid, however, I have no such compunctions. Send me any sort of squid anything, and I am happy to write about it. Earlier this week, for example, I received two -- not one -- copies of the new book Squid Empire: The Rise and Fall of Cephalopods. I haven't read it yet, but it looks good. It's the story of prehistoric squid.

Here's a review by someone who has read it.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDHealing hearts at the intersection of modern medicine and indigenous culture

Worldwide, nearly one out of every hundred children is born with a congenital heart disease, which can vary from defective vessels and leaky valves, to holes in the heart. Dr. Franz Freudenthal (TED Talk: A new way to heal hearts without surgery) deals in the latter as a pediatric cardiologist who has developed a better alternative that closes these life-threatening cavities without invasive surgery.

So, when a baby is born with a hole in its heart, what happens and how do you fix it?

For a hole in the heart to develop, prematurity and genetic conditions tend to be the leading cause. A baby in the womb does not breathe and relies on the mother until it takes first breaths at birth, which signals major changes to take place in the body — especially within the cardiovascular and respiratory systems. Breathing, a new experience for the baby, stimulates some vessels in the heart to close. However, this is not always the case, and abnormal communication between atria can leave passages underdeveloped and gaping.

“When you look at patients with this condition, they seem desperate to breathe,” says Freudenthal. “To close the hole, major surgery used to be the only solution.”

Decades of research reveals that lack of oxygen can also be to blame. In high-altitude locations where air is thin, such as the mountainous regions of Freudenthal’s native Bolivia, the frequency of this kind of heart defect increases dramatically. For high-altitude patients, the holes tend to be more severe due to a larger gap between arteries.

The first of many breakthroughs for a non-invasive mechanism to solve these kinds of heart defects came to Freudenthal during his time in medical school, brainstorming with a classmate while they camped in the Amazon. As they were building their fire, adding kindling to feed the flames, he noticed something that piqued his scientific curiosity.

“The only thing that would not burn in the fire was a green avocado branch,” he says. “Then came a moment of inspiration. So, we used the branch as a mold for our first invention.”

Filling hearts, one hole at a time

Observing the properties of the green avocado branch as it reacted to the flames was a great place to start. The fact that the branch withstood the heat of the fire allowed Freudenthal to look for a metal that could replicate its properties under similar conditions. He eventually landed on a smart material called Nitinol. Made of a nickel-titanium alloy, Nitinol has two unique properties that are incredibly useful in biomedical applications: It can be worked into unique shapes and retain them; and it’s superelastic, meaning that when it’s stretched or flattened, it needs no heating in order to regain its original form.

“I knew this material was ideal since it keeps its shape,” he says. “This is why the device can be transported into the body inside a tube [implantation catheter]. It can be deployed in the right spot inside the heart, recovering its ‘memorized’ shape.”

From that discovery came thousands of hours of lab work, numerous in-vitro and in-vivo studies, and a persistent enthusiasm to unravel such a complex issue. It was a lengthy, demanding process on the road to creating a prototype: a specialized piece of wire coiled into the shape of a plug that could be transferred through a catheter to wherever in the heart it is needed, neatly plugging the hole.

However, an issue arose when Freudenthal and Dr. Alexandra Heath, his wife and partner, realized the device could only service patients below a certain altitude level. Many of their patients lived at 12,000 feet above sea level and had extra-wide gaps in their arteries — larger than the plug of coiled-up wire could cover.

“The first coil could successfully treat only half of the patients in Bolivia,” Freudenthal says. “The search started again. We went back to the drawing board.”

The next generation of device, influenced by past generations

After many trials and several iterations, a key development came from an unlikely source: the loom-weaving technique of the native Andes peoples. Freudenthal’s grandmother, Dr. Ruth Tichauer, a Jewish refugee who resettled in the heart of the Andes mountains, had worked closely with remote indigenous communities, with Freudenthal alongside her as he grew up, and that connection proved ever more fruitful.

For centuries, the women of these communities told stories by weaving complex patterns using looms. With Freudenthal’s vision, instead of fabric yarn, the women carefully weave Nitinol.

“We take this traditional method of weaving and make a design,” Freudenthal says. “The weaving allows us to create a seamless device that doesn’t rust because it’s made of only one piece. It can change by itself into very complex structures.”

From this insight evolved the Nit-Occlud ASD-R system and a way to fix a baby’s heart without major invasive surgery.

As seen above, the device enters the heart through the body’s natural channels via the implantation catheter and expands, placing itself before closing the hole. From start to finish, the entire procedure takes 30 minutes to complete.

After a few days, heart tissue begins to grow over the device — a process called epithelialization — eventually covering it entirely. Unless the hole is so large that further surgery is warranted, the implant stays as part of the child’s heart for the rest of their life.

“We are so proud that some of our former patients are part of our team,” Freudenthal shares. “We receive strength from our patients — their resilience and courage inspire our creativity.”

Right now, Freudenthal’s company PFM SRL has the Nit-Occlud ASD-R system registered in around 60 countries and estimates it has saved the lives of some 2,500 children.

Krebs on SecurityHere’s What to Ask the Former Equifax CEO

Richard Smith — who resigned as chief executive of big-three credit bureau Equifax this week in the wake of a data breach that exposed 143 million Social Security numbers — is slated to testify in front of no fewer than four committees on Capitol Hill next week. If I were a lawmaker, here are some of the questions I’d ask when Mr. Smith goes to Washington.


Before we delve into the questions, a bit of background is probably in order. The new interim CEO of Equifax — Paulino do Rego Barros Jr. — took to The Wall Street Journal and other media outlets this week to publish a mea culpa on all the ways Equifax failed in responding to this breach (the title of the op-ed in The Journal was literally “I’m sorry”).

“We were hacked,” Barros wrote. “That’s the simple fact. But we compounded the problem with insufficient support for consumers. Our website did not function as it should have, and our call center couldn’t manage the volume of calls we received. Answers to key consumer questions were too often delayed, incomplete or both.”

Barros stated that Equifax was working to roll out a new system by Jan. 31, 2018 that would let consumers “easily lock and unlock access to their Equifax credit files.”

“You will be able to do this at will,” he continued. “It will be reliable, safe, and simple. Most significantly, the service will be offered free, for life.”

I have argued for years that all of the data points needed for identity thieves to open new lines of credit in your name and otherwise ruin your credit score are available for sale in the cybercrime underground. To be certain, the Equifax breach holds the prospect that ID thieves could update all that stolen data with newer records. I’ve argued that the only sane response to this sorry state of affairs is for consumers to freeze their files at the bureaus, which blocks potential creditors — and ID thieves — from trashing your credit file and credit score.

Equifax is not the only bureau promoting one of these lock services. Since Equifax announced its breach on Sept. 7, big-three credit bureaus Trans Union and Experian have worked feverishly to steer consumers seeking freezes toward these locks instead, arguing that they are easier to use and allow consumers to lock and unlock their credit files with little more than the press of a button on a mobile phone app. Oh, and the locks are free, whereas the bureaus can (and do) charge consumers for placing and/or thawing a freeze (freeze fee laws differ from state to state).


My first group of questions would center around security freezes or credit freezes, and the difference between those and these credit lock services being pushed hard by the bureaus.

Currently, even consumer watchdog groups say they are uncertain about the difference between a freeze and a lock. See this press release from Thursday by U.S. PIRG, the federation of state Public Interest Research Groups, for one such example.

Also, I’m curious to know what percentage of Americans had a freeze prior to the breach, and how many froze their credit files (or attempted to do so) after Equifax announced the breach. The answers to these questions may help explain why the bureaus are now massively pushing their new credit lock offerings (i.e., perhaps they’re worried about the revenue hit they’ll take should a significant percentage of Americans decide to freeze their credit files).

I suspect the pre-breach number is less than one percent. I base this guess loosely on some data I received from the head of security at Dropbox, who told KrebsOnSecurity last year that less than one percent of its user base of 500 million registered users had chosen to turn on 2-factor authentication for their accounts. This extra security step can block thieves from accessing your account even if they steal your password, but many consumers simply don’t take advantage of such offerings because either they don’t know about them or they find them inconvenient.

Bear in mind that while most two-factor offerings are free, most freezes involve fees, so I’d expect the number of pre-breach freezers to be a fraction of one percent. However, if only one half of one percent of Americans chose to freeze their credit files before Equifax announced its breach — and if the total number of Americans requesting a freeze post-breach rose to, say, one percent — that would still be a huge jump (and potentially a painful financial hit to Equifax and the other bureaus).


So without further ado, here are some questions I’d ask on the topic of credit locks and freezes:

-Approximately how many credit files on Americans does Equifax currently maintain?

-Prior to the Equifax breach, approximately how many Americans had chosen to freeze their credit files at Equifax?

-Approximately how many total Americans today have requested a freeze from Equifax? This should include the company’s best estimate on the number of people who have requested a freeze but — because of the many failings of Equifax’s public response cited by Barros — were unable to do so via phone or the Internet.

-Approximately how much does Equifax charge each time the company sells a credit check (i.e., a bank or other potential creditor performs a “pull” on a consumer credit file)?

-On average, how many times per year does Equifax sell access to a consumer’s credit file to a potential creditor?

-Mr. Barros said Equifax will extend its offer of free credit freezes until the end of January 2018. Why not make them free indefinitely, just as the company says it plans to do with its credit lock service?

-In what way does a consumer placing a freeze on their credit file limit Equifax’s ability to do business?

-In what way does a consumer placing a lock on their credit file limit Equifax’s ability to do business?

-If a lock accomplishes the same as a freeze, why create more terminology that only confuses consumers?

-By agreeing to use Equifax’s lock service, will consumers also be opting in to any additional marketing arrangements, either via Equifax or any of its partners?


Equifax could hardly have bungled their breach response more if they tried. It is said that one should never attribute to malice what can more easily be explained by incompetence, but Equifax surely should have known that how they handled their public response would be paramount to their ability to quickly put this incident behind them and get back to business as usual.


Equifax has come under heavy criticism for waiting too long to disclose this breach. The company has said it became aware of the intrusion on July 29, and yet it did not publicly disclose the breach until Sept. 7. However, when Equifax did disclose, it seemed like everything about the response was rushed and ill-conceived.

One theory that I simply cannot get out of my head is that perhaps Equifax rushed preparations for its breach disclosure and response because it was given a deadline by extortionists who were threatening to disclose the breach on their own if the company did not comply with some kind of demand.

-I’d ask a question of mine that Equifax refused to answer shortly after the breach: Whether the company was the target of extortionists over this data breach *before* the breach was officially announced on Sept. 7.

-Equifax said the attackers abused a vulnerability in Apache Struts to break into the company’s Web applications. That Struts flaw was patched by the Apache Foundation on March 8, 2017, but Equifax waited until after July 30, 2017 — after it learned of the breach — to patch the vulnerability. Why did Equifax decide to wait four and a half months to apply this critical update?

-How did Equifax become aware of this breach? Was it from an external source, such as law enforcement?

-Assuming Equifax learned about this breach from law enforcement agencies, what did those agencies say regarding how they learned about the breach?


Multiple news organizations have reported that companies which track crimes related to identity theft — such as account takeovers, new account fraud, and e-commerce fraud — saw huge upticks in all of these areas corresponding to two periods that are central to Equifax’s breach timeline: the first in mid-May, when Equifax said the intruders began abusing their access to the company, and the second in late July/early August, when Equifax said it learned about the breach.

This chart shows spikes in various forms of identity abuse — including account takeovers and new account fraud — as tracked by ThreatMetrix, a San Jose, Calif. firm that helps businesses prevent fraud.

-Has Equifax performed any analysis on consumer credit reports to determine if there has been any pattern of consumer harm as a result of this breach?

-Assuming the answer to the previous question is yes, did the company see any spikes in applications for new lines of consumer credit corresponding to these two time periods in 2017?

Many fraud experts report that a fast-growing area of identity theft involves so-called “synthetic ID theft,” in which fraudsters take data points from multiple established consumer identities and merge them together to form a new identity. This type of fraud often takes years to result in negative consequences for consumers, and very often the debt collection agencies will go after whoever legitimately owns the Social Security number used by that identity, regardless of who owns the other data points.

-Is Equifax aware of a noticeable increase in synthetic identity theft in recent months or years?

-What steps, if any, does Equifax take to ensure that multiple credit files are not using the same Social Security number?

-Prior to its breach disclosure, Equifax spent more than a half million dollars in the first half of 2017 lobbying Congress to pass legislation that would limit the legal liability of credit bureaus in connection with data security lapses. Do you still believe such legislation is necessary? Why or why not?

What questions did I leave out, Dear Readers? Or is there a way to make a question above more succinct? Sound off in the comments below, and I may just add yours to the list!

In the meantime, here are the committees at which Former Equifax CEO Richard Smith will be testifying next week on Capitol Hill. Some of these committees will no doubt be live-streaming the hearings. Check back at the links below on the morning-of for more information on that. Also, C-SPAN almost certainly will be streaming some of these as well:

-Tuesday, Oct. 3, 10:00 a.m., House Energy and Commerce Committee. Rayburn House Office Bldg. Room 2123.

-Wednesday, Oct. 4, 10:00 a.m., Senate Committee on Banking, Housing, & Urban Affairs. Dirksen Senate Office Bldg., Room 538.

-Wednesday, Oct. 4, 2:30 p.m., Senate Judiciary Subcommittee on Privacy, Technology and the Law. Dirksen Senate Office Bldg., Room 226.

-Thursday, Oct. 5, 9:15 a.m., House Financial Services Committee. Rayburn House Office Bldg., Room 2128.

Planet DebianIain R. Learmonth: Tor Metrics Team Meeting in Berlin

We had a meeting of the Metrics Team in Berlin yesterday to organise a roadmap for the next 12 months. This roadmap isn’t yet finalised as it will now be taken to the main Tor developers meeting in Montreal where perhaps there are things we thought were needed but aren’t, or things that we had forgotten. Still we have a pretty good draft and we were all quite happy with it.

We have updated tickets in the Metrics component on the Tor trac to include either “metrics-2017” or “metrics-2018” in the keywords field to identify tickets that we expect to be able to resolve either by the end of this year or by the end of next year (again, not yet finalised but should give a good idea). In some cases this may mean closing the ticket without fixing it, but only if we believe that either the ticket is out of scope for the metrics team or that it’s an old ticket and no one else has had the same issue since.

Having an in-person meeting has allowed us to have easy discussion around some of the more complex tickets that have been sitting around. In many cases these are tickets where we need input from other teams, or perhaps even just reassigning the ticket to another team, but without a clear plan we couldn’t do this.

My work for the remainder of the year will be primarily on Atlas where we have a clear plan for integrating with the Tor Metrics website, and may include some other small things relating to the website.

I will also be triaging the current Compass tickets as we look to shut down Compass and integrate the functionality into Atlas. Compass-specific tickets will be closed, but some tickets relating to desirable functionality may be moved to Atlas with the fix implemented there instead.

CryptogramDeloitte Hacked

The large accountancy firm Deloitte was hacked, losing client e-mails and files. The hackers had access inside the company's networks for months. Deloitte is doing its best to downplay the severity of this hack, but Brian Krebs reports that the hack "involves the compromise of all administrator accounts at the company as well as Deloitte's entire internal email system."

So far, the hackers haven't published all the data they stole.

Planet DebianSven Hoexter: Last rites to the lyx and elyxer packaging

After having been a heavy LyX user from 2005 to 2010, I've continued to maintain LyX more or less till now. Finally I'm starting to leave that stage and have removed myself from the Uploaders list. The upload with some other last packaging changes is currently sitting in the git repo, mainly because lintian on ftp-master currently rejects 'packagename@packages.d.o' maintainer addresses (the alternative to the lists.alioth.d.o maintainer mailing lists). For elyxer I filed a request for removal. It hasn't seen any upstream activity for a while and LyX's built-in HTML export support has improved.

My hope is that if I step away far enough someone else might actually pick it up. I had this strange moment when I lately realized that xchat got reintroduced to Debian after mapreri and myself spent some time last year to get it removed before the stretch release.

Worse Than FailureError'd: Please Leave a Message

"So is this the email equivalent of one man's trash is another man's treasure?" writes Allan.


David C. wrote, "I received this automated bill notification from Canada Post's online inbox service saying that, possibly, nobody wants me to pay them."


"Well, to be fair, they did say that using Mail Chimp makes it easy to send email," Jacob R. wrote.


"Here at M*******t we take your privacy seriously!" James writes.


Kurt W. writes, "It's funny because email clients usually crap out before filtering 9 quintillion messages."


"I'm a little bit suspicious about these files I found in our logging directory," wrote Michael G., "Sadly, I am not working for the National Lottery..."



Planet DebianPetter Reinholdtsen: Visualizing GSM radio chatter using gr-gsm and Hopglass

Every mobile phone announces its existence over radio to the nearby mobile cell towers. And this radio chatter is available for anyone with a radio receiver capable of receiving it. Details about the mobile phones are of course collected with very good accuracy by the phone companies, but this is not the topic of this blog post. The mobile phone radio chatter makes it possible to figure out when a cell phone is nearby, as it includes the SIM card ID (IMSI). By paying attention over time, one can see when a phone arrives and when it leaves an area. I believe it would be nice to make this information more available to the general public, to make more people aware of how their phones are announcing their whereabouts to anyone that cares to listen.

I am very happy to report that we managed to get something visualizing this information up and running for Oslo Skaperfestival 2017 (Oslo Makers Festival), taking place today and tomorrow at Deichmanske library. The solution is based on the simple recipe for listening to GSM chatter I posted a few days ago, and will show up at the stand of Åpen Sone from the Computer Science department of the University of Oslo. The presentation will show the nearby mobile phones (aka IMSIs) as dots in a web browser graph, with lines from each dot to the mobile base station it is talking to. It was working in the lab yesterday, and was moved into place this morning.

We set up a fairly powerful desktop machine using Debian Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers connected, and visualized the visible cell phone towers using an English version of Hopglass. A fairly powerful machine is needed, as the grgsm_livemon_headless processes from gr-gsm converting the radio signal to data packets are quite CPU intensive.

The frequencies to listen to are identified using a slightly patched scan-and-livemon (to set the --args values for each receiver), and the Hopglass data is generated using the patches in my meshviewer-output branch. For some reason we could not get more than four SDRs working. There is also a geographical map trying to show the location of the base stations, but I believe their coordinates are hardcoded to some random location in Germany. The code should be replaced with code to look up the location in a text file, a sqlite database or one of the online databases mentioned in the github issue for the topic.
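For readers unfamiliar with GSM channel numbering: base stations broadcast on ARFCNs (Absolute Radio-Frequency Channel Numbers), which map to carrier frequencies by a fixed formula from the GSM specification; a scanner like the one above effectively steps through these channels. A minimal sketch of the mapping for the primary GSM900 band (the helper names are mine, not part of gr-gsm or scan-and-livemon):

```python
# Primary GSM900 band: ARFCNs 1-124, 200 kHz channel spacing.
# Downlink (tower -> phone) is 935 MHz + 0.2 MHz * n; uplink is 45 MHz lower.
# Frequencies are kept in kHz as integers to avoid floating-point rounding.

def gsm900_downlink_khz(arfcn: int) -> int:
    """Downlink carrier frequency in kHz for a primary-GSM900 ARFCN."""
    if not 1 <= arfcn <= 124:
        raise ValueError("primary GSM900 ARFCNs are 1-124")
    return 935_000 + 200 * arfcn

def gsm900_uplink_khz(arfcn: int) -> int:
    """Uplink carrier frequency in kHz (45 MHz duplex spacing)."""
    return gsm900_downlink_khz(arfcn) - 45_000

if __name__ == "__main__":
    # e.g. ARFCN 1 -> 935.2 MHz downlink, 890.2 MHz uplink
    for n in (1, 62, 124):
        print(n, gsm900_downlink_khz(n) / 1000.0, "MHz")
```

A monitor process is then pointed at each downlink frequency the scan finds; with several receivers, each one can watch a different carrier at the same time.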

If this sounds interesting, visit the stand at the festival!

Planet DebianDirk Eddelbuettel: Rcpp 0.12.13: Updated vignettes, and more

The thirteenth release in the 0.12.* series of Rcpp landed on CRAN this morning, following a little delay because Uwe Ligges was traveling and whatnot. We had announced its availability to the mailing list late last week. As usual, a rather substantial amount of testing effort went into this release so you should not expect any surprise.

This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, and the 0.12.12 release in July 2017, making it the seventeenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1069 packages (and hence 73 more since the last release) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a large-ish update to the documentation as all vignettes (apart from the unit test one, which is a one-off) now use Markdown and the (still pretty new) pinp package by James and myself. There is also a new vignette corresponding to the PeerJ preprint James and I produced as an updated and current Introduction to Rcpp replacing the older JSS piece (which is still included as a vignette too).

A few other things got fixed: Dan is working on the const iterators you would expect with modern C++, Lei Yu spotted an error in Modules, and more. See below for details.

Changes in Rcpp version 0.12.13 (2017-09-24)

  • Changes in Rcpp API:

    • New const iterator functions cbegin() and cend() have been added to several vector and matrix classes (Dan Dillon and James Balamuta in #748, starting to address #741).
  • Changes in Rcpp Modules:

    • Misplacement of one parenthesis in macro LOAD_RCPP_MODULE was corrected (Lei Yu in #737)
  • Changes in Rcpp Documentation:

    • Rewrote the macOS sections to depend on official documentation due to large changes in the macOS toolchain. (James Balamuta in #742 addressing issue #682).

    • Added a new vignette ‘Rcpp-introduction’ based on new PeerJ preprint, renamed existing introduction to ‘Rcpp-jss-2011’.

    • Transitioned all vignettes to the 'pinp' RMarkdown template (James Balamuta and Dirk Eddelbuettel in #755 addressing issue #604).

    • Added an entry on running 'compileAttributes()' twice to the Rcpp-FAQ (#745).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianEnrico Zini: Systemd path units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.path units

This kind of unit can be used to monitor a file or directory for changes using inotify, and activate other units when an event happens.

For example, this .path unit activates a unit that manages a spool directory whenever a .pdf file is added to /tmp/spool/:

[Unit]
Description=Monitor /tmp/spool/ for new .pdf files

[Path]
# Activates the unit of the same name (or the one named by Unit=)
# whenever a file matching the glob exists
PathExistsGlob=/tmp/spool/*.pdf


This instead activates another unit whenever /tmp/ready is changed, for example by someone running touch /tmp/ready:

[Unit]
Description=Monitor /tmp/ready

[Path]
PathChanged=/tmp/ready
Unit=beeponce.service


And beeponce.service:

[Unit]
Description=Beeps once

[Service]
Type=oneshot
ExecStart=/usr/bin/aplay /tmp/beep.wav
See man systemd.path

Planet DebianSean Whitton: Debian Policy released

I just released Debian Policy version

There are only two normative changes, and neither is very important. The main thing is that this upload fixes a lot of packaging bugs that were found since we converted to build with Sphinx.

There are still some issues remaining; I hope to submit some patches to the www-team’s scripts to fix those.

Planet DebianRicardo Mones: Long time no post

It seems the breakage of my desktop computer more than 3 months ago also caused a hiatus in my online publishing activities... it was not really intended, it just happened that I was busy with other things ಠ_ಠ.

With a broken computer, being able to build software on the laptop became a priority. Around September 2016 or so the good'n'old black MacBook decided to stop working. I didn't really need a replacement at that time, but I never liked having just a single working system, and in October I found an offer which I could not resist and bought a ThinkPad X260. It helped to build my final project (it was faster than the desktop), but lacking time for FOSS I hadn't used it for much more.

Setting up the laptop for software (Debian packages and Claws Mail, mainly) was somewhat easy. Finding a replacement for the broken desktop was a bit more difficult. I considered a lot of configurations and prices, from those new Ryzen to just buying the same components (pretty difficult now because they're discontinued). In the end, I decided to spend the minimum and make good use of everything else still working (memory, discs and wireless card), so I finally got an AMD A10-7860K on top of an Asus A88M-PLUS. This board has more SATA ports, so I added an unused SSD, remains of a broken laptop, to install the new system —Debian Stretch, of course ʘ‿ʘ— while keeping the existing software RAID partitions of the spinning drives.

The last thing distracting from the usual routine was replacing the car. Our child is growing as expected and the Fiesta was starting to feel small and uncomfortable, especially for long distance travel. We went for a hybrid model with a high capacity boot. Given our budget, we only found 3 models below the limit: Kia Niro, Hyundai Ioniq and Toyota Auris TS. The color was decided by the kid (after forbidding black), and this was the winner...

In the middle of all of this we also took some vacation to travel to the south of Galicia, mostly around the Vigo area, but also visiting Oporto and other nice places.

CryptogramNew Internet Explorer Bug

There's a newly discovered bug in Internet Explorer that allows any currently visited website to learn the contents of the address bar when the user hits enter. This feels important; the site I am at now has no business knowing where I go next.

Sociological ImagesThe different media spheres of the right and the left — and how they’re throwing elections to the Republicans

A new study tackles the media landscape building up to the election. The lead investigator, Rob Faris, runs a center at Harvard that specializes in the internet and society. He and his co-authors asked what role partisanship and disinformation might have played in the 2016 U.S. election. The study looked at links between internet news sites and also the behavior of Twitter and Facebook users, so it paints a picture of how news and opinion is being produced by media conglomerates and also how individuals are using and sharing this information.

They found severe ideological polarization, something we’ve known for some time, but also asymmetry in how media production and consumption works on either side. That is, journalists and readers on the left are behaving differently from those on the right.

The right is more insular and more partisan than the left: conservatives consume less neutral and “other side” news than liberals do and their outlets are more aggressively partisan. Breitbart News now sits neatly at the center. Measured by inlinks, it’s as influential as FOX News and, on social media, substantially more. Here’s the network map for Twitter:

Breitbart’s centrality on the right is a symptom of how extreme the Republican base has become. Breitbart’s Executive Chairman, Steve Bannon — former White House Chief Strategist — calls it “home of the alt-right,” a group that shows “extreme” bias against racial minorities and other out-groups. 

The insularity and lack of interest in balanced reporting made right-leaning readers susceptible to fake stories. Faris and his colleagues write:

The more insulated right-wing media ecosystem was susceptible to sustained network propaganda and disinformation, particularly misleading negative claims about Hillary Clinton. Traditional media accountability mechanisms — for example, fact-checking sites, media watchdog groups, and cross-media criticism — appear to have wielded little influence on the insular conservative media sphere.

There is insularity and partisanship on the left as well, but it is mediated by commitments to traditional journalistic norms — e.g., covering “both sides” — and so, on the whole, the left got more balance in their media diet and less “fake news” because they were more friendly to fact checkers.

The interest in balance, however, perhaps wasn’t entirely good. Faris and his co-authors found that the right exploited the left’s journalistic principles, pushing left-leaning and neutral media outlets to cover negative stories about Clinton by claiming that not doing so was biased. Centrist media outlets responded with coverage, but didn’t ask the same of the right (it is possible this shaming tactic wouldn’t have worked the other way).

The take home message is: During the 2016 election season, right-leaning media consumers got rabid, un-fact checked, and sometimes false anti-Clinton and pro-Trump material and little else, while left-leaning media consumers got relatively balanced coverage of Clinton: both good stories and bad ones, but more bad ones than they would have gotten (for better or worse) if the right hadn’t been yanking their chain about being “fair.”

We should be worried about how polarization, “fake news,” horse-race journalism, and infotainment are influencing the ability of voters to gather meaningful information with which to make voting decisions, but the asymmetry between the left and the right media sphere — particularly how it makes the right vulnerable to propagandists and the left vulnerable to ideological bullying by the right — should leave us even more worried. These are powerful forces, held up both by the institutions and the individuals, that are dramatically skewing election coverage, undermining democracy, and throwing elections, and governance itself, to the right.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.


TEDSymbolic logic: How African alphabets got to the TEDGlobal stage

All around the theater space, characters from African alphabets were projected on walls and floors in vivid color. The characters came from many languages, and were chosen by designer Saki Mafundikwa to match the theme of the conference: Builders. Truth-Tellers. Catalysts. Photo: Bret Hartman / TED

TEDGlobal 2017 was an important homecoming to the African continent, and a ton of work went into creating an authentic experience, from the curation of talks to the music to the graphics and stage design. Saki Mafundikwa, a graphic designer, filmmaker, design teacher and founder of the Zimbabwe Institute of Vigital Arts (and a TED speaker himself) was commissioned to create an aesthetic for the theatre stage that was as elegant as it was culturally and thematically relevant.

These 3 preliminary designs were part of the process of developing the design system for the theater. While they ended up not being used as is, their gorgeous colors and shapes showcased the potential of using alphabets as a design feature throughout the space. Images courtesy Saki Mafundikwa.

The elegant final designs for the stage backdrop highlight subtle color combinations, to complement the lively design elements projected around the theater space. Images courtesy Saki Mafundikwa

Most people who watch the talks online will see Mafundikwa’s abstract designs on the fabric drapes over the stage. But it might be that only those who were actually there in the theater can truly appreciate the stars of the show: giant symbols, beamed down on the floor and sides of the 600-seater with gobos. “TED loved the idea of gobos,” Mafundikwa says. “It’s one of those rare but beautiful moments when, as a designer, you have an idea and the client loves it!”

The symbols are not Klingon (obviously). They are alphabets from ancient African writing systems, of which Mafundikwa is a globally recognized expert.

“Some of the symbols are proverbs, like the Adinkra of the Akan people of Ghana. Those were easier to find in keeping with the theme. But others, like Ethiopic, which are syllabaries — each character stands for a syllable — were not so easy.”

These characters come from the Adinkra of the Akan people of Ghana. Saki chose symbols that matched with the conference’s theme.

Not all parts of Africa produced writing systems, Mafundikwa says, so finding a gamut of symbols that were truly representative proved to be a challenge. Nonetheless, he was ultimately able to present symbols that spanned all four corners of the continent.

“In the end, there were two sets of designs: the symbols projected on the auditorium walls and floor and the stage backdrop. Initially, I just went crazy and produced a bunch of ideas and there was quite some back-and-forth until we settled on what you saw in Arusha.”

Characters from a Bantu language of South Africa create poetic matches to the conference themes — where “goddess of creation” represents truth-tellers, and the character for “bee” represents builders. Image courtesy Saki Mafundikwa

Keep an eye out for Mafundikwa’s designs onstage and in camera angles during the TEDGlobal 2017 talks, which have already begun to go live. To learn more about Mafundikwa’s work, watch his own TED Talk about the beauty and ingenuity of ancient African alphabets from 2013.

Characters from Angola’s Jokwe language and Nigeria’s Nsibidi, at top, and examples of Ethiopic, Wolof (from Senegal) and Somali.

Planet DebianMatthias Klumpp: Adding fonts to software centers

Last year, the AppStream specification gained proper support for adding metadata for fonts, after Richard Hughes did some work on it years ago. We weren’t happy with how fonts were handled at that time, so we searched for better solutions, which is why this took a bit longer to be done. Last year I implemented the final support for fonts in both appstream-generator (the metadata extractor used by Debian and a few others) and the AppStream specification itself. This blog post was sitting on my todo list as a draft for a long time, and I only now managed to finish it, so sorry for announcing this so late. Fonts have already been available via AppStream for a year, and this post just sums up the status quo and some neat tricks if you want to write metainfo files for fonts. If you are following AppStream (or the Debian fonts list), you know everything already 🙂 .

Both Richard and I first tried to extract all the metadata needed to display fonts in a proper way to the users from the font files directly. This turned out to be very difficult, since font metadata is often wrong or incomplete, and certain desirable bits of metadata (like a longer description) are missing entirely. After messing around with different ways to solve this for days (after all, by extracting the data from font files directly we would have hundreds of fonts directly available in software centers), I also came to the same conclusion as Richard: The best and easiest solution here is to mandate the availability of metainfo files per font.

Which brings me to the second issue: What is a font? Any person who knows about fonts will understand one font as one font face, e.g. “Lato Regular Italic” or “Lato Bold”. A user, however, will see the font family as a font, e.g. just “Lato” instead of all the font faces separated out. Since AppStream data is used primarily by software centers, we want something that is easy for users to understand. Hence, an AppStream “font” component really describes a font family or collection of fonts, instead of individual font faces. We do also want AppStream data to be useful for system components looking for a specific font, which is why font components advertise the individual font face names they contain via a <provides/> tag. Naming fonts and making them identifiable is a whole other issue; I used a document from Adobe on font naming issues as a rough guideline while working on this.

How to write a good metainfo file for a font is best shown with an example. Lato is a good-looking font family that we want displayed in a software center. So, we write a metainfo file for it and place it in /usr/share/metainfo/ for the AppStream metadata generator to pick up:

<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <name>Lato</name>
  <summary>A sanserif typeface family</summary>
  <description>
    <p>
      Lato is a sanserif typeface family designed in the Summer 2010 by Warsaw-based designer
      Łukasz Dziedzic (“Lato” means “Summer” in Polish). In December 2010 the Lato family
      was published under the open-source Open Font License by his foundry tyPoland, with
      support from Google.
    </p>
  </description>
  <url type="homepage"></url>
  <provides>
    <font>Lato Regular</font>
    <font>Lato Black Italic</font>
    <font>Lato Black</font>
    <font>Lato Bold Italic</font>
    <font>Lato Bold</font>
    <font>Lato Hairline Italic</font>
  </provides>
</component>
When the file is processed, we know that we need to look for fonts in the package it is contained in. So, the appstream-generator will load all the fonts in the package and render example texts for them as an image, so we can show users a preview of the font. It will also use heuristics to render an “icon” for the respective font component using its regular typeface. Of course that is not ideal – what if there are multiple font faces in a package? What if the heuristics fail to detect the right font face to display?

This behavior can be influenced by adding <font/> tags to the <provides/> tag in the metainfo file. The font-provides tags should contain the fullnames of the font faces you want to associate with this font component. If the font file does not define a fullname, the family and style are used instead. That way, someone writing the metainfo file can control which fonts belong to the described component. The metadata generator will also pick the first mentioned font name in the <provides/> list as the one to render the example icon for. It will also sort the example text images in the same order as the fonts are listed in the provides-tag.

The example lines of text are written in a language matching the font using Pango.

But what about symbolic fonts? Or fonts where every heuristic fails? At the moment, we see ugly tofu characters or boxes instead of an actual, useful representation of the font. This brings me to an unofficial extension to font metainfo files that, as far as I know, only appstream-generator supports at the moment. I am not happy enough with this solution to add it to the real specification, but it serves as a good method to fix up the edge cases where we can not render good example images for fonts. AppStream-Generator supports the FontIconText and FontSampleText custom AppStream properties to allow metainfo file authors to override the default texts and autodetected values. FontIconText overrides the characters used to render the icon, while FontSampleText can be a line of text used to render the example images. This is especially useful for symbolic fonts, where the heuristics usually fail and we do not know which glyphs would be representative of a font.

For example, a font with mathematical symbols might want to add the following to its metainfo file:

  <custom>
    <value key="FontIconText">∑√</value>
    <value key="FontSampleText">∑ ∮ √ ‖...‖ ⊕ 𝔼 ℕ ⋉</value>
  </custom>

Any Unicode glyphs are allowed, but asgen will put some length restrictions on the texts.

So, in summary:

  • Fonts are hard
  • I need to blog faster
  • Please add metainfo files to your fonts and submit them upstream if you can!
  • Fonts must have a metainfo file in order to show up in GNOME Software, KDE Discover, AppCenter, etc.
  • The “new” font specification is backward compatible with Richard’s pioneering work from 2014
  • The appstream-generator supports a few non-standard values to influence how font images are rendered that you might be interested in (maybe we can do something like that for appstream-builder as well)
  • The appstream-generator does not (yet?) support the <extends/> logic Richard outlined in his blog post, mainly because it wasn’t necessary in Debian/Ubuntu/Arch yet (which is asgen’s primary audience), and upstream projects would rarely want to write multiple metainfo files.
  • The metainfo files are not supposed to replace the existing fontconfig files, and we can not generate them from existing metadata, sadly
  • If you want a more detailed look at writing font metainfo files, take a look at the AppStream specification.
  • Please write more font metadata 😉


Planet DebianRussell Coker: Process Monitoring

Since forking the Mon project to etbemon [1] I’ve been spending a lot of time working on the monitor scripts. Actually monitoring something is usually quite easy; deciding what to monitor tends to be the hard part. The process monitoring script ps.monitor is the one I’m about to redesign.

Here are some of my ideas for monitoring processes. Please comment if you have any suggestions for how to do things better.

For people who don’t use mon, the monitor scripts return 0 if everything is OK and 1 if there’s a problem, along with using stdout to display an error message. While I’m not aware of anyone hooking mon scripts into a different monitoring system, that would be easy to do. One thing I plan to work on in the future is interoperability between mon and other systems such as Nagios.

Basic Monitoring

ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2

I’m currently planning some sort of rewrite of the process monitoring script. The current functionality is to have a list of process names on the command line with minimum and maximum numbers for the instances of the process in question. The above is a sample of the configuration of the monitor. There are some limitations to this, the “master” process in this instance refers to the main process of Postfix, but other daemons use the same process name (it’s one of those names that’s wrong because it’s so obvious). One obvious solution to this is to give the option of specifying the full path so that /usr/lib/postfix/sbin/master can be differentiated from all the other programs named master.
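The “name:min-max” matching described above could be sketched roughly like this (a minimal illustration with hypothetical helper names, not the actual etbemon ps.monitor code):

```python
# Sketch of min/max instance matching for specs like "cron:1-5" or "sshd:1-".
# Hypothetical helper names; the real script lives in etbemon.

def parse_spec(spec):
    """Parse 'name:min-max' into (name, min, max); an empty max means unlimited."""
    name, _, counts = spec.rpartition(":")  # rpartition so 'dbus-daemon' works
    lo, _, hi = counts.partition("-")
    return name, int(lo), (int(hi) if hi else None)

def check_processes(specs, running):
    """Return mon-style (exit_code, messages) given the list of specs and
    the names of the processes currently running."""
    problems = []
    for spec in specs:
        name, lo, hi = parse_spec(spec)
        count = sum(1 for p in running if p == name)
        if count < lo or (hi is not None and count > hi):
            problems.append("%s: %d instances (want %d-%s)"
                            % (name, count, lo, hi if hi is not None else ""))
    return (1, problems) if problems else (0, [])
```

For example, `check_processes(["sshd:1-"], ["sshd", "sshd"])` passes, while an empty process list would produce an alert message and exit code 1, matching the 0/1-plus-stdout convention described earlier.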

The next issue is processes that may run on behalf of multiple users. With sshd there is a single process to accept new connections running as root and a process running under the UID of each logged in user. So the number of sshd processes running as root will be one greater than the number of root login sessions. This means that if a sysadmin logs in directly as root via ssh (which is controversial and not the topic of this post – merely something that people do which I have to support) and the master process then crashes (or the sysadmin stops it either accidentally or deliberately) there won’t be an alert about the missing process. Of course the correct thing to do is to have a monitor talk to port 22 and look for the string “SSH-2.0-OpenSSH_”. Sometimes there are multiple instances of a daemon running under different UIDs that need to be monitored separately. So obviously we need the ability to monitor processes by UID.
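Counting instances per UID could look something like this (a toy sketch; in practice the (name, uid) pairs would be gathered from /proc or `ps -eo comm,uid`, and the function name is made up for illustration):

```python
from collections import Counter

def count_by_uid(procs, name):
    """Count instances of a process name per UID.
    procs is an iterable of (process_name, uid) pairs."""
    return Counter(uid for pname, uid in procs if pname == name)

# With sshd: one root (UID 0) listener plus one process per login session,
# so a direct root ssh login shows up as two UID-0 sshd processes, and
# losing the listener while root is logged in would not change the total
# count, only the per-UID breakdown.
```

For example, `count_by_uid([("sshd", 0), ("sshd", 0), ("sshd", 1000)], "sshd")` yields two UID-0 processes and one for UID 1000, which is the kind of breakdown a per-UID monitor would need.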

In many cases process monitoring can be replaced by monitoring of service ports. So if something is listening on port 25 then it probably means that the Postfix “master” process is running regardless of what other “master” processes there are. But for my use I find it handy to have multiple monitors, if I get a Jabber message about being unable to send mail to a server immediately followed by a Jabber message from that server saying that “master” isn’t running I don’t need to fully wake up to know where the problem is.
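A banner check like the port-22 test mentioned above could be sketched as follows (an assumption-laden example, not a mon script; SSH and SMTP both send their greeting first, so a simple recv works):

```python
import socket

def banner_ok(host, port, expected_prefix, timeout=10):
    """Connect to a TCP service and check that its greeting banner starts
    with the expected string, e.g. 'SSH-2.0-OpenSSH_' on port 22 or
    '220' on port 25. Returns False on connection failure or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            banner = s.recv(128)
        return banner.startswith(expected_prefix.encode())
    except OSError:
        return False
```

Wired into a monitor, `banner_ok("mail.example.com", 25, "220")` failing while the process check also fails points straight at the Postfix master, which is exactly the kind of redundant signal described above.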

SE Linux

One feature that I want is monitoring SE Linux contexts of processes in the same way as monitoring UIDs. While I’m not interested in writing tests for other security systems I would be happy to include code that other people write. So whatever I do I want to make it flexible enough to work with multiple security systems.

Transient Processes

Most daemons have a second process of the same name running during the startup process. This means if you monitor for exactly 1 instance of a process you may get an alert about 2 processes running when “logrotate” or something similar restarts the daemon. Also you may get an alert about 0 instances if the check happens to run at exactly the wrong time during the restart. My current way of dealing with this on my servers is to not alert until the second failure event with the “alertafter 2” directive. The “failure_interval” directive allows specifying the time between checks when the monitor is in a failed state, setting that to a low value means that waiting for a second failure result doesn’t delay the notification much.

To deal with this I’ve been thinking of making the ps.monitor script automatically check again after a specified delay. I think that solving the problem with a single parameter to the monitor script is better than using 2 configuration directives to mon to work around it.
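The recheck-after-a-delay idea could be as simple as this (a sketch with an invented wrapper name, assuming checks return the mon-style (code, message) pair):

```python
import time

def check_with_retry(check, delay=5):
    """Run a monitor check; if it fails, wait and check once more before
    reporting, so a daemon restart (e.g. triggered by logrotate) doesn't
    cause a false alarm from a momentary 0- or 2-instance count."""
    code, msg = check()
    if code == 0:
        return code, msg
    time.sleep(delay)
    return check()
```

This keeps the workaround inside the monitor script itself, instead of needing both the “alertafter 2” and “failure_interval” directives in the mon configuration.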


Mon currently has a loadavg.monitor script to check the load average. But that won’t catch the case of a single process using too much CPU time but not enough to raise the system load average. It also won’t catch the case of a CPU-hungry process going quiet (e.g. when the SETI@Home server goes down) while another process goes into an infinite loop. One way of addressing this would be to have the ps.monitor script take yet another configuration option to monitor CPU use, but this might get confusing. Another option would be to have a separate script that alerts on any process that uses more than a specified percentage of CPU time over its lifetime or over the last few seconds, unless it’s in a whitelist of processes and users who are exempt from such checks. Probably every regular user would be exempt from such checks because you never know when they will run a file compression program. Also there is a short list of daemons that are excluded (like BOINC) and system processes (like gzip, which is run from several cron jobs).
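Detecting such CPU hogs could be sketched like this (a toy illustration: it assumes two samples of cumulative CPU ticks per process, which on Linux would come from /proc/&lt;pid&gt;/stat; the function and whitelist names are made up):

```python
def cpu_offenders(before, after, elapsed_ticks, limit_pct, exempt=()):
    """Given two samples mapping (pid, name) -> cumulative CPU ticks,
    return processes that used more than limit_pct of one CPU between the
    samples, skipping whitelisted names (e.g. BOINC, gzip from cron)."""
    offenders = []
    for key, ticks in after.items():
        pid, name = key
        if name in exempt or key not in before:
            continue  # whitelisted, or started after the first sample
        pct = 100.0 * (ticks - before[key]) / elapsed_ticks
        if pct > limit_pct:
            offenders.append((pid, name, pct))
    return offenders
```

Keeping the comparison as a pure function over two samples also makes the “over the last few seconds” versus “over its lifetime” distinction just a matter of which samples you feed in.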

Monitoring for Exclusion

A common programming mistake is to call setuid() before setgid(), which means that the program doesn’t have permission to call setgid(). If return codes aren’t checked (and people who make such rookie mistakes tend not to check return codes) then the process keeps elevated permissions. Checking for processes running as GID 0 but not UID 0 would be handy. As an aside, a quick examination of a Debian/Testing workstation didn’t show any obvious way that a process with GID 0 could gain elevated privileges, but that could change with one chmod 770 command.
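That GID-0-but-not-UID-0 check is easy to express as a pure function (an illustrative sketch; the (pid, name, uid, gid) tuples would come from /proc or ps in a real monitor):

```python
def dropped_privs_badly(procs):
    """Return processes running with GID 0 but not UID 0 -- the classic
    symptom of calling setuid() before setgid() and ignoring the error.
    procs is an iterable of (pid, name, uid, gid) tuples."""
    return [(pid, name) for pid, name, uid, gid in procs
            if gid == 0 and uid != 0]
```

A daemon that dropped its UID but kept GID 0 shows up immediately, while init (UID 0, GID 0) and properly de-privileged processes do not.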

On a SE Linux system there should be only one process running with the domain init_t. Currently that doesn’t happen in Stretch systems running daemons such as mysqld and tor due to policy not matching the recent functionality of systemd as requested by daemon service files. Such issues will keep occurring so we need automated tests for them.

Automated tests for configuration errors that might impact system security is a bigger issue, I’ll probably write a separate blog post about it.

Planet DebianLior Kaplan: LibreOffice community celebrates 7th anniversary

The Document foundation blog have a post about LibreOffice 7th anniversary:

Berlin, September 28, 2017 – Today, the LibreOffice community celebrates the 7th anniversary of the leading free office suite, adopted by millions of users in every continent. Since 2010, there have been 14 major releases and dozens of minor ones, fulfilling the personal productivity needs of both individuals and enterprises, on Linux, macOS and Windows.

I wanted to take a moment to remind people that 7 years ago the community decided to make the de facto fork of OpenOffice.org official, after life under Sun (and then Oracle) had become problematic. From the very first hours the project showed its effectiveness. See my post about LibreOffice first steps. Not to mention what it achieved in the past 7 years.

This is still one of my favourite open source contributions, not because it was sophisticated or hard, but because it was about using the freedom part of free software:
Replace hardcoded “product by Oracle” with “product by %OOOVENDOR”.

On a personal note: for me, after years of trying to help with OOo l10n for Hebrew and RTL support, things started to move forward at a reasonable pace, getting patches in after years of trying, having upstream fix some of the issues, and actually being able to do the translation. We made it to 100% with LibreOffice 3.5.0 in February 2012 (something we should redo soon…).

Filed under: i18n & l10n, Israeli Community, LibreOffice

CryptogramDepartment of Homeland Security to Collect Social Media of Immigrants and Citizens

New rules give the DHS permission to collect "social media handles, aliases, associated identifiable information, and search results" as part of people's immigration file. The Federal Register has the details, which seems to also include US citizens that communicate with immigrants.

This is part of the general trend to scrutinize people coming into the US more, but it's hard to get too worked up about the DHS accessing publicly available information. More disturbing is the trend of occasionally asking for social media passwords at the border.

TEDFuture visions: The talks of TEDGlobalNYC

A night of TED Talks at The Town Hall theater in Manhattan covered topics ranging from climate change and fake news to the threat AI poses for democracy. Photo: Ryan Lash / TED

The advance toward a more connected, united, compassionate world is in peril. Some voices are demanding a retreat, back to a world where insular nations battle for their own interests. But most of the big problems we face are collective in nature and global in scope. What can we do, together, about it?

In a night of talks curated and hosted by TED International Curator Bruno Giussani and TED Curator Chris Anderson at The Town Hall in Manhattan, eight speakers covered topics ranging from climate change and fake news to the threat AI poses for democracy and the future of markets, imagining what a globally connected world could and should look like.

What stake do we have in common? Naoko Ishii is all about building bridges between people and the environment (her organization is one of the main partners in a Herculean effort to restore the Amazon). As the CEO and chair of the Global Environment Facility, it’s her job to get everyone on board with protecting and respecting the global commons (water, air, forests, biodiversity, the oceans), if only for the simple fact that the world’s economy is intimately linked to the wellness of Earth. Ishii opened TEDGlobal>NYC with a necessary reminder: that despite their size, these global commons have been neglected for too long, and the price is too high not to make fundamental changes in our collective behavior to save them from collapse. This current generation, she says, is the last generation that can preserve what’s left of our natural resources. If we change how we eat, reduce our waste and make determined strides toward sustainable cities, there’s a chance that all hope is not lost.

Climate psychologist Per Espen Stoknes explains a new way of talking about climate change at TEDGlobal>NYC, September 20, 2017, The Town Hall, NY. Photo: Ryan Lash / TED

What we think about when we try not to think about global warming. From “scientese” to visions of the apocalypse, climate-change advocates have struggled with communicating the realities of our warming planet in a way that actually gets people to do something. “Climate psychologist” Per Espen Stoknes wondered why so many climate-change messages leave us feeling helpless and in denial instead of inspired to seek solutions. He shares with us his findings for “a more brain-friendly climate communication” — one that feels personal, doable and empowering. By scaling actions and examples down to local and more relatable levels, we can begin to feel more in control, and start to feel like our actions will have impact, Stoknes suggests. Stepping away from the doomsday narratives and instead reframing green behavior in terms of its positive additions to our lives, such as job growth and better health, can also limit our fear and increase our desire to engage in these important conversations. Our planet may be in trouble, but telling new stories could just save us.

Building the resilient city. With fantastic new maps that provide interactive and visual representations of large data sets, Robert Muggah articulates an ancient but resurging idea: that cities should be not only the center of economic life but also the foundation of our political lives. Cities bear a significant burden of the world’s problems and have been catalysts for catastrophe, Muggah says — as an example, he shows how, in the run-up to the civil war in Syria, fragile cities like Homs and Aleppo could not bear the weight of internally displaced refugees running away from drought and famine. While this should alarm us, Muggah also sees opportunity and a chance to ride the chaotic waves of the 21st century. Looking around the world, he sets out six principles for building the resilient city. For instance, he highlights integrated and multi-use solutions like Seoul’s expanding public transportation system, where cars once dominated how people move. The current model of the nation-state that emerged in the 17th century is no longer what it once was; nation-states cannot face global crises decisively and efficiently. But the work of urban leaders and coalitions of cities like the C40 can guide us to a healthier, more peaceful planet.

Christiane Amanpour speaks about the era of fake news at TEDGlobal>NYC, September 20, 2017, The Town Hall, NY, NY. Photo: Ryan Lash / TED

Seeking the truth. Known worldwide for her courage and clarity, Christiane Amanpour has spent the past three decades interviewing business, cultural and political leaders who have shaped history. This time she’s the one being interviewed, by TED curator Chris Anderson, in a comprehensive conversation covering fake news, objectivity in journalism, the leadership vacuum in global politics and much more. Amanpour opens with her experience reporting the Srebrenica genocide in the 1990s, and connects it to the state of journalism today, making a strong case for refusing to be an accomplice to fake news. “We’ve never faced such a massive amount of information which is not curated by those whose profession leads them to abide by the truth,” she says. “Objectivity means giving all sides an equal hearing but not creating a forced moral equivalence.” Facebook and other outlets need to step up and combat fake news, she continues, calling for a moral code of conduct and algorithms to “filter out the crap” that populates our news feeds. Amanpour — fresh from her interview with French president Emmanuel Macron, his first with an international journalist — leaves us with some wisdom: “Be careful where you get information from. Unless we are all engaged as global citizens who appreciate the truth, who understand science, empirical evidence and facts, then we are going to be wandering around — to a potential catastrophe.”

Though he had a cold and could not sing for us, Yusuf Islam (Cat Stevens) takes a moment onstage to discuss faith and music with TED’s own Chris Anderson, at TEDGlobal NYC, September 20, 2017, The Town Hall, New York. Photo: Ryan Lash / TED

A cat’s attic. Yusuf Islam (Cat Stevens)‘s music has been embraced by generations of fans as anthems of peace and unity. In conversation with TED curator Chris Anderson, Yusuf discusses the influence of his music, the arc of his career and his Muslim faith. “I discovered something beyond the facade of what we are taught to believe about others,” Yusuf says of his embrace of Islam in the late ’70s. “There are ways of looking at this world other than the material … Islam brought together all the strands of religion I could ever wish for.” Connecting his return to music after 9/11 to his current work and new album, The Laughing Apple, Yusuf sees his mission as spreading messages of peace and hope. “Be careful about exclusion,” he says. “In the [education] curriculum, we’ve got to start looking towards a globalized curriculum … We should know a bit more about the other to avoid the build up of antagonization.”

“Wherever I look, I see nuances withering away.” In a personal talk, author and political commentator Elif Shafak cautions against the dangers of a dualist worldview. A native of Turkey, she has experienced the devastation that a loss of diversity can bring firsthand, and she knows the revolutionary power of plurality in response to authoritarianism. She reminds us that there are no binaries, whether between developed and developing nations, politics and emotions, or even our own identities. By embracing our countries and societies as mosaics, we push back against tribalism and reach across borders. “One should never ever remain silent for fear of complexity,” Shafak says.

We know what we are saying “no” to, but what are we saying “yes” to? In her classic book The Shock Doctrine — and her new book No Is Not Enough — writer and activist Naomi Klein examines how governments use large-scale shocks like natural disasters, financial crises and terrorist attacks to exploit the public and push through radical pro-corporate measures. At TEDGlobal>NYC, Klein explains that resistance to policies that attack the public is not enough; we also must have a concrete plan for how we want to reorganize society. A few years ago, Klein and a consortium of indigenous leaders, urban hipsters, climate change activists, oil and gas workers, faith leaders, anarchists, migrant rights organizers and leading feminists decided to lock themselves in a room to discuss their utopian vision for the future. They emerged two days later with a manifesto known as The Leap Manifesto, which is all about caring for the earth and one another. Klein shares a few propositions from the platform, including a call for a 100 percent renewable economy, new investment in the low-carbon workforce, comprehensive programs to retrain workers who are losing their jobs in extractive and industrial sectors, and a demand that those who profit from pollution pay for it. “We live in a time where every alarm in our house is going off,” she concludes. “It’s time to listen. It’s time — together — to leap.”

Could a Facebook algorithm tell us how to vote? Zeynep Tufekci asks why algorithms are controlling more and more of our behavior, like it or not. She speaks at TEDGlobal>NYC, September 20, 2017, The Town Hall, New York. Photo: Ryan Lash / TED

There’s nothing left to fear from AI but the humans behind it. Technosociologist Zeynep Tufekci isn’t worried about AI — it’s the intention behind the technology that’s truly concerning. Data about you is being collected and sold daily, says Tufekci, and the prodigious potential of machine learning comes with potentially catastrophic risks. Companies like Facebook and Google haven’t thoroughly factored in the ethical dilemmas that come with automated systems that are programmed to exploit human weakness in order to place ads in front of exactly the people most likely to buy. If not checked, the ads and recommendations that follow you around well after you’ve stopped searching can snowball from well-meaning to insidious. It’s not to say that social media and the internet are all bad — in fact, Tufekci has written at length about the benefits and power it has bestowed upon many — but her talk is a strong reminder to be aware of the negative potential of AI as well as the positives, and to fight for our collective digital future.

Competition is only fair, says the EU’s Commissioner for Competition, Margrethe Vestager at TEDGlobal NYC, September 20, 2017, The Town Hall, New York. Photo: Ryan Lash / TED

The fight for fairness. This June, the EU levied a record $2.7 billion fine against Google for breaching antitrust rules by unfairly favoring its comparison shopping service in search. More than double the previous largest penalty in this type of antitrust case, the penalty confirmed Margrethe Vestager, European Commissioner for Competition, as one of the world’s most powerful trustbusters. In the closing talk of TEDGlobal>NYC, Vestager makes the connection between how fairness in the markets — and corrective action to ensure it exists — can establish trust in society and each other. Competition in markets gives us the power to demand a fair deal, Vestager says; when it’s removed, either by colluding businesses or biased governments, trust disappears too. “Lack of trust in the market can rub off on society, so we lose trust in society as well,” she says. “Without trust, everything becomes harder.” But competition rules — and those that enforce them — can reestablish the balance between individuals and powerful, seemingly invulnerable multinational corporations. “Trust cannot be imposed, it has to be earned,” Vestager says. “Competition makes the market work for everyone. And that’s why I’m convinced that real and fair competition has a vital role to play in building the trust we need to get the best out of society. And that starts with enforcing our rules.”

Shine as bright as you can. Electro-soul duo Ibeyi closed out TEDGlobal>NYC with a minimalistic, deeply transportive lyrical set. A harmony of voices, piano and cajon drum filled the venue as the pair sang in a mixture of Yoruba, English and French. “Look at the star,” they sing. “I know she’s proud of who you’ve been and who you are.”

TEDGlobal>NYC was made possible by support from Ford Foundation, The Skoll Foundation, United Nations Foundation and Global Citizen.

Worse Than FailureNews Roundup: EquiTF

We generally don’t do news roundups when yet another major company gets hacked and leaks personally compromising data about the public. We know that “big company hacked” isn’t news, it’s a Tuesday. So the Equifax hack didn’t seem like something worth spending any time to write an article about.

But then new things kept coming out. It got worse. And worse. And worse. It’s like if a dumpster caught on fire, but then the fire itself also caught on fire.

If you have been living under a rock, Equifax, a company that spies on the financial behavior of Americans and sells that intelligence to banks, credit card companies, and anyone else who’s paying, was hacked, and the culprits have everything they need to steal the identities of 143 million people.

The Equifax logo being flushed in a toilet, complete with some artsy motion blur

That’s bad, but everything else about it is worse. First, the executives kept the breach secret for months, and then sold stock just before the news went public. That is a move so utterly brazen that they might as well be a drunk guy with no shirt shouting, “Come at me bro! Come at me!” They’re daring the Securities and Exchange Commission to do something about it, and are confident that they won’t be punished.

Speaking of punishment, the CEO retired, and he’ll be crying about this over the $90M he’s collecting this year. The CIO and CSO went first, of course. They probably won’t be getting huge compensation packages, but I’m sure they’ll land cushy gigs somewhere.

Said CSO, by the way, had no real qualifications to be a Chief Security Officer. Her background is in music composition.

Now, I want to be really clear here: I don’t think her college degree is actually relevant. What you did in college isn’t nearly as important as your work experience, which is the real problem- she doesn’t really have that, either. She’s spent her entire career in “executive” roles, and while she was a CSO before going to Equifax, that was at First Data. Funny thing about First Data: up until 2013 (about when she left), it was in a death spiral that was fixed after some serious house-cleaning and restructuring- like clearing out dead-weight in their C-level.

Don't worry about the poor shareholders, though. Remember Wells Fargo, the bank that fraudulently signed up lots of people for accounts? They list Equifax as an investment opportunity that's ready to "outperform".

That’s the Peter Principle and corporate douchebaggerry in action, and it certainly starts getting me angry, but this site isn’t about class struggle- it’s about IT. And it’s on the IT side where the real WTFs come into play.

Equifax spies on you and sells the results. The US government put a mild restriction on this behavior: they can spy on you, but you have the right to demand that they stop selling the results. This is a “credit freeze”, and every credit reporting agency- every business like Equifax- has to do this. They get to charge you money for the privilege, but they have to do it.

To “secure” this transaction, when you freeze your credit, the credit reporting companies give you a “password” which you can use in the future to unfreeze it (because if you want a new credit card, you have to let Equifax share your data again). Some agencies give you a random string. Some let you choose your own password. Equifax used the timestamp on your request.

The hack itself was due to an unpatched Struts installation. The flaw itself is a pretty fascinating one, where a maliciously crafted XML file gets deserialized into a ProcessBuilder object. The flaw was discovered in March, and a patch was available shortly thereafter. Apache rightfully called it “Critical”, and encouraged all Struts users to apply the fix.

Even if they didn’t apply the fix, Apache provided workarounds- some of which were as simple as, “Turn off the REST plugin if you’re not using it,” or “if you ARE using it, turn off the XML part”. It’s certainly not the easiest fix, especially if you’re on a much older version of Struts, but you could even patch just the REST plugin, cutting down on the total work.
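For the “turn off the XML part” route, the published workaround amounted to a one-line struts.xml change: restrict which representations the REST plugin serves so the vulnerable XStream XML handler is never invoked. The snippet below is sketched from memory of the S2-052 advisory, so verify the exact guidance for your Struts version against the advisory itself:

```xml
<!-- struts.xml: limit the REST plugin to non-XML representations so the
     XStream-based XML handler (the deserialization entry point) is
     never reached -->
<constant name="struts.action.extension" value="xhtml,,json"/>
```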

Now, if you’re paying attention, you might be saying to yourself, “Hey, Remy, didn’t you say that they were breached (initially) in March? The month the bug was discovered? Isn’t it kinda reasonable that they wouldn’t have rolled out the fix in time?” Yes, that would be reasonable: if a flaw exposed in March was exploited within a few days or even weeks of the flaw being discovered, I could understand that. But remember, the breach that actually got announced was in July- they were breached in March, and they still didn’t apply the patch. This honestly makes it worse.

Even then, I’d argue that we’re giving them too much of the benefit of the doubt. I’m going to posit that they simply don’t care. Not only did they not apply the patch, they likely had no intention of applying the patch, because they assumed they’d get away with it. Remember: you are the product, not the customer. If they accidentally cut the sheep while shearing, it doesn’t matter: they’ve still got the wool.

As an example of “they clearly don’t care”, let’s turn our attention to their Argentinian branch, where their employee database was protected by the password admin/admin. Yes, with that super-secure password, you could log in from anywhere in the world and see users’ usernames, employee IDs, and personal details. Of course, their passwords were obscured as “******”… in the rendered DOM. A simple “View Source” would reveal the plaintext of their passwords, in true “hunter2” fashion.

Don’t worry, it gets dumber. Along with the breach announcement, Equifax took to social media to direct users to a site where, upon entering their SSN, it would tell them whether or not they were compromised. That was the promise, but the reality was that it was little better than flipping a coin. Worse, the site was a thinly veiled ad for their "identity protection" service, and the agreement contained an arbitration clause which kept you from suing them.

That is, at least if you went to the right site. Setting aside the wisdom of encouraging users to put confidential information into random websites, for weeks Equifax’s social media team was directing people to the wrong site! In fact, it was directing them to a site which warns about the dangers of putting confidential information into random websites.

And all of that, all of that, isn’t the biggest WTF. The biggest WTF is the Social Security Number, which was never meant to be used as a private identifier, but as it’s the closest thing to unique data about every American, it substitutes for a national identification system even when it’s clearly ill-suited to the task.

I’ll leave you with the CGP Grey video on the subject:

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaMichael Still: I think I found a bug in python's unittest.mock library

Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we've used various mock libraries to do that, with the flavor du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

The problem is that python mocks are magical. A mock is an object on which you can call any method name, and the mock will happily pretend it has that method and return yet another mock. You can then later ask what "methods" were called on the mock.

However, you use the same mock object later to make assertions about what was called. Herein lies the problem -- the mock object doesn't know whether you're the code under test or the code making assertions. So, if you fat-finger the assertion in your test code, the assertion quietly maps to a non-existent method, gets recorded like any other call, and your test passes.

Here's an example:

    from unittest import mock


    class foo(object):
        def dummy(a, b):
            return a + b


    @mock.patch.object(foo, 'dummy')
    def call_dummy(mock_dummy):
        f = foo()
        f.dummy(1, 2)

        print('Asserting a call should work if the call was made')
        mock_dummy.assert_has_calls([mock.call(1, 2)])
        print('Assertion for expected call passed')

        print('Asserting a call should raise an exception if the call wasn\'t made')
        mock_worked = False
        try:
            mock_dummy.assert_has_calls([mock.call(3, 4)])
        except AssertionError as e:
            mock_worked = True
            print('Expected failure, %s' % e)
        if not mock_worked:
            print('*** Assertion should have failed ***')

        print('Asserting a call where the assertion has a typo should fail, but '
              'doesn\'t')
        mock_worked = False
        try:
            mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
        except AssertionError as e:
            mock_worked = True
            print('Expected failure, %s' % e)
        if not mock_worked:
            print('*** Assertion should have failed ***')
        print(mock_dummy.mock_calls)


    if __name__ == '__main__':
        call_dummy()
If I run that code, I get this:

    $ python3 
    Asserting a call should work if the call was made
    Assertion for expected call passed
    Asserting a call should raise an exception if the call wasn't made
    Expected failure, Calls not found.
    Expected: [call(3, 4)]
    Actual: [call(1, 2)]
    Asserting a call where the assertion has a typo should fail, but doesn't
    *** Assertion should have failed ***
    [call(1, 2), call.typo_assert_has_calls([call(3, 4)])]

So, we should have been told that typo_assert_has_calls isn't a thing, but we didn't notice because it silently failed. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday.

I don't really have a solution to this right now (I'm home sick and not thinking straight), but it would be interesting to see what other people think.
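For what it's worth, one partial defence (not from the original post, just a sketch) is to give the mock a spec, for example via create_autospec or autospec=True in mock.patch. A spec'd mock restricts attribute lookups to the real object's API plus Mock's genuine assertion helpers, so a typo'd assertion method raises AttributeError instead of being silently recorded as a call:

```python
from unittest import mock


def dummy(a, b):
    return a + b


# autospec gives the mock the real function's "shape": attribute names
# that exist neither on the function nor on Mock itself raise
# AttributeError instead of returning a child mock.
m = mock.create_autospec(dummy)
m(1, 2)

m.assert_has_calls([mock.call(1, 2)])  # genuine assertion, passes

try:
    m.typo_assert_has_calls([mock.call(3, 4)])
except AttributeError as e:
    print('Typo caught: %s' % e)
```

This doesn't help with every possible typo, but it turns a silently-passing test into a loud failure for the case described above.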

Tags for this post: python unittest.mock mock testing
Related posts: Implementing SCP with paramiko; Packet capture in python; A pythonic example of recording metrics about ephemeral scripts with prometheus; mbot: new hotness in Google Talk bots; Starfish Prime; Calculating a SSH host key with paramiko


Planet DebianRuss Allbery: Review: The Seventh Bride

Review: The Seventh Bride, by T. Kingfisher

Publisher: 47North
Copyright: 2015
ISBN: 1-5039-4975-3
Format: Kindle
Pages: 225

There are two editions of this book, although only one currently for sale. This review is of the second edition, released in November of 2015. T. Kingfisher is a pen name for Ursula Vernon when she's writing for adults.

Rhea is a miller's daughter. She's fifteen, obedient, wary of swans, respectful to her parents, and engaged to Lord Crevan. The last was a recent and entirely unexpected development. It's not that she didn't expect to get married eventually, since of course that's what one does. And it's not that Lord Crevan was a stranger, since that's often how it went with marriage for people like her. But she wasn't expecting to get married now, and it was not at all clear why Lord Crevan would want to marry her in particular.

Also, something felt not right about the entire thing. And it didn't start feeling any better when she finally met Lord Crevan for the first time, some days after the proposal to her parents. The decidedly non-romantic hand kissing didn't help, nor did the smug smile. But it's not like she had any choice. The miller's daughter doesn't say no to a lord and a friend of the viscount. The miller's family certainly doesn't say no when they're having trouble paying the bills, the viscount owns the mill, and they could be turned out of their livelihood at a whim.

They still can't say no when Lord Crevan orders Rhea to come to his house in the middle of the night down a road that quite certainly doesn't exist during the day, even though that's very much not the sort of thing that is normally done. Particularly before the marriage. Friends of the viscount who are also sorcerers can get away with quite a lot. But Lord Crevan will discover that there's still a limit to how far he can order Rhea around, and practical-minded miller's daughters can make a lot of unexpected friends even in dire circumstances.

The Seventh Bride is another entry in T. Kingfisher's series of retold fairy tales, although the fairy tale in question is less clear than with The Raven and the Reindeer. Kirkus says it's a retelling of Bluebeard, but I still don't quite see that in the story. I think one could argue equally easily that it's an original story. Nonetheless, it is a fairy tale: it has that fairy tale mix of magical danger and practical morality, and it's about courage and friendships and their consequences.

It also has a hedgehog.

This is a T. Kingfisher story, so it's packed full of bits of marvelous phrasing that I want to read over and over again. It has wonderful characters, the hedgehog among them, and it has, at its heart, a sort of foundational decency and stubborn goodness that's deeply satisfying for the reader.

The Seventh Bride is a lot closer to horror than the other T. Kingfisher books I've read, but it never fell into my dislike of the horror genre, despite a few gruesome bits. I think that's because neither Rhea nor the narrator treat the horrific aspects as representative of the true shape of the world. Rhea instead confronts them with a stubborn determination and an attempt to make the best of each moment, and with a practical self-awareness that I loved reading about.

The problem with crying in the woods, by the side of a white road that leads somewhere terrible, is that the reason for crying isn't inside your head. You have a perfectly legitimate and pressing reason for crying, and it will still be there in five minutes, except that your throat will be raw and your eyes will itch and absolutely nothing else will have changed.

Lord Crevan, when Rhea finally reaches him, toys with her by giving her progressively more horrible puzzle tasks, threatening her with the promised marriage if she fails at any of them. The way this part of the book finally resolves is one of the best moments I've read in any book. Kingfisher captures an aspect of moral decisions, and a way in which evil doesn't work the way that evil people expect it to work, that I can't remember seeing an author capture this well.

There are a lot of things here for Rhea to untangle: the nature of Crevan's power, her unexpected allies in his manor, why he proposed marriage to her, and of course how to escape his power. The plot works, but I don't think it was the best part of the book, and it tends to happen to Rhea rather than being driven by her. But I have rarely read a book quite this confident of its moral center, or quite as justified in that confidence.

I am definitely reading everything Vernon has published under the T. Kingfisher name, and quite possibly most of her children's books as well. Recommended, particularly if you liked the excerpt above. There's an entire book full of paragraphs like that waiting for you.

Rating: 8 out of 10

Planet DebianDirk Eddelbuettel: RcppZiggurat 0.1.4


A maintenance release of RcppZiggurat is now on the CRAN network for R. It switched the vignette to our new pinp package and its two-column pdf default.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl---all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

The NEWS file entry below lists all changes.

Changes in version 0.1.4 (2017-07-27)

  • The vignette now uses the pinp package in two-column mode.

  • Dynamic symbol registration is now enabled.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianEnrico Zini: Systemd device units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.device units

Several devices are automatically represented inside systemd by .device units, which can be used to activate services when a given device exists in the file system.

See systemctl --all --full -t device to see a list of all devices for which systemd has a unit in your system.

For example, this .service unit plays a sound as long as a specific USB key is plugged in my system:

[Unit]
Description=Beeps while a USB key is plugged

[Service]
ExecStart=/bin/sh -ec 'while true; do /usr/bin/aplay -q /tmp/beep.wav; sleep 2; done'

If you need to work with a device not seen by default by systemd, you can add a udev rule that makes it available, by adding the systemd tag to the device with TAG+="systemd".

It is also possible to give the device an extra alias using ENV{SYSTEMD_ALIAS}="/dev/my-alias-name".

To figure out all you can use for matching a device:

  1. Run udevadm monitor --environment and plug the device
  2. Look at the DEVNAME= values and pick one that addresses your device the way you prefer
  3. udevadm info --attribute-walk --name=*the value of devname* will give you all you can use for matching in the udev rule.
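Putting those pieces together, here is a sketch of such a rule (the vendor/product IDs and the alias are made up for illustration; substitute the attributes found with udevadm info):

```
# /etc/udev/rules.d/99-my-usb-key.rules  (hypothetical IDs)
SUBSYSTEM=="block", ATTRS{idVendor}=="abcd", ATTRS{idProduct}=="1234", \
    TAG+="systemd", ENV{SYSTEMD_ALIAS}="/dev/my-usb-key"
```

After udevadm control --reload and re-plugging the device, systemctl -t device should show a dev-my\x2dusb\x2dkey.device unit that a service can hook with BindsTo= and After=.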


Planet DebianEnrico Zini: Qt cross-architecture development in Debian

Use case: use Debian Stable on amd64 development machines to develop Qt applications for Raspberry Pi or other smallish armhf devices.

Qt Creator is used as Integrated Development Environment, and it supports cross-compiling, running the built source on the target system, and remote debugging.

Debian Stable (vanilla or Raspbian) runs on both the host and the target systems, so libraries can be kept in sync, and both systems have access to a vast amount of libraries, with security support.

On top of that, armhf libraries can be installed with multiarch also in the host machine, so cross-builders have access to the exact same libraries as the target system.

This sounds like a dream system. But. We're not quite there yet.

cross-compile attempts

I tried cross compiling a few packages:

$ sudo debootstrap stretch cross
$ echo "strech_cross" | sudo tee cross/etc/debian_chroot
$ sudo systemd-nspawn -D cross
# dpkg --add-architecture armhf
# echo "deb-src stretch main" >> /etc/apt/sources.list
# apt update
# apt install --no-install-recommends build-essential crossbuild-essential-armhf

Some packages work:

# apt source bc
# cd bc-1.06.95/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
dh_auto_configure -- --prefix=/usr --with-readline
        ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=\${prefix}/include --mandir=\${prefix}/share/man --infodir=\${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-silent-rules --libdir=\${prefix}/lib/arm-linux-gnueabihf --libexecdir=\${prefix}/lib/arm-linux-gnueabihf --disable-maintainer-mode --disable-dependency-tracking --host=arm-linux-gnueabihf --prefix=/usr --with-readline
dpkg-deb: building package 'dc-dbgsym' in '../dc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc-dbgsym' in '../bc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'dc' in '../dc_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc' in '../bc_1.06.95-9_armhf.deb'.
 dpkg-genbuildinfo --build=binary
 dpkg-genchanges --build=binary >../bc_1.06.95-9_armhf.changes
dpkg-genchanges: info: binary-only upload (no source code included)
 dpkg-source --after-build bc-1.06.95
dpkg-buildpackage: info: binary-only upload (no source included)

With qmake based Qt packages, qmake is not configured for cross-building, probably because it is not currently supported:

# apt source pumpa
# cd pumpa-0.9.3/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
        qmake -makefile -nocache "QMAKE_CFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=.
          -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CXXFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CXXFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_LFLAGS_RELEASE=-Wl,-z,relro -Wl,-z,now"
          "QMAKE_LFLAGS_DEBUG=-Wl,-z,relro -Wl,-z,now" QMAKE_STRIP=: PREFIX=/usr
qmake: could not exec '/usr/lib/x86_64-linux-gnu/qt5/bin/qmake': No such file or directory
debian/rules:19: recipe for target 'build' failed
make: *** [build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2

With cmake based Qt packages it goes a little better in that it finds the cross compiler, pkg-config and some multiarch paths, but then it tries to run armhf moc, which fails:

# apt source caneda
# cd caneda-0.3.0/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
          -DCMAKE_SYSTEM_PROCESSOR=arm -DCMAKE_C_COMPILER=arm-linux-gnueabihf-gcc
CMake Error at /usr/lib/arm-linux-gnueabihf/cmake/Qt5Core/Qt5CoreConfig.cmake:27 (message):
  The imported target "Qt5::Core" references the file


  but this file does not exist.  Possible reasons include:

  * The file was deleted, renamed, or moved to another location.

  * An install or uninstall procedure did not complete successfully.

  * The installation package was faulty and contained


  but not all the files it references.

Note: Although I improvised a chroot to be able to fool around with it, I would use pbuilder or sbuild to do the actual builds.

Helmut suggests pbuilder --host-arch or sbuild --host.

Doing it the non-Debian way

This guide in the meantime explains how to set up a cross-compiling Qt toolchain in a rather dirty way, by recompiling Qt pointing it at pieces of the Qt deployed on the Raspberry Pi.

Following that guide, replacing the CROSS_COMPILE value with /usr/bin/arm-linux-gnueabihf- gave me a working qtbase, for which it is easy to create a Kit for Qt Creator that works, and supports linking applications with Debian development packages that do not use Qt.

However, at that point I need to recompile all dependencies that use Qt myself, and I quickly got stuck at that monster of QtWebEngine, whose sources embed the whole of Chromium.

Having a Qt based development environment in which I need to become the maintainer for the whole Qt toolchain is not a product I can offer to a customer. Cross compiling qmake based packages on stretch is not currently supported, so at the moment I had to suggest to postpone all plans for total world domination for at least two years.

Cross-building Debian

In the meantime, Helmut Grohne has been putting a lot of effort into making Debian packages cross-buildable:

helmut> enrico: yes, cross building is painful. we have ~26000 source packages. of those, ~13000 build arch-dep packages. of those, ~6000 have cross-satisfiable build-depends. of those, I tried cross building ~2300. of those 1300 cross built. so we are at about 10% working.

helmut> enrico: plus there are some 607 source packages affected by some 326 bugs with patches.

helmut> enrico: gogo nmu them

helmut> enrico: I've filed some 1000 bugs (most of them with patches) now. around 600 are fixed :)

He is doing it mostly alone, and I would like people not to be alone when they do a lot of work in Debian, so…

Join Helmut in the effort of making Debian cross-buildable!

Build any Debian package for any device right from the comfort of your own work computer!

Have a single development environment seamlessly spanning architecture boundaries, with the power of all that there is in Debian!

Join Helmut in the effort of making Debian cross-buildable!

Apply here, or join #debian-bootstrap on OFTC!

Cross-building Qt in Debian

mitya57 summarised the situation on the KDE team side:

mitya57> we have cross-building stuff on our TODO list, but it will likely require a lot of time and neither Lisandro nor I have it currently.

mitya57> see for a summary of what needs to be done.

mitya57> Any help or patches are always welcome :))


Helmut also suggested to use qemu-user-static to make the host system able to run binaries compiled for the target system, so that even if a non-cross-compiling Qt build tries to run moc and friends in their target architecture version, they would transparently succeed.

At that point, it would just be a matter of replacing compiler paths to point to the native cross-compiling gcc, and the build would not be slowed down by much.

Fixing bug #781226 would help in making it possible to configure a multiarch version of qmake as the qmake used for cross compiling.

I have not had a chance of trying to cross-build in this way yet.

In the meantime...

Having qtcreator able to work on an amd64 devel machine and deploy/test/debug remotely on an arm target machine, where both machine run debian stable and have libraries in sync, would be a great thing to have even though packages do not cross-build yet.

Helmut summarised the situation on IRC:

svuorela and others repeat that Qt upstream is not compatible with Debian's multiarch thinking, in that Qt upstream insists on having one toolchain for each pair of architectures, whereas the Debian way tends to be to make packages generic and split stuff such that it can be mixed and matched.

An example being that you need to run qmake (thus you need qmake for the build architecture), but qmake also embeds the relevant paths and you need to query it for them (so you need qmake for the host architecture)

Either you run it through qemu, or you have a particular cross qmake for your build/host pair, or you fix qt upstream to stop this madness

Building qmake in Debian for each host-target pair, even just limited to released architectures, would mean building Qt 100 times, and that's not going to scale.

I wonder:

  • can I have a qmake-$ARCH binary that can build a source using locally installed multiarch Qt libraries? Do I need to recompile and ship the whole of Qt, or just qmake?
  • is there a recipe for building a cross-building Qt environment that would be able to use Debian development libraries installed the normal multiarch way?
  • we can't do perfect yet, but can we do better than this?

Worse Than FailureCodeSOD: An Exception to the Rule

“Throw typed exceptions,” is generically good advice in a strongly typed language, like Java. It shouldn’t be followed thoughtlessly, but it’s a good rule of thumb. Some people may need a little more on the point, though.

Alexander L sends us this code:

  public boolean isCheckStarted (final String nr) throws CommonException {
    final BigDecimal sqlCheckStarted = executeDBBigDecimalQueryFirstResult (
        /* … query elided … */);

    CommonException commonException = new CommonException ("DB Query fail to get 'CheckStarted'");
    int checkStarted = -1;
    checkStarted = Integer.parseInt (Utility.bigDecimalToString (sqlCheckStarted));
    if (checkStarted == 1 || checkStarted == 0) {
      return checkStarted == 1 ? true : false;
    } else {
      throw commonException;
    }
  }

At a glance, it looks ugly, but the scope of its badness doesn’t really set in until Alexander fills some of the surrounding blanks:

  • CommonException is a generic class for failures in talking to the database
  • It is almost never caught directly anywhere in the code, and the rare places that do wrap it in a RuntimeException
  • executeDBBigDecimalQueryFirstResult throws a CommonException if the query failed.

It’s also important to note that Java captures the stack trace when an exception is created, not when it’s thrown, and this method is called from pretty deep in the stack, so that’s expensive.

And all of that isn’t even the worst. The “CheckStarted” field is apparently stored in the database as a Decimal type, or at least is fetched from the database that way. Its only legal values are “0” and “1”, making this a good bit of overkill. To round out the type madness, we convert it to a string only to parse it back into an int.
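To see just how roundabout the BigDecimal-to-String-to-int route is, here is a toy comparison (a sketch; the original's Utility helper isn't shown, so plain JDK calls stand in for it):

```java
import java.math.BigDecimal;

public class DecimalRoundTrip {
    // The snippet's route: render the decimal as text, then parse it back.
    static int viaString(BigDecimal d) {
        return Integer.parseInt(d.toPlainString());
    }

    // BigDecimal can hand over the int directly; intValueExact() even
    // throws ArithmeticException if the value has a fractional part.
    static int direct(BigDecimal d) {
        return d.intValueExact();
    }

    public static void main(String[] args) {
        BigDecimal sqlCheckStarted = new BigDecimal("1");
        System.out.println(viaString(sqlCheckStarted));  // 1
        System.out.println(direct(sqlCheckStarted));     // 1
    }
}
```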

And that’s still not the worst.

This line: return checkStarted == 1 ? true : false; That’s the kind of line that makes my skin crawl. It bugs me even more than using an if statement, because the author apparently knew enough to know about ternaries, but not enough to know about boolean expressions.
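Stripped to its essence (a toy class, not the original code), the ternary and the bare comparison are interchangeable, because the comparison already yields a boolean:

```java
public class TernaryNoise {
    // What the snippet does:
    static boolean viaTernary(int checkStarted) {
        return checkStarted == 1 ? true : false;
    }

    // What it should do -- the comparison IS the boolean:
    static boolean direct(int checkStarted) {
        return checkStarted == 1;
    }

    public static void main(String[] args) {
        for (int i : new int[] {0, 1, 2}) {
            // Both forms agree for every input
            System.out.println(viaTernary(i) == direct(i));
        }
    }
}
```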


Planet DebianDirk Eddelbuettel: RcppAnnoy 0.0.10

A few short weeks after the more substantial 0.0.9 release of RcppAnnoy, we have a quick bug-fix update.

RcppAnnoy is our Rcpp-based R integration of the nifty Annoy library by Erik. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours.

Michaël Benesty noticed that our getItemsVector() function didn't, ahem, do much besides crashing. Simple bug, they happen--now fixed, and a unit test added.

Changes in this version are summarized here:

Changes in version 0.0.10 (2017-09-25)

  • The getItemsVector() function no longer crashes (#24)

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianEnrico Zini: Systemd timer units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.timer units

Configure activation of other units (usually a .service unit) at some given time.

The functionality is similar to cron, with more features and a finer time granularity. For example, in Debian Stretch apt has a timer for running apt update which runs at a random time to distribute load on servers:

# /lib/systemd/system/apt-daily.timer
[Unit]
Description=Daily apt download activities

[Timer]
OnCalendar=*-*-* 6,18:00
RandomizedDelaySec=12h

[Install]
WantedBy=timers.target

The corresponding apt-daily.service file then only runs when the system is on mains power, to avoid unexpected battery drains for systems like laptops:

# /lib/systemd/system/apt-daily.service
[Unit]
Description=Daily apt download activities
ConditionACPower=true

[Service]
Type=oneshot
ExecStart=/usr/lib/apt/apt.systemd.daily update
Note that if you want to schedule tasks with an accuracy under a minute (for example to play a beep every 5 seconds when running on battery), you need to also configure AccuracySec= for the timer to a delay shorter than the default 1 minute.

This is how to make your computer beep when on battery:

# /etc/systemd/system/beep-on-battery.timer
[Unit]
Description=Beeps every 10 seconds

[Timer]
OnActiveSec=10s
OnUnitActiveSec=10s
AccuracySec=1s

[Install]
WantedBy=timers.target

# /etc/systemd/system/beep-on-battery.service
[Unit]
Description=Beeps when on battery
ConditionACPower=false

[Service]
Type=oneshot
ExecStart=/usr/bin/aplay /tmp/beep.wav


Krebs on SecurityBreach at Sonic Drive-In May Have Impacted Millions of Credit, Debit Cards

Sonic Drive-In, a fast-food chain with nearly 3,600 locations across 45 U.S. states, has acknowledged a breach affecting an unknown number of store payment systems. The ongoing breach may have led to a fire sale on millions of stolen credit and debit card accounts that are now being peddled in shadowy underground cybercrime stores, KrebsOnSecurity has learned.


The first hints of a breach at Oklahoma City-based Sonic came last week when I began hearing from sources at multiple financial institutions who noticed a recent pattern of fraudulent transactions on cards that had all previously been used at Sonic.

I directed several of these banking industry sources to have a look at a brand new batch of some five million credit and debit card accounts that were first put up for sale on Sept. 18 in a credit card theft bazaar previously featured here called Joker’s Stash:


This batch of some five million cards put up for sale today (Sept. 26, 2017) on the popular carding site Joker’s Stash has been tied to a breach at Sonic Drive-In. The first batch of these cards appear to have been uploaded for sale on Sept. 15.

Sure enough, two sources who agreed to purchase a handful of cards from that batch of accounts on sale at Joker’s discovered they all had been recently used at Sonic locations.

Armed with this information, I phoned Sonic, which responded within an hour that it was indeed investigating “a potential incident” at some Sonic locations.

“Our credit card processor informed us last week of unusual activity regarding credit cards used at SONIC,” reads a statement the company issued to KrebsOnSecurity. “The security of our guests’ information is very important to SONIC. We are working to understand the nature and scope of this issue, as we know how important this is to our guests. We immediately engaged third-party forensic experts and law enforcement when we heard from our processor. While law enforcement limits the information we can share, we will communicate additional information as we are able.”

Christi Woodworth, vice president of public relations at Sonic, said the investigation is still in its early stages, and the company does not yet know how many or which of its stores may be impacted.

The accounts apparently stolen from Sonic are part of a batch of cards that Joker’s Stash is calling “Firetigerrr,” and they are indexed by city, state and ZIP code. This geographic specificity allows potential buyers to purchase only cards that were stolen from Sonic customers who live near them, thus avoiding a common anti-fraud defense in which a financial institution might block out-of-state transactions from a known compromised card.

Malicious hackers typically steal credit card data from organizations that accept cards by hacking into point-of-sale systems remotely and seeding those systems with malicious software that can copy account data stored on a card’s magnetic stripe. Thieves can use that data to clone the cards and then use the counterfeits to buy high-priced merchandise from electronics stores and big box retailers.

Prices for the cards advertised in the Firetigerr batch are somewhat higher than for cards stolen in other breaches, likely because this batch is extremely fresh and unlikely to have been canceled by card-issuing banks yet.

Dumps available for sale on Joker’s Stash from the “FireTigerrr” base, which has been linked to a breach at Sonic Drive-In.

Most of the cards range in price from $25 to $50, and the price is influenced by a number of factors, including: the type of card issued (Amex, Visa, MasterCard, etc); the card’s level (classic, standard, signature, platinum, etc.); whether the card is debit or credit; and the issuing bank.

I should note that it remains unclear whether Sonic is the only company whose customers’ cards are being sold in this particular batch of five million cards at Joker’s Stash. There are some (as yet unconfirmed) indications that perhaps Sonic customer cards are being mixed in with those stolen from other eatery brands that may be compromised by the same attackers.

The last known major card breach involving a large nationwide fast-food chain impacted more than a thousand Wendy’s locations and persisted for almost nine months after it was first disclosed here. The Wendy’s breach was extremely costly for card-issuing banks and credit unions, which were forced to continuously re-issue customer cards that kept getting re-compromised every time their customers went back to eat at another Wendy’s.

Part of the reason Wendy’s corporate offices had trouble getting a handle on the situation was that most of the breached locations were not corporate-owned but instead independently-owned franchises whose payment card systems were managed by third-party point-of-sale vendors.

According to Sonic’s Wikipedia page, roughly 90 percent of Sonic locations across America are franchised.

Dan Berger, president and CEO of the National Association of Federally Insured Credit Unions, said he’s not looking forward to the prospect of another Wendy’s-like fiasco.

“It’s going to be the financial institution that makes them whole, that pays off the charges or replaces money in the customer’s checking account, or reissues the cards, and all those costs fall back on the financial institutions,” Berger said. “These big card breaches are going to continue until there’s a national standard that holds retailers and merchants accountable.”

Financial institutions also bear some of the blame for the current state of affairs. The United States is embarrassingly the last of the G20 nations to make the shift to more secure chip-based cards, which are far more expensive and difficult for criminals to counterfeit. But many financial institutions still haven’t gotten around to replacing traditional magnetic stripe cards with chip-based cards. According to Visa, 58 percent of the more than 421 million Visa cards issued by U.S. financial institutions were chip-based as of March 2017.

Likewise, retailers that accept chip cards may present a less attractive target to hackers than those that don’t. In March 2017, Visa said the number of chip-enabled merchant locations in the country reached two million, representing 44 percent of stores that accept Visa.

Google AdsenseAdSense now understands Bengali (Bangla)

Today, we’re excited to announce the addition of Bengali (Bangla), a language spoken by millions in Bangladesh, India and many other countries around the world, to the family of AdSense supported languages.

Interest in Bengali-language content has been growing steadily over the last few years. AdSense provides an easy way for publishers to monetize the content they create in Bengali, and helps advertisers looking to connect with the growing online Bengali audience reach them with relevant ads.

To start monetizing your Bengali (Bangla) content website with Google AdSense:

  1. Check the AdSense program policies and make sure your website is compliant.
  2. Add the AdSense code to start displaying relevant ads to your users.

Welcome to AdSense! Sign Up now!

Posted by: AdSense Internationalization Team

LongNowIs the Bristlecone Pine in Peril? An Interview with Great Basin Scientist Scotty Strachan

Earlier this month, the bristlecone pine, one of the oldest and most isolated organisms on Earth, found itself in unfamiliar territory: in the headlines. News outlets such as the Chicago Tribune and the Washington Post reported that the bristlecone pine was “in peril” and threatened by extinction due to a warming climate. The news came from a study published in Global Change Biology that suggested that the limber pine was “leapfrogging” the bristlecone as they “raced” up the mountains, with climate change acting as the “starting gun.”

Scotty Strachan, an environmental scientist at the University of Nevada, Reno, is skeptical of the statements that this finding “imperils” bristlecone. Strachan has a background in dendrochronology with a specific focus on the Great Basin, where the bristlecone pine grows. He has previously spoken at The Interval and has been collaborating with Long Now on bristlecone pine research on its property on Mt. Washington. We had a chance to sit down with Strachan and get his take on the study and the ideal relationship between what Strachan calls “short-term science” and “long-term science.”

The following has been edited for length and clarity.

Great Basin scientist Scotty Strachan

LONG NOW: When this study first came out, you commented that the press release was speculative. Could you elaborate on what you took issue with?

The bristlecone pine as a species does not exist inside one particular seasonal climatic envelope. But this paper makes this assumption [of uniform seasonality and stand dynamics]. This doesn’t represent the regional climate variability well, especially where bristlecone biogeography and potentially centennial-scale regeneration is concerned.

The study in question is based on data that continues recent work in the Great Basin by Constance I. Millar, who’s been working at treeline for decades. She came up with the idea that perhaps the lower [elevation] species of limber pine in the subalpine woodlands was “leapfrogging” over the bristlecone tree line in terms of its fifty-year recruitment pattern. [Recruitment refers to the addition of new individuals into a population or community.] The difference here is that she didn’t immediately rush to “species in peril” judgement like this press release emphasizes.

Photo by Scotty Strachan

The researcher [UC Davis PhD student Brian Smithers] went to a few sites where bristlecones have been studied previously in the Great Basin. But the bristlecone occur in more than twenty-five mountain ranges in the Great Basin, and in several cases are not co-located with limber pine.

The paper spends a good deal of time, as it should, talking about what is known and what has been studied about bristlecone regeneration—which in terms of long term multi-decadal work, is actually very little. Bristlecone live in a region where you have high [climate] variability both interannually and interdecadally.

The bristlecone pine are distributed in space from the White Mountains in the western Great Basin, where they have irregular summertime input of rain, to Mt. Washington, Great Basin National Park, and ranges in central Utah, where more often than not you have a significant summertime component of moisture that actually can alleviate drought conditions. The structure of healthy bristlecone stands across these ranges can be very different, and you can bet that regeneration processes vary accordingly.

LONG NOW: Why do you think there’s such an appetite for stories like these that sound an alarmist tone?

We see this as a recurring theme in science, and not just in the environmental fields: “We’re not telling anything new, but we’re going to make an alarmist story about it.” Sometimes these alarmist press runs can generate certain momentum inside agency mechanisms that lead policy and science down the wrong road with detrimental effects, particularly if the details of the system in question are not well-understood.

For instance, if you are the Bureau of Land Management, you control large swaths of the interior west, and you’re responsible for maintaining the viability of the land in some way. You have mixed mandates where you balance the current resource users of the land, cattle ranchers maybe, or solar farm industrialists, with sometimes competing conservation issues that range from sagebrush to horses. You’re being pulled in all these different directions, so which science is right?

We’ve seen over the last thirty years an uptick in the amount of agency time spent in conservation efforts rather than resource use, just broadly. So the question is, how are those funds being directed and then, is it always a good idea for people to actually mess with the landscape in a sort of conservation management approach? So conservation of the forests in California for the last many decades has resulted in the catastrophe that’s waiting to happen in any given watershed in terms of forest densities and the fires that come from that.

The same thing can happen when you have niche science that says we’re going to manage the shrublands for say, a single species, like the sage grouse. So if you poke your nose in the sage grouse issue you’ll find that hundreds of millions of dollars have been spent via combinations of special interest groups and researchers to help conserve sage grouse habitat only. Well, that includes lots of cutting down woodlands that are naturally growing, amid similar “alarmist” claims that the woodlands are “invasive,” when there is plenty of science out there that says in many locations that’s simply not true. Effects to soils, other bird populations, indigenous tradition, recurring management costs, and so forth are sidelined, and that’s a problem.

I’ll go back to a quote from Sierra bighorn sheep scientist John Wehausen:

“Ecology is quite messy statistically, unlikely to yield simple, clean answers. Be prepared to devote a long time if you want an adequate understanding at a system level; e.g. decades, not years…be open to the possibility that variables you never considered may be very important, relegating a lot of previous research to little more than preliminary study.”

And he’s talking about sheep, not bristlecone.

So you have this repeated approach of niche management as a rallying cry, to the detriment of many other considerations on the landscape system. If knee-jerk landscape-scale human interference gets extended to bristlecone, then yeah, I’d say risk to the species increases.

LONG NOW: What’s a better approach?

I look at it in terms of long-term science and short-term science. Short science operates like this: we go out there, we take a look at what’s happening, maybe we like what we see, maybe we don’t, we draw some conclusions based on what we can observe at the time, and we go forward and say: “This is what we think happened, and we need policy X.” That’s great, except that’s effectively snapshot science. The same is true even if you include, say, some modeling—this is done often, like ecological modeling based on climate models—and say, “here’s what we think has been going on for the last one hundred years and therefore our look is not necessarily a snapshot.” Very often that short science is stated as fact, absolute fact, and that is the problem.

So short science is good because you need to go out there, you need to do an intensive look at something and get a snapshot, so that somebody can follow that snapshot one year, ten years, fifty years from now. That’s what’s really critical. Drawing those conclusions and stating it as absolute fact without having any long science to back it up, especially when we’re talking about landscapes or ecosystems where you have multi-century cyclic behavior in the ecology, let alone any climate changes, now all of a sudden you’ve got a bit of a conundrum. To me, one without the other is not good science from the management or landscape interference point of view.

LONG NOW: So it’s not that you’re dismissing short term science. Rather, you’re saying the short term science should be informed by long term science.

Yes, and here’s the other thing. Very often the short science takes the easiest path, which means you aren’t studying the mechanisms so much as you are simply observing the current status of things. Obviously you have to start somewhere. So you can still do short science—by short in this context I mean less than decades—you can still do that and try to observe some mechanisms rather than rely on perhaps other mechanistic studies that came in some cases very long before you and may have been very rudimentary in nature. Scale is a critical issue, geographically and temporally. And something that I don’t usually see in papers that shoot for sweeping conclusions is a section that takes on “Sources of Uncertainty” and then lists them, explaining how each of those sources has either been controlled for, or if not controlled for then a reasoned, fact-filled explanation as to why the author believes the influence is negligible.

We’ve been working with the Long Now Foundation out on Mount Washington to study more of the mechanistic processes to do with Great Basin woodlands and bristlecone pine for a number of years now. We’ve got the first multi-year, continuous sub-daily record of bristlecone growth response to climate and interactions with seasonal resources and surrounding species, including limber pine, and the data are becoming more fascinating every year. We hope to run this study for decades. Yes, more of those papers are coming! That’s the kind of investigative approach that needs to be developed more around the west, and not just for bristlecone. You can’t manage what you don’t monitor. Investing in this kind of longer science and maintaining it is a huge challenge—because long science doesn’t write headlines, or at least, not until much much later! I think that the Long Now Foundation has a part to play in helping re-orient the dialogue around how short and long science differ, and also how each informs our views and interactions with the geography around us.  

Planet DebianColin Watson: A mysterious bug with Twisted plugins

I fixed a bug in Launchpad recently that led me deeper than I expected.

Launchpad uses Buildout as its build system for Python packages, and it’s served us well for many years. However, we’re using 1.7.1, which doesn’t support ensuring that packages required using setuptools’ setup_requires keyword only ever come from the local index URL when one is specified; that’s an essential constraint we need to be able to impose so that our build system isn’t immediately sensitive to downtime or changes in PyPI. There are various issues/PRs about this in Buildout (e.g. #238), but even if those are fixed it’ll almost certainly only be in Buildout v2, and upgrading to that is its own kettle of fish for other reasons. All this is a serious problem for us because newer versions of many of our vital dependencies (Twisted and testtools, to name but two) use setup_requires to pull in pbr, and so we’ve been stuck on old versions for some time; this is part of why Launchpad doesn’t yet support newer SSH key types, for instance. This situation obviously isn’t sustainable.

To deal with this, I’ve been working for some time on switching to virtualenv and pip. This is harder than you might think: Launchpad is a long-lived and complicated project, and it had quite a number of explicit and implicit dependencies on Buildout’s configuration and behaviour. Upgrading our infrastructure from Ubuntu 12.04 to 16.04 has helped a lot (12.04’s baseline virtualenv and pip have some deficiencies that would have required a more complicated bootstrapping procedure). I’ve dealt with most of these: for example, I had to reorganise a lot of our helper scripts (1, 2, 3), but there are still a few more things to go.

One remaining problem was that our Buildout configuration relied on building several different environments with different Python paths for various things. While this would technically be possible by way of building multiple virtualenvs, this would inflate our build time even further (we’re already going to have to cope with some slowdown as a result of using virtualenv, because the build system now has to do a lot more than constructing a glorified link farm to a bunch of cached eggs), and it seems like unnecessary complexity. The obvious thing to do seemed to be to collapse these into a single environment, since there was no obvious reason why it should actually matter if txpkgupload and txlongpoll were carefully kept off the path when running most of Launchpad: so I did that.

Then our build system got very sad.

Hmm, I thought. To keep our test times somewhat manageable, we run them in parallel across 20 containers, and we randomise the order in which they run to try to shake out test isolation bugs. It’s not completely unknown for there to be some oddities resulting from that. So I ran it again. Nope, but slightly differently sad this time. Furthermore, I couldn’t reproduce these failures locally no matter how hard I tried. Oh dear. This was obviously not going to be a good day.

In fact I spent a while on various different guesswork-based approaches. I found bug 571334 in Ampoule, an AMP-based process pool implementation that we use for some job runners, and proposed a fix for that, but cherry-picking that fix into Launchpad didn’t help matters. I tried backing out subsets of my changes and determined that if both txlongpoll and txpkgupload were absent from the Python module path in the context of the tests in question then everything was fine. I tried running strace locally and staring at the output for some time in the hope of enlightenment: that reminded me that the two packages in question install modules under twisted.plugins, which did at least establish a reason they might affect the environment that was more plausible than magic, but nothing much more specific than that.

On Friday I was fiddling about with this again and trying to insert some more debugging when I noticed some interesting behaviour around plugin caching. If I caused the txpkgupload plugin to raise an exception when loaded, the Twisted plugin system would remove its dropin.cache (because it was stale) and not create a new one (because there was now no content to put in it). After that, running the relevant tests would fail as I’d seen in our buildbot. Aha! This meant that I could also reproduce it by doing an even cleaner build than I’d previously tried to do, by removing the cached txpkgupload and txlongpoll eggs and allowing the build system to recreate them. When they were recreated, they didn’t contain dropin.cache, instead allowing that to be created on first use.

Based on this clue I was able to get to the answer relatively quickly. Ampoule has a specialised bootstrapping sequence for its worker processes that starts by doing this:

from twisted.application import reactors

Now, twisted.application.reactors.installReactor calls twisted.plugin.getPlugins, so the very start of this bootstrapping sequence is going to involve loading all plugins found on the module path (I assume it’s possible to write a plugin that adds an alternative reactor implementation). If dropin.cache is up to date, then it will just get the information it needs from that; but if it isn’t, it will go ahead and import the plugin. If the plugin happens (as Twisted code often does) to run from twisted.internet import reactor at some point while being imported, then that will install the platform’s default reactor, and then twisted.application.reactors.installReactor will raise ReactorAlreadyInstalledError. Since Ampoule turns this into an info-level log message for some reason, and the tests in question only passed through error-level messages or higher, this meant that all we could see was that a worker process had exited non-zero but not why.

The Twisted documentation recommends generating the plugin cache at build time for other reasons, but we weren’t doing that. Fixing that makes everything work again.
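Following that recommendation, a build step can prime the cache simply by enumerating all plugins once, since iterating `twisted.plugin.getPlugins` rewrites any stale `dropin.cache` it can write. A minimal sketch of such a step (it assumes Twisted is importable in the build environment, and degrades gracefully here if it isn't):

```python
# Build-time step: regenerate Twisted's dropin.cache by listing all plugins.
try:
    from twisted.plugin import IPlugin, getPlugins

    # Iterating the generator is what triggers the cache refresh.
    plugins = list(getPlugins(IPlugin))
    print(f"plugin cache primed ({len(plugins)} plugins seen)")
except ImportError:
    # Sketch only: Twisted is not installed in this environment.
    print("twisted not installed; nothing to prime")
```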

There are still a few more things needed to get us onto pip, but we’re now pretty close. After that we can finally start bringing our dependencies up to date.

Planet DebianNorbert Preining: Debian/TeX Live 2017.20170926-1

A full month or more has passed since the last upload of TeX Live, so it was high time to prepare a new package. Nothing spectacular here, I have to say: two small bugs fixed and the usual long list of updates and new packages.

Among the new packages, I found fontloader-luaotfload an interesting project. Loading fonts via Lua code in luatex is by now standard, and this package allows for experiments with newer/alternative font loaders. Another very interesting newcomer is pdfreview, which lets you set pages of another PDF on a lined background and add notes to it, good for reviewing.


New packages

abnt, algobox, beilstein, bib2gls, cheatsheet, coelacanth, dijkstra, dynkin-diagrams, endofproofwd, fetchcls, fixjfm, fontloader-luaotfload, forms16be, hithesis, ifxptex, komacv-rg, ku-template, latex-refsheet, limecv, mensa-tex, multilang, na-box, notes-tex, octave, pdfreview, pst-poker, theatre, upzhkinsoku, witharrows.

Updated packages

2up, acmart, acro, amsmath, animate, babel, babel-french, babel-hungarian, bangorcsthesis, beamer, beebe, biblatex-gost, biblatex-philosophy, biblatex-source-division, bibletext, bidi, bpchem, bxjaprnind, bxjscls, bytefield, checkcites, chemmacros, chet, chickenize, complexity, curves, cweb, datetime2-german, e-french, epstopdf, eqparbox, esami, etoc, fbb, fithesis, fmtcount, fnspe, fontspec, genealogytree, glossaries, glossaries-extra, hvfloat, ifptex, invoice2, jfmutil, jlreq, jsclasses, koma-script, l3build, l3experimental, l3kernel, l3packages, latexindent, libertinust1math, luatexja, lwarp, markdown, mcf2graph, media9, nddiss, newpx, newtx, novel, numspell, ocgx2, philokalia, phfqit, placeat, platex, poemscol, powerdot, pst-barcode, pst-cie, pst-exa, pst-fit, pst-func, pst-geometrictools, pst-ode, pst-plot, pst-pulley, pst-solarsystem, pst-solides3d, pst-tools, pst-vehicle, pst2pdf, pstricks, pstricks-add, ptex-base, ptex-fonts, pxchfon, quran, randomlist, reledmac, robustindex, scratch, skrapport, spectralsequences, tcolorbox, tetex, tex4ht, texcount, texdef, texinfo, texlive-docindex, texlive-scripts, tikzducks, tikzsymbols, tocloft, translations, updmap-map, uplatex, widetable, xepersian, xetexref, xint, xsim, zhlipsum.

Planet DebianIain R. Learmonth: SMS Verification

I’ve received an email today from Barclaycard with the following:

“From time to time, to make sure it’s you who’s using your Barclaycard online, we’ll send you a text with a verification code for you to use on the Verified by Visa screen that’ll pop up on your payment page.”

The proprietary nature of mobile phones with the hardware specifications and the software being closed off from inspection or audit and considered to be trade secrets make my phone and my tablet the least trusted devices I own and use.

Due to this lack of trust, I’ve often held back from using my phone or tablet for certain tasks where I can still get away with not doing so. I have experimented with having read-only access to my calendars and contacts to ensure that if my phone is compromised they can’t just be wiped out, though in the end I had to give in, as my calendar became too difficult to manage when new events had to be entered on paper first.

I wanted to try to reduce the attractiveness of compromising my phone. Anyone that really wants to have a go at my phone could probably get in. It’s an older Samsung Android phone on a UK network and software updates rarely come through in a timely manner. Anything that I give my phone access to is at risk and that risk needs to be balanced by some real world benefits.

These are just the problems with the phone itself. When you’re using SMS authentication, even with the most secure phone ever, you’re still going to be using the phone network. SMS authentication is about equivalent, in terms of the security it really offers, to your mobile phone number being your password when it comes to an even mildly motivated attacker. You probably don’t treat your mobile phone number as a password, nor does the provider or anyone you’ve given it to, so you can assume that it’s compromised.

Why are mobile phones so popular for two-factor (or, in an increasing number of cases, single-factor) authentication? Not because they improve security, but because they’re convenient and everyone has one. This seems like a bad plan.

CryptogramThe Data Tinder Collects, Saves, and Uses

Under European law, service providers like Tinder are required to show users what information they have on them when requested. This author requested, and this is what she received:

Some 800 pages came back containing information such as my Facebook "likes," my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened...the list goes on.

"I am horrified but absolutely not surprised by this amount of data," said Olivier Keyes, a data scientist at the University of Washington. "Every app you use regularly on your phone owns the same [kinds of information]. Facebook has thousands of pages about you!"

As I flicked through page after page of my data I felt guilty. I was amazed by how much information I was voluntarily disclosing: from locations, interests and jobs, to pictures, music tastes and what I liked to eat. But I quickly realised I wasn't the only one. A July 2017 study revealed Tinder users are excessively willing to disclose information without realising it.

"You are lured into giving away all this information," says Luke Stark, a digital technology sociologist at Dartmouth University. "Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can't feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality."

Reading through the 1,700 Tinder messages I've sent since 2013, I took a trip into my hopes, fears, sexual preferences and deepest secrets. Tinder knows me so well. It knows the real, inglorious version of me who copy-pasted the same joke to match 567, 568, and 569; who exchanged compulsively with 16 different people simultaneously one New Year's Day, and then ghosted 16 of them.

"What you are describing is called secondary implicit disclosed information," explains Alessandro Acquisti, professor of information technology at Carnegie Mellon University. "Tinder knows much more about you when studying your behaviour on the app. It knows how often you connect and at which times; the percentage of white men, black men, Asian men you have matched; which kinds of people are interested in you; which words you use the most; how much time people spend on your picture before swiping you, and so on. Personal data is the fuel of the economy. Consumers' data is being traded and transacted for the purpose of advertising."

Tinder's privacy policy clearly states your data may be used to deliver "targeted advertising."

It's not Tinder. Surveillance is the business model of the Internet. Everyone does this.

Worse Than FailureAn Emphasized Color

One of the major goals of many software development teams is to take tedious, boring, simplistic manual tasks and automate them. An entire data entry team can be replaced by a single well-written application, saving the company money, greatly improving processing time, and potentially reducing errors.

That is, if it’s done correctly.

Peter G. worked for a state government. One of his department’s tasks involved processing carbon copies of forms for most of the state’s residents. To save costs, improve processing time, and reduce the amount of manual data entry they had to perform, the department decided to automate the process and use optical character recognition (OCR) to scan in the carbon copies and convert the handwritten data into text which was eventually entered into a database.

A pile of paperwork on a desk, with an old style phone and a stream of light artistically highlighting the paper. By Aaron Logan [CC BY 2.5], via Wikimedia Commons

The software was written and the department received boxes and boxes and boxes worth of the carbon copy paper forms. The printer had a very long lead time, so they ordered their entire supply of forms for the state for the next year. There were so many boxes that Peter joked about building a castle with them.

Then the system went live. And it didn’t work, at all. Something was wrong with the OCR software and Peter was pulled into the project to help find a fix.

While researching the project history, he found that much of the data on the paper forms wasn’t required, and the decision was made to print those boxes in a different, very specific color. During processing, their custom OCR software would ignore that color, blanking out the box and removing the extraneous information before it was unnecessarily entered into the system. Since it still needed to be visible, but wasn’t important, they chose, with the help of their printer, Pantone 5507.

So he filled out a sample form for one “Homer J. Simpson” and scanned it to see what was meant by “The system doesn’t work.” The system briefly churned and created a record in the test database for his form, but when he inspected the record, it was missing the mandatory unique ID. This ID came from the paper form and was comparable to a license number or Social Security Number, and was absolutely required for the data to be usable.

He filled out a couple more forms in case the system was having trouble understanding his handwriting, but they came out the same way. No unique ID.

He scratched his head and examined the paper forms some more. Eventually, he realized the issue. The box for the unique ID was considered “important” but not “something for users to interact with”, and thus was de-emphasized, and printed in that different, very specific color that the OCR software ignored: Pantone 5507. So the ID was blanked out and ignored during scanning.

Being a competent developer, Peter quickly came up with a plan to add a step to the task. After scanning, but before handing off to the OCR task, a new task would do a simple color-based find-and-replace within a region of the scan to correct the color of the ID field so it wouldn’t be blanked out.
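The pre-OCR pass Peter had in mind amounts to a bounded color find-and-replace. A minimal sketch in plain Python follows; it is illustrative only, since the pixel-grid representation, the region coordinates, and the RGB stand-in for Pantone 5507 are all assumptions rather than details from the actual system:

```python
# Hypothetical sketch of Peter's proposed fix: before the OCR stage strips
# the "ignored" ink color, recolor just the unique-ID box so the ID survives.
# The RGB values below are stand-ins, not a real Pantone 5507 conversion.

PANTONE_5507_ISH = (180, 200, 195)  # assumed scanner rendering of the ink
OCR_SAFE_BLACK = (0, 0, 0)          # a color the OCR software keeps

def recolor_region(pixels, box, match, replacement, tolerance=30):
    """Replace `match`-colored pixels with `replacement`, but only inside
    the (left, top, right, bottom) region `box` of a row-major pixel grid."""
    left, top, right, bottom = box
    for y in range(top, bottom):
        for x in range(left, right):
            if all(abs(c - m) <= tolerance
                   for c, m in zip(pixels[y][x], match)):
                pixels[y][x] = replacement
    return pixels
```

Because the replacement runs only inside the ID field's bounding box, the rest of the form is untouched and the other de-emphasized boxes are still dropped as designed.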

“No, we don’t have time or money for that,” his manager explained to him. “I’ll have the offshore guys fix it for next year. For now, just cobble something together so the original scan stays with the record.”

The department hired a team of interns to perform manual data entry for the year, whose sole task was to sift through the database records, pull up the corresponding scan, and read and type in the single unique ID field that the OCR software ignored. Meanwhile, the department promised that something bigger, better, and fancier was on the way for next year…


Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #126

Here's what happened in the Reproducible Builds effort between Sunday September 17th and Saturday September 23rd 2017:

Media coverage

  • Christos Zoulas gave a talk entitled Reproducible builds on NetBSD at EuroBSDCon 2017

Reproducible work in other packages

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

1 package review was added, 49 have been updated and 54 have been removed this week, adding to our knowledge about identified issues.

One issue type was updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (56)
  • Bas Couwenberg (1)
  • Helmut Grohne (1)
  • Nobuhiro Iwamatsu (2)

diffoscope development

Version 87 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

strip-nondeterminism development

reprotest development

Version 0.7 was uploaded to unstable by Ximin Luo:

Vagrant Cascadian and Holger Levsen:

  • Re-add an armhf build node that had been disabled due to performance issues, but works with Linux 4.14-rc1 now! #876212

Holger Levsen:


This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianRuss Allbery: Review: Artemis Fowl

Review: Artemis Fowl, by Eoin Colfer

Series: Artemis Fowl #1
Publisher: Disney-Hyperion
Copyright: 2001
ISBN: 1-4231-2452-9
Format: Kindle
Pages: 281

Artemis Fowl is the heir to the Fowl criminal empire and a child prodigy. He's also one of the few humans to know of the existence of fairies, who are still present in the world, hiding from humans and living by their own rules. As the book opens, he's in search of those rules: a copy of the book that governs the lives of fairies. With that knowledge, he should be able to pull off a heist worthy of his family's legacy.

Captain Holly Short is a leprechaun... or, more correctly, a LEPrecon. She's one of the fairy police officers that investigate threats to the fairies who are hiding in a vast underground civilization. The fairies have magic, but they also have advanced (and miniaturized) technology, maintained in large part by a grumpy and egotistical centaur (named Foaly, because it's that sort of book). She's also the fairy unlucky enough to be captured by Artemis's formidable personal bodyguard on their first attempt to kidnap a hostage for their ransom demands.

This is the first book of a long series of young adult novels that has also spawned graphic novels and a movie currently in production. It has that lean and clear feeling of the younger side of young adult writing: larger-than-life characters who are distinctive and easy to remember, a short introductory setup that dives directly into the main plot, and a story that neatly pulls together every element raised in the story. The world-building is its strongest point, particularly the mix of tongue-in-cheek technology — ships that ride magma plumes, mechanical wings, and helmet-mounted lights to blind trolls — and science-tinged magic that the fairies build their police and army on. Fairies are far beyond humans in capability, and can be deadly and ruthless, but they have to follow a tightly constrained set of rules that are often not convenient.

Sadly, the characters don't live up to the world-building. I did enjoy a few of them, particularly Artemis's loyal bodyguards and the dwarf Mulch Diggums. But Holly, despite being likable, is a bit of a blank slate: the empathetic, overworked trooper who is mostly indistinguishable from other characters in similar stories. The gruff captain, the sarcastic technician Foaly, and the various other LEP agents all felt like they were taken straight from central casting. And then there's Artemis himself.

Artemis is the protagonist of the story, in that he's the one who initiates all of the action and the one who has the most interesting motivations. The story is about him, as the third-person narrator in the introduction makes clear. He's trying very hard to be a criminal genius with the deductive abilities of Sherlock Holmes and the speaking style of a Bond villain, but he's also twelve, his father has disappeared, and his mother is going slowly insane. I picked this book up on the recommendation of another reader who found that contrast compelling.

Unfortunately, I thought Artemis was just an abusive jerk. Yes, yes, family tragedy, yes, he's trapped in his conception of himself, but he's arrogant, utterly uncaring about how his actions affect other people, and dismissive and cruel even to his bodyguards (who are much better friends than he deserves). I think liking this book requires liking Artemis at least well enough to consider him an anti-hero, and I can squint and see that appeal if you have that reaction. But I just wanted him to lose. Not in the "you will be slowly redeemed over the course of a long series" way, but in the "you are a horrible person and I hope you get what's coming to you" way. The humor of the fairy parts of the book was undermined too much by the fact that many of them would like to kill Artemis for real, and I mostly wanted them to succeed.

This may or may not have to do with my low tolerance for egotistical smart-asses who order other people to do things that they refuse to explain.

Without some appreciation for Artemis, this is a story with some neat world-building, a fairly generic protagonist in Holly, and a plot in which the bad guys win. To make matters worse, I thought the supposedly bright note at the end of the story was just creepy, as was everything else involving Artemis's mother. The review I read was of the first three books, so it's entirely possible that this series gets better as it goes along, but there wasn't enough I enjoyed in the first book for me to keep reading.

Followed by Artemis Fowl: The Arctic Incident.

Rating: 5 out of 10


Planet DebianEnrico Zini: Systemd mount and swap units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.mount and .swap units

These describe mount points and swap areas much like /etc/fstab does, but with more functionality and integration with the dependency system.

It is possible to define, for example, a filesystem that should be mounted only when the network is available and a given service has successfully started, and a service that should be started only after a given filesystem has been successfully mounted.

At boot, systemd uses systemd-fstab-generator to generate mount and swap units from /etc/fstab, so that the usual fstab configuration file can still be used to configure the file system layout.

See man systemd.mount, and man systemd.swap.

See systemctl --all -t mount and systemctl --all -t swap for examples.
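As an illustration of the network-dependent case above, here is a hand-written mount unit; the device label, mount point, and filesystem type are invented for the example:

```ini
# /etc/systemd/system/mnt-data.mount
# The file name must match the mount point: /mnt/data -> mnt-data.mount
[Unit]
Description=Example data filesystem that waits for the network
Requires=network-online.target
After=network-online.target

[Mount]
What=/dev/disk/by-label/data
Where=/mnt/data
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target
```

A service that needs this filesystem can then declare RequiresMountsFor=/mnt/data in its own unit so that it is started only after the mount has succeeded.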

Planet DebianSteve Kemp: Started work on an internet-of-things Radio

So recently I was in York at the Bytemark office, and I read a piece about building a radio in a Raspberry Pi magazine. It got me curious, so when I got home to sunny Helsinki I figured I'd have a stab at it.

I don't have a fixed goal in mind, but what I do have is:

  • A WeMos Mini D1
    • Cost €3.00
    • ESP8266-powered board, which can be programmed easily in C++ and contains on-board WiFi as well as a bunch of I/O pins.
  • A RDA5807M FM Radio chip.
    • Cost 37 cents.
    • With a crystal for support.

The initial goal is simple: wire the receiver/decoder to the board, and listen to the radio.

After that there are obvious extensions, such as adding an LCD display to show the frequency (What's the frequency, Kenneth?), and later to show the station details, via RDS.

Finally I could add some buttons/switches/tweaks for selecting next/previous stations, and adjusting the volume. Initially that'll be handled by pointing a browser at the IP-address of the device.

The first attempt at using the RDA5807M chip was a failure, as the thing was too damn small and non-standard in size. Adding header-pins to the chip was almost impossible, and when I did get them soldered on, the thing just gave me static hisses.

However I later read the details of the chip more carefully and realized that it isn't powerful enough to drive (even) headphones. It requires an amp of some kind. With that extra knowledge I was able to send the output to the powered speakers I have sat beside my PC.

My code is basic, it sets up the FM-receiver/decoder, and scans the spectrum. When it finds a station it outputs the name over the serial console, via RDS, and then just plays it.

I've got a PAM8403-based amplifier board on order; when that arrives I'll get back to the project, and hook up WiFi and a simple web-page to store stations, tuning, etc.

My "token goal" at the moment is a radio that switches on at 7AM and switches off at 8AM. In addition to that it'll serve a web-page allowing interactive control, regardless of any buttons that are wired in.

I also have another project in the wings. I've ordered a software-defined radio (USB-toy) which I'm planning to use to plot aircraft in real-time, as they arrive/depart/fly over Helsinki. No doubt I'll write that up too.

TEDMeet the Fall 2017 class of TED Residents


The goal of the TED Residency is to incubate breakthrough projects of all kinds. Our Residents come from many areas of expertise, backgrounds and regions — and when they meet each other, new ideas spark. Here, two new Residents, “chief reading inspirer” Alvin Irby and filmmaker Karen Palmer, meet at the TED office on September 11, 2017, in New York. Photo: Dian Lofton / TED

On September 11, TED welcomed its latest class to the TED Residency program, an in-house incubator for breakthrough ideas. Residents spend four months in TED’s New York headquarters with other exceptional people from all over the map—including the Netherlands, the UK, Tennessee and Georgia.

The new Residents include:

  • A filmmaker creating a movie experience that progresses using your reaction
  • An entrepreneur bringing reading spaces to unlikely places
  • A journalist advocating for better support for women after they’ve given birth
  • An artist looking to bring more humanity to citizens of North Korea

Tobacco Brown is an artist whose medium is plants and gardens. In her public art installations, she comments on sociopolitical realities by bringing nature to underinvested urban environments. During her Residency, she is turning her lifetime of experiences into a book.

A former foreign-aid worker and White House staffer, Stan Byers is an expert on emerging markets, geopolitical stability and security. His current project is applying AI to the Fragile States Index to identify more innovative and effective responses to state instability. He is working to incorporate more real-time data sources and, long-term, to help design more equitable, creative and resilient social and market structures.

William Frey is a qualitative researcher and digital ethnographer at Columbia University who is using machine learning to detect patterns in social media posts and police reports to map the genesis of violence. His goal is to spot imminent violence before it erupts and then alert communities to intervene.


Inside the TED office theater, TED Residency program manager Katrina Conanan and director Cyndi Stivers welcome the new class of Residents and alumni on September 11, 2017, in New York. Photo: Dian Lofton / TED

Alvin Irby is the founder and “chief reading inspirer” at Barbershop Books, which creates child-friendly reading spaces in barbershops across America to encourage young Black boys to read for fun. He is developing an education podcast to share insights about helping children of color realize their full potential.

London-based filmmaker Karen Palmer uses AI interactive stories to inspire and enlighten her audience. Her current project, RIOT, is a live-action film with 3D sound that helps viewers navigate through a dangerous riot. She uses facial recognition and machine-learning technology to give viewers real-time feedback about their own visceral reactions.

Web designer Derrius Quarles is the cofounder and CTO of BREAUX Capital, a financial wellness startup devoted to Black millennials. Using a combination of technology, education, and behavioral economics, he hopes to break down the systemic barriers to financial health that people of color have long faced.


From left, TED Residency alum Liz Jackson, a fashion designer and activist from our very first class, chats with new Residents Anouk Wipprecht, a fashion designer and technologist, and animator Eiji Han Shimizu, during our meetup on September 11, 2017, in New York. Photo: Dian Lofton / TED

Michael Rain is the creator of Enodi, a digital gallery that highlights the stories of first-generation Black immigrants of African, Caribbean and Latinx descent. He is also cofounder of ZNews Africa, which makes mobile, web and email products for the global Pan-African community.

Kifah Shah is cofounder of SuKi Se, an ethical fashion brand produced by artisans in Pakistan. Her company strives to offer access to technologies that ensure high production standards and inclusive supply chains. Kifah is also a digital campaign strategist for MPower Change.

How do organizations hire better employees? That is a question Jason Shen has been thinking about through his company Headlight, a platform for tech employers to manage assignments, and The Talent Playbook, an open-source repository of best practices for hiring.

Eiji Han Shimizu is a creative activist from Japan who uses animation and graphic novels to galvanize his audiences. His current project is an animated film depicting the stories of North Korean political prisoners and ordinary people whose lives are hidden behind the headlines.

Bob Stein has long been in the vanguard: Immersed in radical politics as a young man, he grew into one of the founding fathers of new media (Criterion, Voyager, Institute for Future of the Book). He’s wondering what sorts of new rituals and traditions might emerge as society expands to include increasing numbers of people in their eighties and nineties.


Kifah Shah, cofounder of SuKi Se, chats during our residents meetup on September 11, 2017, in New York. Photo: Dian Lofton / TED

Malika Whitley is the Atlanta-based CEO and founder of ChopArt, an organization for homeless teens focused on mentorship, dignity and opportunity through the arts. ChopArt partners with local shelters and homeless organizations to provide multidisciplinary arts programming in Atlanta, New Orleans, Hyderabad and Accra.

Anouk Wipprecht is a Dutch designer and engineer whose work combines fashion and technology in what she calls “technical couture.” Her garments augment everyday interactions, using sensors, machine learning and animatronics; her designs move, breathe and react to the environment around them.

Allison Yarrow is a journalist and documentary producer examining how women recover from childbirth during what’s known as the Fourth Trimester. Particularly in the US, Allison argues, society and healthcare tend to focus on the health of babies, while the well-being of mothers is overlooked.

If you would like to be a part of the Spring 2018 TED Residency (which runs March 12 to June 15, 2018), applications open on November 1, 2017. For more information on requirements, and an advance peek at the application form, please see

Krebs on SecuritySource: Deloitte Breach Affected All Company Email, Admin Accounts

Deloitte, one of the world’s “big four” accounting firms, has acknowledged a breach of its internal email systems, British news outlet The Guardian revealed today. Deloitte has sought to downplay the incident, saying it impacted “very few” clients. But according to a source close to the investigation, the breach dates back to at least the fall of 2016, and involves the compromise of all administrator accounts at the company as well as Deloitte’s entire internal email system.


In a story published Monday morning, The Guardian said a breach at Deloitte involved usernames, passwords and personal data on the accountancy’s top blue-chip clients.

“The Guardian understands Deloitte clients across all of these sectors had material in the company email system that was breached,” The Guardian’s Nick Hopkins wrote. “The companies include household names as well as US government departments. So far, six of Deloitte’s clients have been told their information was ‘impacted’ by the hack.”

In a statement sent to KrebsOnSecurity, Deloitte acknowledged a “cyber incident” involving unauthorized access to its email platform.

“The review of that platform is complete,” the statement reads. “Importantly, the review enabled us to understand precisely what information was at risk and what the hacker actually did and to determine that only very few clients were impacted [and] no disruption has occurred to client businesses, to Deloitte’s ability to continue to serve clients, or to consumers.”

However, information shared by a person with direct knowledge of the incident said the company in fact does not yet know precisely when the intrusion occurred, or for how long the hackers were inside of its systems.

This source, speaking on condition of anonymity, said the team investigating the breach focused their attention on a company office in Nashville known as the “Hermitage,” where the breach is thought to have begun.

The source confirmed The Guardian reporting that current estimates put the intrusion sometime in the fall of 2016, and added that investigators still are not certain that they have completely evicted the intruders from the network.

Indeed, it appears that Deloitte has known something was not right for some time. According to this source, the company sent out a “mandatory password reset” email on Oct. 13, 2016 to all Deloitte employees in the United States. The notice stated that employee passwords and personal identification numbers (PINs) needed to be changed by Oct. 17, 2016, and that employees who failed to do so would be unable to access email or other Deloitte applications. The message also included advice on how to pick complex passwords:


A screen shot of the mandatory password reset message Deloitte sent to all U.S. employees in Oct. 2016, around the time sources say the breach was first discovered.

The source told KrebsOnSecurity they were coming forward with information about the breach because, “I think it’s unfortunate how we have handled this and swept it under the rug. It wasn’t a small amount of emails like reported. They accessed the entire email database and all admin accounts. But we never notified our advisory clients or our cyber intel clients.”

“Cyber intel” refers to Deloitte’s Cyber Intelligence Centre, which provides 24/7 “business-focused operational security” to a number of big companies, including CSAA Insurance, FedEx, Invesco, and St. Joseph’s Healthcare System, among others.

This same source said forensic investigators identified several gigabytes of data being exfiltrated to a server in the United Kingdom. The source further said the hackers had free rein in the network for “a long time” and that the company still does not know exactly how much total data was taken.

In its statement about the incident, Deloitte said it responded by “implementing its comprehensive security protocol and initiating an intensive and thorough review which included mobilizing a team of cyber-security and confidentiality experts inside and outside of Deloitte.” Additionally, the company said it contacted governmental authorities immediately after it became aware of the incident, and that it contacted each of the “very few clients impacted.”

“Deloitte remains deeply committed to ensuring that its cyber-security defenses are best in class, to investing heavily in protecting confidential information and to continually reviewing and enhancing cyber security,” the statement concludes.

Deloitte has not yet responded to follow-up requests for comment.  The Guardian reported that Deloitte notified six affected clients, but Deloitte has not said publicly yet when it notified those customers.

Deloitte has a significant cybersecurity consulting practice globally, wherein it advises many of its clients on how best to secure their systems and sensitive data from hackers. In 2012, Deloitte was ranked #1 globally in security consulting based on revenue.

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a private company based in the United Kingdom. According to the company’s Web site, Deloitte has more than 263,000 employees at member firms delivering services in audit and assurance, tax, consulting, financial advisory, risk advisory, and related services in more than 150 countries and territories. Revenues for the fiscal year 2017 were $38.8 billion.

The breach at the big-four accountancy comes on the heels of a massive breach at big-three consumer credit bureau Equifax. That incident involved several months of unauthorized access in which intruders stole Social Security numbers, birth dates, and addresses on 143 million Americans.

This is a developing story. Any updates will be posted as available, and noted with update timestamps.

Google AdsenseBoost your multi-screen strategy with AdSense Responsive ads

We know one of the biggest challenges publishers currently face is designing websites that adapt to different screen sizes, resolutions and user needs. Responsive ad units help you deliver the best possible user experience on your pages: you can dynamically control the presentation of your website according to the properties of the screen or device it's being viewed on. Responsive ads automatically adapt to the size of your user's screen, meaning publishers can spend more time creating great content, and less time thinking about the size of their ads.

Today we’re happy to share some product updates to complement and strengthen your strategy, with new features for our responsive ad units and a multi-screen optimization score now available.

The new full width ads on mobile devices
Our experiments show that full-width responsive ads perform better on mobile devices in portrait mode. Previously, responsive ads fitted standard sizes; the new launch will now automatically expand ads to the full width of the user's screen when their device is oriented vertically.

To implement the new full-width responsive ads, you can simply create a responsive ad unit in your AdSense account.
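For reference, the code the ad unit generates is a standard responsive AdSense tag; the publisher and slot IDs below are placeholders you would replace with your own:

```html
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<!-- data-ad-format="auto" makes the unit responsive; on vertically
     oriented mobile screens it can now expand to the full width. -->
<ins class="adsbygoogle"
     style="display:block"
     data-ad-client="ca-pub-XXXXXXXXXXXXXXXX"
     data-ad-slot="1234567890"
     data-ad-format="auto"></ins>
<script>(adsbygoogle = window.adsbygoogle || []).push({});</script>
```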

Best practices to help improve your mobile performance

We are also happy to share with you other best practices to help improve your mobile performance. Check out this video to get tips on how to create an excellent mobile experience for your users and potentially increase your mobile revenue. Let's get started!

More information on Responsive ad units can also be found in our Help Center.

We look forward to hearing your thoughts on these new features!

Posted by: The AdSense Team

Planet DebianChris Lamb: Lintian: We are all Perl developers now

Lintian is a static analysis tool for Debian packages, reporting on various errors, omissions and general quality-assurance issues to maintainers.

I've previously written about my exploits with Lintian as well as authoring a short tutorial on how to write your own Lintian check.

Anyway, I recently uploaded version 2.5.53, about two months after the previous release. The biggest changes you may notice are support for the latest version of the Debian Policy as well as the addition of checks to encourage the migration to Python 3.

Thanks to all who contributed patches, code review and bug reports to this release. The full changelog is as follows:

lintian (2.5.53) unstable; urgency=medium

  The "we are all Perl developers now" release.

  * Summary of tag changes:
    + Added:
      - alternatively-build-depends-on-python-sphinx-and-python3-sphinx
      - build-depends-on-python-sphinx-only
      - dependency-on-python-version-marked-for-end-of-life
      - maintainer-script-interpreter
      - missing-call-to-dpkg-maintscript-helper
      - node-package-install-in-nodejs-rootdir
      - override-file-in-wrong-package
      - package-installs-java-bytecode
      - python-foo-but-no-python3-foo
      - script-needs-depends-on-sensible-utils
      - script-uses-deprecated-nodejs-location
      - transitional-package-should-be-oldlibs-optional
      - unnecessary-testsuite-autopkgtest-header
      - vcs-browser-links-to-empty-view
    + Removed:
      - debug-package-should-be-priority-extra
      - missing-classpath
      - transitional-package-should-be-oldlibs-extra

  * checks/
    + [CL] Fix an apache2-unparsable-dependency false positive by allowing
      periods (".") in dependency names.  (Closes: #873701)
  * checks/
    + [CL] Apply patches from Guillem Jover & Boud Roukema to improve the
      description of the binary-file-built-without-LFS-support tag.
      (Closes: #874078)
  * checks/changes.{pm,desc}:
    + [CL] Ignore DFSG-repacked packages when checking for upstream
      source tarball signatures as they will never match by definition.
      (Closes: #871957)
    + [CL] Downgrade severity of orig-tarball-missing-upstream-signature
      from "E:" to "W:" as many common tools do not make including the
      signatures easy enough right now.  (Closes: #870722, #870069)
    + [CL] Expand the explanation of the
      orig-tarball-missing-upstream-signature tag to include the location
      of where dpkg-source will look. Thanks to Theodore Ts'o for the
  * checks/
    + [CL] Address a number of issues in copyright-year-in-future:
      - Prevent false positives in port numbers, email addresses, ISO
        standard numbers and matching specific and general street
        addresses.  (Closes: #869788)
      - Match all violating years in a line, not just the first (eg.
      - Ignore meta copyright statements such as "Original Author". Thanks
        to Thorsten Alteholz for the bug report.  (Closes: #873323)
      - Expand testsuite.
  * checks/cruft.{pm,desc}:
    + [CL] Downgrade severity of file-contains-fixme-placeholder
      tag from "important" (ie. "E:") to "wishlist" (ie. "I:").
      Thanks to Gregor Herrmann for the suggestion.
    + [CL] Apply patch from Alex Muntada (alexm) to use "substr" instead
      of "substring" in mentions-deprecated-usr-lib-perl5-directory's
      description.  (Closes: #871767)
    + [CL] Don't check copyright_hints file for FIXME placeholders.
      (Closes: #872843)
    + [CL] Don't match quoted "FIXME" variants as they are almost always
      deliberate. Thanks to Adrian Bunk for the report.  (Closes: #870199)
    + [CL] Avoid false positives in missing source checks for "CSS Browser
      Selector".  (Closes: #874381)
  * checks/
    + [CL] Prevent a false positive of
      missing-build-dependency-for-dh_-command that can be exposed by
      following the advice for the recently added
      useless-autoreconf-build-depends tag.  (Closes: #869541)
  * checks/debian-readme.{pm,desc}:
    + [CL] Ensure readme-debian-contains-debmake-template also checks
      for templates "Automatically generated by debmake".
  * checks/description.{desc,pm}:
    + [CL] Clarify explanation of description-starts-with-leading-spaces
      tag. Thanks to Taylor Kline  for the report
      and patch.  (Closes: #849622)
    + [NT] Skip capitalization-error-in-description-synopsis for
      auto-generated packages (such as dbgsym packages).
  * checks/fields.{desc,pm}:
    + [CL] Ensure that python3-foo packages have "Section: python", not
      just python2-foo.  (Closes: #870272)
    + [RG] Do no longer require debug packages to be priority extra.
    + [BR] Use Lintian::Data for name/section mapping
    + [CL] Check for packages including "?rev=0&sc=0" in Vcs-Browser.
      (Closes: #681713)
    + [NT] Transitional packages should now be "oldlibs/optional" rather
      than "oldlibs/extra".  The related tag has been renamed accordingly.
  * checks/
    + [NT] Skip the check on auto-generated binary packages (such as
      dbgsym packages).
  * checks/files.{pm,desc}:
    + [BR] Avoid privacy-breach-generic false positives for legal.xml.
    + [BR] Detect install of node package under /usr/lib/nodejs/[^/]*$
    + [CL] Check for packages shipping compiled Java class files. Thanks
      Carnë Draug .  (Closes: #873211)
    + [BR] Privacy breach is no longer experimental.
  * checks/init.d.desc:
    + [RG] Do not recommend a versioned dependency on lsb-base in
      init.d-script-needs-depends-on-lsb-base.  (Closes: #847144)
  * checks/
    + [CL] Additionally consider .cljc files as code to avoid false-
      positive codeless-jar warnings.  (Closes: #870649)
    + [CL] Drop problematic missing-classpath check.  (Closes: #857123)
  * checks/menu-format.desc:
    + [CL] Prevent false positives in desktop-entry-lacks-keywords-entry
      for "Link" and "Directory" .desktop files.  (Closes: #873702)
  * checks/python.{pm,desc}:
    + [CL] Split out Python checks from "scripts" check to a new, source
      check of type "source".
    + [CL] Check for python-foo without corresponding python3-foo packages
      to assist in Python 2.x deprecation.  (Closes: #870681)
    + [CL] Check for packages that Build-Depend on python-sphinx only.
      (Closes: #870730)
    + [CL] Check for packages that alternatively Build-Depend on the
      Python 2 and Python 3 versions of Sphinx.  (Closes: #870758)
    + [CL] Check for binary packages that depend on Python 2.x.
      (Closes: #870822)
  * checks/
    + [CL] Correct false positives in
      unconditional-use-of-dpkg-statoverride by detecting "if !" as a
      valid shell prefix.  (Closes: #869587)
    + [CL] Check for missing calls to dpkg-maintscript-helper(1) in
      maintainer scripts.  (Closes: #872042)
    + [CL] Check for packages using sensible-utils without declaring a
      dependency after its split from debianutils.  (Closes: #872611)
    + [CL] Warn about scripts using "nodejs" as an interpreter now that
      nodejs provides /usr/bin/node.  (Closes: #873096)
    + [BR] Add a statistic tag giving interpreter.
  * checks/testsuite.{desc,pm}:
    + [CL] Remove recommendations to add a "Testsuite: autopkgtest" field
      to debian/control as it is added when needed by dpkg-source(1)
      since dpkg 1.17.1.  (Closes: #865531)
    + [CL] Warn if we see an unnecessary "Testsuite: autopkgtest" header
      in debian/control.
    + [NT] Recognise "autopkgtest-pkg-go" as a valid test suite.
    + [CL] Recognise "autopkgtest-pkg-elpa" as a valid test suite.
      (Closes: #873458)
    + [CL] Recognise "autopkgtest-pkg-octave" as a valid test suite.
      (Closes: #875985)
    + [CL] Update the description of unknown-testsuite to reflect that
      "autopkgtest" is not the only valid value; the referenced URL
      is out-of-date (filed as #876008).  (Closes: #876003)

  * data/binaries/embedded-libs:
    + [RG] Detect embedded copies of heimdal, libgxps, libquicktime,
      libsass, libytnef, and taglib.
    + [RG] Use an additional string to detect embedded copies of
      openjpeg2.  (Closes: #762956)
  * data/fields/name_section_mappings:
    + [BR] node- package section is javascript.
    + [CL] Apply patch from Guillem Jover to add more section mappings.
      (Closes: #874121)
  * data/fields/obsolete-packages:
    + [MR] Add dh-systemd.  (Closes: #872076)
  * data/fields/perl-provides:
    + [CL] Refresh perl provides.
  * data/fields/virtual-packages:
    + [CL] Update data file from archive. This fixes a false positive for
      "bacula-director".  (Closes: #835120)
  * data/files/obsolete-paths:
    + [CL] Add note to /etc/bash_completion.d entry regarding stricter
      filename requirements.  (Closes: #814599)
  * data/files/privacy-breaker-websites:
    + [BR] Detect custom donation logos like apache.
    + [BR] Detect generic counter website.
  * data/standards-version/release-dates:
    + [CL] Add 4.0.1 and 4.1.0 as known standards versions.
      (Closes: #875509)

  * debian/control:
    + [CL] Mention Debian Policy v4.1.0 in the description.
    + [CL] Add myself to Uploaders.
    + [CL] Drop unnecessary "Testsuite: autopkgtest"; this is implied from
      debian/tests/control existing.

  * commands/
    + [CL] Add a --list-tags option to print all tags Lintian knows about.
      Thanks to Rajendra Gokhale for the suggestion.  (Closes: #779675)
  * commands/
    + [CL] Apply patch from Maia Everett to avoid British spelling when
      using en_US locale.  (Closes: #868897)

  * lib/Lintian/
    + [CL] Stop emitting {maintainer,uploader}-address-causes-mail-loops
      for addresses.  (Closes: #871575)
  * lib/Lintian/Collect/
    + [NT] Introduce an "auto-generated" argument for "is_pkg_class".
  * lib/Lintian/
    + [CL] Modify Lintian::Data's "all" to always return keys in insertion
      order, dropping dependency on libtie-ixhash-perl.

  * helpers/coll/objdump-info-helper:
    + [CL] Apply patch from Steve Langasek to accommodate binutils 2.29
      outputting symbols in a different format on ppc64el.
      (Closes: #869750)

  * t/tests/fields-perl-provides/tags:
    + [CL] Update expected output to match new Perl provides.
  * t/tests/files-privacybreach/*:
    + [CL] Add explicit test for packages including external fonts via
      the Google Font API. Thanks to Ian Jackson for the report.
      (Closes: #873434)
    + [CL] Add explicit test for packages including external fonts via
      the Typekit API via <script/> HTML tags.
  * t/tests/*/desc:
    + [CL] Add missing entries in "Test-For" fields to make
      development/testing workflow less error-prone.

  * private/generate-tag-summary:
    + [CL] git-describe(1) will usually emit 7 hexadecimal digits as the
      abbreviated object name,  However, as this can be user-dependent,
      pass --abbrev=0 to ensure it does not vary between systems.  This
      also means we do not need to strip it ourselves.
  * private/refresh-*:
    + [CL] Use as the default mirror.
    + [CL] Update locations of Contents-<arch> files; they are now
      namespaced by distribution (eg. "main").

 -- Chris Lamb <>  Wed, 20 Sep 2017 09:25:06 +0100

Sociological ImagesUnpacking How House of Cards Represents Sex Workers

Mild Spoiler Alert for Season 3 of House of Cards

Where is Rachel Posner?

Representations of sex workers on popular shows such as Game of Thrones, The Good Wife, and, of course, any version of CSI, are often stereotypical, completely incorrect, and infuriatingly dehumanizing. Like so many of these shows, House of Cards offers more of the same, but it uses a somewhat different narrative for a former sex worker and central character, Rachel Posner. Rachel experiences many moments of sudden empowerment that are just as quickly taken away. She is not entirely disempowered, often physically and emotionally resisting other characters and situations, but her humanization only lasts so long.  

The show follows Rachel for three full seasons, offering some hope to the viewer that her story would not end in her death, dehumanization, or any other number of sensational and tumultuous storylines. So, when she is murdered in the final episode of Season 3, viewers sensitive to her character’s role as a sex worker and invested in a new narrative for current and former sex worker characters on popular TV shows probably felt deeply let down. Her death inspired us to go back and analyze how her role in the series was both intensely invisible and visible.  

Early in the show, we learn that Rachel has information that could reveal murder and corrupt political strategizing orchestrated by the protagonist Frank Underwood.  She is the thread that weaves the entire series together. Despite this, most characters on the show do not value Rachel beyond worrying about how she could harm them. Other characters talk about her when she’s not present at all, often referring to her as “the prostitute” or “some hooker,” rather than by her name or anything else that describes who she is.

The show, too, devalues her. At the beginning of an episode, we watch Rachel making coffee one morning in her small apartment.  Yet, instead of watching her, we watch her body parts; the camera pans over her torso, her breasts in a lace bra, and then her legs before we finally see her entire body and face.  There is not one single scene even remotely like this for any other character on the show. Even the promotional material for Season 1 (pictured above) fails to include a photo of Rachel while including images of a number of other characters who were less central to the storyline and appeared in fewer episodes; whoever arranged the photoshoot apparently didn’t think she was important enough to include.

Another major way that Rachel is marginalized in the context of the show is that she is not given many scenes or storylines that are about her—her private life, time spent with friends, or what’s important to her. This is in contrast to other characters with a similar status. For instance, the audience is made to feel sympathy for Gavin, a hacker, when an FBI agent threatens the life of his beloved guinea pig. In contrast, it is Rachel’s ninth episode before the audience sees her interact with a friend, and we never really learn what motivates her beyond fear and survival. In this sense, Rachel is almost entirely invisible in her own storyline. She only exists when people want something from her.

Rachel is also made invisible by the way she is represented or discussed in many scenes.  For instance, although she’s present, she has zero lines in her first couple of scenes. After appearing (without lines) in Episodes 1 and 2, Rachel reappears in Episode 7, although she’s not really present; she re-emerges in the form of a handwritten note to Doug Stamper (Underwood’s indispensable assistant).  She writes: “I need more money.  And not in my mouth.” These are Rachel’s first two lines in the entire series; however, she’s not actually saying them: she’s asking for something, and one of the lines draws attention to a sexualized body part and sexual act that she engaged in with Doug. Without judging the fact that she engaged in a sexual act with a client, what’s notable here is the fact that she isn’t given a voice or her own resources. She is constantly positioned in relation to other characters and often without the resources and ability to survive on her own.

This can clearly be seen in the way Rachel is easily pushed around by other characters in the show, who are able to force their will upon her. When viewers do finally see her in a friendship, one that blossoms into a romance, the meaning that Rachel gives the relationship is overshadowed by the reaction Doug Stamper has to it. Doug has more contact with Rachel than any other character on the show; in the beginning of the series, he acts as a sort of “protector” to Rachel, by finding her a safe place to stay, ensuring that she can work free from sexual harassment in her new job, and getting her an apartment of her own. However, all these actions highlight the fact that she does not have her own resources or connections to be able to function on her own, and they are used to manipulate her. Over Rachel’s growing objections, Doug is able to impose his wishes upon her fairly easily. The moment she is able to overpower him and escape, she disappears from the show for almost a whole season, only to reappear in the episode where she dies. In this episode, we finally see Rachel standing on her own two feet. It seems like a hard life, working lots of double shifts and living in a rundown boardinghouse, but we also see her enjoying herself with friends and building something new for herself. And yet, it is also in this episode where she has leveraged her competence into a new life that she also meets her demise. Unfortunately, after seeing this vision of Rachel on the road to empowerment, more than half of her scenes relate to her death, and in most of them she is begging Doug for her life, once again reduced to powerlessness. 

Every time we begin to see a new narrative for Rachel, one that allows her to begin a life that isn’t entirely tethered to Doug Stamper and her past, she is almost immediately drawn back into his web.  Ultimately, in this final episode, she can no longer grasp her new narrative and immediately loses hold of it.  In her final scenes, after kidnapping her, Doug temporarily lets her go.  She begins to walk in the opposite direction of his van before, only moments later, he flips the van around and heads back in her direction.  The next scene cuts suddenly to her lifeless body in a shallow grave.  The sudden shock of this scene is jarring, yet oddly expected, given how the show has treated Rachel’s character throughout the series.  It’s almost as if the show does not have any use for a sex worker character who can competently manage their own affairs.  Perhaps that idea didn’t even occur to the writers because of the place in our society in which sex workers are currently situated, perhaps it disrupts the fallen woman narrative, or perhaps for some reason, a death seems more “interesting” than a storyline where a sex worker has agency and takes an active role in shaping her own life and affecting those around her.  Whatever the reason, House of Cards ultimately fails Rachel and sex workers, in general.

Paige Connell is an undergraduate sociology student at Chico State University. Her areas of interest include intimate relationships, gender, and pop culture. 

Dr. Danielle Antoinette Hidalgo is an Assistant Professor in Sociology at California State University, Chico, specializing in theory, gender and sexuality, and embodiment studies.


Krebs on SecurityCanadian Man Gets 9 Months Detention for Serial Swattings, Bomb Threats

A 19-year-old Canadian man was found guilty of making almost three dozen fraudulent calls to emergency services across North America in 2013 and 2014. The false alarms, two of which targeted this author, involved phoning in phony bomb threats and multiple attempts at “swatting” — a dangerous hoax in which the perpetrator spoofs a call about a hostage situation or other violent crime in progress in the hopes of tricking police into responding at a particular address with deadly force.

Curtis Gervais of Ottawa was 16 when he began his swatting spree, which prompted police departments across the United States and Canada to respond to fake bomb threats and active shooter reports at a number of schools and residences.

Gervais, who taunted swatting targets using the Twitter accounts “ProbablyOnion” and “ProbablyOnion2,” got such a high off of his escapades that he hung out a for-hire shingle on Twitter, offering to swat anyone with the following tweet:


Several Twitter users apparently took him up on that offer. On March 9, 2014, @ProbablyOnion started sending me rude and annoying messages on Twitter. A month later (and several weeks after blocking him on Twitter), I received a phone call from the local police department. It was early in the morning on Apr. 10, and the cops wanted to know if everything was okay at our address.

Since this was not the first time someone had called in a fake hostage situation at my home, the call I received came from the police department’s non-emergency number, and they were unsurprised when I told them that the Krebs manor and all of its inhabitants were just fine.

Minutes after my local police department received that fake notification, @ProbablyOnion was bragging on Twitter about swatting me, including me on his public messages: “You have 5 hostages? And you will kill 1 hostage every 6 times and the police have 25 minutes to get you $100k in clear plastic.” Another message read: “Good morning! Just dispatched a swat team to your house, they didn’t even call you this time, hahaha.”


I told this user privately that targeting an investigative reporter maybe wasn’t the brightest idea, and that he was likely to wind up in jail soon.  On May 7, @ProbablyOnion tried to get the swat team to visit my home again, and once again without success. “How’s your door?” he tweeted. I replied: “Door’s fine, Curtis. But I’m guessing yours won’t be soon. Nice opsec!”

I was referring to a document that had just been leaked on Pastebin, which identified @ProbablyOnion as a 19-year-old Curtis Gervais from Ontario. @ProbablyOnion laughed it off but didn’t deny the accuracy of the information, except to tweet that the document got his age wrong.

A day later, @ProbablyOnion would post his final tweet before being arrested: “Still awaiting for the horsies to bash down my door,” a taunting reference to the Royal Canadian Mounted Police (RCMP).

A Sept. 14, 2017 article in the Ottawa Citizen doesn’t name Gervais because it is against the law in Canada to name individuals charged with or convicted of crimes committed while they are a minor. But the story quite clearly refers to Gervais, who reportedly is now married and expecting a child.

The Citizen says the teenager was arrested by Ottawa police after the U.S. FBI traced his Internet address to his parents’ home. The story notes that “the hacker” and his family have maintained his innocence throughout the trial, and that they plan to appeal the verdict. Gervais’ attorneys reportedly claimed the youth was framed by the hacker collective Anonymous, but the judge in the case was unconvinced.

Apparently, Ontario Court Justice Mitch Hoffman handed down a lenient sentence in part because of more than 900 hours of volunteer service the accused had performed in recent years. From the story:

Hoffman said that troublesome 16-year-old was hard to reconcile with the 19-year-old, recently married and soon-to-be father who stood in court before him, accompanied in court Thursday by his wife, father and mother.

“He has a bright future ahead of him if he uses his high level of computer skills and high intellect in a pro-social way,” Hoffman said. “If he does not, he has a penitentiary cell waiting for him if he uses his skills to criminal ends.”

According to the article, the teen will serve six months of his nine-month sentence at a youth group home and three months at home “under strict restrictions, including the forfeiture of a home computer used to carry out the cyber pranks.” He also is barred from using Twitter or Skype during his 18-month probation period.

Most people involved in swatting and making bomb threats are young males under the age of 18 — the age when kids seem to have little appreciation for or care about the seriousness of their actions. According to the FBI, each swatting incident costs emergency responders approximately $10,000. Each hoax also unnecessarily endangers the lives of the responders and the public.

In February 2017, another 19-year-old — a man from Long Beach, Calif. named Eric “Cosmo the God” Taylor — was sentenced to three years’ probation for his role in swatting my home in Northern Virginia in 2013. Taylor was among several men involved in making a false report to my local police department at the time about a supposed hostage situation at our house. In response, a heavily-armed police force surrounded my home and put me in handcuffs at gunpoint before the police realized it was all a dangerous hoax.

Planet DebianLior Kaplan: Recruiting for Open Source jobs

Part of the services of Kaplan open source consulting is recruiting: helping companies find good open source people. In addition, we also try to help the community find open source friendly businesses to work at.

Besides the “Usual Suspects” (e.g. RedHat), I encounter job descriptions which convince me these companies know the advantages of using open source projects and hiring open source people.

A few recent examples I found in Israel:

  • Advantages: People who like to build stuff (we really like people who maintain/contribute to open source projects) (Wizer Research)
  • You will: Incubate and contribute to open source projects (iguazio)
  • The X factor – significant contribution to an open-source community (unnamed startup)
  • An example open source project our team released is CoreML (Apple)
  • Job Responsibilities: Write open-source tools and contribute to open-source projects. (unnamed startup)
  • We’d like to talk to people who: Appreciate open-source culture and philosophy. (Seeking Alpha)

From the applicant’s side, knowing which code base he or she is going to work on helps in making a better, more educated choice about the offered position. From the company’s side, it provides “hard” evidence of the applicant’s capabilities and what their code looks like, instead of relying on self-description or short coding tests. Not to mention evidence of the applicant’s ability to work as part of a team or community.

For the Israeli readers, you can see the full list at

Filed under: Israeli Community, Open source businesses

CryptogramGPS Spoofing Attacks

Wired has a story about a possible GPS spoofing attack by Russia:

After trawling through AIS data from recent years, evidence of spoofing becomes clear. Goward says GPS data has placed ships at three different airports and there have been other interesting anomalies. "We would find very large oil tankers who could travel at a maximum speed of 15 knots," says Goward, who was formerly director for Marine Transportation Systems at the US Coast Guard. "Their AIS, which is powered by GPS, would be saying they had sped up to 60 to 65 knots for an hour and then suddenly stopped. They had done that several times."

All of the evidence from the Black Sea points towards a co-ordinated attempt to disrupt GPS. A recently published report from NRK found that 24 vessels appeared at Gelendzhik airport around the same time as the Atria. When contacted, a US Coast Guard representative refused to comment on the incident, saying any GPS disruption that warranted further investigation would be passed onto the Department of Defence.

"It looks like a sophisticated attack, by somebody who knew what they were doing and were just testing the system," Bonenberg says. Humphreys told NRK it "strongly" looks like a spoofing incident. Fire Eye's Brubaker agreed, saying the activity looked intentional. Goward is also confident that GPS was purposely disrupted. "What this case shows us is there are entities out there that are willing and eager to disrupt satellite navigation systems for whatever reason and they can do it over a fairly large area and in a sophisticated way," he says. "They're not just broadcasting a stronger signal and denying service. This is worse. They're providing hazardously misleading information."

Worse Than FailureCodeSOD: The Strangelet Solution

Chris M works for a “solutions provider”. Mostly, this means taking an off-the-shelf product from Microsoft or Oracle or SAP and customizing it to fit a client’s specific needs. Since many of these clients have in-house developers, the handover usually involves training those developers up on the care and maintenance of the system.

Then, a year or two later, the client comes back, complaining about the system. “It’s broken,” or “performance is terrible,” or “we need a new feature”. Chris then goes back out to their office, and starts taking a look at what has happened to the code in his absence.

It’s things like this:

    var getAdjustType = Xbp.Page.getAttribute("cw_adjustmenttype").getText;

    var reasonCodeControl = Xbp.Page.getControl("cw_reasoncode");
    if (getAdjustType === "Short-pay/Applying Credit" || getAdjustType === "Refund/Return (Credit)") {
        var i;
        var options = (Xbp.Page.getAttribute("cw_reasoncode").getOptions());

        for (i = 0; i < options.length; i++) {
            if (i <= 4) {

            if (i >= 5) {

            if (i >= 5) {

    else {
        var options = (Xbp.Page.getAttribute("cw_reasoncode").getOptions());
        for (var i = 0; i < options.length; i++) {
            if (i >= 4) {

            if (i <= 4) {

            if (i <= 4) {


There are patterns and there are anti-patterns, like there is matter and anti-matter. An anti-pattern would be the “switch loop”, where you have different conditional branches that execute depending on how many times the loop has run. And then there’s this, which is superficially similar to the “switch loop” anti-pattern, but confused. Twisted, with conditional branches that execute on the same condition. It may have once been an anti-pattern, but now it’s turned into a strange pattern, and like strange matter threatens to turn everything it touches into more of itself.
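The snippet above is truncated in this copy (the branch bodies are cut off), but the shape of the anti-pattern survives. As a generic sketch — every name here is invented, none comes from the original code — a "switch loop" keys its branches on the loop counter itself, next to the straightforward equivalent:

```javascript
// Hypothetical reconstruction of the "switch loop" anti-pattern:
// branches keyed on the loop counter, so each branch fires only on
// certain iterations, and the counter is re-tested on every pass.
function labelOptions(options) {
    var labels = [];
    for (var i = 0; i < options.length; i++) {
        if (i <= 4) {
            labels.push("visible: " + options[i]);
        }
        if (i >= 5) {
            labels.push("hidden: " + options[i]);
        }
    }
    return labels;
}

// The straightforward equivalent: split the array once instead of
// testing the counter inside the loop.
function labelOptionsPlain(options) {
    return options.slice(0, 5).map(function (o) { return "visible: " + o; })
        .concat(options.slice(5).map(function (o) { return "hidden: " + o; }));
}
```

The two functions produce the same output; the difference is that the first hides the "first five are special" rule inside per-iteration conditionals, which is exactly where copy-paste mutations like the duplicated branches above tend to breed.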


Valerie AuroraRepealing Obamacare will repeal my small business

I emailed this to the U.S. Senate Finance Committee today in response to the weekly Wall-of-Us email call-to-action, and thought it would fit on my blog as well.


I am a small business owner with a pre-existing condition who can’t go without health insurance for even one month. The Affordable Care Act made my small business possible. If ACA is repealed or replaced, I will be forced to go out of business.

Two years ago, I started my own business, Frame Shift Consulting, teaching technology companies how to improve diversity and inclusion. I also have a genetic disease called Ehlers-Danlos Syndrome. If I take about ten prescription drugs every day, see several medical professionals regularly, and exercise carefully, I can live a semi-normal life and even work full-time if I don’t have to go to an office every day. Without access to prescription drugs and medical care, I would be unable to work full-time or even care for myself, and would have to go on disability, SSDI.

Before the Affordable Care Act, no health insurance company would sell me a policy on the individual market. My only option was to get a salaried job at a company large enough to offer health insurance to their employees. If I lost my job, I could buy one or two coverage options under COBRA or HIPAA, but I was always just one missed payment away from losing my access to health insurance at any price. (I once tried to apply for health insurance on the open market; after two questions about my medical history they told me I’d never get approved.) The ACA let me quit my job and start my own small business free from fear of losing my health insurance and becoming unable to work.

At my new small business, I am doing far more innovative and valuable work than I ever did for a big company. I love being my own boss, and the flexibility I have makes it far easier to cope with the bad days of Ehlers-Danlos Syndrome. I love how high impact my work is, and that I am training other people to do the same work. I could never have done work that changed so many people’s lives for the better while working at any other company.

Every time I hear about a new bill to repeal or replace the ACA, I study it to see whether I would still be able to afford health insurance under the new system. So far, the answer has been a resounding no. Without the individual mandate, coverage for pre-existing conditions, price controls, and minimum coverage requirements that states can’t waive, no health insurance company would offer me an individual policy at a price I can afford.

I’m one of the luckier ones; if the ACA is repealed or replaced and I lose my health insurance, I can probably get a salaried job at a big company with health insurance benefits. I don’t expect anyone to care about my personal satisfaction in doing work I love, or having the flexibility to stay home when my Ehlers-Danlos is acting up. But I do expect my elected representatives to care that a cutting edge, high-impact small business would go out of business if they passed Graham-Cassidy or any other repeal or replace bill. The ACA is good for business, good for innovation, and good for people. Instead of replacing it with an inferior system that would cover fewer people for more money, let’s work on improving the ACA and filling in the many gaps in its coverage.

Thank you for your time,

Valerie Aurora
Proud small business owner

Tagged: politics


Planet Linux AustraliaOpenSTEM: What Makes Humans Different From Most Other Mammals?

Well, there are several things that makes us different from other mammals – although perhaps fewer than one might think. We are not unique in using tools, in fact we discover more animals that use tools all the time – even fish! We pride ourselves on being a “moral animal”, however fairness, reciprocity, empathy and […]

Planet DebianEnrico Zini: Systemd service units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.service units

Describe how to start and stop system services, like daemons.

Services are described in a [Service] section. The Type= configuration describes how the service wants to be brought up:

  • Type=simple: a program is started and runs forever, providing the service. systemd takes care of daemonizing it properly in the background, creating a pidfile, stopping it and so on. The service is considered active as soon as it has started.
  • Type=forking: a traditional daemon that forks itself, creates a pidfile and so on. The service is considered active as soon as the parent process ends.
  • Type=oneshot: a program is run once, and the service is considered started after the program ends. This can be used, for example, to implement a service to do one-off configuration, like checking a file system.
  • Type=dbus: like simple but for D-Bus services: the service is considered active as soon as it appears on the D-Bus bus.
  • Type=notify: like simple, but the service tells systemd when it has finished initialization and is ready. Notification can happen via the sd_notify C function, or the systemd-notify command.
  • Type=idle: like simple, but it is run after all other services have been started in a transaction. You can use this, for example, to start a shell on a terminal after the boot, so that the prompt doesn't get flooded with boot messages, or to play a happy trumpet sound after the system has finished booting.
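As an illustration, a minimal Type=simple unit might look like this (the service name and binary path are invented for the example):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=Example long-running service

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
```

After placing the file, systemctl daemon-reload followed by systemctl start myapp.service brings the service up; systemctl enable myapp.service hooks it into the listed target for subsequent boots.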

There are a lot more configuration options to fine-tune how the program should be managed, to limit its resource access or capabilities to harden the system security, to run setup/cleanup scripts before or after it started, and after it gets stopped, to control what signals to send to ask for reload or quit, and quite a lot more.

See: man systemd.service, man systemd.exec, man systemd.resource-control, and man systemd.kill.

See systemctl --all -t service for examples.

Planet DebianJulian Andres Klode: APT 1.5 is out

APT 1.5 is out, almost 3 months after the release of 1.5 alpha 1, and almost six months since the release of 1.4 on April 1st. This release cycle was unusually short, as 1.4 was both the stretch release series and the zesty release series, and we waited for the latter of these releases before we started 1.5. In related news, 1.4.8 hit stretch-proposed-updates today, and is waiting in the unapproved queue for zesty.

This release series moves https support from apt-transport-https into apt proper, bringing with it support for https:// proxies, and support for proxy auto-detection scripts that return http, https, and socks5h proxies for both http and https.
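For instance, an https proxy can be set in an apt.conf(5)-style fragment; the file name and proxy hostname here are invented for the example:

```conf
// /etc/apt/apt.conf.d/99proxy -- hypothetical example
Acquire::http::Proxy "";
Acquire::https::Proxy "";
```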

Unattended updates and upgrades now work better: The dependency on network-online was removed and we introduced a meta wait-online helper with support for NetworkManager, systemd-networkd, and connman that allows us to wait for the network even if we want to run updates directly after a resume (which might or might not have worked before, depending on whether the update ran before or after the network was back up again). This also addresses a boot performance regression for systems with rc.local files:

The rc.local.service unit specified, and login stuff was After=rc.local.service, and apt-daily.timer was, causing to be pulled into the boot and the rc.local.service ordering dependency to take effect, significantly slowing down the boot.

An earlier less intrusive variant of that fix is in 1.4.8: It just moves the Want/After from apt-daily.timer to apt-daily.service so most boots are uncoupled now. I hope we get the full solution into stretch in a later point release, but we should gather some experience first before discussing this with the release team.

Balint Reczey also provided a patch to increase the time out before killing the daily upgrade service to 15 minutes, to actually give unattended-upgrades some time to finish an in-progress update. Honestly, I’d have thought the machine hung and force rebooted it after 5 seconds already. (this patch is also in 1.4.8)

We also made sure that unreadable config files no longer cause an error, but only a warning, as that was sort of a regression from previous releases; and we added documentation for /etc/apt/auth.conf, so people actually know the preferred way to place sensitive data like passwords (and can make their sources.list files world-readable again).
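For illustration, an auth.conf entry uses netrc-style syntax, keeping credentials out of the world-readable sources.list; the host, user, and password here are invented:

```conf
machine login apt password s3kr1t
```

The matching sources.list line then references the repository URL without embedding the credentials.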

We also fixed apt-cdrom to support discs without MD5 hashes for Sources (the Files field), and re-enabled support for udev-based detection of cdrom devices, which had been accidentally broken for 4 years: it was trying to load the udev library at runtime, but that library had gone through an SONAME change – we now link against it normally.

Furthermore, if certain information in Release files change, like the codename, apt will now request confirmation from the user, avoiding a scenario where a user has stable in their sources.list and accidentally upgrades to the next release when it becomes stable.

Paul Wise contributed patches to allow configuring the apt-daily intervals more easily – apt-daily is invoked twice a day by systemd but has more fine-grained internal timestamp files. You can now specify the intervals in seconds, minutes, hours, and day units, or specify “always” to always run (that is, up to twice a day on systemd, once per day on non-systemd platforms).
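As a toy illustration of that interval notation (this sketches only the suffix scheme described above, not APT's actual code; treating a bare number as seconds is an assumption):

```python
def parse_interval(value):
    """Parse an interval such as '30s', '10m', '2h' or '1d' into
    seconds; 'always' is mapped to 0, meaning run every time.
    Toy sketch only, not APT's implementation."""
    if value == "always":
        return 0
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    if value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # assumption: bare numbers are seconds

print(parse_interval("2h"))  # 7200
print(parse_interval("1d"))  # 86400
```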

Development for the 1.6 series has started, and I intend to upload a first alpha to unstable in about a week, removing the apt-transport-https package and enabling compressed index files by default (saving space, a lot of space, at not much performance cost thanks to lz4). There will also be some small clean-ups in there, but I don't expect any life-changing changes for now.

I think our new approach of uploading development releases directly to unstable instead of parking them in experimental is working out well. Some people are confused why alpha releases appear in unstable, but let me just say one thing: These labels basically just indicate feature-completeness, and not stability. An alpha is just very likely to get a lot more features, a beta is less likely (all the big stuff is in), and the release candidates just fix bugs.

Also, we now have 3 active stable series: The 1.2 LTS series, 1.4 medium LTS, and 1.5. 1.2 receives updates as part of Ubuntu 16.04 (xenial), 1.4 as part of Debian 9.0 (stretch) and Ubuntu 17.04 (zesty); whereas 1.5 will only be supported for 9 months (as part of Ubuntu 17.10). I think the stable release series are working well, although 1.4 is a bit tricky being shared by stretch and zesty right now (but zesty is history soon, so …).

Filed under: Debian, Ubuntu

Planet DebianDirk Eddelbuettel: RcppGSL 0.3.3

A maintenance update, RcppGSL 0.3.3, is now on CRAN. It switches the vignette to our new pinp package and its two-column pdf default.

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

No user-facing new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.3 (2017-09-24)

  • We also check for gsl-config at package load.

  • The vignette now uses the pinp package in two-column mode.

  • Minor other fixes to package and testing infrastructure.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecurityEquifax or Equiphish?

More than a week after it said most people would be eligible to enroll in a free year of its TrustedID identity theft monitoring service, big three consumer credit bureau Equifax has begun sending out email notifications to people who were able to take the company up on its offer. But in yet another security stumble, the company appears to be training recipients to fall for phishing scams.

Some people who signed up for the service after Equifax announced Sept. 7 that it had lost control over Social Security numbers, dates of birth and other sensitive data on 143 million Americans are still waiting for the promised notice from Equifax. But as I recently noted on Twitter, other folks have received emails from Equifax over the past few days, and the messages do not exactly come across as having emanated from a company that cares much about trying to regain the public’s trust.

Here’s a redacted example of an email Equifax sent out to one recipient recently:


As we can see, the email purports to have been sent from a domain that Equifax has owned for almost four years. However, Equifax apparently decided it was time for a new — and perhaps snazzier — name:

The above-pictured message says it was sent from one domain, and then asks the recipient to respond by clicking on a link to a completely different (but confusingly similar) domain.

My guess is the reason Equifax registered the new domain was to help people concerned about the breach to see whether they were one of the 143 million people affected (for more on how that worked out for them, see Equifax Breach Response Turns Dumpster Fire). I’d further surmise that Equifax was expecting (and received) so much interest in the service as a result of the breach that all the traffic from the wannabe customers might swamp the site and ruin things for the people who were already signed up for the service before Equifax announced the breach on Sept. 7.

The problem with this dual-domain approach is that the domain is only a few weeks old, so it had very little time to establish itself as a legitimate domain. As a result, in the first few hours after Equifax disclosed the breach the domain was actually flagged as a phishing site by multiple browsers because it was brand new and looked about as professionally designed as a phishing site.

What’s more, there is nothing tying the domain registration records to Equifax: The domain is registered to a WHOIS privacy service, which masks information about who really owns the domain (again, not exactly something you might expect from an identity monitoring site). Anyone looking for assurances that the site perhaps was hosted on Internet address space controlled by and assigned to Equifax would also be disappointed: The site is hosted at Amazon.

While there’s nothing wrong with that exactly, one might reasonably ask: Why didn’t Equifax just send the email from and host the ID theft monitoring service there as well? Wouldn’t that have considerably lessened any suspicion that this missive might be a phishing attempt?

Perhaps, but you see, while TrustedID is technically owned by Equifax Inc., its services are separate from Equifax and its terms of service are different from those provided by Equifax (almost certainly to separate Equifax from any consumer liability associated with its monitoring service).


What’s super-interesting is that the service didn’t always belong to Equifax. According to the site’s Wikipedia page, TrustedID Inc. was purchased by Equifax in 2013, but it was founded in 2004 as an identity protection company which offered a service that let consumers automatically “freeze” their credit file at the major bureaus. A freeze prevents Equifax and the other major credit bureaus from selling an individual’s credit data without first getting consumer consent.

By 2006, some 17 states offered consumers the ability to freeze their credit files, and the credit bureaus were starting to see the freeze as an existential threat to their businesses (in which they make slightly more than a dollar each time a potential creditor — or ID thief — asks to peek at your credit file).

Other identity monitoring firms — such as LifeLock — were by then offering services that automated the placement of identity fraud controls — such as the “fraud alert,” a free service that consumers can request to block creditors from viewing their credit files.

[Author’s note: Fraud alerts only last for 90 days, although you can renew them as often as you like. More importantly, while lenders and service providers are supposed to seek and obtain your approval before granting credit in your name if you have a fraud alert on your file, they are not legally required to do this — and very often don’t.]

Anyway, the era of identity monitoring services automating things like fraud alerts and freezes on behalf of consumers effectively died after a landmark lawsuit filed by big-three bureau Experian (which has its own storied history of data breaches). In 2008, Experian sued LifeLock, arguing its practice of automating fraud alerts violated the Fair Credit Reporting Act.

In 2009, a court found in favor of Experian, and that decision effectively killed such services — mainly because none of the banks wanted to distribute them and sell them as a service anymore.


These days, consumers in all states have a right to freeze their credit files, and I would strongly encourage all readers to do this. Yes, it can be a pain, and the bureaus certainly seem to be doing everything they can at the moment to make this process extremely difficult and frustrating for consumers. As detailed in the analysis section of last week’s story — Equifax Breach: Setting the Record Straight — many of the freeze sites are timing out, crashing or telling consumers just to mail in copies of identity documents and printed-out forms.

Other bureaus, like TransUnion and Experian, are trying mightily to steer consumers away from a freeze and toward their confusingly named “credit lock” services — which claim to be the same thing as freezes only better. The truth is these lock services do not prevent the bureaus from selling your credit reports to anyone who comes asking for them (including ID thieves); and consumers who opt for them over freezes must agree to receive a flood of marketing offers from a myriad of credit bureau industry partners.

While it won’t stop all forms of identity theft (such as tax refund fraud or education loan fraud), a freeze is the option that puts you the consumer in the strongest position to control who gets to monkey with your credit file. In contrast, while credit monitoring services might alert you when someone steals your identity, they’re not designed to prevent crooks from doing so.

That’s not to say credit monitoring services aren’t useful: They can be helpful in recovering from identity theft, which often involves a tedious, lengthy and expensive process for straightening out the phony activity with the bureaus.

The thing is, it’s almost impossible to sign up for credit monitoring services while a freeze is active on your credit file, so if you’re interested in signing up for them it’s best to do so before freezing your credit. But there’s no need to pay for these services: Hundreds of companies — many of which you have probably transacted with at some point in the last year — have disclosed data breaches and are offering free monitoring. California maintains one of the most comprehensive lists of companies that disclosed a breach, and most of those are offering free monitoring.

There’s a small catch with the freezes: Depending on the state in which you live, the bureaus may each be able to charge you for freezing your file (the fee ranges from $5 to $20); they may also be able to charge you for lifting or temporarily thawing your file in the event you need access to credit. Consumers Union has a decent rundown of the freeze fees by state.

In short, sign up for whatever free monitoring is available if that’s of interest, and then freeze your file at the four major bureaus. You can do this online, by phone, or through the mail. Given how unreliable the credit bureau Web sites have been for placing freezes these past few weeks, it may be easiest to do this over the phone. Here are the freeze Web sites and freeze phone numbers for each bureau (note the phone procedures can and likely will change as the bureaus get wise to more consumers learning how to quickly step through their automated voice response systems):

Equifax: 866-349-5191; choose option 3 for a “Security Freeze”

Experian: 888-397-3742;
–Press 2 “To learn about fraud or ADD A…”
–Press 2 “for security freeze options”
–Press 1 “to place a security freeze”
–Press 2 “…for all others”
–enter your info when prompted

Innovis: 800-540-2505;
–Press 1 for English
–Press 3 “to place or manage an active duty alert”
–Press 2 “to place or manage a SECURITY…”
–enter your info when prompted

Transunion: 888-909-8872, choose option 3

If you still have questions about freezes, fraud alerts, credit monitoring or anything else related to any of the above, check out the lengthy primer/Q&A I published here on Sept. 11, The Equifax Breach: What You Should Know.

Planet Linux AustraliaDave Hall: Drupal Puppies

Over the years Drupal distributions, or distros as they're more affectionately known, have evolved a lot. We started off passing around database dumps. Eventually we moved on to using installation profiles and features to share par-baked sites.

There are some signs that distros aren't working for people using them. Agencies often hack a distro to meet client requirements. This happens because it is often difficult to cleanly extend a distro. A content type might need extra fields or the logic in an alter hook may not be desired. This makes it difficult to maintain sites built on distros. Other times maintainers abandon their distributions. This leaves site owners with an unexpected maintenance burden.

We should recognise how people are using distros and try to cater to them better. My observations suggest there are 2 types of Drupal distributions; starter kits and targeted products.

Targeted products are easier to deal with. Increasingly, monetising targeted distro products is done through a SaaS offering. The revenue can fund the ongoing development of the product. This can help ensure the project remains sustainable. There are signs that this is a viable way of building Drupal 8 based products. We should be encouraging companies to embrace a strategy built around open SaaS. Open Social is a great example of this approach. Releasing the distros demonstrates a commitment to the business model. Often the secret sauce isn't in the code; it is the team and services built around the product.

Many Drupal 7 based distros struggled to articulate their use case. It was difficult to know if they were a product, a demo or a community project that you extend. Open Atrium and Commerce Kickstart are examples of distros with an identity crisis. We need to reconceptualise most distros as "starter kits" or as I like to call them "puppies".

Why puppies? Once you take a puppy home it becomes your responsibility. Starter kits should be the same. You should never assume that a starter kit will offer an upgrade path from one release to the next. When you install a starter kit you are responsible for updating the modules yourself. You need to keep track of security releases. If your puppy leaves a mess on the carpet, no one else will clean it up.

Sites built on top of a starter kit should diverge from the original version. This shouldn't only be an expectation, it should be encouraged. Installing a starter kit is the starting point of building a unique fork.

Project pages should clearly state that users are buying a puppy. Prospective puppy owners should know if they're about to take home a little lap dog or one that will grow to the size of a pony that needs daily exercise. Puppy breeders (developers) should not feel compelled to do anything once releasing the puppy. That said, most users would like some documentation.

I know of several agencies and large organisations that are making use of starter kits. Let's support people who are adopting this approach. As a community we should acknowledge that distros aren't working. We should start working out how best to manage the transition to puppies.

Planet DebianIain R. Learmonth: Free Software Efforts (2017W38)

Here’s my weekly report for week 38 of 2017. This week has not been a great week as I saw my primary development machine die in a spectacular reboot loop. Thanks to the wonderful community around Debian and free software (that if you’re reading this, you’re probably part of), I should be back up to speed soon. A replacement workstation is currently moving towards me and I’ve received a number of smaller donations that will go towards video converters and upgrades to get me back to full productivity.


I’ve prepared and tested backports for 3 packages in the tasktools packaging team: tasksh, bugwarrior and powerline-taskwarrior. Unfortunately I am not currently in the backports ACLs and so I can’t upload these, but I’m hoping this will be resolved soon. Once these are uploaded, the latest upstream release for all packages in the tasktools team will be available either in the stable suite or in the stable backports suite.

In preparation for the shutdown of Alioth mailing lists, I’ve set up a new mailing list for the tasktools team and have already updated the maintainer fields for all the team’s packages in git. I’ve subscribed the old mailing list’s user to the new mailing list in DDPO so there will still be a comprehensive view there during the migration. I am currently in the process of reaching out to the admins of with a view to moving our git repositories there.

I’ve also continued to review the scapy package and have closed a couple more bugs that were already fixed in the latest upstream release but had been missed in the changelog.

Bugs closed (fixed/wontfix): #774962, #850570

Tor Project

I’ve deployed a small fix to an update from last week where the platform field on Atlas had been pulled across to the left column. It has now been returned to the right hand column and is not pushed down the page by long family lists.

I’ve been thinking about the merge of Compass functionality into a future Atlas and this is being tracked in #23517.

Tor Project has approved expenses (flights and hotel) for me to attend an in-person meeting of the Metrics Team. This meeting will occur in Berlin on the 28th September and I will write up a report detailing outcomes relevant to my work after the meeting. I have spent some time this week preparing for this meeting.

Bugs closed (fixed/wontfix): #22146, #22297, #23511


I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

The loss of my primary development machine was a setback; however, I have been donated a new workstation which should hopefully arrive soon. The hard drives in my NAS can now also be replaced, as I have budget available for this. I do not see any hardware failures being imminent at this time, but should any occur I would not have the budget to replace further hardware; I only have funds to replace the hardware that has already failed.

Planet DebianIain R. Learmonth: Onion Services

In the summer 2017 edition of 2600 magazine there is a brilliant article on running onion services as part of a series on censorship resistant services. Onion services provide privacy and security for readers above that which is possible through the use of HTTPS. Since moving my website to Netlify, my onion service died as Netlify doesn’t provide automatic onion services (although they do offer automated Let’s Encrypt certificate provisioning). If anyone from Netlify is reading this, please consider adding a one-click onion service button next to the Let’s Encrypt button.

Planet DebianPetter Reinholdtsen: Easier recipe to observe the cell phones around you

A little more than a month ago I wrote about how to observe the SIM card ID (aka IMSI number) of mobile phones talking to nearby mobile phone base stations using Debian GNU/Linux and a cheap USB software defined radio, and thus being able to pinpoint the location of people and equipment (like cars and trains) with an accuracy of a few kilometers. Since then we have worked to make the procedure even simpler, and it is now possible to do this without any manual frequency tuning and without building your own packages.

The gr-gsm package is now included in Debian testing and unstable, and the IMSI-catcher code no longer requires root access to fetch and decode the GSM data collected using gr-gsm.

Here is an updated recipe, using packages built by Debian and a git clone of two python scripts:

  1. Start with a Debian machine running the Buster version (aka testing).
  2. Run 'apt install gr-gsm python-numpy python-scipy python-scapy' as root to install required packages.
  3. Fetch the code decoding GSM packets using 'git clone'.
  4. Insert USB software defined radio supported by GNU Radio.
  5. Enter the IMSI-catcher directory and run 'python scan-and-livemon' to locate the frequency of nearby base stations and start listening for GSM packets on one of them.
  6. Enter the IMSI-catcher directory and run 'python' to display the collected information.

Note: due to a bug somewhere, the scan-and-livemon program (actually its underlying program grgsm_scanner) does not work with the HackRF radio. It does work with RTL2832 and other similar USB radio receivers you can get very cheaply (for example from eBay), so for now the solution is to scan using the RTL radio and only use HackRF for fetching GSM data.

As far as I can tell, a cell phone only shows up on one of the frequencies at a time, so if you are going to track and count every cell phone around you, you need to listen to all the frequencies used. To listen to several frequencies, use the --numrecv argument to scan-and-livemon to use several receivers. Further, I am not sure if phones using 3G or 4G will show up as talking GSM to base stations, so this approach might not see all phones around you. I typically see 0-400 IMSI numbers an hour when looking around where I live.

I've tried to run the scanner on a Raspberry Pi 2 and 3 running Debian Buster, but the grgsm_livemon_headless process seems to be too CPU intensive to keep up. When GNU Radio prints 'O' to stdout, I am told it is caused by a buffer overflow between the radio and GNU Radio, caused by the program being unable to read the GSM data fast enough. If you see a stream of 'O's from the terminal where you started scan-and-livemon, you need to give the process more CPU power. Perhaps someone is able to optimize the code to a point where it becomes possible to set up RPi3-based GSM sniffers? I tried using Raspbian instead of Debian, but there seems to be something wrong with GNU Radio on Raspbian, causing glibc to abort().


Planet DebianEnrico Zini: Systemd unit files

These are the notes of a training course on systemd I gave as part of my work with Truelite.

Writing .unit files

For reference, the global index with all .unit file directives is at man systemd.directives.

All unit files have a [Unit] section with documentation and dependencies. See man systemd.unit for documentation.

It is worth having a look at existing units to see what they are like. Use systemctl --all -t unittype for a list, and systemctl cat unitname to see its content wherever it is installed.

For example: systemctl cat unitname. Note that systemctl cat adds a line of comment at the top so one can see where the unit file is installed.

Most unit files also have an [Install] section (also documented in man systemd.unit) that controls what happens when enabling or disabling the unit.
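As a concrete sketch (the unit name and command are made up for illustration), a minimal service unit showing the [Unit], [Service] and [Install] sections could look like:

```ini
# hello.service (hypothetical example)
[Unit]
Description=Example hello daemon
After=network.target

[Service]
ExecStart=/usr/bin/hello --daemon

[Install]
WantedBy=multi-user.target
```

Enabling it with systemctl enable creates the symlink described by the WantedBy directive; disabling removes it again.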

.target units

.target units only contain [Unit] and [Install] sections, and can be used to give a name to a given set of dependencies.

For example, one could create a target unit that, when brought up, activates, via dependencies, a set of services, mounts, network sockets, and so on.

See man

See systemctl --all -t target for examples.
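As a sketch (all unit names here are invented), such a grouping target could look like:

```ini
# mycluster.target (hypothetical)
[Unit]
Description=My application stack
Requires=myapp-db.service myapp-web.service
After=myapp-db.service myapp-web.service

[Install]
WantedBy=multi-user.target
```

Starting mycluster.target then brings up the listed services through its dependencies.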

special units

man systemd.special has a list of unit names that have a standard use associated with them.

For example, ctrl-alt-del.target is a unit that is started whenever Control+Alt+Del is pressed on the console. By default it is symlinked to reboot.target, and you can provide your own version in /etc/systemd/system/ to perform another action when Control+Alt+Del is pressed.
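For instance (the handler service name below is invented; by default the key press triggers a reboot), a replacement /etc/systemd/system/ctrl-alt-del.target could pull in a custom service instead:

```ini
# /etc/systemd/system/ctrl-alt-del.target (hypothetical override)
[Unit]
Description=Custom Ctrl+Alt+Del handler
Requires=cad-handler.service
```

After adding the file, run systemctl daemon-reload so systemd picks up the override.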

User units

systemd can also be used to manage services on a user session, starting them at login and stopping them at logout.

Add --user to the normal systemd commands to have them work with the current user's session instead of the general system.

See systemd/User in the Arch Wiki for a good description of what it can do.
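A per-user unit is just a normal unit file placed under ~/.config/systemd/user/; as a made-up example:

```ini
# ~/.config/systemd/user/myagent.service (hypothetical)
[Unit]
Description=Per-user agent

[Service]
ExecStart=/usr/bin/myagent

[Install]
WantedBy=default.target
```

It would then be enabled with systemctl --user enable myagent and started automatically with the user's session.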

Planet DebianDirk Eddelbuettel: RcppCNPy 0.2.7

A new version of the RcppCNPy package arrived on CRAN yesterday.

RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers.

This version updates internals for function registration, but otherwise mostly switches the vignette over to the shiny new pinp two-page template and package.

Changes in version 0.2.7 (2017-09-22)

  • Vignette updated to Rmd and use of pinp package

  • File src/init.c added for dynamic registration

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.


Planet Linux AustraliaTim Serong: On Equal Rights

This is probably old news now, but I only saw it this morning, so here we go:

In case that embedded tweet doesn’t show up properly, that’s an editorial in the NT News which says:

Voting papers have started to drop through Territory mailboxes for the marriage equality postal vote and I wanted to share with you a list of why I’ll be voting yes.

1. I’m not an arsehole.

This resulted in predictable comments along the lines of “oh, so if I don’t share your views, I’m an arsehole?”

I suppose it’s unlikely that anyone who actually needs to read and understand what I’m about to say will do so, but just in case, I’ll lay this out as simply as I can:

  • A personal belief that marriage is a thing that can only happen between a man and a woman does not make you an arsehole (it might put you on the wrong side of history, or a lot of other things, but it does not necessarily make you an arsehole).
  • Voting “no” to marriage equality is what makes you an arsehole.

The survey says “Should the law be changed to allow same-sex couples to marry?” What this actually means is, “Should same-sex couples have the same rights under law as everyone else?”

If you believe everyone should have the same rights under law, you need to vote yes regardless of what you, personally, believe the word “marriage” actually means – this is to make sure things like “next of kin” work the way the people involved in a relationship want them to.

If you believe that there are minorities that should not have the same rights under law as everyone else, then I’m sorry, but you’re an arsehole.

(Personally I think the Marriage Act should be ditched entirely in favour of a Civil Unions Act – that way the word “marriage” could go back to simply meaning whatever it means to the individuals being married, and to their god(s) if they have any – but this should in no way detract from the above. Also, this vote shouldn’t have happened in the first place; our elected representatives should have done their bloody jobs and fixed the legislation already.)

TEDHow the ‘Battle of the Sexes’ influenced a generation of men: Billie Jean King’s TEDWomen update

Billie Jean King: “Bobby Riggs — he was the former number one player, he wasn’t just some hacker. He was one of my heroes and I admired him. And that’s the reason I beat him, actually, because I respected him.” She spoke with Pat Mitchell at TEDWomen2015. Photo: Marla Aufmuth/TED

Forty-four years ago this week, the number one tennis star in the world, 29-year-old Billie Jean King, agreed to take on 55-year-old Bobby Riggs, in a match dubbed the “Battle of the Sexes.” The prize was $100,000 — which compared with today’s million-dollar-winning pots wasn’t much — but it was the first time that women and men were offered the same amount of prize money for victory.

The exhibition match, which admittedly was more notable at the time for its spectacle and outrageousness — Billie Jean King entered the Houston Astrodome on a feathery litter carried by shirtless men, for instance — was the most watched tennis match ever, with an estimated worldwide television audience of 90 million people. If you are old enough to remember it, you probably watched it.

Billie Jean King won in straight sets: 6-4, 6-3, 6-3.

This weekend, a new movie based on the true story starring Emma Stone as Billie Jean King and Steve Carell as Bobby Riggs hits theaters. With the election of Donald Trump — and all the sexism and misogyny that the 2016 election entailed just behind us — the story is sadly relevant today. As Lynn Sherr wrote in her review of the movie today at, “It’s all frustratingly familiar, but this time, the over-the-hill clown won.”

I interviewed Billie Jean King at TEDWomen in 2015 about her tennis career and lifelong fight for gender parity in sports and in the workplace. She talked about the match with Riggs and the intense pressure she felt on every stroke to win for women. She recalled, “I thought, ‘If I lose, it’s going to put women back 50 years, at least.’”

After she won, many women told her that her victory empowered them to finally get up the nerve to ask for a raise at work. “Some women had waited 10, 15 years to ask. I said, ‘More importantly, did you get it?’” (They did.)

As for men, the reaction was delayed. Many years later, she came to realize that the match had made an impact on the generation of men who were children at the time – an impact that they themselves didn’t realize until they were older. She told me, “Most times, the men are the ones who have tears in their eyes, it’s very interesting. They say, ‘Billie, I was very young when I saw that match, and now I have a daughter. And I am so happy I saw that as a young man.’”

One of those young men was President Obama.

He said: “You don’t realize it, but I saw that match at 12. And now I have two daughters, and it has made a difference in how I raise them.”

Watch my interview with Billie Jean King if you haven’t seen it:

A common refrain of those working to improve diversity and representation in media is that if you can’t see it, you can’t be it. And that’s true in sports, government and in the workplace as well. If leaders don’t represent the diversity of our globalizing world, fresh ideas, diverse talent and an inclusive society can’t flourish. Through the Billie Jean King Leadership Initiative, King works to level the playing field for all people of all backgrounds so that everyone can “achieve their maximum potential and contribute to building a better society for all.” (Full disclosure: I am a member of the BJKLI advisory council.)

Emma Stone told USA Today earlier this month, she’s proud to play a part in showing some of King’s story to a younger audience. “The nice thing about doing a film like this,” she said, “is that there’s a whole generation of people who weren’t born before the Battle of the Sexes who are going to learn about this incredible period in history and all the things that have come since, so I’m grateful for that.”

“It wasn’t about tennis,” says King. “It was about history and social change.”

TEDWomen 2017 happens November 1–3 in New Orleans, and you’re invited. Learn more!

Billie Jean King: “I started thinking about my sport and how everybody who played wore white shoes, white clothes, played with white balls — everybody who played was white. And I said to myself, at 12 years old, “Where is everyone else?” And that just kept sticking in my brain. And that moment, I promised myself I’d fight for equal rights and opportunities for boys and girls, men and women, the rest of my life.” Photo: Marla Aufmuth/TED

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV October 2017 Workshop

Oct 21 2017 12:30
Oct 21 2017 16:30
Infoxchange, 33 Elizabeth St. Richmond

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

October 21, 2017 - 12:30

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main October 2017 Meeting: The Tor software and network

Oct 3 2017 18:30
Oct 3 2017 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000


Tuesday, October 3, 2017
6:30 PM to 8:30 PM
Mail Exchange Hotel
688 Bourke St, Melbourne VIC 3000


  • Russell Coker, Tor

Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

Russell Coker has done lots of Linux development over the years, mostly involved with Debian.


Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

October 3, 2017 - 18:30


TEDCassini’s final dive, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

Farewell to Cassini — and here’s to the continuing search for life beyond Earth. In mid-August, PBS released a digital short featuring Carolyn Porco, a planetary scientist and the leader of the imaging team for the Cassini mission to Saturn. In the short, Porco discusses what is required for life to exist on a planet, and how Saturn’s moon Enceladus seems a promising place to look for life outside Earth. This coincides with Cassini’s final dive on September 15, 2017. After 20 years in space, the Cassini spacecraft ended its seven-year observation of Saturn by diving into its atmosphere, where it burned and disintegrated. (Watch Porco’s TED Talk)

How old is zero really? The Bakhshali manuscript is a 70-page birch bark manuscript thought to have been used by merchants in India to practice arithmetic. Notably, it contains the number zero, represented by a small dot. After carbon-dating the manuscript, scientists from the University of Oxford, including mathematics professor Marcus du Sautoy, determined that the manuscript likely dates from 200–400 A.D., much earlier than previously thought. If the carbon dating is correct, the Bakhshali manuscript may be the first known usage of zero as a symbol for nothing. (Watch du Sautoy’s TED Talk)

The power of taking time off. In 2009, Stefan Sagmeister took the TED stage by storm as he shared his vision of time off. In his talk, he explains that every seven years, he embarks on a sabbatical year to recharge, be creative, and feel inspired. Fast forward to 2017, and Neil Pasricha teamed up with the CEO of SimpliFlying, a global aviation strategy firm, to test Sagmeister’s approach within the company. Instead of every seven years, employees took vacation every seven weeks. Despite a few pain points, workers’ creativity, productivity and happiness increased, and the firm’s economic performance improved, Pasricha reports in the Harvard Business Review. It seems as though it pays to relax. (Watch Sagmeister’s TED Talk and Neil Pasricha’s TED Talk)

What’s wrong with US democracy — and how to fix it. In this time of divisive politics, Michael Porter and colleague Katherine Gehl released new research describing the causes of the U.S political system’s failure to serve the public interest. Their detailed report explains how the system changed over the years to benefit political parties and industry allies, and offers strategies for how we can reinvigorate our democracy. (Watch Michael Porter’s TED Talk)

The worst flag in North America gets a reboot. In Roman Mars’ TED Talk on awful city flag designs, he calls Pocatello, Idaho’s flag the worst in North America. The city’s residents didn’t stand for that; they called on local officials to create a new flag. In 2016, a flag design committee was formed, discussions were open to the public, and 709 submissions poured in. Mars even traveled to Pocatello to consult on the design process. Now, Pocatello’s flag has been transformed from what the North American Vexillological Association rated as the worst flag in North America into a flag that attempts to capture the beauty and history of Pocatello. (Watch Roman Mars’ TED Talk)  

Community Health Academy: Phase one. The news may be regularly alarming, but around the world, things are on an upward trajectory. At Goalkeepers, held September 19 and 20 in New York City, the Bill & Melinda Gates Foundation set out to celebrate the “quiet progress” being made toward the UN’s Sustainable Development Goals. Amid a speaker lineup that included Malala Yousafzai, Justin Trudeau and Barack Obama, 2017 TED Prize winner Raj Panjabi stepped up to share his vision for bringing health care to the billion people who lack it by empowering community health workers. He shared the latest on his TED Prize wish: the Community Health Academy. The project now has 15 partners and phase one, launching next year, will be a free, open-education platform for policy makers and nonprofit leaders interested in community health models. “We cannot achieve the Global Goals without investing in hiring, training and equipping community health workers,” said Panjabi. “We’re working to make sure community health workers are no longer an informal, unrecognized group but become a renowned, empowered profession like nurses and doctors.” (Watch Panjabi’s TED Talk)

Have a news item to share? Write us at and you may see it included in this biweekly round-up.

Featured Image Credit: NASA.



CryptogramFriday Squid Blogging: Using Squid Ink to Detect Gum Disease

A new dental imagery method, using squid ink, light, and ultrasound.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Linux Australiasthbrx - a POWER technical blog: Stupid Solutions to Stupid Problems: Hardcoding Your SSH Key in the Kernel

The "problem"

I'm currently working on firmware and kernel support for OpenCAPI on POWER9.

I've recently been allocated a machine in the lab for development purposes. We use an internal IBM tool running on a secondary machine that triggers hardware initialisation procedures, then loads a specified skiboot firmware image, a kernel image, and a root file system directly into RAM. This allows us to get skiboot and Linux running without requiring the usual hostboot initialisation and gives us a lot of options for easier tinkering, so it's super-useful for our developers working on bringup.

When I got access to my machine, I figured out the necessary scripts, developed a workflow, and started fixing my code... so far, so good.

One day, I was trying to debug something and get logs off the machine using ssh and scp, when I got frustrated with having to repeatedly type in our ultra-secret, ultra-secure root password, abc123. So, I ran ssh-copy-id to copy over my public key, and all was good.

Until I rebooted the machine, when strangely, my key stopped working. It took me longer than it should have to realise that this is an obvious consequence of running entirely from an initrd that's reloaded every boot...

The "solution"

I mentioned something about this to Jono, my housemate/partner-in-stupid-ideas, one evening a few weeks ago. We decided that clearly, the best way to solve this problem was to hardcode my SSH public key in the kernel.

This would definitely be the easiest and most sensible way to solve the problem, as opposed to, say, just keeping my own copy of the root filesystem image. Or asking Mikey, whose desk is three metres away from mine, whether he could use his write access to add my key to the image. Or just writing a wrapper around sshpass...

One Tuesday afternoon, I was feeling bored...

The approach

The SSH daemon looks for authorised public keys in ~/.ssh/authorized_keys, so we need to have a read of /root/.ssh/authorized_keys return a specified hard-coded string.

I did a bit of investigation. My first thought was to put some kind of hook inside whatever filesystem driver was being used for the root. After some digging, I found out that the filesystem type rootfs, as seen in mount, is actually backed by the tmpfs filesystem. I took a look around the tmpfs code for a while, but didn't see any way to hook in a fake file without a lot of effort - the tmpfs code wasn't exactly designed with this in mind.

I thought about it some more - what would be the easiest way to create a file such that it just returns a string?

Then I remembered sysfs, the filesystem normally mounted at /sys, which is used by various kernel subsystems to expose configuration and debugging information to userspace in the form of files. The sysfs API allows you to define a file and specify callbacks to handle reads and writes to the file.

That got me thinking - could I create a file in /sys, and then use a bind mount to have that file appear where I need it in /root/.ssh/authorized_keys? This approach seemed fairly straightforward, so I decided to give it a try.

First up, creating a pseudo-file. It had been a while since the last time I'd used the sysfs API...


The sysfs pseudo file system was first introduced in Linux 2.6, and is generally used for exposing system and device information.

Per the sysfs documentation, sysfs is tied in very closely with the kobject infrastructure. sysfs exposes kobjects as directories, containing "attributes" represented as files. The kobject infrastructure provides a way to define kobjects representing entities (e.g. devices) and ksets which define collections of kobjects (e.g. devices of a particular type).

Using kobjects you can do lots of fancy things such as sending events to userspace when devices are hotplugged - but that's all out of the scope of this post. It turns out there's some fairly straightforward wrapper functions if all you want to do is create a kobject just to have a simple directory in sysfs.

#include <linux/kobject.h>

static int __init ssh_key_init(void)
{
        struct kobject *ssh_kobj;

        ssh_kobj = kobject_create_and_add("ssh", NULL);
        if (!ssh_kobj) {
                pr_err("SSH: kobject creation failed!\n");
                return -ENOMEM;
        }

        return 0;
}
This creates and adds a kobject called ssh. And just like that, we've got a directory in /sys/ssh/!

The next thing we have to do is define a sysfs attribute for our authorized_keys file. sysfs provides a framework for subsystems to define their own custom types of attributes with their own metadata - but for our purposes, we'll use the generic bin_attribute attribute type.

#include <linux/sysfs.h>

const char key[] = "PUBLIC KEY HERE...";

static ssize_t show_key(struct file *file, struct kobject *kobj,
                        struct bin_attribute *bin_attr, char *to,
                        loff_t pos, size_t count)
{
        return memory_read_from_buffer(to, count, &pos, key, bin_attr->size);
}

static const struct bin_attribute authorized_keys_attr = {
        .attr = { .name = "authorized_keys", .mode = 0444 },
        .read = show_key,
        .size = sizeof(key)
};

We provide a simple callback, show_key(), that copies the key string into the file's buffer, and we put it in a bin_attribute with the appropriate name, size and permissions.

To actually add the attribute, we put the following in ssh_key_init():

int rc;
rc = sysfs_create_bin_file(ssh_kobj, &authorized_keys_attr);
if (rc) {
        pr_err("SSH: sysfs creation failed, rc %d\n", rc);
        return rc;
}

Woo, we've now got /sys/ssh/authorized_keys! Time to move on to the bind mount.


Now that we've got a directory with the key file in it, it's time to figure out the bind mount.

Because I had no idea how any of the file system code works, I started off by running strace on mount --bind ~/tmp1 ~/tmp2 just to see how the userspace mount tool uses the mount syscall to request the bind mount.

execve("/bin/mount", ["mount", "--bind", "/home/ajd/tmp1", "/home/ajd/tmp2"], [/* 18 vars */]) = 0


mount("/home/ajd/tmp1", "/home/ajd/tmp2", 0x18b78bf00, MS_MGC_VAL|MS_BIND, NULL) = 0

The first and second arguments are the source and target paths respectively. The third argument, looking at the signature of the mount syscall, is a pointer to a string with the file system type. Because this is a bind mount, the type is irrelevant (upon further digging, it turns out that this particular pointer is to the string "none").

The fourth argument is where we specify the flags bitfield. MS_MGC_VAL is a magic value that was required before Linux 2.4 and can now be safely ignored. MS_BIND, as you can probably guess, signals that we want a bind mount.

(The final argument is used to pass file system specific data - as you can see it's ignored here.)

Now, how is the syscall actually handled on the kernel side? The answer is found in fs/namespace.c.

SYSCALL_DEFINE5(mount, char __user *, dev_name, char __user *, dir_name,
                char __user *, type, unsigned long, flags, void __user *, data)
{
        int ret;

        /* ... copy parameters from userspace memory ... */

        ret = do_mount(kernel_dev, dir_name, kernel_type, flags, options);

        /* ... cleanup ... */

        return ret;
}

So in order to achieve the same thing from within the kernel, we just call do_mount() with exactly the same parameters as the syscall uses:

rc = do_mount("/sys/ssh", "/root/.ssh", "sysfs", MS_BIND, NULL);
if (rc) {
        pr_err("SSH: bind mount failed, rc %d\n", rc);
        return rc;
}

...and we're done, right? Not so fast:

SSH: bind mount failed, rc -2

-2 is ENOENT - no such file or directory. For some reason, we can't find /sys/ssh... of course, that would be because even though we've created the sysfs entry, we haven't actually mounted sysfs on /sys.

rc = do_mount("sysfs", "/sys", "sysfs",
              MS_NOSUID | MS_NOEXEC | MS_NODEV, NULL);

At this point, my key worked!

Note that this requires that your root file system has an empty directory created at /sys to be the mount point. Additionally, in a typical Linux distribution environment (as opposed to my hardware bringup environment), your initial root file system will contain an init script that mounts your real root file system somewhere and calls pivot_root() to switch to the new root file system. At that point, the bind mount won't be visible from children processes using the new root - I think this could be worked around but would require some effort.


The final piece of the puzzle is building our new code into the kernel image.

To allow us to switch this important functionality on and off, I added a config option to fs/Kconfig:

config SSH_KEY
        bool "Andrew's dumb SSH key hack"
        default y
        help
          Hardcode an SSH key for /root/.ssh/authorized_keys.

          This is a stupid idea. If unsure, say N.

This will show up in make menuconfig under the File systems menu.

And in fs/Makefile:

obj-$(CONFIG_SSH_KEY)           += ssh_key.o

If CONFIG_SSH_KEY is set to y, obj-$(CONFIG_SSH_KEY) evaluates to obj-y and thus ssh_key.o gets compiled. Conversely, obj-n is completely ignored by the build system.

I thought I was all done... then Andrew suggested I make the contents of the key configurable, and I had to oblige. Conveniently, Kconfig options can also be strings:

config SSH_KEY_VALUE
        string "Value for SSH key"
        depends on SSH_KEY
        help
          Enter in the content for /root/.ssh/authorized_keys.

Including the string in the C file is as simple as:

const char key[] = CONFIG_SSH_KEY_VALUE;

And there we have it, a nicely configurable albeit highly limited kernel SSH backdoor!


I've put the full code up on GitHub for perusal. Please don't use it, I will be extremely disappointed in you if you do.

Thanks to Jono for giving me stupid ideas, and the rest of OzLabs for being very angry when they saw the disgusting things I was doing.

Comments and further stupid suggestions welcome!

Sociological ImagesPunk Rock Resisting Islamophobia

Originally posted at Discoveries

Punk rock has a long history of anti-racism, and now a new wave of punk bands are turning it up to eleven to combat Islamophobia. For a recent research article, sociologist Amy D. McDowell immersed herself in the “Taqwacore” scene — a genre of punk rock that derives its name from the Arabic word “Taqwa.” While inspired by the Muslim faith, this genre of punk is not strictly religious — Taqwacore captures the experience of the “brown kids,” Muslims and non-Muslims alike who experience racism and prejudice in the post-9/11 era. This music calls out racism and challenges stereotypes.

Through a combination of interviews and many hours of participant observation at Taqwacore events, McDowell brings together testimony from musicians and fans, describes the scene, and analyzes materials from Taqwacore forums and websites. Many participants, Muslim and non-Muslim alike, describe processes of discrimination where anti-Muslim sentiments and stereotypes have affected them. Her research shows how Taqwacore is a multicultural musical form for a collective, panethnic “brown” identity that spans multiple nationalities and backgrounds. Pushing back against the idea that Islam and punk music are incompatible, Taqwacore artists draw on the essence of punk to create music that empowers marginalized youth.

Neeraj Rajasekar is a Ph.D. student in sociology at the University of Minnesota.


CryptogramBoston Red Sox Caught Using Technology to Steal Signs

The Boston Red Sox admitted to eavesdropping on the communications channel between catcher and pitcher.

Stealing signs is believed to be particularly effective when there is a runner on second base who can both watch what hand signals the catcher is using to communicate with the pitcher and can easily relay to the batter any clues about what type of pitch may be coming. Such tactics are allowed as long as teams do not use any methods beyond their eyes. Binoculars and electronic devices are both prohibited.

In recent years, as cameras have proliferated in major league ballparks, teams have begun using the abundance of video to help them discern opponents' signs, including the catcher's signals to the pitcher. Some clubs have had clubhouse attendants quickly relay information to the dugout from the personnel monitoring video feeds.

But such information has to be rushed to the dugout on foot so it can be relayed to players on the field -- a runner on second, the batter at the plate -- while the information is still relevant. The Red Sox admitted to league investigators that they were able to significantly shorten this communications chain by using electronics. In what mimicked the rhythm of a double play, the information would rapidly go from video personnel to a trainer to the players.

This is ridiculous. The rules about what sorts of sign stealing are allowed and what sorts are not are arbitrary and unenforceable. My guess is that the only reason there aren't more complaints is because everyone does it.

The Red Sox responded in kind on Tuesday, filing a complaint against the Yankees claiming that the team uses a camera from its YES television network exclusively to steal signs during games, an assertion the Yankees denied.

Boston's mistake here was using a very conspicuous Apple Watch as a communications device. They need to learn to be more subtle, like everyone else.

Worse Than FailureError'd: Choose Wisely

"I'm not sure how I can give feedback on this course, unless, figuring out this matrix is actually a final exam," wrote Mads.


Brian W. writes, "Sorry that you're not happy with our spam, but before you go...just one more."


"I was looking forward to getting this Gerber Dime, but I guess I'll have to wait till they port it to OS X," wrote Peter G.


"Deleting 7 MB frees up 6.66 GB? I smell a possible unholy alliance," Mike W. writes.


Bill W. wrote, "I wonder if they're wanting to know to what degree I'm 'not at all likely' to recommend Best Buy to friends and family?"


"So, is this a new way for the folks at WebEx to make sure that you don't get bad answers?" writes Andy B.


[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet Linux Australiasthbrx - a POWER technical blog: NCSI - Nice Network You've Got There

A neat piece of kernel code dropped into my lap recently, and as a way of processing having to inject an entire network stack into my brain in less-than-ideal time I thought we'd have a look at it here: NCSI!

NCSI - Not the TV Show

NCSI stands for Network Controller Sideband Interface, and put most simply it is a way for a management controller (eg. a BMC like those found on our OpenPOWER machines) to share a single physical network interface with a host machine. Instead of two distinct network interfaces you plug in a single cable and both the host and the BMC have network connectivity.

NCSI-capable network controllers achieve this by filtering network traffic as it arrives and determining if it is host- or BMC-bound. To know how to do this the BMC needs to tell the network controller what to look out for, and from a Linux driver perspective this is the focus of the NCSI protocol.

NCSI Overview

Hi My Name Is 70:e2:84:14:24:a1

The major components of what NCSI helps facilitate are:

  • Network Controllers, known as 'Packages' in this context. There may be multiple separate packages which contain one or more Channels.
  • Channels, most easily thought of as the individual physical network interfaces. If a package is the network card, channels are the individual network jacks. (Somewhere a pedant's head is spinning in circles).
  • Management Controllers, or our BMC, with their own network interfaces. Hypothetically there can be multiple management controllers in a single NCSI system, but I've not come across such a setup yet.

NCSI is the medium and protocol via which these components communicate.

NCSI Packages

The interface between Management Controller and one or more Packages carries both general network traffic to/from the Management Controller as well as NCSI traffic between the Management Controller and the Packages & Channels. Management traffic is differentiated from regular traffic via the inclusion of a special NCSI tag inserted in the Ethernet frame header. These management commands are used to discover and configure the state of the NCSI packages and channels.

If a BMC's network interface is configured to use NCSI, as soon as the interface is brought up NCSI gets to work finding and configuring a usable channel. The NCSI driver at first glance is an intimidating combination of state machines and packet handlers, but with enough coffee it can be represented like this:

NCSI State Diagram

Without getting into the nitty gritty details the overall process for configuring a channel enough to get packets flowing is fairly straightforward:

  • Find available packages.
  • Find each package's available channels.
  • (At least in the Linux driver) select a channel with link.
  • Put this channel into the Initial Config State. The Initial Config State is where all the useful configuration occurs. Here we find out what the selected channel is capable of and its current configuration, and set it up to recognise the traffic we're interested in. The first and most basic way of doing this is configuring the channel to filter traffic based on our MAC address.
  • Enable the channel and let the packets flow.

At this point NCSI takes a back seat to normal network traffic, transmitting a "Get Link Status" packet at regular intervals to monitor the channel.

AEN Packets

Changes can occur from the package side too; the NCSI package communicates these back to the BMC with Asynchronous Event Notification (AEN) packets. As the name suggests these can occur at any time and the driver needs to catch and handle these. There are different types but they essentially boil down to changes in link state, telling the BMC the channel needs to be reconfigured, or to select a different channel. These are only transmitted once and no effort is made to recover lost AEN packets - another good reason for the NCSI driver to periodically monitor the channel.


Each channel can be configured to filter traffic based on MAC address, broadcast traffic, multicast traffic, and VLAN tagging. Associated with each of these filters is a filter table which can hold a finite number of entries. In the case of the VLAN filter each channel could match against 15 different VLAN IDs for example, but in practice the physical device will likely support less. Indeed the popular BCM5718 controller supports only two!

This is where I dived into NCSI. The driver had a lot of the pieces for configuring VLAN filters but none of it was actually hooked up in the configure state, and didn't have a way of actually knowing which VLAN IDs were meant to be configured on the interface. The bulk of that work appears in this commit where we take advantage of some useful network stack callbacks to get the VLAN configuration and set them during the configuration state. Getting to the configuration state at some arbitrary time and then managing to assign multiple IDs was the trickiest bit, and is something I'll be looking at simplifying in the future.

NCSI! A neat way to give physically separate users access to a single network controller, and if it works right you won't notice it at all. I'll surely be spending more time here (fleshing out the driver's features, better error handling, and making the state machine a touch more readable to start, and I haven't even mentioned HWA), so watch this space!


LongNowCassini Ends, but the Search for Life in the Solar System Continues

On September 15 02017, the Cassini-Huygens probe, which spent the last 13 years of a 20-year space mission studying Saturn, plummeted as planned into the ringed planet’s atmosphere, catching fire and becoming a meteor.

Cassini’s final moments, dubbed “The Grand Finale” by NASA, elicited reactions of wonder around the world. The stunning photographs Cassini captured of Saturn over the course of its mission were shared widely on social media. While the images understandably received most of the attention, the discoveries the probe made in its search for life in the solar system, especially on the Saturnian moons of Enceladus and Titan, will perhaps be its enduring legacy.

The atmosphere of Titan, a moon of Saturn. NASA/JPL-Caltech/Space Science Institute

Planetary scientist Carolyn Porco, who led the imaging team for the Cassini mission, spoke at Long Now in July 02017. In the Q&A, Stewart Brand asked Porco about what the impact of finding life in the solar system would be:

As the Cassini mission came to an end, Porco shared her reflections on the mission in a final captain’s log:

Captain’s Log

September 15, 2017

The end is now upon us. Within hours of the posting of this entry, Cassini will have burned up in the atmosphere of Saturn … a kiloton explosion, spread out against the sky in a meteoric display of light and fire, a dazzling flash to signal the dying essence of a lone emissary from another world. As if the myths of old had foretold the future, the great patriarch will consume his child. At that point, that golden machine, so dutiful and strong, will enter the realm of history, and the toils and triumphs of this long march will be done.

For those of us appointed long ago to embark on this journey, it has been a taxing 3 decades, requiring a level of dedication that I could not have predicted, and breathless times when we sprinted for the duration of a marathon. But in return, we were blessed to spend our lives working and playing in that promised land beyond the Sun.

My imaging team members and I were especially blessed to serve as the documentarians of this historic epoch and return a stirring visual record of our travels around Saturn and the glories we found there. This is our gift to the citizens of planet Earth.

So, it is with both wistful, sentimental reflection and a boundless sense of pride, in a commitment met and a job well done, that I now turn to face this looming, abrupt finality.

It is doubtful we will soon see a mission as richly suited as Cassini return to this ringed world and shoulder a task as colossal as we have borne over the last 27 years.

To have served on this mission has been to live the rewarding life of an explorer of our time, a surveyor of distant worlds. We wrote our names across the sky. We could not have asked for more.

I sign off now, grateful in knowing that Cassini’s legacy, and ours, will include our mutual roles as authors of a tale that humanity will tell for a very long time to come.

Carolyn Porco
Cassini Imaging Team Leader
Director, CICLOPS
Boulder, CO

A few hours before its mission came to an end, Cassini took a final photograph of the planet it spent the last thirteen years exploring.

NASA/JPL-Caltech/Space Science Institute

The topic of space invites long-term thinking. Some recent Long Now talks:

Cory DoctorowBoring, complex and important: the deadly mix that blew up the open web

On Monday, the World Wide Web Consortium published EME, a standard for locking up video on the web with DRM, allowing large corporate members to proceed without taking any steps to protect accessibility work, security research, archiving or innovation.

I spent years working to get people to pay attention to the ramifications of the effort, but was stymied by the deadly combination of an issue that was super-technical and complicated, as well as kind of boring (standards-making is a slow-moving, legalistic process).

This is really the worst kind of problem, an issue that matters but that requires a lot of technical knowledge and sustained attention to engage with. I wrote up a postmortem on the effort for Wired.

The W3C is a multistakeholder body based on consensus, and that means that members are expected to compromise to find common ground. So we returned with a much milder proposal: we’d stand down on objecting to EME, provided that the consortium promised only to invoke laws such as the DMCA in tandem with some other complaint, like copyright infringement. That meant studios and their technology partners could always sue when someone infringed copyright, or stole trade secrets, or interfered with contractual arrangements, but they would not be able to abuse the W3C process to claim the right to sue over otherwise legal activities, such as automatically analysing videos to prevent strobe effects from triggering seizures in people with photosensitive epilepsy.

This proposal was a way to get at the leadership’s objection: if the law was making the mischief, then let us take the law off the table (EFF is also suing the US government to get the law overturned, but that could take years, far too long in web-time). More importantly, if EME’s advocates refused to negotiate on this point, it would suggest that they planned on using the law to enforce “rights” that they really shouldn’t have, such as the right to decide who could adapt video for people with disabilities, or whether national archives could exercise their statutory rights to make deposit copies of copyrighted works.

But EME’s proponents – a collection of browser vendors, entertainment industry trade bodies, and companies selling products based on EME – refused to negotiate. After 90 days of desultory participation, the W3C leaders allowed the process to die. Despite this intransigence, the W3C executive renewed the EME working group’s charter and allowed it to continue its work, even as the cracks among the W3C’s membership on the standard’s fate deepened.

By the time EME was ready to publish, those cracks had deepened further. The poll results on EME showed the W3C was more divided on this matter than on any in its history. Again, the W3C leadership put its thumbs on the scales for the entertainment industry’s wish-lists over the open web’s core requirements, and overrode every single objection raised by the members.

Boring, complex and important: a recipe for the web’s dire future
[Cory Doctorow/Wired]

Krebs on SecurityExperian Site Can Give Anyone Your Credit Freeze PIN

An alert reader recently pointed my attention to a free online service offered by big-three credit bureau Experian that allows anyone to request the personal identification number (PIN) needed to unlock a consumer credit file that was previously frozen at Experian.

Experian’s page for retrieving someone’s credit freeze PIN requires little more information than has already been leaked by big-three bureau Equifax and a myriad other breaches.

The first hurdle for instantly revealing anyone’s freeze PIN is to provide the person’s name, address, date of birth and Social Security number (all data that has been jeopardized in breaches 100 times over — including in the recent Equifax breach — and that is broadly for sale in the cybercrime underground).

After that, one just needs to input an email address to receive the PIN and swear that the information is true and belongs to the submitter. I’m certain this warning would deter all but the bravest of identity thieves!

The final authorization check is that Experian asks you to answer four so-called “knowledge-based authentication” or KBA questions. As I have noted in countless stories published here previously, the problem with relying on KBA questions to authenticate consumers online is that so much of the information needed to successfully guess the answers to those multiple-choice questions is now indexed or exposed by search engines, social networks and third-party services online — both criminal and commercial.

What’s more, many of the companies that provide and resell these types of KBA challenge/response questions have been hacked in the past by criminals that run their own identity theft services.

“Whenever I’m faced with KBA-type questions I find that database tools like Spokeo, Zillow, etc are my friend because they are more likely to know the answers for me than I am,” said Nicholas Weaver, a senior researcher in networking and security for the International Computer Science Institute (ICSI).

The above quote from Mr. Weaver came in a story from May 2017 which looked at how identity thieves were able to steal financial and personal data for over a year from TALX, an Equifax subsidiary that provides online payroll, HR and tax services. Equifax says crooks were able to reset the 4-digit PIN given to customer employees as a password and then steal W-2 tax data after successfully answering KBA questions about those employees.

In short: Crooks and identity thieves broadly have access to the data needed to reliably answer KBA questions on most consumers. That is why this offering from Experian completely undermines the entire point of placing a freeze. 

After discovering this portal at Experian, I tried to get my PIN, but the system failed and told me to submit the request via mail. That’s fine and as far as I’m concerned the way it should be. However, I also asked my followers on Twitter who have freezes in place at Experian to test it themselves. More than a dozen readers responded in just a few minutes, and most of them reported success at retrieving their PINs on the site and via email after answering the KBA questions.

Here’s a sample of the KBA questions the site asked one reader:

1. Please select the city that you have previously resided in.

2. According to our records, you previously lived on (XXTH). Please choose the city from the following list where this street is located.

3. Which of the following people live or previously lived with you at the address you provided?

4. Please select the model year of the vehicle you purchased or leased prior to July 2017.

Experian will display the freeze PIN on its site, and offer to send it to an email address of your choice. Image: Rob Jacques.

I understand that people who place freezes on their credit files may misplace the PIN provided by the bureaus that is needed to unlock or thaw a freeze. This is human nature, and the bureaus should absolutely have a reliable process to recover this PIN. However, the information should be sent via snail mail to the address on the credit record, not via email to any old email address.

This is yet another example of how someone or some entity other than the credit bureaus needs to be put in charge of rethinking and rebuilding the process by which consumers apply for and manage credit freezes. I addressed some of these issues — as well as other abuses by the credit reporting bureaus — in the second half of a long story published Wednesday evening.

Experian has not yet responded to requests for comment.

While this service is disappointing, I stand by my recommendation that everyone should place a freeze on their credit files. I published a detailed Q&A a few days ago about why this is so important and how you can do it. For those wondering about whether it’s possible and advisable to do this for their kids or dependents, check out The Lowdown on Freezing Your Kid’s Credit.

CryptogramISO Rejects NSA Encryption Algorithms

The ISO has decided not to approve two NSA-designed block encryption algorithms: Speck and Simon. It's because the NSA is not trusted to put security ahead of surveillance:

A number of them voiced their distrust in emails to one another, seen by Reuters, and in written comments that are part of the process. The suspicions stem largely from internal NSA documents disclosed by Snowden that showed the agency had previously plotted to manipulate standards and promote technology it could penetrate. Budget documents, for example, sought funding to "insert vulnerabilities into commercial encryption systems."

More than a dozen of the experts involved in the approval process for Simon and Speck feared that if the NSA was able to crack the encryption techniques, it would gain a "back door" into coded transmissions, according to the interviews and emails and other documents seen by Reuters.

"I don't trust the designers," Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden's papers. "There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards."

I don't trust the NSA, either.

Worse Than FailureTales from the Interview: The In-House Developer

James was getting anxious to land a job that would put his newly-minted Computer Science degree to use. Six months had come to pass since he graduated and being a barista barely paid the bills. Living in a small town didn't afford him many local opportunities, so when he saw a developer job posting for an upstart telecom company, he decided to give it a shot.

"We do everything in-house!" the posting for CallCom emphasized, piquing James' interest. He hoped that meant there would be a small in-house development team that built their systems from the ground up. Surely he could learn the ropes from them before becoming a key contributor. He filled out the online application and happily clicked Submit.

Not 15 minutes later, his phone rang with a number he didn't recognize. Usually he just ignored those calls but he decided to answer. "Hi, is James available?" a nasally female voice asked, almost sounding disinterested. "This is Janine with CallCom, you applied for the developer position."

Caught off guard by the suddenness of their response, James wasn't quite ready for a phone screening. "Oh, yeah, of course I did! Just now. I am very interested."

"Great. Louis, the owner, would like to meet with you," Janine informed him.

"Ok, sure. I'm pretty open, I usually work in the evenings so I can make most days work," he replied, checking his calendar.

"Can you be here in an hour?" she asked. James managed to hide the fact he was freaking out about how to make it in time while assuring her he could be.

He arrived at the address Janine provided after a dangerous mid-drive shave. He felt unprepared but eager to rock the interview. The front door of their suite gave way to a lobby that seemed more like a walk-in closet. Janine was sitting behind a small desk reading a trashy tabloid and barely looked up to greet him. "Louis will see you now," she motioned toward a door behind the desk and went back to reading barely plausible celebrity rumors.

James stepped through the door into what could have been a walk-in closet for the first walk-in closet. A portly, sweaty man presumed to be Louis jumped up to greet him. "John! Glad you could make it on short notice. Have a seat!"

"Actually, it's James..." he corrected Louis, while also forgiving the mixup. "Nice to meet you. I was eager to get here to learn about this opportunity."

"Well James, you were right to apply! We are a fast growing company here at CallCom and I need eager young talent like you to really drive it home!" Louis was clearly excited about his company, growing sweatier by the minute.

"That sounds good to me! I may not have any real-world experience yet, but I assure you that I am eager to learn from your more senior members," James replied, trying to sell his potential.

Louis let out a hefty chuckle at James' mention of senior members. "Oh you mean stubborn old developers who are set in their ways? You won't be finding those around here! I believe in fresh young minds like yours, unmolded and ready to take the world by storm."

"I see..." James said, growing uneasy. "I suppose then I could at least learn how your code is structured from your junior developers? The ones who do your in-house development?"

Louis wiped his glistening brow with his suit coat before making the big revelation. "There are no other developers, James. It would just be you, building our fantastic new computer system from scratch! I have all the confidence in the world that you are the man for the job!"

James sat for a moment and pondered what he had just heard. "I'm sorry but I don't feel comfortable with that arrangement, Louis. I thought that by saying you do everything in-house, that implied there was already a development team."

"What? Oh, heavens no! In-house development means we let you work from home. Surely you can tell we don't have much office space here. So that's what it means. In. House. Got it?"

James quickly thanked Louis for his time and left the interconnected series of closets. In a way, James was glad for the experience. It motivated him to move out of his one horse town to a bigger city where he eventually found employment with a real in-house dev team.


Krebs on SecurityEquifax Breach: Setting the Record Straight

Bloomberg published a story this week citing three unnamed sources who told the publication that Equifax experienced a breach earlier this year which predated the intrusion that the big-three credit bureau announced on Sept. 7. To be clear, this earlier breach at Equifax is not a new finding and has been a matter of public record for months. Furthermore, it was first reported on this Web site in May 2017.

In my initial Sept. 7 story about the Equifax breach affecting more than 140 million Americans, I noted that this was hardly the first time Equifax or another major credit bureau has experienced a breach impacting a significant number of Americans.

On May 17, KrebsOnSecurity reported that fraudsters exploited lax security at Equifax’s TALX payroll division, which provides online payroll, HR and tax services.

That story was about how Equifax’s TALX division let customers who use the firm’s payroll management services authenticate to the service with little more than a 4-digit personal identification number (PIN).

Identity thieves who specialize in perpetrating tax refund fraud figured out that they could reset the PINs of payroll managers at various companies just by answering some multiple-guess questions — known as “knowledge-based authentication” or KBA questions — such as previous addresses and dates that past home or car loans were granted.

On Tuesday, Sept. 18, Bloomberg ran a piece with reporting from no fewer than five journalists there who relied on information provided by three anonymous sources. Those sources reportedly spoke in broad terms about an earlier breach at Equifax, and told the publication that these two incidents were thought to have been perpetrated by the same group of hackers.

The Bloomberg story did not name TALX. Only post-publication did Bloomberg reporters update the piece to include a statement from Equifax saying the breach was unrelated to the hack announced on Sept. 7, and that it had to do with a security incident involving a payroll-related service during the 2016 tax year.

I have thus far seen zero evidence that these two incidents are related. Equifax has said the unauthorized access to customers’ employee tax records (we’ll call this “the March breach” from here on) happened between April 17, 2016 and March 29, 2017.

The criminals responsible for unauthorized activity in the March breach were participating in an insidious but common form of cybercrime known as tax refund fraud, which involves filing phony tax refund requests with the IRS and state tax authorities using the personal information from identity theft victims.

My original report on the March breach was based on public breach disclosures that Equifax was required by law to file with several state attorneys general.

Because the TALX incident exposed the tax and payroll records of its customers’ employees, the victim customers were in turn required to notify their employees as well. That story referenced public breach disclosures from five companies that used TALX, including defense contractor giant Northrop Grumman; staffing firm Allegis Group; Saint-Gobain Corp.; Erickson Living; and the University of Louisville.

When asked Tuesday about previous media coverage of the March breach, Equifax pointed National Public Radio (NPR) to coverage in KrebsonSecurity.

One more thing before I move on to the analysis. For more information on why KBA is a woefully ineffective method of stopping fraudsters, see this story from 2013 about how some of the biggest vendors of these KBA questions were all hacked by criminals running an identity theft service online.

Or, check out these stories about how tax refund fraudsters used weak KBA questions to steal personal data on hundreds of thousands of taxpayers directly from the Internal Revenue Service‘s own Web site. It’s probably worth mentioning that Equifax provided those KBA questions as well.


Over the past two weeks, KrebsOnSecurity has received an unusually large number of inquiries from reporters at major publications who were seeking background interviews so that they could get up to speed on Equifax’s spotty security history (sadly, Bloomberg was not among them).

These informational interviews — in which I agree to provide context and am asked to speak mainly on background — are not unusual; I sometimes field two or three of these requests a month, and very often more when time permits. And for the most part I am always happy to help fellow journalists make sure they get the facts straight before publishing them.

But I do find it slightly disturbing that there appear to be so many reporters on the tech and security beats who apparently lack basic knowledge about what these companies do and their roles in perpetuating — not fighting — identity theft.

It seems to me that some of the world’s most influential publications have for too long given Equifax and the rest of the credit reporting industry a free pass — perhaps because of the complexities involved in succinctly explaining the issues to consumers. Indeed, I would argue the mainstream media has largely failed to hold these companies’ feet to the fire over a pattern of lax security and a complete disregard for securing the very sensitive consumer data that drives their core businesses.

To be sure, Equifax has dug themselves into a giant public relations hole, and they just keep right on digging. On Sept. 8, I published a story equating Equifax’s breach response to a dumpster fire, noting that it could hardly have been more haphazard and ill-conceived.

But I couldn’t have been more wrong. Since then, Equifax’s response to this incident has been even more astonishingly poor.


On Tuesday, the official Equifax account on Twitter replied to a tweet requesting the Web address of the site that the company set up to give away its free year of credit monitoring service. Instead of the correct address, the company’s Twitter account told users to visit securityequifax2017[dot]com, which is currently blocked by multiple browsers as a phishing site.



Under intense public pressure from federal lawmakers and regulators, Equifax said that for 30 days it would waive the fee it charges for placing a security freeze on one’s credit file (for more on what a security freeze entails and why you and your family should be freezing their files, please see The Equifax Breach: What You Should Know).

Unfortunately, the free freeze offer from Equifax doesn’t mean much if consumers can’t actually request one via the company’s freeze page; I have lost count of how many comments have been left here by readers over the past week complaining of being unable to load the site, let alone successfully obtain a freeze. Instead, consumers have been told to submit the requests and freeze fees in writing and to include copies of identity documents to validate the requests.

Sen. Elizabeth Warren (D-Mass) recently introduced a measure that would force the bureaus to eliminate the freeze fees and to streamline the entire process. To my mind, that bill could not get passed soon enough.

Understand that each credit bureau has a legal right to charge up to $20 in some states to freeze a credit file, and in many states they are allowed to charge additional fees if consumers later wish to lift or temporarily thaw a freeze. This is especially rich given that credit bureaus earn roughly $1 every time a potential creditor (or identity thief) inquires about your creditworthiness, according to Avivah Litan, a fraud analyst with Gartner Inc.

In light of this, it’s difficult to view these freeze fees as anything other than a bid to discourage consumers from filing them.

The Web sites where consumers can go to file freezes at the other major bureaus — including TransUnion and Experian — have hardly fared any better since Equifax announced the breach on Sept. 7. Currently, if you attempt to freeze your credit file at TransUnion, the company’s site is relentless in trying to steer you away from a freeze and toward the company’s free “credit lock” service.

That service, called TrueIdentity, claims to allow consumers to lock or unlock their credit files for free as often as they like with the touch of a button. But readers who take the bait probably won’t notice or read the terms of service for TrueIdentity, which has the consumer agree to a class action waiver, a mandatory arbitration clause, and something called ‘targeted marketing’ from TransUnion and their myriad partners.

The agreement also states TransUnion may share the data with other companies:

“If you indicated to us when you registered, placed an order or updated your account that you were interested in receiving information about products and services provided by TransUnion Interactive and its marketing partners, or if you opted for the free membership option, your name and email address may be shared with a third party in order to present these offers to you. These entities are only allowed to use shared information for the intended purpose only and will be monitored in accordance with our security and confidentiality policies. In the event you indicate that you want to receive offers from TransUnion Interactive and its marketing partners, your information may be used to serve relevant ads to you when you visit the site and to send you targeted offers. For the avoidance of doubt, you understand that in order to receive the free membership, you must agree to receive targeted offers.”

TransUnion then encourages consumers who are persuaded to use the “free” service to subscribe to “premium” services for a monthly fee with a perpetual auto-renewal.

In short, TransUnion’s credit lock service (and a similarly named service from Experian) doesn’t prevent potential creditors from accessing your files, and these dubious services allow the credit bureaus to keep selling your credit history to lenders (or identity thieves) as they see fit.

As I wrote in a Sept. 11 Q&A about the Equifax breach, I take strong exception to the credit bureaus’ increasing use of the term “credit lock” to divert people away from freezes. Their motives for saddling consumers with even more confusing terminology are suspect, and I would not count on a credit lock to take the place of a credit freeze, regardless of what these companies claim (consider the source).

Experian’s freeze Web site has performed little better since Sept. 7. Several readers pinged KrebsOnSecurity via email and Twitter to complain that while Experian’s freeze site repeatedly returned error messages stating that the freeze did not go through, these readers’ credit cards were nonetheless charged $15 freeze fees multiple times.

If the above facts are not enough to make your blood boil, consider that Equifax and other bureaus have been lobbying lawmakers in Congress to pass legislation that would dramatically limit the ability of consumers to sue credit bureaus for sloppy security, and cap damages in related class action lawsuits to $500,000.

If ever there was an industry that deserved obsolescence or at least more regulation, it is the credit bureaus. If either of those outcomes are to become reality, it is going to take much more attentive and relentless coverage on the part of the world’s top news publications. That’s because there’s a lot at stake here for an industry that lobbies heavily (and successfully) against any new laws that may restrict their businesses.

Here’s hoping the media can get up to speed quickly on this vitally important topic, and help lead the debate over legal and regulatory changes that are sorely needed.


TEDHurricanes, monsoons and the human rights of climate change: TEDWomen chats with Mary Robinson

Mary Robinson speaks at TEDWomen 2015 at the Monterey Conference Center. Photo: Marla Aufmuth/TED

Two years ago, former president of Ireland Mary Robinson graced the TEDWomen stage with a moving talk about why climate change is not only a threat to our environment, but also a threat to the human rights of many poor and marginalized people around the world.

Mary is an incredible person who inspires me greatly. Besides being the first woman president of Ireland, she also served as the UN High Commissioner for Human Rights from 1997 to 2002. She now leads a foundation devoted to climate justice. She received the Presidential Medal of Freedom from President Obama, is a member of the Elders, a former Chair of the Council of Women World Leaders and a member of the Club of Madrid.

“I came to [be concerned about] climate change not as a scientist or an environmental lawyer,” she told the TEDWomen crowd in California. “It was because of the impact on people, and the impact on their rights — their rights to food and safe water, health, education and shelter.”

She told stories of the people she met in her work with the United Nations and later on in her foundation work. When explaining the challenges they faced, she said they kept repeating the same pervasive sentence: “Oh, but things are so much worse now, things are so much worse.” She came to realize that they were talking about the same phenomenon — climate shocks and changes in the weather that were threatening their crops, their livelihood and their survival.

In the wake of Hurricanes Harvey and Irma in the United States, and extreme monsoons in South Asia, I reached out to Mary to get an update on her work and where things stand now in terms of climate justice and the global fight to curb climate change. Despite a busy week attending this week’s United Nations General Assembly and other events, she took the time to answer my questions via email.

Horrific hurricanes like Harvey, Irma and now Maria are bringing the issue of climate change to the doorsteps of a country that recently dropped out of the Paris Climate agreement. What would you say to Americans about climate change and the actions of their government in 2017?

Mary Robinson: In the past few weeks alone, we have seen the physical, social and economic devastation wrought on some American cities and vulnerable communities across the Caribbean by Hurricanes Harvey and Irma, and the death and destruction caused by monsoons across South Asia. The American people know from previous experience, such as Hurricane Katrina in 2005, that some people affected will be displaced from their homes forever. Many of these displaced people are drawn to cities, but the capacity to integrate these new arrivals in a manner consistent with their human rights and dignity is often woefully inadequate — reflecting an equally inadequate response from political leaders.

The profound injustice of climate change is that those who are most vulnerable in society, no matter the level of development of the country in question, will suffer most. People who are marginalised or poor, women, and indigenous communities are being disproportionately affected by climate impacts.

And yet, in the US the debate as to whether climate change is real or not continues in mainstream discourse. Throughout the world, baseless climate denial has largely disappeared into the fringes of public debate as the focus has shifted to how countries should act to avoid the potentially disastrous consequences of unchecked climate change. For many years, the US has positioned itself as a global leader in science and technology and yet in seeking to leave or renegotiate the Paris Agreement, the current administration is taking a giant leap backwards, both in terms of science-based policy making and in terms of international solidarity and cooperation.

However, while the national government is going backwards, we are seeing citizens and leaders across the country picking up the slack. I see many American people who remain determined to ensure the US plays its role in the fight against climate change. For Americans who are rightly concerned about the administration’s direction on climate change, I would say that there are still many reasons to be optimistic. The “We’re Still In” initiative offers a tangible demonstration of that desire on the part of concerned citizens to ensure that the US emerges as a leader on climate action, regardless of the approach of the current administration. States, cities, universities and businesses are committing to ambitious action to tackle climate change, to ensure clean and efficient energy services and uphold US commitments under the Paris Agreement.

As you pointed out in your TED Talk, the people who are suffering the most from climate change are those who don’t have the means to escape catastrophic events or rebuild after they have occurred. Can you talk a bit about efforts your organization and others are involved in to help those who are the most affected by climate change, but often are the least responsible for the human actions that have caused it?

As with many of the most severe storms to impact communities in recent years – including in the US with Katrina, Sandy and Ike – it is the poorest people who have suffered the worst impacts from Harvey and Irma. The people who the climate justice movement is for are the people who have the least capacity to protect themselves, their families, their homes and their incomes from the impacts of climate change, and indeed climate action policies that are not grounded in human rights. These are also the people who have the hardest time rebuilding their lives in the wake of these more frequent and intense disasters as they do not have adequate access to insurance, savings or other livelihood options necessary to provide resilience. In many cases, families lose everything.

If we then consider the devastation wrought by Irma in the Caribbean, where poverty rates are much higher than the US, we begin to understand the great injustice of climate change. People living around the world, in communities which have never seen the benefits of industrialization or even electrification, face the harshest impacts of climate change and have the most limited capacity to recover.

In seeking to advance climate justice, my foundation and other organizations which share our concerns, seek to ensure that the voices of these communities are heard and understood by those crafting the global and national response to the climate crisis to ensure that decisions are participatory, transparent and respond to the needs of the most vulnerable people in our communities. We must enable all people to realize their right to development and to benefit from the global transition to a sustainable, cleaner and more equitable future. Solutions to the climate crisis that are good for the planet but cause further suffering for people living in poverty must not be implemented.

What is the number one issue involving climate change that we should all be focused on right now as regards human rights and climate justice in the world?

There are many pressing issues which must be addressed to advance climate justice. For instance, over one billion people today live in energy poverty. The global community must ensure that appropriate financing and renewable technologies are available to allow all people to enjoy the benefits of electrification sustainably. Similarly, a compendium of evidence-based climate solutions published this summer highlighted that the most effective approach to reducing greenhouse gas emissions is through educating girls and providing for family planning. Climate change impacts women differently to men and exacerbates existing inequalities. Empowering women and girls in the global response to climate change will result in a fairer world and better climate outcomes. This must begin by ensuring women are enabled to meaningfully participate in decision-making processes related to climate action throughout the world.

Given the recent storms and resulting devastation, one of the most pressing issues to be addressed regarding the rights of those most vulnerable to climate change is the need to ensure the necessary protections are in place for people displaced by worsening climate impacts. There can be no doubt that climate change is a driver of migration and migration owing to climate impacts will increase in the coming years. Increasingly severe and frequent catastrophic storms or slow onset events like recurrent drought, sea level rise or ocean acidification, will result in people’s livelihoods collapsing, forcing them to seek better futures elsewhere. The scale of potential future migration as a result of climate change must not be underestimated. In order to ensure that the global community is prepared to protect the wellbeing and dignity of people displaced by climate change, concrete steps must be taken now. It would be very important that the Global Compact on Migration and Refugees, currently being negotiated at the UN, recognizes the challenge of addressing displacement resulting from climate change.

In a speech earlier this month, you talked about some of the innovative ideas that are being broached around the world to address climate change and you said, “The existential threat of climate change confronts us with our global interdependence. It cannot be seen as anything other than a global problem, and each nation must play an appropriate part to tackle it.” What do you think is the most important thing the US must do to address the problem?

The US must continue to support international action on climate change. No country alone can protect its citizens from the impacts of climate change – it will only be through unprecedented international solidarity, backed up by financial and technological support, that some of the most vulnerable countries will be able to chart a sustainable development pathway. It is in the interests of the US to provide this support.

Without it, developing countries are faced with a choice between prohibitively expensive sustainable development and readily accessible fossil fuel based development. They will choose the latter and who would blame them – they need to lift large numbers of their people out of poverty and provide essential services like health care, education and fresh water – without international support, they will have no choice but to use fossil fuels. This would result in even more intense Atlantic hurricanes, longer and more severe drought across the western US and the inundation of coastal cities from sea level rise. In order to protect American citizens, the US must play their role as a global citizen. Solidarity and interdependence are not new ideas, but in the current climate of rising nationalism, they are innovative and potentially transformative.

What are some of the innovative solutions that you are seeing around the world that we should know about? 

When we think about innovation, we usually focus on technology. However, most of the technologies we need to avert the climate crisis are already available to us. What is lacking is the political will to enact the necessary global transition to a safer and fairer future for all. Perhaps we should be more focused on innovation in terms of global governance.

For instance, in some countries like Wales and Hungary there is an office that represents the interests of future generations in national decision making. When viewed through an intergenerational lens, the urgent need to ensure sustainable development for all people and stabilize the climate becomes clear. Decisions taken today that undermine the wellbeing of future generations become inexcusable. Intergenerational equity can help to inform decision making at the international level as well, and provide a unifying focus for international negotiations. It is a universal principle that informs constitutions, international treaties, economies, religious beliefs, traditions and customs. Putting this principle into action and allowing it to inform how we negotiate and govern would be a very innovative change.

What can regular people do to fight climate change and work for environmental justice?

I believe the most important thing a person can do is to appreciate their role as a global citizen. Ultimately, the fight against climate change will not be won by a technological silver bullet or a mass recycling campaign, but rather by an appreciation among all people that we have to live sustainably with the Earth and with each other. We need empathy for those communities on the front lines of climate change, and for those seeking to realise their right to development in the midst of a changing climate, and this empathy must help to guide how we act, how we consume and how we vote.

Watch Mary’s TED Talk and visit her website to find out more about her work and how you can get involved.

I also want to mention that registration for TEDWomen 2017 is open, so if you haven’t registered yet, please click this link and apply today — space is limited and I don’t want you to miss out. This year, TEDWomen will be held November 1–3 in New Orleans. The theme is Bridges: We build them, we cross them, and sometimes we even burn them. We’ll explore the many aspects of this year’s theme through curated TED Talks, community dinners and activities.

Join us!

– Pat

* Hawken, P. (2017) Drawdown: The Most Comprehensive Plan Ever Proposed to Reverse Global Warming

TEDStanding for art and truth: A chat with Sethembile Msezane

Standing for four hours on a platform in the scorching sun, Sethembile Msezane embodied the bird spirit of Chapungu, raising and lowering her wings, as a statue of Cecil Rhodes was lifted by crane off its own platform behind her. The work is based in her research and scholarship, while the imagery of Chapungu first came to her in a dream. “By the time I came down, I was shaking and experiencing sunstroke. But I also felt a burst of life inside.”

Sethembile Msezane’s sculptures are not made of clay, granite or marble. She is the sculpture, as you will see in her talk — which you can watch right now before you read this Q&A. We’ll wait.

The fragility of the medium, combined with the power of her messages, makes for performances that literally stop people in their tracks and elicit strong reactions. I ask Msezane about what goes into her productions, and about the practical realities of physically embodying artwork that offers a powerful, often uncomfortable commentary on the reality of being a black woman in post-apartheid South Africa.

That was a great and moving talk — congratulations! How do you feel?

Thank you! It’s been a positively overwhelming experience. To have an idea, allow it to manifest through various experiments, and have other people identify with it, even years after its inception, is encouraging.

The crowd at TED conferences is a fairly progressive one, but how would you describe the broader reception of your art, both on the site of your performance and off it?

Well, there’s always different responses to my work. Sometimes people focus on only scraping the surface of my practice by focusing on the female body, choosing to exoticise, sexualise or even moralise it. But then something interesting begins to happen when they start to ‘see’ the person inside the body in relation to symbols in the landscape and in dress. At times, their own insecurities become revealed to them. They start to comment on the society we live in and the effects of symbols such as statues living among us.

Putting your body out there as vessel for your messages is incredibly brave. Have you ever felt like you were in physical danger during any of your performances?

Yes, there’s always an anxiety just being a regular woman walking down the street. So when my body is standing on a plinth in public spaces, this is not a foreign feeling. Sometimes I’m surrounded by crowds, and there’s movement that could cause me to fall off. At times people touch my body, which of course is not welcome. This speaks to how people, particularly men, have been socialised to think they are entitled to women’s bodies.

I remember one time, however, when I was more scared for a colleague and friend of mine who was filming my performance The Charter. A man was passing by and noticed the performance. He started spewing out all kinds of hatred in relation to my body and the symbolic gestures being performed in that space. His hatred grew and he started displaying his prejudice and homophobia by insulting my friend. He didn’t physically harm us, but he used his words as a weapon, and that cut deep.

An image from The Charter (2016).

Could you describe what goes into each performance? Conceptualisation? Writing? Research? Staking out the location? Help with pictures and video?

My process is never constant; various circumstances come into play in formulating the performance.

I guess in the beginning I’d get fixated on an idea and start doing more research about it…online, books, films, magazines, music etc. Concurrently, I begin to source materials and costumes to construct wearable sculptures in my studio. In between sourcing materials, I make site visits, interview people and write my observations to formulate a solid concept.

I think now I realise not all of it was based solely on research — some of it was intuitive or came about in my dreams. I’d try to connect with the figure I’d be embodying on the day of the performance. This happened at home in front of the mirror. This process would be carried out from the beginning of thinking about ‘her’ towards the very end on the day of the performance.

Which was the most difficult performance to enact?

I’d have to say it’s between Untitled (Youth Day) 2014 and Chapungu: The Day Rhodes Fell (2015). Untitled (Youth Day) 2014 was just over an hour, but the books stacked on my head were compressing my vertebrae, which really hurt, and I couldn’t take breaks in between.

Chapungu: The Day Rhodes Fell (2015), on the other hand, was longer: nearly 4 hours. Standing on 6-inch stilettos that long can’t be healthy. My toes were blue; they didn’t feel like my own. The plinth I was standing on was placed on a set of stairs, and people were standing around the plinth. The positioning was quite precarious.

It was scorching that day (I think it was 32 degrees Celsius), and a lot of my body was exposed. I kept my arms outstretched for about 10 minutes at a time and rested for about 5 minutes. I went between many states of consciousness: being Chapungu, but also being myself, Sethembile, deeply in pain, fatigued, dehydrated and more. Meditating, remembering why I was there and allowing the spirit of Chapungu to be present kept me going. By the time I came down, I was shaking and experiencing sunstroke. But I also felt a burst of life inside.

For Untitled (Heritage Day) in 2014, Sethembile created a character based on her own Zulu traditions, and posed silently in front of a statue of Louis Botha, creating a rich dialogue between South Africa’s colonial, apartheid-era history and her own. For Untitled (Youth Day) 2014, at right, she stood for just over an hour with books stacked on her head, her face masked.

Which performance has affected you the most?

That’s like asking which one of my children is my favourite haha. I can’t really say, they all have contributed to my thinking and where I am in my outlook and career right now. I’ve learned valuable lessons in each performance, because in essence they comment on the societies I’ve found myself in; these spaces and people can be complex. Ultimately, I learned more about being a woman in physical space (both public and private) but also within the spiritual realm, which is very present in my daily life.

What more can we look forward to from Sethembile?

I’m looking forward to the opening of Zeitz Museum of Contemporary Art Africa (MOCAA) this September, where select pieces of my work that are part of their collection will be showing. One of my favorite pieces, Signal Her Return I (2015–2016), a living sound installation with a sea of lit candles, an 18th-century bell and long braid of hair, will also be featuring. After that I’m headed to Finland for the ANTI Festival International Prize for Live Art award ceremony where I’m one of four nominees.

That’s as much as I’m willing to reveal for now. Keep following, you won’t be disappointed …

Sociological ImagesWhat’s Trending? The Crime Drop

Over at Family Inequality, Phil Cohen has a list of demographic facts you should know cold. They include basic figures like the US population (326 million), and how many Americans have a BA or higher (30%). These got me thinking—if we want to have smarter conversations and fight fake news, it is also helpful to know which way things are moving. “What’s Trending?” is a post series at Sociological Images with quick looks at what’s up, what’s down, and what sociologists have to say about it.

The Crime Drop

You may have heard about a recent spike in the murder rate across major U.S. cities last year. It was a key talking point for the Trump campaign on policing policy, but it may also be leveling off. Social scientists can help put this bounce into context, because violent and property crimes in the U.S. have been going down for the past twenty years.

You can read more on the social sources of this drop in a feature post at The Society Pages. Neighborhood safety is a serious issue, but the data on crime rates doesn’t always support the drama.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


CryptogramWhat the NSA Collects via 702

New York Times reporter Charlie Savage writes about some bad statistics we're all using:

Among surveillance legal policy specialists, it is common to cite a set of statistics from an October 2011 opinion by Judge John Bates, then of the FISA Court, about the volume of internet communications the National Security Agency was collecting under the FISA Amendments Act ("Section 702") warrantless surveillance program. In his opinion, declassified in August 2013, Judge Bates wrote that the NSA was collecting more than 250 million internet communications a year, of which 91 percent came from its Prism system (which collects stored e-mails from providers like Gmail) and 9 percent came from its upstream system (which collects transmitted messages from network operators like AT&T).

These numbers are wrong. This blog post will address, first, the widespread nature of this misunderstanding; second, how I came to FOIA certain documents trying to figure out whether the numbers really added up; third, what those documents show; and fourth, what I further learned in talking to an intelligence official. This is far too dense and weedy for a New York Times article, but should hopefully be of some interest to specialists.

Worth reading for the details.

Worse Than FailureCodeSOD: A Dumbain Specific Language

I’ve had to write a few domain-specific-languages in the past. As per Remy’s Law of Requirements Gathering, it’s been mostly because the users needed an Excel-like formula language. The danger of DSLs, of course, is that they’re often YAGNI in the extreme, or at least a sign that you don’t really understand your problem.

XML, coupled with schemas, is a tool for building data-focused DSLs. If you have some complex structure, you can convert each of its features into an XML attribute. For example, if you had a grammar that looked something like this:

The Source specification obeys the following syntax

source = ( Feature1+Feature2+... ":" ) ? steps

Feature1 = "local" | "global"

Feature2 ="real" | "virtual" | "ComponentType.all"

Feature3 ="self" | "ancestors" | "descendants" | "Hierarchy.all"

Feature4 = "first" | "last" | "DayAllocation.all"

If features are specified, the order of features as given above has strictly to be followed.

steps = oneOrMoreNameSteps | zeroOrMoreNameSteps | componentSteps

oneOrMoreNameSteps = nameStep ( "." nameStep ) *

zeroOrMoreNameSteps = ( nameStep "." ) *

nameStep = "#" name

name is a string of characters from "A"-"Z", "a"-"z", "0"-"9", "-" and "_". No umlauts allowed, one character is minimum.

componentSteps is a list of valid values, see below.

Valid 'componentSteps' are:

- GlobalValue
- Product
- Product.Brand
- Product.Accommodation
- Product.Accommodation.SellingAccom
- Product.Accommodation.SellingAccom.Board
- Product.Accommodation.SellingAccom.Unit
- Product.Accommodation.SellingAccom.Unit.SellingUnit
- Product.OnewayFlight
- Product.OnewayFlight.BookingClass
- Product.ReturnFlight
- Product.ReturnFlight.BookingClass
- Product.ReturnFlight.Inbound
- Product.ReturnFlight.Outbound
- Product.Addon
- Product.Addon.Service
- Product.Addon.ServiceFeature

In addition to that all subsequent steps from the paths above are permitted, that is 'Board', 
'Accommodation.SellingAccom' or 'SellingAccom.Unit.SellingUnit'.
'Accommodation.Unit' in the contrary is not permitted, as here some intermediate steps are missing.

You could turn that grammar into an XML document by converting syntax elements to attributes and elements. You could do that, but Stella’s predecessor did not do that. That of course, would have been work, and they may have had to put some thought on how to relate their homebrew grammar to XSD rules, so instead they created an XML schema rule for SourceAttributeType that verifies that the data in the field is valid according to the grammar… using regular expressions. 1,310 characters of regular expressions.

    <xs:restriction base="xs:string">
            <xs:pattern value="(((Scope.)?(global|local|current)\+?)?((((ComponentType.)?

There’s a bug in that regex that Stella needed to fix. As she put it: “Every time you evaluate it a few little kitties die because you shouldn’t use kitties to polish your car. I’m so, so sorry, little kitties…”
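If one were rewriting it today, the same validation could be decomposed into one small, named pattern per grammar rule, plus a few lines of ordinary code for the ordering constraint. Here is a minimal sketch of that idea; it covers only the feature prefix and the `#name` steps (the `componentSteps` alternatives are omitted), and the function names are my own framing, not anything from the real schema:

```javascript
// One short, reviewable pattern per grammar rule, instead of nested optional groups.
const FEATURES = [
  /^(Scope\.)?(global|local|current)$/,
  /^((ComponentType\.)?(real|virtual)|ComponentType\.all)$/,
  /^((Hierarchy\.)?(self|ancestors|descendants)|Hierarchy\.all)$/,
  /^((DayAllocation\.)?(first|last)|DayAllocation\.all)$/,
];
// nameStep ( "." nameStep ) *  — "#" followed by [A-Za-z0-9_-], no umlauts.
const NAME_STEPS = /^#[A-Za-z0-9_-]+(\.#[A-Za-z0-9_-]+)*$/;

// Features may be omitted, but those present must keep the prescribed order.
function validPrefix(prefix) {
  let pos = 0;
  for (const part of prefix.split("+")) {
    while (pos < FEATURES.length && !FEATURES[pos].test(part)) pos++;
    if (pos >= FEATURES.length) return false;
    pos++; // each feature may appear at most once
  }
  return true;
}

// source = ( features ":" )? steps
function validSource(source) {
  const i = source.indexOf(":");
  if (i >= 0 && !validPrefix(source.slice(0, i))) return false;
  return NAME_STEPS.test(source.slice(i + 1));
}
```

Each sub-pattern stays short enough to review at a glance, and the "strict order of features" rule lives in three lines of code rather than in a chain of optional groups inside one 1,310-character pattern.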

The full, unexcerpted code is below, so… at least it has documentation. In two languages!

<xs:simpleType name="SourceAttributeType">
                        <xs:documentation xml:lang="de">
                Die Source Angabe folgt folgender Syntax

                        source = ( Eigenschaft1+Eigenschaft2+... ":" ) ? steps

                        Eigenschaft1 = "local" | "global"

                        Eigenschaft2 ="real" | "virtual" | "ComponentType.all"

                        Eigenschaft3 ="self" | "ancestors" | "descendants" | "Hierarchy.all"

                        Eigenschaft4 = "first" | "last" | "DayAllocation.all"

                        Falls Eigenschaften angegeben werden muss zwingend die oben angegebene Reihenfolge der Eigenschaften eingehalten werden.

                        steps = oneOrMoreNameSteps | zeroOrMoreNameSteps | componentSteps

                        oneOrMoreNameSteps = nameStep ( "." nameStep ) *

                        zeroOrMoreNameSteps = ( nameStep "." ) *

                        nameStep = "#" name

                        name ist eine Folge von Zeichen aus der Menge "A"-"Z", "a"-"z", "0"-"9", "-" und "_". Keine Umlaute. Mindestens ein Zeichen

                        componentSteps ist eine Liste gültiger Werte, siehe im folgenden

                Gültige 'componentSteps' sind zunächst:

                        - GlobalValue
                        - Product
                        - Product.Brand
                        - Product.Accommodation
                        - Product.Accommodation.SellingAccom
                        - Product.Accommodation.SellingAccom.Board
                        - Product.Accommodation.SellingAccom.Unit
                        - Product.Accommodation.SellingAccom.Unit.SellingUnit
                        - Product.OnewayFlight
                        - Product.OnewayFlight.BookingClass
                        - Product.ReturnFlight
                        - Product.ReturnFlight.BookingClass
                        - Product.ReturnFlight.Inbound
                        - Product.ReturnFlight.Outbound
                        - Product.Addon
                        - Product.Addon.Service
                        - Product.Addon.ServiceFeature

                Desweiteren sind alle Unterschrittfolgen aus obigen Pfaden erlaubt, also 'Board', 'Accommodation.SellingAccom' oder 'SellingAccom.Unit.SellingUnit'.
                'Accommodation.Unit' hingegen ist nicht erlaubt, da in diesem Fall einige Zwischenschritte fehlen.
                        </xs:documentation>

                        <xs:documentation xml:lang="en">
                                The Source specification obeys the following syntax

                                source = ( Feature1+Feature2+... ":" ) ? steps

                                Feature1 = "local" | "global"

                                Feature2 ="real" | "virtual" | "ComponentType.all"

                                Feature3 ="self" | "ancestors" | "descendants" | "Hierarchy.all"

                                Feature4 = "first" | "last" | "DayAllocation.all"

                                If features are specified, the order of features as given above has strictly to be followed.

                                steps = oneOrMoreNameSteps | zeroOrMoreNameSteps | componentSteps

                                oneOrMoreNameSteps = nameStep ( "." nameStep ) *

                                zeroOrMoreNameSteps = ( nameStep "." ) *

                                nameStep = "#" name

                                name is a string of characters from "A"-"Z", "a"-"z", "0"-"9", "-" and "_". No umlauts allowed, one character is minimum.

                                componentSteps is a list of valid values, see below.

                                Valid 'componentSteps' are:

                                - GlobalValue
                                - Product
                                - Product.Brand
                                - Product.Accommodation
                                - Product.Accommodation.SellingAccom
                                - Product.Accommodation.SellingAccom.Board
                                - Product.Accommodation.SellingAccom.Unit
                                - Product.Accommodation.SellingAccom.Unit.SellingUnit
                                - Product.OnewayFlight
                                - Product.OnewayFlight.BookingClass
                                - Product.ReturnFlight
                                - Product.ReturnFlight.BookingClass
                                - Product.ReturnFlight.Inbound
                                - Product.ReturnFlight.Outbound
                                - Product.Addon
                                - Product.Addon.Service
                                - Product.Addon.ServiceFeature

                                In addition to that all subsequent steps from the paths above are permitted, that is 'Board', 'Accommodation.SellingAccom' or 'SellingAccom.Unit.SellingUnit'.
                                'Accommodation.Unit' in the contrary is not permitted, as here some intermediate steps are missing.
                        </xs:documentation>

                                <xs:restriction base="xs:string">
                                        <xs:pattern value="(((Scope.)?(global|local|current)\+?)?((((ComponentType.)?(real|virtual))|ComponentType.all)\+?)?((((Hierarchy.)?(self|ancestors|descendants))|Hierarchy.all)\+?)?((((DayAllocation.)?(first|last))|DayAllocation.all)\+?)?:)?(#[A-Za-z0-9\-_]+(\.(#[A-Za-z0-9\-_]+))*|(#[A-Za-z0-9\-_]+\.)*(ThisComponent|GlobalValue|Product|Product\.Brand|Product\.Accommodation|Product\.Accommodation\.SellingAccom|Product\.Accommodation\.SellingAccom\.Board|Product\.Accommodation\.SellingAccom\.Unit|Product\.Accommodation\.SellingAccom\.Unit\.SellingUnit|Product\.OnewayFlight|Product\.OnewayFlight\.BookingClass|Product\.ReturnFlight|Product\.ReturnFlight\.BookingClass|Product\.ReturnFlight\.Inbound|Product\.ReturnFlight\.Outbound|Product\.Addon|Product\.Addon\.Service|Product\.Addon\.ServiceFeature|Brand|Accommodation|Accommodation\.SellingAccom|Accommodation\.SellingAccom\.Board|Accommodation\.SellingAccom\.Unit|Accommodation\.SellingAccom\.Unit\.SellingUnit|OnewayFlight|OnewayFlight\.BookingClass|ReturnFlight|ReturnFlight\.BookingClass|ReturnFlight\.Inbound|ReturnFlight\.Outbound|Addon|Addon\.Service|Addon\.ServiceFeature|SellingAccom|SellingAccom\.Board|SellingAccom\.Unit|SellingAccom\.Unit\.SellingUnit|BookingClass|Inbound|Outbound|Service|ServiceFeature|Board|Unit|Unit\.SellingUnit|SellingUnit))"/>
                                </xs:restriction>
</xs:simpleType>
[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!


CryptogramApple's FaceID

This is a good interview with Apple's SVP of Software Engineering about FaceID.

Honestly, I don't know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can't be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:

I also quizzed Federighi about the exact way you "quick disabled" Face ID in tricky scenarios -- like being stopped by police, or being asked by a thief to hand over your device.

"On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while -- we'll take you to the power down [screen]. But that also has the effect of disabling Face ID," says Federighi. "So, if you were in a case where the thief was asking to hand over your phone -- you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID."

That squeeze can be of either volume button plus the power button. This, in my opinion, is an even better solution than the "5 clicks" because it's less obtrusive. When you do this, it defaults back to your passcode.


It's worth noting a few additional details here:

  • If you haven't used Face ID in 48 hours, or if you've just rebooted, it will ask for a passcode.

  • If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode -- it tried to read the people setting the phones up on the podium.)

  • Developers do not have access to raw sensor data from the Face ID array. Instead, they're given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.

  • You'll also get a passcode request if you haven't unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn't unlocked it in 4 hours.

Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.

Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you're a researcher or security wonk looking for more, he says it will have "extreme levels of detail" about the security of the system.

Here's more about fooling it with fake faces:

Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop's owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.

Hacking FaceID, though, won't be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user's face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face's 3-D shape -- a trick similar to the kind now used to capture actors' faces to morph them into animated and digitally enhanced characters.

It'll be harder, but I have no doubt that it will be done.

More speculation.

I am not planning on enabling it just yet.

Worse Than FailurePoor Shoe


"So there's this developer who is the end-all, be-all try-hard of the year. We call him Shoe. He's the kind of over-engineering idiot that should never be allowed near code. And, to boot, he's super controlling."

Sometimes, you'll be talking to a friend, or reading a submission, and they'll launch into a story of some crappy thing that happened to them. You expect to sympathize. You expect to agree, to tell them how much the other guy sucks. But as the tale unfolds, something starts to feel amiss.

They start telling you about the guy's stand-up desk, how it makes him such a loser, such a nerd. And you laugh nervously, recalling the article you read just the other day about the health benefits of stand-up desks. But sure, they're pretty nerdy. Why not?

"But then, get this. So we gave Shoe the task to minify a bunch of JavaScript files, right?"

You start to feel relieved. Surely this is more fertile ground. There's a ton of bad ways to minify and concatenate files on the server-side, to save bandwidth on the way out. Is this a premature optimization story? A story of an idiot writing code that just doesn't work? An over-engineered monstrosity?

"So he fires up gulp.js and gets to work."

Probably over-engineered. Gulp.js lets you write arbitrary JavaScript to do your processing. It has the advantage of being the same language as the code being minified, so you don't have to switch contexts when reading it, but the disadvantage of being JavaScript and thus impossible to read.

"He asks how to concat JavaScript, and the room tells him the right answer: find javascripts/ -name '*.js' -exec cat {} \; > main.js"

Wait, what? You blink. Surely that's not how Gulp.js is meant to work. Just piping out to shell commands? But you've never used it. Maybe that's the right answer; you don't know. So you nod along, making a sympathetic noise.

"Of course, this moron can't just take the advice. Shoe has to understand how it works. So he starts googling on the Internet, and when he doesn't find a better answer, he starts writing a shell script he can commit to the repo for his 'jay es minifications.'"

That nagging feeling is growing stronger. But maybe the punchline is good. There's gotta be a payoff here, right?

"This guy, right? Get this: he discovers that most people install gulp via npm.js. So he starts shrieking, 'This is a dependency of mah script!' and adds node.js and npm installation to the shell script!"

Stronger and stronger the feeling grows, refusing to be shut out. You swallow nervously, looking for an excuse to flee the conversation.

"We told him, just put it in the damn readme and move on! Don't install anything on anyone else's machines! But he doesn't like this solution, either, so he finally just echoes out in the shell script, requires npm. Can you believe it? What a n00b!"

That's it? That's the punchline? That's why your friend has worked himself into a lather, foaming and frothing at the mouth? Try as you might to justify it, the facts are inescapable: your friend is TRWTF.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!


Sociological ImagesWhen Bros Hug

In February, CBS Sunday Morning aired a short news segment on the bro hug phenomenon: a supposedly new way heterosexual (white) men (i.e., bros) greet each other. According to this news piece, the advent of the bro hug can be attributed to decreased homophobia and is a sign of social progress.

I’m not so sure.

To begin, bro-ness isn’t really about any given individuals, but invokes a set of cultural norms, statuses, and meanings. A stereotypical bro is a white middle-class, heterosexual male, especially one who frequents strongly masculinized places like fraternities, business schools, and sport events. (The first part of the video, in fact, focused on fraternities and professional sports.) The bro, then, is a particular kind of guy, one that frequents traditionally male spaces with a history of homophobia and misogyny and is invested in maleness and masculinity.

The bro hug reflects this investment in masculinity and, in particular, the masculine performance of heterosexuality. To successfully complete a bro hug, the two men clasp their right hands and firmly pull their bodies towards each other until they are or appear to be touching, whilst their left hands swing around to forcefully pat each other on the back. Men’s hips and chests never make full contact. Instead, the clasped hands pull in, but also act as a buffer between the men’s upper bodies, while the legs remain firmly rooted in place, maintaining the hips at a safe distance. A bro hug, in effect, isn’t about physical closeness between men, but about limiting bodily contact.

Bro hugging, moreover, is specifically a way of performing solidarity with heterosexual men. In the CBS program, the bros explain that a man would not bro hug a woman since a bro hug is, by its forcefulness, designed to be masculinity affirming. Similarly, a bro hug is not intended for gay men, lesbians, or queer people. The bro hug performs and reinforces bro identity within an exclusively bro domain. For bros, by bros. As such, the bro hug does little to signal a decrease in homophobia. Instead, it affirms men’s identities as “real” men and their difference from both women and non-heterosexual men.

In this way, the bro-hug functions similarly to the co-masturbation and same-sex sexual practices of heterosexually identified white men, documented by the sociologist Jane Ward in her book, Not Gay. Ward argues that when straight white men have sex with other straight white men they are not necessarily blurring the boundaries between homo- and heterosexuality. Instead, they are shifting the line separating what is considered normal from what is considered queer. Touching another man’s anus during a fraternity hazing ritual is normal (i.e., straight), while touching another man’s anus in gay porn is queer. In other words, straight white men can have sex with each other because it is not “real” gay sex.

Similarly, within the context of a bro hug, straight white men can now bro hug each other because they are heterosexual. Bro hugging will not diminish either man’s heterosexual capital. In fact, it might increase it. When two bros hug, they signal to others their unshakable strength of and comfort in their heterosexuality. Even though they are touching other men in public, albeit minimally, the act itself reinforces their heterosexuality and places it beyond reproach.

Hubert Izienicki, PhD, is a professor of sociology at Purdue University Northwest. 


CryptogramBluetooth Vulnerabilities

A bunch of Bluetooth vulnerabilities are being reported, some pretty nasty.

BlueBorne concerns us because of the medium by which it operates. Unlike the majority of attacks today, which rely on the internet, a BlueBorne attack spreads through the air. This works similarly to the two less extensive vulnerabilities discovered recently in a Broadcom Wi-Fi chip by Project Zero and Exodus. The vulnerabilities found in Wi-Fi chips affect only the peripherals of the device, and require another step to take control of the device. With BlueBorne, attackers can gain full control right from the start. Moreover, Bluetooth offers a wider attacker surface than WiFi, almost entirely unexplored by the research community and hence contains far more vulnerabilities.

Airborne attacks, unfortunately, provide a number of opportunities for the attacker. First, spreading through the air renders the attack much more contagious, and allows it to spread with minimum effort. Second, it allows the attack to bypass current security measures and remain undetected, as traditional methods do not protect from airborne threats. Airborne attacks can also allow hackers to penetrate secure internal networks which are "air gapped," meaning they are disconnected from any other network for protection. This can endanger industrial systems, government agencies, and critical infrastructure.

Finally, unlike traditional malware or attacks, the user does not have to click on a link or download a questionable file. No action by the user is necessary to enable the attack.

Fully patched Windows and iOS systems are protected; Linux coming soon.

Worse Than FailureCodeSOD: Mutex.js

Just last week, I was teaching a group of back-end developers how to use Angular to develop front ends. One question that came up, which did surprise me a bit, was how to deal with race conditions and concurrency in JavaScript.

I’m glad they asked, because it’s a good question that never occurred to me. The JavaScript runtime, of course, is single-threaded. You might use Web Workers to get multiple threads, but they use an Actor model, so there’s no shared state, and thus no need for any sort of locking.
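
That run-to-completion guarantee can be sketched with a tiny, self-contained example (illustrative only, not from the team's codebase):

```javascript
// The event loop runs each callback to completion, so two scheduled
// callbacks can never interleave in the middle of a read-modify-write
// the way two real threads can.
let counter = 0;

function bumpTwice() {
  const before = counter; // read
  counter = before + 2;   // write -- no other callback can run in between
}

setTimeout(bumpTwice, 0);
setTimeout(bumpTwice, 0);

setTimeout(() => {
  // Both callbacks ran atomically with respect to each other.
  console.log(counter);
}, 10);
```

Within a single callback there is simply nothing to lock against; the only JavaScript "races" worth worrying about happen across await points or network round-trips, which is exactly where this story is headed.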

Chris R’s team did have a need for locking. Specifically, their .NET backend needed to run a long-ish bulk operation against their SqlServer. It would be triggered by an HTTP request from the client-side, AJAX-style, but only one user should be able to run it at a time.

Someone, for some reason, decided that they would implement this lock in front-end JavaScript, since that’s where the AJAX calls were coming from.

var myMutex = true; //global (as in page wide, global) variable
function onClickHandler(element) {
    if (myMutex == true) {
        myMutex = false;
        // snip...
        if ($(element).hasClass("addButton") == true) {
            // snip...
            $.get(url).done(function (r) {
                // snip... this code is almost identical to the branch below
                setTimeout("myMutex = true;", 100);
            });
        } else {
            if ($(element).hasClass("removeButton") == true) {
                // snip...
                $.get(url).done(function (r) {
                    // snip... this code is almost identical to the branch above
                    setTimeout("myMutex = true;", 100);
                });
            }
        }
    }
}
You may be shocked to learn that this solution didn’t work, and the developer responsible never actually tested it with multiple users. Obviously, a client side variable isn’t going to work as a back-end lock. Honestly, I’m not certain that’s the worst thing about this code.

First, they reinvented the mutex badly. They seem to be using CSS classes to hold application state. They have (in the snipped code) duplicate branches of code that vary only by a handful of flags. They aren’t handling errors on the request, which, when this code started failing, made it that much harder to figure out why.

But it’s the setTimeout("myMutex = true;", 100); that really gets me. Why? Why the 100ms lag? What purpose does that serve?

Chris threw this code away and put a mutex in the backend service.
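
As a rough illustration only (this is not Chris's actual fix, and all names here are invented): the single-flight guard belongs on the server, where one flag really is shared by every client. The real fix lived in the .NET service, but the same idea in Node-flavored JavaScript looks like this:

```javascript
// Hypothetical sketch of a server-side single-flight guard.
let bulkOpRunning = false;

async function runBulkOperation(doWork) {
  if (bulkOpRunning) {
    // A concurrent caller gets an immediate "busy" answer instead of a race.
    return { started: false, reason: "operation already in progress" };
  }
  // No await between the check above and the set below, so the event loop
  // guarantees no two requests can both pass the guard.
  bulkOpRunning = true;
  try {
    await doWork();          // the long-running bulk step
    return { started: true };
  } finally {
    bulkOpRunning = false;   // always release, even if doWork throws
  }
}
```

Note what this gets right that the front-end version could not: the flag lives in one process that every HTTP request passes through, and it is released by a finally block rather than a hopeful 100ms timer.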



Planet Linux AustraliaOpenSTEM: Those Dirty Peasants!

It is fairly well known that many Europeans in the 17th, 18th and early 19th centuries did not follow the same routines of hygiene as we do today. There are anecdotal and historical accounts of people being dirty, smelly and generally unhealthy. This was particularly true of the poorer sections of society. The epithet “those […]


TED5 reasons to convince your boss to send you to TEDWomen this year

Inspiration, challenge, community — when we listen to great ideas together, great things can happen. Photo: Stacie McChesney / TED

Every year at TEDWomen, we gather to talk about issues that matter, to learn and bond and get energized. This year, we will be reconvening on November 1–3 in New Orleans — and we would love for you, and your amazing perspective and ideas, to join us and become part of this diverse, welcoming group that’s growing every year.

Join us at TEDWomen 2017 >>

However, there’s a challenge we’re hearing from some of you — especially those who’d like to attend in a professional capacity. And it’s this: It’s hard to explain to your boss how this conference can contribute to your professional success and development.

What we know from past attendees is, TEDWomen is an extraordinary professional development event — sending people back to work refreshed, connected and full of ideas. We’d love to encourage more people to attend with professional growth in mind. So, if you’re interested in attending TEDWomen, here are some talking points to support you when you ask for your share of the staff-development budget:

1. At TEDWomen, you’ll learn tools to craft better messages, to listen and connect more deeply, to problem-solve and spark new ideas. What you hear onstage — and from fellow attendees — will spark new thinking that you can bring back to your team. (Many TEDsters, in fact, schedule a team meeting for the week after TED to download what they learned.) As one attendee wrote: “Amazing and inspiring overall. I’m leaving a better person because of it.”

Join an audience of curious and enthusiastic lifelong learners and doers. Photo: Marla Aufmuth / TED

2. TEDWomen is where some of the boldest conversations are happening — which can help you kickstart the conversations your organization needs to have. You’ll hear about new markets and new power structures, learn how people are engaging with diversity internally and externally, and get new ideas for leveraging technology. Because you never know where your company’s next great idea may come from. As one attendee told us: “I am a VP at a Fortune 500 company and this conference was life-changing for me. There are so many execs who have the experience, money and resources to help drive the causes that were discussed.”

3. The TEDWomen community is a powerful network, offering connections across many fields and in many countries. VC Chris Fralic once described the benefit of attending TED in four words: “permission to follow up.” TEDWomen is not a place for high-pitched networking — it’s designed to be a place to connect over conversations that matter, to plant seeds for collaborations and real relationships. As one attendee said: “I connected with so many people with whom I am able to help grow their work and they are going to work with me to grow mine. I think it is terrific that TED provides such meaningful resources for attendees to connect and converse.”

Well, we make no promises that you too will get a selfie with Sandi Toksvig, left, host of the Great British Bake Off, but yes, connections like this happen at TEDWomen all the time. The audience and speakers are all part of the same amazing community. Photo: Stacie McChesney / TED

4. Finally, it’s just a great conference — offering TED’s legendary high quality, brilliant content and attention to detail at every turn, at a more approachable price. Attendees tell us things like: “Single best and most diverse event that I’ve been to” and “It was a truly immersive, brilliant experience that left me feeling mentally refreshed and inspired. This was my first TED, and I can see why people get addicted to coming back year upon year.”

5. You don’t have to wait to be invited. In fact, consider this blog post your invitation to TEDWomen. We truly want to diversify and grow the audience for this conference, to increase the network effect that happens when great people get together. Come join us for what one attendee calls “a truly transformative conference and experience. TED has become a very important part of my CEO/executive life in feeding my soul!”

Apply to attend TEDWomen 2017 — we can’t wait to meet you!

We hope to see you at TEDWomen, where our awesome audience is as vital to the magic as any speaker on stage. Photo: Marla Aufmuth / TED

Planet Linux AustraliaDave Hall: Trying Drupal

While preparing for my DrupalCamp Belgium keynote presentation I looked at how easy it is to get started with various CMS platforms. For my talk I used Contentful, a hosted content-as-a-service CMS platform, and contrasted that with the "Try Drupal" experience. Below is a walk-through of both.

Let's start with Contentful. I start off by visiting their website.

Contentful homepage

In the top right corner is a blue button encouraging me to "try for free". I hit the link and I'm presented with a sign up form. I can even use Google or GitHub for authentication if I want.

Contentful signup form

While my example site is being installed I am presented with an overview of what I can do once it is finished. It takes around 30 seconds for the site to be installed.

Contentful installer wait

My site is installed and I'm given some guidance about what to do next. There is even an onboarding tour in the bottom right corner that is waving at me.

Contentful dashboard

Overall this took around a minute and required very little thought. I never once found myself thinking, "come on, hurry up."

Now let's see what it is like to try Drupal. I land on d.o. I see a big prominent "Try Drupal" button, so I click that.

Drupal homepage

I am presented with 3 options. I am not sure why I'm being presented options to "Build on Drupal 8 for Free" or to "Get Started Risk-Free", I just want to try Drupal, so I go with Pantheon.

Try Drupal providers

Like with Contentful I'm asked to create an account. Again I have the option of using Google for the sign up or completing a form. This form has more fields than Contentful's.

Pantheon signup page

I've created my account and I am expecting to be dropped into a demo Drupal site. Instead I am presented with a dashboard. The most prominent call to action is importing a site. I decide to create a new site.

Pantheon dashboard

I have to now think of a name for my site. This is already feeling like a lot of work just to try Drupal. If I was a busy manager I would have probably given up by this point.

Pantheon create site form

When I submit the form I must surely be going to see a Drupal site. No, sorry. I am given the choice of installing WordPress, yes WordPress, Drupal 8 or Drupal 7. Despite being very confused I go with Drupal 8.

Pantheon choose application page

Now my site is deploying. While this happens there is a bunch of items that update above the progress bar. They're all a bit nerdy, but at least I know something is happening. Why is my only option to visit my dashboard again? I want to try Drupal.

Pantheon site installer page

I land on the dashboard. Now I'm really confused. This all looks pretty geeky. I want to try Drupal not deal with code, connection modes and the like. If I stick around I might eventually click "Visit Development site", which doesn't really feel like trying Drupal.

Pantheon site dashboard

Now I'm asked to select a language. OK, so Drupal supports multiple languages; that's nice. Let's select English so I can finally get to try Drupal.

Drupal installer, language selection

Next I need to choose an installation profile. What is an installation profile? Which one is best for me?

Drupal installer, choose installation profile

Now I need to create an account. About 10 minutes ago I already created an account. Why do I need to create another one? I also named my site earlier in the process.

Drupal installer, configuration form part 1
Drupal installer, configuration form part 2

Finally I am dropped into a Drupal 8 site. There is nothing to guide me on what to do next.

Drupal site homepage

I am left with a sense that setting up Contentful is super easy and Drupal is a lot of work. Most people wanting to try Drupal would have abandoned the process partway through. I would love to see the conversion stats for the Try Drupal service. It must be minuscule.

It is worth noting that Pantheon has the best user experience of the 3 companies. The process with 1&1 just dumps me at a hosting sign up page. How does that let me try Drupal?

Acquia drops me onto a page where I select my role, then I'm presented with some marketing material and a form to request a demo. That is, unless I'm running an ad blocker, in which case selecting a role produces an Ajax error.

The Try Drupal program generates revenue for the Drupal Association. This money helps fund development of the project. I'm well aware that the DA needs money. At the same time I wonder if it is worth it. For many people this is the first experience they have using Drupal.

The previous attempt to have another option added to the try Drupal page ultimately failed due to the financial implications. While this is disappointing, I don't think it is necessarily the answer either.

There needs to be some minimum standard for the Try Drupal page. One of the key items is the number of clicks to get from d.o to a working demo site. Without this, the "Try Drupal" page will drive people away from the project, which isn't the intention.

If you're at DrupalCon Vienna and want to discuss this and other ways to improve the marketing of Drupal, please attend the marketing sprints.


CryptogramFriday Squid Blogging: Another Giant Squid Caught off the Coast of Kerry

The Flannery family have caught four giant squid, two this year.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.


Sociological ImagesResearch Finds Obesity is in the Eye of the Beholder

In an era of body positivity, more people are noting the way American culture stigmatizes obesity and discriminates by weight. One challenge for studying this inequality is that a common measure for obesity—Body Mass Index (BMI), a ratio of height to weight—has been criticized for ignoring important variation in healthy bodies. Plus, the basis for weight discrimination is what other people see as “too fat,” and that’s a standard with a lot of variation.

Recent research in Sociological Science from Vida Maralani and Douglas McKee gives us a picture of how the relationship between obesity and inequality changes with social context. Using data from the National Longitudinal Surveys of Youth (NLSY), Maralani and McKee measure BMI in two cohorts, one in 1981 and one in 2003. They then look at social outcomes seven years later, including wages, the probability of a person being married, and total family income.

The figure below shows their findings for BMI and 2010 wages for each group in the study. The dotted lines show the same relationships from 1988 for comparison.

For White and Black men, wages actually go up as their BMI increases from the “Underweight” to “Normal” ranges, then level off and slowly decline as they cross into the “Obese” range. This pattern is fairly similar to 1988, but check out the “White Women” graph in the lower left quadrant. In 1988, the authors find a sharp “obesity penalty” in which women over a BMI of 30 reported a steady decline in wages. By 2010, this has largely leveled off, but wage inequality didn’t go away. Instead, that spike near the beginning of the graph suggests people perceived as skinny started earning more. The authors write:

The results suggest that perceptions of body size may have changed across cohorts differently by race and gender in ways that are consistent with a normalizing of corpulence for black men and women, a reinforcement of thin beauty ideals for white women, and a status quo of a midrange body size that is neither too thin nor too large for white men (pgs. 305-306).

This research brings back an important lesson about what sociologists mean when they say something is “socially constructed”—patterns in inequality can change and adapt over time as people change the way they interpret the world around them.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


CryptogramAnother iPhone Change to Frustrate the Police

I recently wrote about the new ability to disable the Touch ID login on iPhones. This is important because of a weirdness in current US law that protects people's passcodes from forced disclosure in ways it does not protect actions: being forced to place a thumb on a fingerprint reader.

There's another, more significant, change: iOS now requires a passcode before the phone will establish trust with another device.

In the current system, when you connect your phone to a computer, you're prompted with the question "Trust this computer?" and you can click yes or no. Now you have to enter in your passcode again. That means if the police have an unlocked phone, they can scroll through the phone looking for things but they can't download all of the contents onto another computer without also knowing the passcode.

More details:

This might be particularly consequential during border searches. The "border search" exception, which allows Customs and Border Protection to search anything going into the country, is a contentious issue when applied to electronics. It is somewhat (but not completely) settled law, but that the U.S. government can, without any cause at all (not even "reasonable articulable suspicion", let alone "probable cause"), copy all the contents of my devices when I reenter the country sows deep discomfort in myself and many others. The only legal limitation appears to be a promise not to use this information to connect to remote services. The new iOS feature means that a Customs officer can browse through a device -- a time limited exercise -- but not download the full contents.

Worse Than FailureError'd: Have it Your Way!

"You can have any graphics you want, as long as it's Intel HD Graphics 515," Mark R. writes.


"You know, I'm pretty sure that I've been living there for a while now," writes Derreck.


Sven P. wrote, "Usually, I blame production outages on developers who, I swear, have trouble counting to five. After seeing this, I may want to blame the compiler too."


"Whenever I hear someone complaining about their device battery life, I show them this picture," wrote Renan.


"Prepaying for gas, my credit card was declined," Rand H. writes, "I was worried some thief must've maxed it out, but then I saw how much I was paying in taxes."


Brett A. wrote, "Yo Dawg I heard you like zips, so you should zip your zips to send your zips."




CryptogramSecuring a Raspberry Pi

A Raspberry Pi is a tiny computer designed for makers and all sorts of Internet-of-Things types of projects. Make magazine has an article about securing it. Reading it, I am struck by how much work it is to secure. I fear that this is beyond the capabilities of most tinkerers, and the result will be even more insecure IoT devices.

Krebs on SecurityEquifax Hackers Stole 200k Credit Card Accounts in One Fell Swoop

Visa and MasterCard are sending confidential alerts to financial institutions across the United States this week, warning them about more than 200,000 credit cards that were stolen in the epic data breach announced last week at big-three credit bureau Equifax. At first glance, the private notices obtained by KrebsOnSecurity appear to suggest that hackers initially breached Equifax starting in November 2016. But Equifax says the accounts were all stolen at the same time — when hackers accessed the company’s systems in mid-May 2017.


Both Visa and MasterCard frequently send alerts to card-issuing financial institutions with information about specific credit and debit cards that may have been compromised in a recent breach. But it is unusual for these alerts to state from which company the accounts were thought to have been pilfered.

In this case, however, Visa and MasterCard were unambiguous, referring to Equifax specifically as the source of an e-commerce card breach.

In a non-public alert sent this week to sources at multiple banks, Visa said the “window of exposure” for the cards stolen in the Equifax breach was between Nov. 10, 2016 and July 6, 2017. A similar alert from MasterCard included the same date range.

“The investigation is ongoing and this information may be amended as new details arise,” Visa said in its confidential alert, linking to the press release Equifax initially posted about the breach on Sept. 7, 2017.

The card giant said the data elements stolen included card account number, expiration date, and the cardholder’s name. Fraudsters can use this information to conduct e-commerce fraud at online merchants.

It would be tempting to conclude from these alerts that the card breach at Equifax dates back to November 2016, and that perhaps the intruders then managed to install software capable of capturing customer credit card data in real-time as it was entered on one of Equifax’s Web sites.

Indeed, that was my initial hunch in deciding to report out this story. But according to a statement from Equifax, the hacker(s) downloaded the data in one fell swoop in mid-May 2017.

“The attacker accessed a storage table that contained historical credit card transaction related information,” the company said. “The dates that you provided in your e-mail appear to be the transaction dates. We have found no evidence during our investigation to indicate the presence of card harvesting malware, or access to the table before mid-May 2017.”

Equifax did not respond to questions about how it was storing credit card data, or why only card data collected from customers after November 2016 was stolen.

In its initial breach disclosure on Sept. 7, Equifax said it discovered the intrusion on July 29, 2017. The company said the hackers broke in through a vulnerability in the software that powers some of its Web-facing applications.

In an update to its breach disclosure published Wednesday evening, Equifax confirmed reports that the application flaw in question was a weakness disclosed in March 2017 in a popular open-source software package called Apache Struts (CVE-2017-5638).

“Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted,” the company wrote. “We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638. We continue to work with law enforcement as part of our criminal investigation, and have shared indicators of compromise with law enforcement.”

The Apache flaw was first spotted around March 7, 2017, when security firms began warning that attackers were actively exploiting a “zero-day” vulnerability in Apache Struts. Zero-days refer to software or hardware flaws that hackers find and figure out how to use for commercial or personal gain before the vendor even knows about the bugs.

By March 8, Apache had released new versions of the software to mitigate the vulnerability. But by that time exploit code that would allow anyone to take advantage of the flaw was already published online — making it a race between companies needing to patch their Web servers and hackers trying to exploit the hole before it was closed.

Screen shots apparently taken on March 10, 2017 and later posted to the vulnerability tracking site xss[dot]cx indicate that the Apache Struts vulnerability was present at the time on — the only web site mandated by Congress where all Americans can go to obtain a free copy of their credit reports from each of the three major bureaus annually.

In another screen shot apparently made that same day and uploaded to xss[dot]cx, we can see evidence that the Apache Struts flaw also was present in Experian’s Web properties.

Equifax has said the unauthorized access occurred from mid-May through July 2017, suggesting either that the company’s Web applications were still unpatched in mid-May or that the attackers broke in earlier but did not immediately abuse their access.

It remains unclear when exactly Equifax managed to fully eliminate the Apache Struts flaw from their various Web server applications. But one thing we do know for sure: The hacker(s) got in before Equifax closed the hole, and their presence wasn’t discovered until July 29, 2017.

Update, Sept. 15, 12:31 p.m. ET: Visa has updated their advisory about these 200,000+ credit cards stolen in the Equifax breach. Visa now says it believes the records also included the cardholder’s Social Security number and address, suggesting that (ironically enough) the accounts were stolen from people who were signing up for credit monitoring services through Equifax.

Equifax also clarified the breach timeline to note that it patched the Apache Struts flaw in its Web applications only after taking the hacked system(s) offline on July 30, 2017. Which means Equifax left its systems unpatched for more than four months after a patch (and exploit code to attack the flaw) was publicly available.

CryptogramHacking Robots

Researchers have demonstrated hacks against robots, taking over and controlling their camera, speakers, and movements.

News article.

Worse Than FailureCodeSOD: string isValidArticle(string article)

Anonymous sends us this little blob of code, which is mildly embarassing on its own:

    static StringBuilder vsb = new StringBuilder();
    internal static string IsValidUrl(string value) {
        if (value == null)
            return "\"\"";

        vsb.Length= 0;
        vsb.Append("@\"");

        for (int i=0; i<value.Length; i++) {
            if (value[i] == '\"')
                vsb.Append("\"\"");
            else
                vsb.Append(value[i]);
        }

        vsb.Append("\"");
        return vsb.ToString();
    }

I’m willing to grant that re-using the same static StringBuilder object is a performance tuning thing, but everything else about this is… just plain puzzling.

The method is named IsValidUrl, but it returns a string. It doesn’t do any validation! All it appears to do is take any arbitrary string and return that string wrapped as if it were a valid C# string literal. At best, this method is horridly misnamed, but if its purpose is to truly generate valid C# strings, it has a potential bug: it doesn’t handle new-lines. Now, I’m sure that won’t be a problem that comes back up before the end of this article.
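
To see why quote-doubling alone is not enough, here is the same pitfall sketched in JavaScript (an illustration of the general point, not the .NET code itself): a hand-rolled escaper that only doubles quotes passes newlines through untouched, while a complete escaper such as JSON.stringify encodes every control character.

```javascript
// Mirrors the spirit of IsValidUrl: wrap in quotes, double embedded quotes,
// and ignore everything else.
function naiveQuote(value) {
  return '"' + value.replace(/"/g, '""') + '"';
}

const payload = 'first line\nsecond line';

const naive = naiveQuote(payload);    // still contains a raw newline
const safe = JSON.stringify(payload); // newline becomes the escape sequence \n

// In generated source code, that raw newline is exactly what lets attacker
// input break out of the string literal and onto a fresh line of code.
```

The zero-day follows directly from this gap: once input can smuggle a raw newline into generated source, data becomes code.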

The code, taken on its own, is just bad. But when placed into context, it gets worse. This isn’t just code. It’s part of .NET’s System.Runtime.Remoting package. Still, I know, you’re saying to yourself, ‘In all the millions of lines in .NET, this is really the worst you’ve come up with?’

Well, it comes up because remember that bug with new-lines? Well, guess what. That exact flaw was a zero-day that allowed code execution… in RTF files.

Now, skim through some of the other code in wsdlparser.cs, and you'll see the real horror. This entire file has one key job: generating a class capable of parsing data according to an input WSDL file… by using string concatenation.

The real WTF is the fact that you can embed SOAP links in RTF files and Word will attempt to use them, thus running the WSDL parser against the input data. This is code that’s a little bad, used badly, creating an exploited zero-day.


Don Martianother 2x2 chart

What to do about different kinds of user data interchange:

- Good data, collected without permission: build tools and norms to reduce the amount of reliable data that is available without permission.
- Good data, collected with permission: develop and test new tools and norms that enable people to share data that they choose to share.
- Bad data, collected without permission: report on and show errors in low-quality data that was collected without permission.
- Bad data, collected with permission: offer users incentives and tools that help them choose to share accurate data and correct errors in voluntarily shared data.

Most people who want data about other people still prefer data that's collected without permission, and collaboration is something that they'll settle for. So most voluntary user data sharing efforts will need a defense side as well. Freedom-loving technologists have to help people reduce the amount of data that can be taken from them without permission, so that those who want data have a reason to listen to people about sharing it.

Planet Linux Australia: OpenSTEM: New Dates for Human Relative + ‘Explorer Classroom’ Resources

During September, National Geographic is featuring the excavations of Homo naledi at Rising Star Cave in South Africa in their Explorer Classroom, in tune with new discoveries and the publishing of dates for this enigmatic little hominid. A Teacher’s Guide and Resources are available and classes can log in to see live updates from the […]


TED: “World peace will come from sitting around the table”: Chef Pierre Thiam chats with food blogger Ozoz Sokoh

Chef and cookbook author Pierre Thiam, left, sits down with food blogger Ozoz Sokoh to talk about the West African rice dish jollof — beloved in Nigeria, Senegal, Ghana and around the world. But who makes it best? They spoke during TEDGlobal 2017 in Arusha, Tanzania. Photo: Callie Giovanna / TED

Two African cooks walk into a bar; 30 seconds later they are arguing over whose country’s jollof rice is better. Or so the corny joke would go. The truth is, I really had no idea what would happen if we got Senegal-born chef Pierre Thiam (TED Talk: A Forgotten Ancient Grain That Could Help Africa Prosper) and Nigerian jollof promoter Ozoz Sokoh to sit down together for a friendly chat.

Based in New York, Pierre is a world-renowned chef who grew up in Senegal and is known for his exquisite dishes and his passion for spreading African cuisine across the world. He informed me that my interview request was the third jollof-related one he had granted in a week, the previous ones coming from the BBC and Wall Street Journal. It totally makes sense that in the heat of the jollof wars that now erupt every few weeks, mostly on Twitter, usually between Nigerians and Ghanaians, pundits are turning to a Senegalese chef for their take on the dispute. Jollof, after all, is named for the Wolof people, the largest ethnic group in Senegal; the country does have some claim.

Ozoz, for her own part, is an accomplished cook (she declined to be called a chef because it’s like a professional certification, apparently), food blogger and photographer, and probably one of the biggest promoters of jollof rice in Africa right now, an obsession that has since burst out of her Twitter timeline into a dedicated blog and the well-attended World Jollof Day festival. Was she down to interview Pierre about the jollof controversy? Of course. In fact, Ozoz had come from Lagos armed with homemade Nigerian spices, snacks and a jollof T-shirt for Pierre.

I apologize in advance to everyone who was spoiling for some sort of fiery showdown; this isn’t it. And I will admit to influencing their conversation slightly, by suggesting to them that the jollof question was merely an interesting pretext for a broader and infinitely more useful conversation about African cuisine that both of them were incredibly suited to have. What you are about to read is what happened next.

Ozoz: I think that it’s amazing that we’ve had all these ingredients for centuries but our preference is to default to what isn’t homegrown. You were talking about fonio yesterday, and I think there is an appreciation that we need to develop for homegrown products. Apart from fonio, what other things do you think we should be going crazy about? Things that are locally grown and could have transformative effects on food security.

Pierre: There are countless, you see. Millet is one of them. Sorghum is another one. The leaves too, especially in Nigeria where there are so many interesting leaf vegetables that are highly recommended for diets, and many cultures don’t know them as much as Nigeria does. So there is an opportunity there to share this knowledge. People talk about moringa, but moringa is just one of them.

Ozoz: One of my concerns is how do we get people in remote, non-urban areas to realise the value of what they have around them.

Pierre: Actually I don’t think it’s people in rural areas who have this problem. It’s people in urban areas who like to mimic the westerners’ way of eating and look down on the rural way of eating. Take fonio, for instance — you find it in Northern Nigeria and the Southern part of Senegal a lot, but in Lagos, Abuja, Dakar, you have to look for it. So the rural areas, they have it because there is a tradition. That’s what they have. And they can’t even afford the food that comes from the west. But us, we prefer to import from the west, and this is terrible for our economy. It’s terrible for our sense of pride, which is affected every day.

“I think there are many rituals that we’ve lost,” Ozoz says, “but sitting around the table with family and friends is one that we need to reintroduce into our way of life.” She’s speaking with Pierre Thiam at TEDGlobal 2017. Photo: Callie Giovanna / TED

Ozoz: I feel like the attitude to homegrown is changing. Nok by Alara, for instance, has an amazing menu that is a tribute to homegrown, just an amazing mixture of local flavours and textures. But what other things do you think we can do to grow the whole new Nigerian or West African-style cuisine — in addition to cooking, what other ways beyond the kitchen?

Pierre: It’s a very good question, because it goes beyond the kitchen. It’s not only chefs who can wage that battle. It takes many, many levels. The media is important because information is key. Many people don’t know: We have wonderful ingredients. We have superfoods. If you look at our DNA, our background, our ancestors were strong people and they were eating that food, and because of that they were taken, because of their strength. We today want to say that that food is not good enough, and we import diseases. Many of the diseases that you see today in Nigeria or Dakar are imported. Diabetes, high cholesterol, high blood pressure, hypertension … all of which are directly connected with your diet. We use a lot of cubes now in our diet, and that is directly linked to why there is a lot of hypertension, because there is a lot of sodium in them. It’s a mind shift, we have to get back to what we have.

Ozoz: You are right, the media plays a really important role. So, jollof rice. Obviously, everyone says Nigerian jollof is the best :) What do you think?

Pierre: I hear you. When I’m in Nigeria, I eat Nigerian jollof, that’s for sure. And I enjoy it. When I’m in Ghana, I love Ghanaian jollof too. This is the great thing about jollof, jollof is a dish that’s like all these different cultures and countries just owning it. Jollof means Senegal [ed: the name derives from “Wolof“], but that doesn’t mean we own it. That is the way Africa is, food transcends borders, you know, and jollof has obviously transcended borders in a way that is powerful. This war is beautiful.

Ozoz: So you think Jollof can promote world peace?

Pierre: Absolutely. I think world peace will come from sitting around the table.

Pierre Thiam says: “When I’m in Nigeria, I eat Nigerian jollof, that’s for sure. And I enjoy it. When I’m in Ghana, I love Ghanaian jollof too. That is the way Africa is: food transcends borders.” Photo: Callie Giovanna / TED

Ozoz: I think there are many rituals that we’ve lost, but sitting around the table with family and friends is one that we need to reintroduce into our way of life.

Pierre: It is key. Simple moments like this on a daily basis can make a huge difference. And jollof rice is a symbolic dish; it’s great that everyone claims it.

Ozoz: It’s so refreshing to hear you say that — it’s a testament to your open and giving nature.

Pierre: That’s what food is about: sharing. In Africa you go to a household and people offer you food. Food is something we don’t keep to ourselves, we have to share it. If you go to a household in Lagos, you will be offered something to drink, zobo, it’s a symbolic thing.

Ozoz: I was really, really fascinated to read modern recipes in Senegal, modern recipes from the source to the bowl. I was really intrigued by the palm oil recipes, particularly the palm oil ice cream. Really, really intrigued, it looks really amazing and it’s on my list of things to make once I get back and I settle down. I’m gonna get organic palm oil, the best quality that I can find.

Pierre: That’s the best ice cream I’ve ever had.

Ozoz: It looks the part.

Pierre: I want to hear what you have to say when you make it.

Ozoz: Tell me about how you developed this recipe. Were you sleeping? Was it midnight? How did it come to you?

Pierre: At first I wanted to have something vegan, something without dairy — as you can see, there is no dairy in that recipe. But when you eat it, you don’t taste that there is no dairy, it’s got the richness of the palm oil. There’s coconut milk, there is palm oil, and there is lime zest, which really brings the acidity. So you have a perfect balance, which is what you are really looking for. Creating new recipes is like chemistry. Your kitchen is your lab, and you just get creative and have fun with it.

Ozoz: I find myself thinking a lot about my memory bank…my taste bank. There are certain things I eat that transport me to a time, a place…what are some of the things that are in your memory bank, and can you share a bit about why they are there?

Pierre: Well, it usually goes back to childhood. The memories of food are powerful, and it can come from anything. Like a whiff that takes you back to your grandmother’s, the dishes that she would cook for you when you were a kid. So for me, I’m gonna come back to palm oil and okro, those are the ingredients that are very powerful to me and take me back to those moments of innocence. It’s very emotional when I get into that zone. A lot of my creations come from there, and those traditions. And that is why traditions are important. I think that any African chef before looking to the future has to go back into the past and remember what was served to them in their childhood — or do some research into the traditions and get a better grasp of the future.

Ozoz: If you were a spice, what would you be?

Pierre: Probably ginger, because I like the heat of it. Especially Nigerian ginger. I like it because it can bring the sensation of heat without being too overpowering like pepper.

Ozoz: If you were a fruit, what would you be?

Pierre: A fruit, huh? I love papaya, because I can use it as a dessert, or as a tenderiser when I’m cooking meat. I love green papaya that I can put in a salad, with red onions and chili and lime juice, that becomes a snack. It’s very versatile.

Ozoz: I think the future of food in Africa has a lot to do with collaboration. How do we grow this collective of voices around it, writers, food photographers, chefs… In the US, for instance, there are associations, foundations, but I’m not sure if those constructs would suit African needs. What should we be thinking about if we are to take the appreciation of our food history and practice of the culture to the next level?

Pierre: I think that this conversation is important to have…like chef’s meetings. It could be around events. For instance, this November I’m inviting chefs to Saint-Louis, in Senegal. And they are coming from across Africa, from Cameroon, Morocco, Cote d’Ivoire, South Africa, and they are coming to this event as part of the Saint-Louis Forum. Each of us will come with our own traditions and approach to food.

Ozoz: You are absolutely right, that coming together, exchange of ideas, discussions …

Bankole in the background: blogging, food festivals…

Ozoz: Yes. We talked about the role of media earlier. Writing, podcasts, videos, how-tos, documentaries, it’s a whole range.

Pierre: And it’s the right time, right now, we have a lot of tools at our disposal. We don’t need big networks to broadcast this, we can do it ourselves and reach millions of people. As Africans, we have a unique opportunity to tell our story. African cuisine is ready to be explored, we’ve got so much to offer from each country and so many different cultures with different flavors.

Surrounded by mounds of fresh ingredients, Pierre Thiam preps fonio sushi rolls to share onstage at TEDGlobal 2017. Photo: Ryan Lash / TED


Ozoz: Quick fire round. Zobo or tamarind?

Pierre: Zobo.

Ozoz: What do you always have in your fridge?

Pierre: Oh boy…I don’t have much in my fridge…

Ozoz: What food can’t you live without?

Pierre: Uh? This is going to sound clichéd but I really love my fonio on a regular basis.

Ozoz: I don’t mind that. Foraging or fishing?

Pierre: Fishing.

Ozoz: Cumin or coriander seeds?

Pierre: Cumin.

Ozoz: Rain or sun?

Pierre: Sun.

Ozoz: Pancakes or French toast?

Pierre: French toast.

Ozoz: Food writing or photography?

Pierre: Both. Actually photography is very important, but good food writing can transport you to places in your imagination, which is more difficult to capture with photography.

Ozoz: Cilantro or parsley?

Pierre: Cilantro.

Ozoz: Last one. Nigerian jollof or Ghanaian jollof?

Pierre: Senegalese …

To share with Pierre, Ozoz brought a package of homemade spice mixes from Nigeria, including yaji spice, a peanut-based mixture of smoky and spicy aromatics that’s traditionally used to make suya, a popular street food. Photo: Callie Giovanna / TED

Cryptogram: On the Equifax Data Breach

Last Thursday, Equifax reported a data breach that affects 143 million US customers, about 44% of the population. It's an extremely serious breach; hackers got access to full names, Social Security numbers, birth dates, addresses, driver's license numbers -- exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, and other businesses vulnerable to fraud.

Many sites posted guides to protecting yourself now that it's happened. But if you want to prevent this kind of thing from happening again, your only solution is government regulation (as unlikely as that may be at the moment).

The market can't fix this. Markets work because buyers choose between sellers, and sellers compete for buyers. In case you didn't notice, you're not Equifax's customer. You're its product.

This happened because your personal information is valuable, and Equifax is in the business of selling it. The company is much more than a credit reporting agency. It's a data broker. It collects information about all of us, analyzes it all, and then sells those insights.

Its customers are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you'd be a profitable customer -- everyone who wants to sell you something, even governments.

It's not just Equifax. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about you -- almost all of them companies you've never heard of and have no business relationship with.

Surveillance capitalism fuels the Internet, and sometimes it seems that everyone is spying on you. You're secretly tracked on pretty much every commercial website you visit. Facebook is the largest surveillance organization mankind has created; collecting data on you is its business model. I don't have a Facebook account, but Facebook still keeps a surprisingly complete dossier on me and my associations -- just in case I ever decide to join.

I also don't have a Gmail account, because I don't want Google storing my e-mail. But my guess is that it has about half of my e-mail anyway, because so many people I correspond with have Gmail accounts. I can't even avoid it by choosing not to write to Gmail addresses, because I have no way of knowing whether an address at another domain is hosted at Gmail.

And again, many companies that track us do so in secret, without our knowledge and consent. And most of the time we can't opt out. Sometimes it's a company like Equifax that doesn't answer to us in any way. Sometimes it's a company like Facebook, which is effectively a monopoly because of its sheer size. And sometimes it's our cell phone provider. All of them have decided to track us and not compete by offering consumers privacy. Sure, you can tell people not to have an e-mail account or cell phone, but that's not a realistic option for most people living in 21st-century America.

The companies that collect and sell our data don't need to keep it secure in order to maintain their market share. They don't have to answer to us, their products. They know it's more profitable to save money on security and weather the occasional bout of bad press after a data loss. Yes, we are the ones who suffer when criminals get our data, or when our private information is exposed to the public, but ultimately why should Equifax care?

Yes, it's a huge black eye for the company -- this week. Soon, another company will have suffered a massive data breach and few will remember Equifax's problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

This market failure isn't unique to data security. There is little improvement in safety and security in any industry until government steps in. Think of food, pharmaceuticals, cars, airplanes, restaurants, workplace conditions, and flame-retardant pajamas.

Market failures like this can only be solved through government intervention. By regulating the security practices of companies that store our data, and fining companies that fail to comply, governments can raise the cost of insecurity high enough that security becomes a cheaper alternative. They can do the same thing by giving individuals affected by these breaches the ability to sue successfully, citing the exposure of personal data itself as a harm.

By all means, take the recommended steps to protect yourself from identity theft in the wake of Equifax's data breach, but recognize that these steps are only effective on the margins, and that most data security is out of your hands. Perhaps the Federal Trade Commission will get involved, but without evidence of "unfair and deceptive trade practices," there's nothing it can do. Perhaps there will be a class-action lawsuit, but because it's hard to draw a line between any of the many data breaches you're subjected to and a specific harm, courts are not likely to side with you.

If you don't like how careless Equifax was with your data, don't waste your breath complaining to Equifax. Complain to your government.

This essay previously appeared on

EDITED TO ADD: In the early hours of this breach, I did a radio interview where I minimized the ramifications of this. I didn't know the full extent of the breach, and thought it was just another in an endless string of breaches. I wondered why the press was covering this one and not many of the others. I don't remember which radio show interviewed me. I kind of hope it didn't air.

TED: This is how to make Pierre Thiam’s fonio sushi

Pierre Thiam’s fonio sushi recipe wraps chunks of fresh vegetables in a mixture of the ancient fonio grain and sweet potato.  Photo: Ryan Lash / TED

If you’ve seen Pierre Thiam’s TED Talk about fonio, then you saw that part when he actually handed food out to the audience, yes? For those who didn’t know to sit in the front rows to receive that blessing (or couldn’t be there in the first place), and don’t mind rolling up their sleeves in the kitchen, Pierre has shared the recipe and cooking instructions for anyone who would like to re-create his fonio sushi.

No, I haven’t tried it yet, but if you can procure all the ingredients, especially the fonio, obviously, it looks super easy to make! Here we go.

To make Fonio Sweet Potato and Okra Sushi, you are going to need:

1 cup cooked fonio
1 cooked and mashed sweet potato
1 tbsp. rice vinegar
Salt to taste
1 carrot, cut into sticks and blanched
1 cucumber, seeded and cut into sticks
2 cups young okra, trimmed on both ends, blanched and shocked in iced water
1 package nori seaweed sheets, toasted

In a large bowl, combine cooked fonio, sweet potato and rice vinegar. Season with salt. Lay a bamboo sushi mat on a smooth surface, and lay out seaweed sheet on sushi mat. Using a paddle or your hands, lay out the fonio-sweet potato mixture evenly and thinly, leaving about 2 inches of the seaweed edge farthest from you uncovered.

Lay out the fonio mixture evenly on top of the nori rice sheet, leaving space at the far end for rolling. Photo: Ryan Lash / TED

Lay cucumber sticks in a row at the edge nearest you. Lay out a row of carrot sticks next, then a row of okra. Moisten the far edge of the nori with fingers dipped in water. Take the edge closest to you and roll the nori sheet as tightly as possible until you have one complete roll.

Lay out a row of cucumber, a row of carrot and a row of okra, then carefully roll everything together, using the bamboo mat for support. Photo: Ryan Lash / TED

Press the moistened edge against the roll to seal, and place the roll seam side down. Run your knife under warm water, to prevent sticking, and carefully slice the roll into 6–8 pieces.

Neaten up the edges, then slice the roll, using a damp knife to prevent sticking. Photo: Ryan Lash / TED

Serve with soy sauce and wasabi, and garnish with spice if you like; when preparing the sushi for the TEDGlobal audience, Pierre used dehydrated dawadawa. This recipe serves four. Enjoy.

Pierre Thiam garnishes his fonio sushi with dehydrated dawadawa for spice and color. Photo: Ryan Lash / TED

Krebs on Security: Adobe, Microsoft Plug Critical Security Holes

Adobe and Microsoft both on Tuesday released patches to plug critical security vulnerabilities in their products. Microsoft’s patch bundles fix close to 80 separate security problems in various versions of its Windows operating system and related software — including two vulnerabilities that already are being exploited in active attacks. Adobe’s new version of its Flash Player software tackles two flaws that malware or attackers could use to seize remote control over vulnerable computers with no help from users.


Of the two zero-day flaws being fixed this week, the one in Microsoft’s ubiquitous .NET Framework (CVE-2017-8759) is perhaps the most concerning. Despite this flaw being actively exploited, it is somehow labeled by Microsoft as “important” rather than “critical” — the latter being the most dire designation.

More than two dozen flaws Microsoft remedied with this patch batch come with a “critical” warning, which means they could be exploited without any assistance from Windows users — save for perhaps browsing to a hacked or malicious Web site.

Regular readers here probably recall that I’ve often recommended installing .NET updates separately from any remaining Windows updates, mainly because in past instances in which I’ve experienced problems installing Windows updates, a .NET patch was usually involved.

For the most part, Microsoft now bundles all security updates together in one big patch ball for regular home users — no longer letting people choose which patches to install. One exception is patches for the .NET Framework, and I stand by my recommendation to install the patch roll-ups separately, reboot, and then tackle the .NET updates. Your mileage may vary.

Another vulnerability Microsoft fixed addresses “BlueBorne” (CVE-2017-8628), which is a flaw in the Bluetooth wireless data transmission standard that attackers could use to snarf data from Bluetooth-enabled devices that are physically nearby and with Bluetooth turned on.

For more on this month’s Patch Tuesday from Microsoft, check out Microsoft’s security update guide, as well as this blog from Ivanti (formerly Shavlik).

Adobe’s newest Flash version — v. for Windows, Mac and Linux systems — corrects two critical bugs in Flash. For those of you who still have and want Adobe Flash Player installed in a browser, it’s time to update and/or restart your browser.

Windows users who browse the Web with anything other than Internet Explorer may need to apply the Flash patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, though users may need to manually check for updates or restart the browser to get the latest version. When in doubt, click the vertical three-dot icon to the right of the URL bar, select “Help,” then “About Chrome”: if an update is available, Chrome will install it then. Chrome replaces that three-dot icon with an up-arrow inside of a circle when updates are ready to install.

Better yet, consider removing or at least hobbling Flash Player, which is a perennial target of malware attacks. Most sites have moved away from requiring Flash, and Adobe itself is sunsetting this product (albeit not for another two years).

Windows users can get rid of Flash through the Add/Remove Programs menu, unless they’re using Chrome, which bundles its own version of Flash Player. To get to the Flash settings page, type or cut and paste “chrome://settings/content” into the address bar, and click on the Flash result.

Sociological Images: The Cost of Sexual Harassment

Originally posted at Gender & Society

Last summer, Donald Trump shared how he hoped his daughter Ivanka might respond should she be sexually harassed at work. He said, “I would like to think she would find another career or find another company if that was the case.” President Trump’s advice reflects what many American women feel forced to do when they’re harassed at work: quit their jobs. In our recent Gender & Society article, we examine how sexual harassment, and the job disruption that often accompanies it, affects women’s careers.

How many women quit and why?

Our study shows how sexual harassment affects women at the early stages of their careers. Eighty percent of the women in our survey sample who reported either unwanted touching or a combination of other forms of harassment changed jobs within two years. Among women who were not harassed, only about half changed jobs over the same period. In our statistical models, women who were harassed were 6.5 times more likely than those who were not to change jobs. This was true after accounting for other factors – such as the birth of a child – that sometimes lead to job change. In addition to job change, industry change and reduced work hours were common after harassing experiences.

Percent of Working Women Who Change Jobs (2003–2005)

In interviews with some of these survey participants, we learned more about how sexual harassment affects employees. While some women quit work to avoid their harassers, others quit because of dissatisfaction with how employers responded to their reports of harassment.

Rachel, who worked at a fast food restaurant, told us that she was “just totally disgusted and I quit” after her employer failed to take action until they found out she had consulted an attorney. Many women who were harassed told us that leaving their positions felt like the only way to escape a toxic workplace climate. As advertising agency employee Hannah explained, “It wouldn’t be worth me trying to spend all my energy to change that culture.”

The Implications of Sexual Harassment for Women’s Careers

Critics of Donald Trump’s remarks point out that many women who are harassed cannot afford to quit their jobs. Yet some feel they have no other option. Lisa, a project manager who was harassed at work, told us she decided, “That’s it, I’m outta here. I’ll eat rice and live in the dark if I have to.”

Our survey data show that women who were harassed at work report significantly greater financial stress two years later. The effect of sexual harassment was comparable to the strain caused by other negative life events, such as a serious injury or illness, incarceration, or assault. About 35 percent of this effect could be attributed to the job change that occurred after harassment.

For some of the women we interviewed, sexual harassment had other lasting effects that knocked them off-course during the formative early years of their career. Pam, for example, was less trusting after her harassment, and began a new job, for less pay, where she “wasn’t out in the public eye.” Other women were pushed toward less lucrative careers in fields where they believed sexual harassment and other sexist or discriminatory practices would be less likely to occur.

For those who stayed, challenging toxic workplace cultures also had costs. Even for women who were not harassed directly, standing up against harmful work environments resulted in ostracism and career stagnation. By ignoring women’s concerns and pushing them out, organizational cultures that give rise to harassment remain unchallenged.

Rather than expecting women who are harassed to leave work, employers should consider the costs of maintaining workplace cultures that allow harassment to continue. Retaining good employees will reduce the high cost of turnover and allow all workers to thrive—which benefits employers and workers alike.

Heather McLaughlin is an assistant professor in Sociology at Oklahoma State University. Her research examines how gender norms are constructed and policed within various institutional contexts, including work, sport, and law, with a particular emphasis on adolescence and young adulthood. Christopher Uggen is Regents Professor and Martindale chair in Sociology and Law at the University of Minnesota. He studies crime, law, and social inequality, firm in the belief that good science can light the way to a more just and peaceful world. Amy Blackstone is a professor in Sociology and the Margaret Chase Smith Policy Center at the University of Maine. She studies childlessness and the childfree choice, workplace harassment, and civic engagement. 

(View original at

Cryptogram: Hacking Voice Assistant Systems with Inaudible Voice Commands

Turns out that all the major voice assistants -- Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa -- listen at audio frequencies the human ear can't hear. Hackers can hijack those systems with inaudible commands that their owners can't hear.

News articles.
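The general technique (sometimes called a "dolphin attack") amplitude-modulates a voice command onto an ultrasonic carrier; the microphone's nonlinearity demodulates it back into the audible band, so the assistant "hears" a command humans cannot. A rough, hedged sketch of just the modulation step, with purely illustrative sample rates and frequencies:

```java
public class UltrasonicSketch {
    // Amplitude-modulate a baseband "command" onto an ultrasonic carrier.
    // fs, carrierHz and toneHz are illustrative; a real attack modulates
    // recorded speech, not a sine wave, and needs hardware that emits >20 kHz.
    static double[] modulate(double fs, double carrierHz, double toneHz, int n) {
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            double t = i / fs;
            double baseband = Math.sin(2 * Math.PI * toneHz * t); // stand-in for voice
            // (1 + baseband)/2 keeps the envelope non-negative and |sample| <= 1
            out[i] = (1 + baseband) * Math.sin(2 * Math.PI * carrierHz * t) / 2;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] samples = modulate(192_000, 30_000, 1_000, 4096);
        System.out.println("generated " + samples.length + " samples");
    }
}
```

All the energy sits around 30 kHz, above human hearing, which is why none of this is audible to the device's owner.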

Worse Than Failure: CodeSOD: You Absolutely Don't Need It

The progenitor of this story prefers to be called Mr. Syntax, perhaps because of the sins his boss committed in the name of attempting to program a spreadsheet-loader so generic that it could handle any potential spreadsheet with any data arranged in any conceivable format.

The boss had this idea that everything should be dynamic, even things that should be relatively straightforward to do, such as doing a web-originated bulk load of data from a spreadsheet into the database. Although only two such spreadsheet formats were in use, the boss wrote it to handle ANY spreadsheet. As you might imagine, this spawned mountains of uncommented and undocumented code to keep things generic. Sin was tasked with locating and fixing the cause of a NullPointerException that should simply never have occurred. There was no stack dump. There were no logs. It was up to Sin to seek out and destroy the problem.

Just to make it interesting, this process was slow, so the web service would spawn a job that would email the user with the status of the job. Of course, if there was an error, there would inevitably be no email.

It took an entire day to find and then debug through this simple sheet-loader and the mountain of unrelated embedded code, just to find that the function convertExcelSheet blindly assumed that every cell would exist in all spreadsheets, regardless of potential format differences.
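The failure mode is the classic sparse-spreadsheet trap. Here is a minimal, hypothetical sketch (plain maps instead of a real spreadsheet API; none of these names come from the original code) of the blind assumption versus the defensive alternative:

```java
import java.util.HashMap;
import java.util.Map;

public class SheetSketch {
    // Blind version, as convertExcelSheet apparently behaved: assumes every
    // cell exists, so a sparse sheet throws NullPointerException.
    static String readBlind(Map<String, String> row, String col) {
        return row.get(col).trim(); // NPE when the cell is absent
    }

    // Defensive version: a missing cell becomes an explicit default.
    static String readSafe(Map<String, String> row, String col) {
        String cell = row.get(col);
        return cell == null ? "" : cell.trim();
    }

    public static void main(String[] args) {
        Map<String, String> row = new HashMap<>();
        row.put("A", " Widget ");   // column "B" was never filled in
        System.out.println(readSafe(row, "A")); // "Widget"
        System.out.println(readSafe(row, "B")); // ""
    }
}
```

Real spreadsheet APIs make the same distinction explicit (a lookup for an absent cell returns null), which is exactly the case the generic loader never handled.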

[OP: in the interest of brevity, I've omitted all of the methods outside the direct call-chain...]

  public class OperationsController extends BaseController {
    private final JobService jobService;

    public OperationsController(final JobService jobService) {
      this.jobService = jobService;
    }

    @RequestMapping(value = ".../bulk", method = RequestMethod.POST)
    public @ResponseBody SaveResponse bulkUpload(@AuthenticationPrincipal final User               activeUser,
                                                 @RequestParam("file")    final MultipartFile      file,
                                                                          final WebRequest         web,
                                                                          final HttpServletRequest request) {
      SaveResponse response = new SaveResponse();
      try {
          if (getSystemAdmin(activeUser)) {
             final Map<String,Object> customParams = new HashMap<>();
             response = jobService.runJob((CustomUserDetails)activeUser, ThingBulkUpload.JOB_NAME, customParams);
          } else {
             response.addError("ACCESS_ERROR","Only Administrators can run bulk upload");
          }
      } catch (final Exception e) {
        logger.error("Unable to process file",e);
      }
      return response;
    }
  }

  public class JobServiceImpl implements JobService {
    private static final Logger logger = LoggerFactory.getLogger(OperationsService.class);
    private final JobDAO jobDao;

    public JobServiceImpl(final JobDAO dao) {
      this.jobDao = dao;
    }

    public SaveResponse runJob(final @NotNull CustomUserDetails user,
                               final @NotNull String            jobName,
                               final Map<String,Object>         customParams) {
      SaveResponse response = new SaveResponse();
      try {
          Job job = (Job) jobDao.findFirstByProperty("Job","name",jobName);
          if (job == null || job.getJobId() == null || job.getJobId() <= 0) {
             response.addError("Unable to find Job for name '"+jobName+"'");
          } else {
            JobInstance ji = new JobInstance();
            ji.setJobStatus((JobStatus) jobDao.findFirstByProperty("JobStatus", "jobStatusId", JobStatus.KEY_INITIALZING) );
            Boolean created = jobDao.saveHibernateEntity(ji);
            if (created) {
               String className = job.getJobType().getJavaClass();
               Class<?> c = Class.forName(className);
               Constructor<?> cons = c.getConstructor(JobDAO.class,CustomUserDetails.class,JobInstance.class,Map.class);
               BaseJobImpl baseJob = (BaseJobImpl) cons.newInstance(jobDao,user,ji,customParams);
               ji.setJobStatus((JobStatus) jobDao.findFirstByProperty("JobStatus", "jobStatusId", JobStatus.KEY_IN_PROCESS) );
               StringBuffer successMessage = new StringBuffer();
               successMessage.append("Job '").append(jobName).append("' has been started. ");
               successMessage.append("An email will be sent to '").append(user.getUsername()).append("' when the job is complete. ");
               String url = baseJob.generateCheckBackURL();
               successMessage.append("You can also check the detailed status here: <a href=\"").append(url).append("\">").append(url).append("</a>");
            } else {
               response.addError("Unable to create JobInstance for Job name '"+jobName+"'");
            }
          }
      } catch (Exception e) {
        String message = "Unable to runJob. Please contact support";
      }
      return response;
    }
  }

  public class ThingBulkUpload extends BaseJobImpl {
    public static final String JOB_NAME = "Thing Bulk Upload";
    public static final String KEY_FILE = "file";

    public ThingBulkUpload(final JobDAO             jobDAO,
                           final CustomUserDetails  user,
                           final JobInstance        jobInstance,
                           final Map<String,Object> customParams) {
        super(jobDAO, user, jobInstance, customParams);
    }

        public void run() {
                SaveResponse response = new SaveResponse();
                try {
                        final InputStream inputStream = (InputStream) getCustomParam(KEY_FILE);
                        if (inputStream == null) {
                                response.addError("Unable to run ThingBulkUpload; file is NULL");
                        } else {
                                final AnotherThingImporter cri = new AnotherThingImporter(customParams);
                                response = cri.importThingData(user);
                        }
                } catch (final Exception e) {
                        final String message = "Unable to finish ThingBulkUpload";
                        response.addError(message + ": " + e.getMessage());
                } finally {
                        // Op: snip...
                }
        }
  }

public class AnotherThingImporter {

        // Op: Instantiated this way, even though the impls are annotated with Spring's @Repository.
        private final LocationDAO locationDAO = new LocationDAOImpl();
        private final ContactDAO contactDAO = new ContactDAOImpl();
        private final EntityDAO entityDAO = new EntityDAOImpl();
        private final BaseHibernateDAO baseDAO = new BaseHibernateDAOImpl();
        // Op: snip a few dozen more DAOs

        private       InputStream         workbookStream = null;
        private final Map<String, Object> customParams;

        public AnotherThingImporter(final Map<String, Object> customParams) {
                this.customParams = customParams;
        }

        public void changeFileStream(final InputStream fileStream) {
                workbookStream = fileStream;
        }

        public SaveResponse importThingData(final CustomUserDetails adminUser) {
                final SaveResponse response = new SaveResponse();
                if (workbookStream == null) {
                        throw new ThreeWonException("MISSING_FILE", "AnotherThingImporter was improperly created. No file found.");
                }
                try {
                        final XSSFWorkbook workbook = new XSSFWorkbook(workbookStream);

                        for (int i = 0; i < workbook.getNumberOfSheets(); i++) {

                                final XSSFSheet sheet = workbook.getSheetAt(i);
                                final String sheetName = sheet.getSheetName();

                                // Op: snip 16 unrelated else ifs...
                                } else if (sheetName.equalsIgnoreCase("History")) {
                                        populateHistory(adminUser, response, sheet);
                                // Op: snip 3 more unrelated else ifs...
                        }
                } catch (final IOException e) {
                        throw new ThreeWonException("BAD_EXCEL_FILE", "Unable to open excel workbook.");
                }
                if (response.getErrors() == null || response.getErrors().size() <= 0) {
                        // Op: snip...
                }
                return response;
        }

        // Op: snip 19 completely unrelated methods

        private void populateEducationHistory(final CustomUserDetails adminUser, final SaveResponse response,
                                              final XSSFSheet sheet) {
                final ThingDataConverter converter = new ThingDataConverterImpl(entityDAO, locationDAO /* Op: snip more args... */);
                converter.convertExcelSheet(adminUser, response, sheet, customParams);
        }
}

public class ThingChildAssocConverter extends ThingDataConverter {
        public void convertExcelSheet(final CustomUserDetails adminUser, final SaveResponse response, final XSSFSheet sheet,
                final Map<String, Object> customParams) {
                final int rowCount = sheet.getPhysicalNumberOfRows();
                Integer numCreated = 0;

                for (int rowIndex = DEFAULT_HEADER_ROW_COUNT; rowIndex < rowCount; rowIndex++) {

                    final XSSFRow currentRow = sheet.getRow(rowIndex);

                    // Op: Null pointer thrown from row.getCell(...)
                    //final String name = df.formatCellValue(currentRow.getCell(COL_INST_NUM));
                    final String name = getValue(currentRow, COL_INST_NUM);
                    // Op: creation of the record here
                }
        }

        protected String getValue(final XSSFRow row, final Integer column) {
                // Op: We can not assume that any given cell will exist on all spreadsheets
                try {
                        return df.formatCellValue(row.getCell(column)).trim();
                } catch (final Exception e) {
                        // avoid NullPointers by returning "" instead of null
                        return "";
                }
        }
}
As opposed to two simple methods that just retrieved the cells, in order, from each specific spreadsheet format.
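That simpler approach might look something like the following minimal sketch. All names here are hypothetical, and plain string arrays stand in for POI rows so the example is self-contained: one small loader per known format, reading cells by fixed position, plus a null-safe accessor, instead of a reflective generic framework.

```java
// Hypothetical sketch of a fixed-format loader: one method per known
// spreadsheet layout, with a null-safe cell accessor. String[] rows stand
// in for POI's XSSFRow to keep the example dependency-free.
import java.util.*;

public class FixedFormatLoader {

    // Format A: column 0 = name, column 1 = email.
    static List<Map<String, String>> loadFormatA(List<String[]> rows) {
        List<Map<String, String>> records = new ArrayList<>();
        for (String[] row : rows) {
            Map<String, String> rec = new HashMap<>();
            rec.put("name",  cell(row, 0));
            rec.put("email", cell(row, 1));
            records.add(rec);
        }
        return records;
    }

    // Format B: column 0 = id, column 1 = name, column 2 = startDate.
    static List<Map<String, String>> loadFormatB(List<String[]> rows) {
        List<Map<String, String>> records = new ArrayList<>();
        for (String[] row : rows) {
            Map<String, String> rec = new HashMap<>();
            rec.put("id",        cell(row, 0));
            rec.put("name",      cell(row, 1));
            rec.put("startDate", cell(row, 2));
            records.add(rec);
        }
        return records;
    }

    // Null-safe cell access: a missing row or cell becomes "", never an NPE.
    static String cell(String[] row, int col) {
        return (row == null || col >= row.length || row[col] == null)
                ? "" : row[col].trim();
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
                new String[] {"Alice", "alice@example.com"},
                new String[] {"Bob"}); // short row: no email cell
        List<Map<String, String>> recs = loadFormatA(rows);
        System.out.println(recs.get(1).get("email")); // prints an empty string
    }
}
```

With only two formats in use, two such methods cover every real spreadsheet, and the missing-cell case is handled in one obvious place.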


Don Marti: Tracking protection defaults on trusted and untrusted sites

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

Setting tracking protection defaults for a browser is hard. Some activities that the browser might detect as third-party tracking are actually third-party services such as single sign-on—so when the browser sets too high of a level of protection it can break something that the user expects to work.

Meanwhile, new research from Pagefair shows that "The very large majority (81%) of respondents said they would not consent to having their behaviour tracked by companies other than the website they are visiting." A tracking protection policy that leans too far in the other direction will also fail to meet the user's expectations.

So you have to balance two kinds of complaints.

  • "your dumbass browser broke a site that was working before"

  • "your dumbass browser let that stupid site do stupid shit"

Maybe, though, if the browser can figure out which sites the user trusts, you can keep the user happy by taking a moderate tracking protection approach on the trusted sites, and a more cautious approach on less trusted sites.

Apple Intelligent Tracking Prevention allows third-party tracking by domains that the user interacts with.

If the user has not interacted with example.com in the last 30 days, website data and cookies are immediately purged and continue to be purged if new data is added. However, if the user interacts with example.com as the top domain, often referred to as a first-party domain, Intelligent Tracking Prevention considers it a signal that the user is interested in the website and temporarily adjusts its behavior (More...)

But it looks like this could give large companies an advantage—if the same domain has both a service that users will visit and third-party tracking, then the company that owns it can track users even on sites that the users don't trust. Russell Brandom: Apple's new anti-tracking system will make Google and Facebook even more powerful.

It might make more sense to set the trust level, and the browser's tracking protection defaults, based on which site the user is on. Will users want a working "Tweet® this story" button on a news site they like, and a "Log in with Google" feature on a SaaS site they use, but prefer to have third-party stuff blocked on random sites that they happen to click through to?

How should the browser calculate user trust level? Sites with bookmarks would look trusted, or sites where the user submits forms (especially something that looks like an email address). More testing is needed, and setting protection policies is still a hard problem.
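As a rough illustration of the idea (the names, weights, and threshold below are entirely hypothetical, not anything a browser actually ships), such a heuristic could fold those signals into a per-site score that selects a protection default:

```java
// Toy sketch of a per-site trust heuristic: combine user signals
// (bookmarks, form submissions, a submitted email address) into a score,
// then choose a tracking-protection default. All weights are invented.
public class SiteTrust {
    enum Protection { MODERATE, STRICT }

    static Protection defaultFor(boolean bookmarked, int formSubmissions,
                                 boolean submittedEmail) {
        int score = 0;
        if (bookmarked)     score += 2;                   // explicit act of keeping the site
        score += Math.min(formSubmissions, 3);            // repeated engagement, capped
        if (submittedEmail) score += 2;                   // strong signal of a relationship
        return score >= 3 ? Protection.MODERATE : Protection.STRICT;
    }

    public static void main(String[] args) {
        // A bookmarked site with one form submission gets the moderate default.
        System.out.println(defaultFor(true, 1, false));   // MODERATE
        // A random click-through site gets the cautious default.
        System.out.println(defaultFor(false, 0, false));  // STRICT
    }
}
```

The hard part is not the arithmetic but choosing signals that users actually recognize as "I trust this site," which is exactly what needs testing.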

Bonus link: Proposed Principles for Content Blocking.


Krebs on Security: Ayuda! (Help!) Equifax Has My Data!

Equifax last week disclosed a historic breach involving Social Security numbers and other sensitive data on as many as 143 million Americans. The company said the breach also impacted an undisclosed number of people in Canada and the United Kingdom. But the official list of victim countries may not yet be complete: According to information obtained by KrebsOnSecurity, Equifax can safely add Argentina — if not also other Latin American nations where it does business — to the list as well.

Equifax is one of the world’s three-largest consumer credit reporting bureaus, and a big part of what it does is maintain records on consumers that businesses can use to learn how risky it might be to loan someone money or to extend them new lines of credit. On the flip side, Equifax is somewhat answerable to those consumers, who have a legal right to dispute any information in their credit report which may be inaccurate.

Earlier today, this author was contacted by Alex Holden, founder of Milwaukee, Wisc.-based Hold Security LLC. Holden’s team of nearly 30 employees includes two native Argentinians who spent some time examining Equifax’s South American operations online after the company disclosed the breach involving its business units in North America.

It took almost no time for them to discover that an online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected by perhaps the most easy-to-guess password combination ever: “admin/admin.”

We’ll speak about this Equifax Argentina employee portal — known as Veraz or “truthful” in Spanish — in the past tense because the credit bureau took the whole thing offline shortly after being contacted by KrebsOnSecurity this afternoon. The specific Veraz application being described in this post was dubbed Ayuda or “help” in Spanish on internal documentation.

The landing page for the internal administration page of Equifax’s Veraz portal.

Once inside the portal, the researchers found they could view the names of more than 100 Equifax employees in Argentina, as well as their employee ID and email address. The “list of users” page also featured a clickable button that anyone authenticated with the “admin/admin” username and password could use to add, modify or delete user accounts on the system. A search on “Equifax Veraz” at Linkedin indicates the unit currently has approximately 111 employees in Argentina.

A partial list of active and inactive Equifax employees in Argentina. This page also let anyone add or remove users at will, or modify existing user accounts.

Each employee record included a company username in plain text, and a corresponding password that was obfuscated by a series of dots.

The “edit users” page obscured the Veraz employee’s password, but the same password was exposed by sloppy coding on the Web page.

However, all one needed to do in order to view said password was to right-click on the employee’s profile page and select “view source,” a function that displays the raw HTML code which makes up the Web site. Buried in that HTML code was the employee’s password in plain text.

A review of those accounts shows all employee passwords were the same as each user’s username. Worse still, each employee’s username appears to be nothing more than their last name, or a combination of their first initial and last name. In other words, if you knew an Equifax Argentina employee’s last name, you also could work out their password for this credit dispute portal quite easily.

But wait, it gets worse. From the main page of the employee portal was a listing of some 715 pages worth of complaints and disputes filed by Argentinians who had at one point over the past decade contacted Equifax via fax, phone or email to dispute issues with their credit reports. The site also lists each person’s DNI — the Argentinian equivalent of the Social Security number — again, in plain text. All told, this section of the employee portal included more than 14,000 such records.

750 pages worth of consumer complaints — more than 14,000 in all — complete with the Argentinian equivalent of the SSN (the DNI) in plain text. This page was auto-translated by Google Chrome into English.

Jorge Speranza, manager of information technology at Hold Security, was born in Argentina and lived there for 40 years before moving to the United States. Speranza said he was aghast at seeing the personal data of so many Argentinians protected by virtually non-existent security.

Speranza explained that — unlike the United States — Argentina is traditionally a cash-based society that only recently saw citizens gaining access to credit.

“People there have put a lot of effort into getting a loan, and for them to have a situation like this would be a disaster,” he said. “In a country that has gone through so much — where there once was no credit, no mortgages or whatever — and now having the ability to get loans and lines of credit, this is potentially very damaging.”

Shortly after receiving details about this epic security weakness from Hold Security, I reached out to Equifax and soon after heard from a Washington, D.C.-based law firm that represents the credit bureau.

I briefly described what I’d been shown by Hold Security, and attorneys for Equifax said they’d get back to me after they validated the claims. They later confirmed that the Veraz portal was disabled and that Equifax is investigating how this may have happened. Here’s hoping it will stay offline until it is fortified with even the most basic of security protections.

According to Equifax’s own literature, the company has operations and consumer “customers” in several other South American nations, including Brazil, Chile, Ecuador, Paraguay, Peru and Uruguay. It is unclear whether the complete lack of security at Equifax’s Veraz unit in Argentina was indicative of a larger problem for the company’s online employee portals across the region, but it’s difficult to imagine they could be any worse.

“To me, this is just negligence,” Holden said. “In this case, their approach to security was just abysmal, and it’s hard to believe the rest of their operations are much better.”

I don’t have much advice for Argentinians whose data may have been exposed by sloppy security at Equifax. But I have urged my fellow Americans to assume their SSN and other personal data was compromised in the breach and to act accordingly. On Monday, KrebsOnSecurity published a Q&A about the breach, which includes all the information you need to know about this incident, as well as detailed advice for how to protect your credit file from identity thieves.

[Author’s note: I am listed as an adviser to Hold Security on the company’s Web site. However this is not a role for which I have been compensated in any way now or in the past.]

TED: The big idea: 3 reasons to be kind to educators

Any dedicated educator can tell you: A teaching job extends far beyond the hours of the school day. Molding the minds of future leaders while simultaneously ferrying them across the rapids of childhood and adolescence — and dealing with the economics of the job — is a calling not for the faint of heart. Here are three solid reasons to give teachers the love and support they deserve.

1. Being a teacher is tough (just about everywhere)

Loving teaching and being a teacher are two different, but not mutually exclusive, things when money can play a deciding factor. Teachers from around the world struggle with similar financial issues, no matter their longitude or latitude. Through our TED-Ed network, we caught up with 17 public school teachers from Kildare to Kathmandu, Johannesburg to Oslo and beyond, on how their salary influences their livelihood.

“I took a pay cut to become a teacher. It is a calling, not a job. Teaching is a privilege that is not for the infirm of purpose or seekers of large pay-stub totals. If I didn’t wake up before my alarm so I can get to school early, I’d be worried. The fact is that I do wake each morning excited for what the day holds for my classroom — the challenges as much as the triumphs — which for some can be as simple as reading a first sentence.”

—  a 6th grade teacher from Markham, Canada

“I am happy but financially strapped. I don’t eat at restaurants; I can’t afford it. I am not a demanding guy, so my income seems sufficient for now, but I can’t sustain my life on it.”

— a computer teacher from Kathmandu, Nepal

“Though I love my job, the stress that comes with it along with the stress of money problems sometimes makes me consider leaving, even though I don’t think I would feel as fulfilled as I do right now. We scrape by, and make the best of what we have, and we are happy for now.”

— an elementary school music teacher from Georgia, United States

Many teach for the love of education, to shape the minds of the coming generations, not for the love of money.

2. Educators don’t just teach, they manage a flurry of feelings

As kids age into their late teens, they simultaneously embark on an emotional journey that often plays out during school hours. Heartbreak, arguments with friends, troubled home life, struggles with mental health and schoolwork, never-before-experienced emotions, and numerous other factors typically crop up during and in-between classes. Without a parent or guardian at hand, it’s left to the teachers and school staff to tend to the emotional well-being of students.

Amid administrative duties, endless grading and planning lessons that may forever impact the students they teach, educators must manage a room full of budding young adults who aren’t always ready to sit quietly and be taught. Patience and consideration are tested on a daily basis, no matter how much love a teacher has for their craft and their students. Stress is inevitable in any job, of course. But there’s opportunity for a special, haunting stress to form — one born from the knowledge that the future’s sitting just feet from the chalkboard, in its most formative years; to not acknowledge these demands, within limits, is to not recognize teachers as human beings first.

In addition to all of this, some believe educators should start teaching emotions in grade school. The RULER program, which is used in over a thousand schools in the US and abroad, is currently one of the most prominent tools for teaching emotions that breaks down the skill into five convenient steps:

  • Recognizing emotions in oneself and others

  • Understanding the causes and consequences of emotions

  • Labeling emotional experiences with an accurate and diverse vocabulary

  • Expressing and

  • Regulating emotions in ways that promote growth

Educator Nadia Lopez (TED Talk: Why open a school? To close a prison)  has her own tips for dealing with emotions that’ve already begun to bubble over. Lopez opened Mott Hall Bridges Academy in Brooklyn, New York (you may recognize the name from Humans of New York), and she did so with a simple goal: for her school to be a haven and guiding light for young scholars. As principal, she dedicates her life to what she sees in the future of each of her students. Sometimes, that means acting as the emotional bridge or traffic control as kids learn about not just what they should know, but more about who they are and what they stand for.

Lopez shares some of her favorite ways to dial down conflict with administrators, her scholars and staff — applicable in situations far beyond the classroom — broken down into 6 bite-sized tips.

  • Be vulnerable. Though it may seem counterintuitive, being open and honest with your team during challenging times demonstrates a sense of trust that can develop into mutual respect.
  • Be aware. Stop and ask, “Why isn’t this working?”
  • Center yourself. Being calm is so important that Lopez tries to spend at least 15 minutes each day enjoying uninterrupted silence.
  • Manage mediation. No yelling; wait your turn to speak; respect each person’s chance to explain their side.
  • Listen deeply and actively. In tense discussions, it’s important to acknowledge the feelings of each party involved and use reflective language to show that they’ve been heard.
  • Acknowledge, respect and thank. Repeat. A simple email, text or brief handwritten (ideally, hand-delivered) note has the power to touch deeply and stave off challenging occurrences.

3. Yes, teachers help kids, but sometimes they need help too

Teachers often spend hundreds of dollars on school supplies over the course of a school year. There are many options that allow parents and other charitable individuals to support classrooms near and far. Organizations like Donors Choose allow any interested party to choose an inspiring project and donate any amount.

Or, you can always take part in chiseling down fees in your own backyard.

If you’re interested in doing more, here’s a nice list of other ways you can help educators, if time and/or resources are available.

Let’s be honest, most people have at least one story about their favorite teacher that’s left a lasting impression, shaped a lifelong interest, or helped them get through a tough time. That educator’s compassion and dedication may have even brought you to where you are now. Love is a main ingredient in what makes those memories stick — one that helped principal Linda Cliatt-Wayman (TED Talk: How to fix a broken school? Lead fearlessly, love hard) successfully turn around three schools.

As she says to her students every day, in a mantra many educators echo to their kids: “If nobody told you they loved you today, you remember I do, and I always will.”

Check out the TED-Ed blog for more education-based love and let’s celebrate educators!


TED: The big idea: 5 ways to be a more thoughtful traveler

There’s a difference between traveling to a place and vacationing there. Vacationing renders visions of relaxation and minimal effort, whereas traveling evokes thoughts of an adventure where Wi-Fi hotspots are few and far between. Here are some ways to think differently about the places you visit and the people you see before stepping out of that train, plane, automobile or boat.

1. Know some history

History offers context: it explains why buildings look a certain way, how foods became staples, what specific clothing styles and patterns mean, and which locations hold significance.

Generally, it’ll help you feel less lost as you wander through streets and interact with locals.

No one’s expecting you to become an expert overnight, or at all really. However, learning a few key facts about how an area, the people and their culture came to be demonstrates a basic level of respect. Skim through articles online or check out a book at your local library prior to your trip, or explore via Google’s Cultural Institute and Art Project.

Find out what LGBTQ life is like around the world; if you ever visit New York City, you might be interested to know what it looked like before it became a city; or you may even be shocked to discover, before you ride one, that camels aren’t originally from the Middle East or the Horn of Africa at all.

Familiarize yourself with a place’s history, culture, art and science (politics too, if you’re feeling particularly passionate), and watch as your perspective of the world shifts just enough for things to take on a finer, clearer focus.

2. Think about how you’ll document your trip

A good question to ask yourself: Would you even go to this place if you weren’t allowed to take pictures? Try to keep your picture-taking habit in perspective, because focusing on your photos could keep you from truly immersing yourself in the moment and place.

Here are a few key tips for being a smarter picture-taker (check out the entire article to collect them all):

  1. Keep your lens clean and your battery charged. Yes, both of these things are obvious, but they’re also very easy to forget. Phones can get especially dirty from riding around in our pockets and getting our fingerprints all over them. So form a habit where every time you go to pick up your camera, you clean off your lens. You can wipe your lens with a lens cloth or a super soft fabric like an old T-shirt. But be careful; using a fabric that’s too rough will scratch.
  2. Light is king. If you remember one thing from this list, choose this one. Lighting is as valuable a tool as your camera itself. Generally, natural light from the sun is the best option. If you’re inside, raise the blinds and open the curtains to let in as much light as possible and, if you can, move your subject near the window.
  3. Use a reflector. Reflectors bounce light from the sun or a lamp onto an object. If you want to get that clean, professional studio look, use a white piece of poster board or foamcore to reflect light onto your subjects.
  4. Think before you shoot. This means taking time to consider what’s in the frame, and coming up with the best composition. Are there any water bottles or random objects that should be moved? Have you cropped off the top of someone’s head? Take some time to consider it.
  5. Mind the lines. Horizon lines should be straight unless you’re making them diagonal for a creative effect. I like to use the grid feature on my phone to make sure I’m not off. I also often use a 9-square grid like the one below that breaks my photo up into thirds. This is called the Rule of Thirds — aim to place the points of interest in your photo along the lines or where the lines cross, and your photos will naturally feel more balanced to the viewer.

3. Read a book* set wherever you’re going

*Fiction and nonfiction, if you can.

Books give you a good sense of the atmosphere of a place, the people you may encounter … Of course, people can’t be chalked up to imaginary situations and characteristics because that’s just stereotyping.

In the wise words of writer Chimamanda Ngozi Adichie (TED Talk: The danger of a single story):

“Stories matter. Many stories matter. Stories have been used to dispossess and to malign, but stories can also be used to empower and to humanize. Stories can break the dignity of a people, but stories can also repair that broken dignity,” she says.

It helps to learn about the people and the customs of a place so you don’t go charging in there acting like you’ve just dropped onto a different planet. If you’re looking for a place to start, here are 196 novel recommendations (one from each country in the world).

4. Learn some of the language

It’s always useful to know at least a few words to help you get around. You don’t want to be like a bad friend who always drops into a place only to eat all the good food, find a comfy place to sleep, and leave a few days later with barely a word exchanged.

“Why learn languages? If it isn’t going to change the way you think, what would the other reasons be? There are some,” says linguist John McWhorter. “One of them is that if you want to imbibe a culture, if you want to drink it in, if you want to become part of it, then whether or not the language channels the culture — and that seems doubtful — if you want to imbibe the culture, you have to control to some degree the language that the culture happens to be conducted in. There’s no other way.”


Here’s a simplified list from McWhorter’s talk (but watch the whole thing for all the language-lovin’):

  1. They are tickets to being able to participate in the culture of the people who speak them, just by virtue of the fact that it is their code.
  2. It’s been shown that if you speak two languages, dementia is less likely to set in, and that you are probably a better multitasker.
  3. Languages are just an awful lot of fun. Much more fun than we’re often told. They’re playful, if you let them be.
  4. We live in an era when it’s never been easier to teach yourself another language. Today you can lay down — lie on your living room floor, sipping bourbon, and teach yourself any language that you want to with wonderful sets such as Rosetta Stone. I highly recommend the lesser known Glossika as well. You can do it any time, therefore you can do it more and better.

If you need a little more motivation to sign up for a class, download an app, or leaf through a translation dictionary, check out the playlist below for TED Talks that’ll inspire you to learn a new language.

5. Understand where you come from

What does it mean to be from a place? For some, the answer is straightforward and obvious. For others, the question isn’t as simple as it sounds. A thought experiment for yourself, as well as others you encounter while traveling — perhaps over beers or a card game — is to ask, “Where are you a local?” instead of “Where are you from?”

Taiye Selasi suggests an examination of life basics, which she calls the three “R’s”:

  • Rituals. Think of your daily rituals, whatever they may be: making your coffee, driving to work, harvesting your crops, saying your prayers. What kind of rituals are these? Where do they occur? In what city or cities in the world do shopkeepers know your face?
  • Relationships. Think of your relationships, of the people who shape your days. To whom do you speak at least once a week, be it face to face or on FaceTime? Be reasonable in your assessment; I’m not talking about your Facebook friends. I’m speaking of the people who shape your weekly emotional experience.
  • Restrictions. How we experience our locality depends in part on our restrictions. By restrictions, I mean, where are you able to live? What passport do you hold? Are you restricted by, say, racism, from feeling fully at home where you live? By civil war, dysfunctional governance, economic inflation, from living in the locality where you had your rituals as a child? This is the least sexy of the R’s, less lyric than rituals and relationships, but the question takes us past “Where are you now?” to “Why aren’t you there, and why?”

“Take a piece of paper and put those three words on top of three columns, then try to fill those columns as honestly as you can,” Selasi says. “A very different picture of your life in local context, of your identity as a set of experiences, may emerge.”

Need a more in-depth exploration of what it means to be a thoughtful traveler? Or if you’re ready to set off on an adventure, but not quite sure where, check out these TED Talks to watch when you’re in the mood for adventure and this great list of talks to give you wanderlust.

TEDCan cities have compassion? A Q&A with OluTimehin Adegbeye following her blockbuster TED Talk

For 12 spellbinding minutes, OluTimehin Adegbeye gave us a moving, challenging talk on cities and communities — and who gets to belong. She spoke at TEDGlobal 2017 on August 30 in Arusha, Tanzania. Photo: Bret Hartman / TED

Urban gentrification in Lagos is displacing hundreds of thousands of people who do not fit into the administration’s resplendent vision for the future. Their crime? Poverty. In what was one of the most moving talks of TEDGlobal 2017, OluTimehin Adegbeye calls us to consider the human cost of progress, specifically for the former inhabitants of Otodo Gbame, a coastal Lagos fishing community that was forcefully demolished to make way for a prime beachfront development. In 12 minutes of fearless oratory, punctuated with ironic humor and stories, Adegbeye makes the case for why cities must have consciences. We asked for more details about the Otodo Gbame situation, and how to think about creating cities that don’t leave their people behind.

How did you come to be invested in the subject of cities pushing out the poor? Was it before or after Otodo Gbame?

Definitely after Otodo Gbame. I had been vaguely aware of some of the anti-poor policies and actions taken by successive governments in Lagos, but the demolition of Otodo Gbame was the first incident that really woke me up to the injustice and urgency of the situation.

My initial involvement was the result of feelings of helplessness; I didn’t know what I could do, so I volunteered to write about it. But the more stories I heard in trying to write, the clearer it became to me how the structures that allowed anti-poor violence to exist unchallenged were not all that different or separate from those that allowed misogyny, or any other kind of violence really, to thrive. So my involvement became less about a desire to ‘help’ others and more about trying to dismantle systems that hurt me too, whether directly or by allowing me to be complicit in unchecked violence.

You are an activist with many causes. Why did you choose this one to be the subject of your talk?

I chose this topic because of the urgency of the situation. The demolitions and forced, systematic evictions in Lagos are happening with increasing regularity under the current government, so my hope is that the talk will lead to increased scrutiny of the actors who are responsible for these displacements, and eventually the abandonment of a model of “development” which prioritizes profits over people.

You said that these forced evictions are unconstitutional, but they happen anyway. I’m aware that there was a court ruling in favor of the displaced Otodo Gbame residents. Are you close enough to the situation to describe the current legal status of the issue? Will those people see some sort of vindication at some point? Or is justice too much to hope for?

The latest update I have is that the Lagos state government is appealing the ruling in favor of Otodo Gbame and other waterfront communities. I’m not sure what the grounds of the appeal are/will be, but since the people of Otodo Gbame have still been neither compensated nor resettled, it doesn’t seem like the executive is particularly interested in justice.

There are certain agencies within the government who have announced intentions to collaborate with informal settlements and waterfront communities to pursue in-situ upgrading, but very little if any concrete action has come of this.

Do you think the Lagos state government hears, feels this at all? Have you seen any reactions or indications that they do?

The Lagos state government definitely knows there has been widespread resistance and outrage, especially where Otodo Gbame is concerned. A handful of government officials, including the governor himself, have made statements attempting to explain or justify the demolitions in the wake of public outcry. However, it is anybody’s guess whether they are interested in going beyond trying to save face.

Aside from TED, where else have you talked about this?

I’ve written about the demolitions for US and Norwegian publications, but TED is the only place I’ve spoken about them. I think it’s a great choice for getting the word out.

Do you have an organised campaign working on this?

The NGO I work with, the Justice and Empowerment Initiatives, has created a social media campaign tagged #SaveTheWaterfronts, which is specifically about the waterfront communities that are under threat in Lagos, and a broader one tagged #InclusiveLagos that comments on the threats to livelihoods, police brutality, forced migration and other actions that target marginalised groups.

Who else is championing these people’s rights, and how can they be supported/helped? Are there organisations that are trusted channels for this help?

JEI has been working with waterfront communities and informal settlements in Lagos and Port Harcourt, Nigeria, for the past few years. Their model of legal empowerment is one I find incredibly effective for bottom-up organising, and they are a donor-funded organisation so I would definitely recommend donating to them.

Are there other communities at risk of displacement that we should be paying attention to right now?

Right now, Ago Egun Bariga, which is one of the communities you can see from Third Mainland Bridge in Lagos, is being slowly starved out by land reclamation activities that the Lagos state government contracted out to a Nigerian subsidiary of Boskalis, a Netherlands-based company. Efforts to dialogue with them have so far proved abortive. Also, another community, Abete Iwaya, was demolished just two days before I left for TED Global.

This is probably an unfair question, but do you have any thoughts about how to create more inclusive cities, cities with consciences?

I think there are many, many answers to this question that have merit — many of which have been proffered by people with greater expertise than me. But I would suggest responsiveness. Cities that take the needs of their residents into consideration as they grow will inevitably become cities with a conscience, I think. So then the question becomes who the powers-that-be consider ‘legitimate’ residents, and how that is defined. Because it’s not true that the exclusionary cities we have today don’t respond to their residents; it’s just that they respond to a very specific subset of residents. Which brings me back to the question of belonging. So maybe cities with a conscience are those that are non-discriminatory in their responsiveness.

Watch OluTimehin’s TED Talk >>

Worse Than FailureCodeSOD: Cases, Cases, Cases

[Image: Illustrated fashion catalogue - summer, 1890]

Paul R. shows us a classic example of the sort of case statement that maybe, you know, never should've been implemented as a case statement:

It is cut-and-paste to the extreme. Even worse, as fields were added, someone would have to go in and update this block of code. This massive block was replaced with...

var fieldName = reader["TemplateFieldName"].ToString();
theCommands = theCommands.Replace(
    fieldName, WashTheValue(reader["FieldValue"].ToString(),
        reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
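The fix is language-agnostic: instead of one hand-written case per template field, treat the field name in each row as data and drive the replacement from it. Here's a minimal sketch of that pattern in Python (the row shape and `wash_the_value` are illustrative stand-ins for the article's reader rows and `WashTheValue` helper, not the actual implementation):

```python
def wash_the_value(value, ordering_field_id, price_format):
    # Stand-in for the article's WashTheValue helper; here it just trims.
    return value.strip()

def apply_template_fields(commands, rows):
    """Replace each <FieldName> placeholder using the row data itself."""
    for row in rows:
        field_name = row["TemplateFieldName"]  # e.g. "<Price>" -- the data IS the case label
        commands = commands.replace(
            field_name,
            wash_the_value(row["FieldValue"], row["OrderingFieldID"], row["PriceFormat"]),
        )
    return commands

# Hypothetical rows, shaped like the reader columns in the article:
rows = [
    {"TemplateFieldName": "<Price>", "FieldValue": " 9.99 ",
     "OrderingFieldID": "1", "PriceFormat": "F2"},
    {"TemplateFieldName": "<Size>", "FieldValue": "2 gal",
     "OrderingFieldID": "2", "PriceFormat": ""},
]
print(apply_template_fields("Only <Price> for a <Size> pot!", rows))
# -> Only 9.99 for a 2 gal pot!
```

Adding a new template field now means adding a row of data, not editing the code.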

Below, you'll find the original code. Don't sprain your scrolling finger!

                    switch (reader["TemplateFieldName"].ToString())
                        case "<2yr.Guarantee>":
                            theCommands = theCommands.Replace("<2yr.Guarantee>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Address>":
                            theCommands = theCommands.Replace("<Address>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<ADDRESS>":
                            theCommands = theCommands.Replace("<ADDRESS>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<address1>":
                            theCommands = theCommands.Replace("<address1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<address2>":
                            theCommands = theCommands.Replace("<address2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<AddressLine2>":
                            theCommands = theCommands.Replace("<AddressLine2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<BareRoot>":
                            theCommands = theCommands.Replace("<BareRoot>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Blank>":
                            theCommands = theCommands.Replace("<Blank>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<BLANK>":
                            theCommands = theCommands.Replace("<BLANK>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<BlankBack>":
                            theCommands = theCommands.Replace("<BlankBack>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<BulbSize>":
                            theCommands = theCommands.Replace("<BulbSize>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<BulbSizeSpanish>":
                            theCommands = theCommands.Replace("<BulbSizeSpanish>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<CanadianTireCode>":
                            theCommands = theCommands.Replace("<CanadianTireCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Certified>":
                            theCommands = theCommands.Replace("<Certified>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<CLEMATIS>":
                            theCommands = theCommands.Replace("<CLEMATIS>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<COLOR_BAR>":
                            theCommands = theCommands.Replace("<COLOR_BAR>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<CompanyAddress>":
                            theCommands = theCommands.Replace("<CompanyAddress>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<CompanyName>":
                            theCommands = theCommands.Replace("<CompanyName>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<CONTAINER_SIZE>":
                            theCommands = theCommands.Replace("<CONTAINER_SIZE>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<ContainerSize>":
                            theCommands = theCommands.Replace("<ContainerSize>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<CTC>":
                            theCommands = theCommands.Replace("<CTC>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Cust.Stock#>":
                            theCommands = theCommands.Replace("<Cust.Stock#>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<CustomerAddress>":
                            theCommands = theCommands.Replace("<CustomerAddress>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<customerCode>":
                            theCommands = theCommands.Replace("<customerCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<CustomerStock#>":
                            theCommands = theCommands.Replace("<CustomerStock#>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<CustStockNum>":
                            theCommands = theCommands.Replace("<CustStockNum>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<DeerIcon>":
                            theCommands = theCommands.Replace("<DeerIcon>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<description>":
                            theCommands = theCommands.Replace("<description>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<DisplayStakeHole>":
                            theCommands = theCommands.Replace("<DisplayStakeHole>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GADD>":
                            theCommands = theCommands.Replace("<GADD>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Gallons>":
                            theCommands = theCommands.Replace("<Gallons>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GNAME>":
                            theCommands = theCommands.Replace("<GNAME>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Grade>":
                            theCommands = theCommands.Replace("<Grade>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Grower>":
                            theCommands = theCommands.Replace("<Grower>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrowerAddress>":
                            theCommands = theCommands.Replace("<GrowerAddress>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<growerAddress>":
                            theCommands = theCommands.Replace("<growerAddress>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<growerName>":
                            theCommands = theCommands.Replace("<growerName>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrowerName>":
                            theCommands = theCommands.Replace("<GrowerName>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrownBy>":
                            theCommands = theCommands.Replace("<GrownBy>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<grownBy>":
                            theCommands = theCommands.Replace("<grownBy>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrownBy1>":
                            theCommands = theCommands.Replace("<GrownBy1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrownBy2>":
                            theCommands = theCommands.Replace("<GrownBy2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrownBy3>":
                            theCommands = theCommands.Replace("<GrownBy3>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrownByLine2>":
                            theCommands = theCommands.Replace("<GrownByLine2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrownByLine3>":
                            theCommands = theCommands.Replace("<GrownByLine3>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrownIn>":
                            theCommands = theCommands.Replace("<GrownIn>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrowninCanada>":
                            theCommands = theCommands.Replace("<GrowninCanada>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<GrownInCanada>":
                            theCommands = theCommands.Replace("<GrownInCanada>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<HagenCode>":
                            theCommands = theCommands.Replace("<HagenCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<HasPrice>":
                            theCommands = theCommands.Replace("<HasPrice>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Inches>":
                            theCommands = theCommands.Replace("<Inches>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<InsidersLogo>":
                            theCommands = theCommands.Replace("<InsidersLogo>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<InsidersReport>":
                            theCommands = theCommands.Replace("<InsidersReport>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<ItemNumber>":
                            theCommands = theCommands.Replace("<ItemNumber>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Licensed>":
                            theCommands = theCommands.Replace("<Licensed>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<LicensedGrower>":
                            theCommands = theCommands.Replace("<LicensedGrower>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Liters>":
                            theCommands = theCommands.Replace("<Liters>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Logo>":
                            theCommands = theCommands.Replace("<Logo>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Logo2>":
                            theCommands = theCommands.Replace("<Logo2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<MultiPrice>":
                            theCommands = theCommands.Replace("<MultiPrice>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<NAME>":
                            theCommands = theCommands.Replace("<NAME>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<NewLogo>":
                            theCommands = theCommands.Replace("<NewLogo>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<NotPlantedRetail>":
                            theCommands = theCommands.Replace("<NotPlantedRetail>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<OnSaleFor>":
                            theCommands = theCommands.Replace("<OnSaleFor>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Origin>":
                            theCommands = theCommands.Replace("<Origin>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<OSHLocation>":
                            theCommands = theCommands.Replace("<OSHLocation>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<OwnRoot>":
                            theCommands = theCommands.Replace("<OwnRoot>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Page_Number>":
                            theCommands = theCommands.Replace("<Page_Number>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<PBS>":
                            theCommands = theCommands.Replace("<PBS>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<PC>":
                            theCommands = theCommands.Replace("<PC>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<PICASCode>":
                            theCommands = theCommands.Replace("<PICASCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<PinkDot>":
                            theCommands = theCommands.Replace("<PinkDot>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Plant1>":
                            theCommands = theCommands.Replace("<Plant1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Plant2>":
                            theCommands = theCommands.Replace("<Plant2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Plant3>":
                            theCommands = theCommands.Replace("<Plant3>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Plant4>":
                            theCommands = theCommands.Replace("<Plant4>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Plant5>":
                            theCommands = theCommands.Replace("<Plant5>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<PlantCount>":
                            theCommands = theCommands.Replace("<PlantCount>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<PlantedRetail>":
                            theCommands = theCommands.Replace("<PlantedRetail>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<PotSize>":
                            theCommands = theCommands.Replace("<PotSize>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<PotSizeIcon>":
                            theCommands = theCommands.Replace("<PotSizeIcon>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Premium>":
                            theCommands = theCommands.Replace("<Premium>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Price>":
                            theCommands = theCommands.Replace("<Price>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<price>":
                            theCommands = theCommands.Replace("<price>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<PricePoint>":
                            theCommands = theCommands.Replace("<PricePoint>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<ProductOfUSA>":
                            theCommands = theCommands.Replace("<ProductOfUSA>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<ProductofUSA>":
                            theCommands = theCommands.Replace("<ProductofUSA>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Retail>":
                            theCommands = theCommands.Replace("<Retail>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<RetailPrice>":
                            theCommands = theCommands.Replace("<RetailPrice>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<RetailPricePoint>":
                            theCommands = theCommands.Replace("<RetailPricePoint>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Season>":
                            theCommands = theCommands.Replace("<Season>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<ShippingCode>":
                            theCommands = theCommands.Replace("<ShippingCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Size>":
                            theCommands = theCommands.Replace("<Size>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<SIZE>":
                            theCommands = theCommands.Replace("<SIZE>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<SizeCode>":
                            theCommands = theCommands.Replace("<SizeCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<SKU>":
                            theCommands = theCommands.Replace("<SKU>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<SKU2>":
                            theCommands = theCommands.Replace("<SKU2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<SKU3>":
                            theCommands = theCommands.Replace("<SKU3>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Slot_For_Pixie>":
                            theCommands = theCommands.Replace("<Slot_For_Pixie>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<SpecialPricing>":
                            theCommands = theCommands.Replace("<SpecialPricing>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Supplier>":
                            theCommands = theCommands.Replace("<Supplier>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<TagDate>":
                            theCommands = theCommands.Replace("<TagDate>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<TargetLocation>":
                            theCommands = theCommands.Replace("<TargetLocation>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Type>":
                            theCommands = theCommands.Replace("<Type>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<UPC>":
                            theCommands = theCommands.Replace("<UPC>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<UPC_Readable>":
                            theCommands = theCommands.Replace("<UPC_Readable>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<UPCBackground>":
                            theCommands = theCommands.Replace("<UPCBackground>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<UPCCanadian>":
                            theCommands = theCommands.Replace("<UPCCanadian>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<WaM>":
                            theCommands = theCommands.Replace("<WaM>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<WAM>":
                            theCommands = theCommands.Replace("<WAM>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<wam>":
                            theCommands = theCommands.Replace("<wam>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<WaM2>":
                            theCommands = theCommands.Replace("<WaM2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Website>":
                            theCommands = theCommands.Replace("<Website>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Weights>":
                            theCommands = theCommands.Replace("<Weights>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Weights1>":
                            theCommands = theCommands.Replace("<Weights1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<Weights2>":
                            theCommands = theCommands.Replace("<Weights2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<WeightsAndMeasures>":
                            theCommands = theCommands.Replace("<WeightsAndMeasures>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<WeightsAndMeasures2>":
                            theCommands = theCommands.Replace("<WeightsAndMeasures2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<WeightsMeasures>":
                            theCommands = theCommands.Replace("<WeightsMeasures>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<WeightsMeasures1>":
                            theCommands = theCommands.Replace("<WeightsMeasures1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<WeightsMeasures2>":
                            theCommands = theCommands.Replace("<WeightsMeasures2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<No_DDD_Logo>":
                            theCommands = theCommands.Replace("<No_DDD_Logo>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                        case "<NGcode>":
                            theCommands = theCommands.Replace("<NGcode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
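Every branch of that switch performs the identical Replace/WashTheValue call, so the whole wall of cases collapses into a single data-driven loop. Here is a rough sketch of the idea (in JavaScript rather than the original C#, with a stand-in for `WashTheValue` — none of this is the application's actual code):

```javascript
// Hypothetical sketch: one list of token names plus one loop replaces
// every case in the original switch, since each case did the same thing.
const TOKENS = [
  "<Retail>", "<RetailPrice>", "<Season>", "<SKU>", "<UPC>",
  "<WeightsAndMeasures>", "<Slot_For_Pixie>", // ...and the rest
];

// Stand-in for the original WashTheValue helper; here it just trims.
function washTheValue(value, orderingFieldId, priceFormat) {
  return String(value).trim();
}

function applyTokens(commands, row) {
  for (const token of TOKENS) {
    if (commands.includes(token)) {
      commands = commands.replace(
        token,
        washTheValue(row.FieldValue, row.OrderingFieldID, row.PriceFormat)
      );
    }
  }
  return commands;
}
```

Adding a new token then means adding one string to the list, not ten more copy-pasted lines — and the pixies keep their slot.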

Don MartiNew WebExtension reveals targeted political ads: Interview with Jeff Larson

The investigative journalism organization ProPublica is teaming up with three German news sites to collect political ads on Facebook in advance of the German parliamentary election on Sept. 24.

Because typical Facebook ads are shown only to finely targeted subsets of users, the best way to understand them is to have a variety of users cooperate to run a client-side research tool. ProPublica developer Jeff Larson has written a WebExtension that runs on Mozilla Firefox and Google Chrome to do just that. I asked him how the development went.

Q: Who was involved in developing your WebExtension?

A: Just me. But I can't take credit for the idea. I was at a conference in Germany a few months ago with my colleague Julia Angwin, and we were talking with people who worked at Spiegel about our work on the Machine Bias series. We all thought it would be a good idea to look at political ads on Facebook during the German election cycle, given what little we knew about what happened in the U.S. election last year.

Q: What documentation did you use, and what would you recommend that people read to get started with WebExtensions?

A: I think both Mozilla and Google's documentation sites are great. I would say that the tooling for Firefox is much better due to the web-ext tool. I'd definitely start there (Getting started with web-ext) the next time around.

Basically, web-ext takes care of a great deal of the fiddly bits of writing an extension—everything from packaging to auto-reloading the extension when you edit the source code. It makes the development process a lot smoother.
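For readers who haven't built one: the directory web-ext operates on is an ordinary WebExtension with a manifest.json at its root. A minimal, purely illustrative manifest (the names and values below are made up, not ProPublica's) might look like:

```json
{
  "manifest_version": 2,
  "name": "political-ad-collector-sketch",
  "version": "0.1.0",
  "description": "Illustrative manifest only; all field values are made up.",
  "permissions": ["storage", "*://*.facebook.com/*"],
  "content_scripts": [
    {
      "matches": ["*://*.facebook.com/*"],
      "js": ["collector.js"]
    }
  ]
}
```

With that in place, `web-ext run` launches a temporary Firefox profile with the extension loaded and reloads it on every save, `web-ext lint` checks the manifest, and `web-ext build` produces the distributable zip.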

Q: Did you develop in one browser first and then test in the other, or test in both as you went along?

A: I started out in Chrome, because most of the users of our site use Chrome. But I started using Firefox about halfway through because of web-ext. After that, I sort of ping-ponged back and forth because I was using source maps and each browser handles those a bit differently. Mostly the extension worked pretty seamlessly across both browsers. I had to make a couple of changes, but I think it took me a few minutes to get it working in Firefox, which was a pleasant surprise.

Q: What are you running as a back end service to collect ads submitted by the WebExtension?

A: We're running a Rust server that collects the ads and uploads images to an S3 bucket. It is my first Rust project, and it has some rough edges, but I'm pretty much in love with Rust. It is pretty wonderful to know that the server won't go down because of all the built-in type and memory safety in the language. We've open-sourced the project, and I could use help if anyone wants to contribute: Facebook Political Ad Collector on GitHub.

Q: Can you see that the same user got a certain set of ads, or are they all anonymized?

A: We strive to clean the ads of all identifying information. So, we only collect the id of the ad, and the targeting information that the advertiser used. For example, people 18 to 44 who live in New York.
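In other words, a stored record might be limited to something like the following (the field names are illustrative, not ProPublica's actual schema):

```json
{
  "ad_id": "23842970001",
  "targeting": {
    "age": "18 to 44",
    "location": "New York"
  }
}
```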

Q: What are your next steps?

A: Well, I'm planning on publishing the ads we've received on a web site, as well as a clean dataset that researchers might be interested in. We also plan to monitor the Austrian elections, and next year is pretty big for the U.S. politically, so I've got my work cut out for me.

Q: Facebook has refused to release some "dark" political ads from the 2016 election in the USA. Will your project make "dark" ads in Germany visible?

A: We've been running for about four days, and so far we've collected 300 political ads in Germany. My hope is we'll start seeing some of the more interesting ones from fly-by-night groups. Political advertising on sites like Facebook isn't regulated in either the United States or Germany, so on some level just having a repository of these ads is a public service.

Q: Your project reveals the "dark" possibly deceptive ads in Chrome and Firefox but not on mobile platforms. Will it drive deceptive advertising away from desktop and toward mobile?

A: I'm not sure, that's a possibility. I can say that Firefox on Android allows WebExtensions and I plan on making sure this extension works there as well, but we'll never be able to see what happens in the native Facebook applications in any sort of large scale and systematic way.

Q: Has anyone from Facebook offered to help with the project?

A: Nope, but if anyone wants to reach out, I would love the help!

Thank you.

Get the WebExtension

Krebs on SecurityThe Equifax Breach: What You Should Know

It remains unclear whether those responsible for stealing Social Security numbers and other data on as many as 143 million Americans from big-three credit bureau Equifax intend to sell this data to identity thieves. But if ever there was a reminder that you — the consumer — are ultimately responsible for protecting your financial future, this is it. Here’s what you need to know and what you should do in response to this unprecedented breach.

Some of the Q&As below were originally published in a 2015 story, How I Learned to Stop Worrying and Embrace the Security Freeze. It has been updated to include new information specific to the Equifax intrusion.

Q: What information was jeopardized in the breach?

A: Equifax was keen to point out that its investigation is ongoing. But for now, the data at risk includes Social Security numbers, birth dates, addresses on 143 million Americans. Equifax also said the breach involved some driver’s license numbers (although it didn’t say how many or which states might be impacted), credit card numbers for roughly 209,000 U.S. consumers, and “certain dispute documents with personal identifying information for approximately 182,000 U.S. consumers.”

Q: Was the breach limited to Americans?

A: No. Equifax said it believes the intruders got access to “limited personal information for certain UK and Canadian residents.” It has not disclosed what information for those residents was at risk or how many from Canada and the UK may be impacted.

Q: What is Equifax doing about this breach?

A: Equifax is offering one free year of their credit monitoring service. In addition, it has put up a Web site — — that tried to let people determine whether they were affected.

Q: That site tells me I was not affected by the breach. Am I safe?

A: As noted in this story from Friday, the site seems hopelessly broken, often returning differing results for the same data submitted at different times. In the absence of more reliable information from Equifax, it is safer to assume you ARE compromised.

Q: I read that the legal language in the terms of service that consumers must accept before enrolling in the free credit monitoring service from Equifax requires one to waive their rights to sue the company in connection with this breach. Is that true?

A: Not according to Equifax. The company issued a statement over the weekend saying that nothing in that agreement applies to this cybersecurity incident.

Q: So should I take advantage of the credit monitoring offer?

A: It can’t hurt, but I wouldn’t count on it protecting you from identity theft.

Q: Wait, what? I thought that was the whole point of a credit monitoring service?

A: The credit bureaus sure want you to believe that, but it’s not true in practice. These services do not prevent thieves from using your identity to open new lines of credit, and from damaging your good name for years to come in the process. The most you can hope for is that credit monitoring services will alert you soon after an ID thief does steal your identity.

Q: Well then what the heck are these services good for?

A: Credit monitoring services are principally useful in helping consumers recover from identity theft. Doing so often requires dozens of hours writing and mailing letters, and spending time on the phone contacting creditors and credit bureaus to straighten out the mess. In cases where identity theft leads to prosecution for crimes committed in your name by an ID thief, you may incur legal costs as well. Most of these services offer to reimburse you up to a certain amount for out-of-pocket expenses related to those efforts. But a better solution is to prevent thieves from stealing your identity in the first place.

Q: What’s the best way to do that?

A: File a security freeze — also known as a credit freeze — with the four major credit bureaus.

Q: What is a security freeze?

A: A security freeze essentially blocks any potential creditors from being able to view or “pull” your credit file, unless you affirmatively unfreeze or thaw your file beforehand. With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you (i.e., view your credit file). And because each credit inquiry caused by a creditor has the potential to lower your credit score, the freeze also helps protect your score, which is what most lenders use to decide whether to grant you credit when you truly do want it and apply for it.

Q: What’s involved in freezing my credit file?

A: Freezing your credit involves notifying each of the major credit bureaus that you wish to place a freeze on your credit file. This can usually be done online, but in a few cases you may need to contact one or more credit bureaus by phone or in writing. Once you complete the application process, each bureau will provide a unique personal identification number (PIN) that you can use to unfreeze or “thaw” your credit file in the event that you need to apply for new lines of credit sometime in the future. Depending on your state of residence and your circumstances, you may also have to pay a small fee to place a freeze at each bureau. There are four consumer credit bureaus: Equifax, Experian, Innovis and Trans Union. It’s a good idea to keep your unfreeze PIN(s) in a folder in a safe place (perhaps along with your latest credit report), so that when and if you need to undo the freeze, the process is simple.

Q: How much is the fee, and how can I know whether I have to pay it?

A: The fee ranges from $0 to $15 per bureau, meaning that it can cost upwards of $60 to place a freeze at all four credit bureaus (recommended). However, in most states, consumers can freeze their credit file for free at each of the major credit bureaus if they also supply a copy of a police report and in some cases an affidavit stating that the filer believes he/she is or is likely to be the victim of identity theft. In many states, that police report can be filed and obtained online. The fee covers a freeze as long as the consumer keeps it in place. Consumers Union has a useful breakdown of state-by-state fees.

Q: But what if I need to apply for a loan, or I want to take advantage of a new credit card offer?

A: You thaw the freeze temporarily (in most cases the default is for 24 hours).

Q: What’s involved in thawing my credit file? And do I need to thaw it at all four bureaus?

A: The easiest way to unfreeze your file for the purposes of gaining new credit is to spend a few minutes on the phone with the company from which you hope to gain the line of credit (or research the matter online) to see which credit bureau they rely upon for credit checks. It will most likely be one of the major bureaus. Once you know which bureau the creditor uses, contact that bureau either via phone or online and supply the PIN they gave you when you froze your credit file with them. The thawing process should not take more than 24 hours, but hiccups in the thawing process sometimes make things take longer. It’s best not to wait until the last minute to thaw your file.

Q: It seems that credit bureaus make their money by selling data about me as a consumer to marketers. Does a freeze prevent that?

A: A freeze on your file does nothing to prevent the bureaus from collecting information about you as a consumer — including your spending habits and preferences — and packaging, splicing and reselling that information to marketers.

Q: Can I still use my credit or debit cards after I file a freeze? 

A: Yes. A freeze does nothing to prevent you from using existing lines of credit you may have.

Q: I’ve heard about something called a fraud alert. What’s the difference between a security freeze and a fraud alert on my credit file?

A: With a fraud alert on your credit file, lenders or service providers should not grant credit in your name without first contacting you to obtain your approval — by phone or whatever other method you specify when you apply for the fraud alert. To place a fraud alert, merely contact one of the credit bureaus via phone or online, fill out a short form, and answer a handful of multiple-choice, out-of-wallet questions about your credit history. Assuming the application goes through, the bureau you filed the alert with must by law share that alert with the other bureaus.

Consumers also can get an extended fraud alert, which remains on your credit report for seven years. Like the free freeze, an extended fraud alert requires a police report or other official record showing that you’ve been the victim of identity theft.

An active duty alert is another alert available if you are on active military duty. The active duty alert is similar to an initial fraud alert except that it lasts 12 months and your name is removed from pre-approved firm offers of credit or insurance (prescreening) for 2 years.

Q: Why would I pay for a security freeze when a fraud alert is free?

A: Fraud alerts only last for 90 days, although you can renew them as often as you like. More importantly, while lenders and service providers are supposed to seek and obtain your approval before granting credit in your name if you have a fraud alert on your file, they are not legally required to do this — and very often don’t.

Q: Hang on: If I thaw my credit file after freezing it so that I can apply for new lines of credit, won’t I have to pay to refreeze my file at the credit bureau where I thawed it?

A: It depends on your state. Some states allow bureaus to charge $5 for a temporary thaw or a lift on a freeze; in other states there is no fee for a thaw or lift. However, even if you have to do this once or twice a year, the cost of doing so is almost certainly less than paying for a year’s worth of credit monitoring services. Again, Consumers Union has a handy state-by-state guide listing the freeze and unfreeze laws and fees.

Q: What about my kids? Should I be freezing their files as well? Is that even possible? 

A: Depends on your state. Roughly half of the U.S. states have laws on the books allowing freezes for dependents. Check out The Lowdown on Freezing Your Kid’s Credit for more information.

Q: Is there anything I should do in addition to placing a freeze that would help me get the upper hand on ID thieves?

A: Yes: Periodically order a free copy of your credit report. By law, each of the three major credit reporting bureaus must provide a free copy of your credit report each year — via a government-mandated site: The best way to take advantage of this right is to make a notation in your calendar to request a copy of your report every 120 days, to review the report and to report any inaccuracies or questionable entries when and if you spot them. Avoid other sites that offer “free” credit reports and then try to trick you into signing up for something else.

Q: I just froze my credit. Can I still get a copy of my credit report from 

A: According to the Federal Trade Commission, having a freeze in place should not affect a consumer’s ability to obtain copies of their credit report from

Q: If I freeze my file, won’t I have trouble getting new credit going forward? 

A: If you’re in the habit of applying for a new credit card each time you see a 10 percent discount for shopping in a department store, a security freeze may cure you of that impulse. Other than that, as long as you already have existing lines of credit (credit cards, loans, etc) the credit bureaus should be able to continue to monitor and evaluate your creditworthiness should you decide at some point to take out a new loan or apply for a new line of credit.

Q: Can I have a freeze AND credit monitoring? 

A: Yes, you can. However, it may not be possible to sign up for credit monitoring services while a freeze is in place. My advice is to sign up for whatever credit monitoring may be offered for free, and then put the freezes in place.

Q: Beyond this breach, how would I know who is offering free credit monitoring? 

A: Hundreds of companies — many of which you have probably transacted with at some point in the last year — have disclosed data breaches and are offering free monitoring. California maintains one of the most comprehensive lists of companies that disclosed a breach, and most of those are offering free monitoring.

Q: I see that Trans Union has a free offering. And it looks like they offer another free service called a credit lock. Why shouldn’t I just use that?

A: I haven’t used that monitoring service, but it looks comparable to others. However, I take strong exception to the credit bureaus’ increasing use of the term “credit lock” to steer people away from securing a freeze on their file. I notice that Trans Union currently does this when consumers attempt to file a freeze. Your mileage may vary, but their motives for saddling consumers with even more confusing terminology are suspect. I would not count on a credit lock to take the place of a credit freeze, regardless of what these companies claim (consider the source).

Q: I read somewhere that the PIN code Equifax gives to consumers for use in the event they need to thaw a freeze at the bureau is little more than a date and time stamp of the date and time when the freeze was ordered. Is this correct? 

A: Yes. However, this does not appear to be the case with the other bureaus.

Q: Does this make the process any less secure? 

A: Hard to say. An identity thief would need to know the exact time your report was ordered. Unless of course Equifax somehow allowed attackers to continuously guess and increment that number through its Web site (there is no indication this is the case). However, having a freeze is still more secure than not having one.

Q: Someone told me that having a freeze in place wouldn’t block ID thieves from fraudulently claiming a tax refund in my name with the IRS, or conducting health insurance fraud using my SSN. Is this true?

A: Yes. There are several forms of identity theft that probably will not be blocked by a freeze. But neither will they be blocked by a fraud alert or a credit lock. That’s why it’s so important to regularly review your credit file with the major bureaus for any signs of unauthorized activity.

Q: Okay, I’ve got a security freeze on my file, what else should I do?

A: It’s also a good idea to notify a company called ChexSystems to keep an eye out for fraud committed in your name. Thousands of banks rely on ChexSystems to verify customers that are requesting new checking and savings accounts, and ChexSystems lets consumers place a security alert on their credit data to make it more difficult for ID thieves to fraudulently obtain checking and savings accounts. For more information on doing that with ChexSystems, see this link.

Q: Anything else?

A: ID thieves like to intercept offers of new credit and insurance sent via postal mail, so it’s a good idea to opt out of pre-approved credit offers. If you decide that you don’t want to receive prescreened offers of credit and insurance, you have two choices: You can opt out of receiving them for five years or opt out of receiving them permanently.

To opt out for five years: Call toll-free 1-888-5-OPT-OUT (1-888-567-8688) or visit The phone number and website are operated by the major consumer reporting companies.

To opt out permanently: You can begin the permanent Opt-Out process online at To complete your request, you must return the signed Permanent Opt-Out Election form, which will be provided after you initiate your online request. 


CryptogramA Hardware Privacy Monitor for iPhones

Andrew "bunnie" Huang and Edward Snowden have designed a hardware device that attaches to an iPhone and monitors it for malicious surveillance activities, even in instances where the phone's operating system has been compromised. They call it an Introspection Engine, and their use model is a journalist who is concerned about government surveillance:

Our introspection engine is designed with the following goals in mind:

  1. Completely open source and user-inspectable ("You don't have to trust us")

  2. Introspection operations are performed by an execution domain completely separated from the phone's CPU ("don't rely on those with impaired judgment to fairly judge their state")

  3. Proper operation of introspection system can be field-verified (guard against "evil maid" attacks and hardware failures)

  4. Difficult to trigger a false positive (users ignore or disable security alerts when there are too many positives)

  5. Difficult to induce a false negative, even with signed firmware updates ("don't trust the system vendor" -- state-level adversaries with full cooperation of system vendors should not be able to craft signed firmware updates that spoof or bypass the introspection engine)

  6. As much as possible, the introspection system should be passive and difficult to detect by the phone's operating system (prevent black-listing/targeting of users based on introspection engine signatures)

  7. Simple, intuitive user interface requiring no specialized knowledge to interpret or operate (avoid user error leading to false negatives; "journalists shouldn't have to be cryptographers to be safe")

  8. Final solution should be usable on a daily basis, with minimal impact on workflow (avoid forcing field reporters into the choice between their personal security and being an effective journalist)

This looks like fantastic work, and they have a working prototype.

Of course, this does nothing to stop all the legitimate surveillance that happens over a cell phone: location tracking, records of who you talk to, and so on.

BoingBoing post.

Worse Than FailureCodeSOD: A Bad Route

Ah, consumer products. Regardless of what the product in question is, there’s a certain amount of “design” that goes into the device. Not design which might make the product more user-friendly, or useful, or in any way better. No, “design”, which means it looks nicer on the shelf at Target, or Best Buy, or has a better image on its Amazon listing. The manufacturer wants you to buy it, but they don’t really care if you use it.

This thinking extends to any software that may be on the device. This is obviously true if it’s your basic Internet of Garbage device, but it’s often true of something we depend on far more: consumer grade routers.

Micha Koryak just bought a new router, and the first thing he did was peek through the code on the device. Like most routers, it has a web-based configuration tool, and thus it has a directory called “applets” which contains JavaScript.

JavaScript like this:

function a6(ba) {
    if (ba == "0") {
        return ad.find("#wireless-channel-auto").text()
    } else {
        if (ba == "1") {
            return "1 - 2.412 GHz"
        } else {
            if (ba == "2") {
                return "2 - 2.417 GHz"
            } else {
                if (ba == "3") {
                    return "3 - 2.422 GHz"
                } else {
                    if (ba == "4") {
                        return "4 - 2.427 GHz"
                    } else {
                        if (ba == "5") {
                            return "5 - 2.432 GHz"
                        } else {
                            if (ba == "6") {
                                return "6 - 2.437 GHz"
                            } else {
                                if (ba == "7") {
                                    return "7 - 2.442 GHz"
                                } else {
                                    if (ba == "8") {
                                        return "8 - 2.447 GHz"
                                    } else {
                                        if (ba == "9") {
                                            return "9 - 2.452 GHz"
                                        } else {
                                            if (ba == "10") {
                                                return "10 - 2.457 GHz"
                                            } else {
                                                if (ba == "11") {
                                                    return "11 - 2.462 GHz"
                                                } else {
                                                    if (ba == "12") {
                                                        return "12 - 2.467 GHz"
                                                    } else {
                                                        if (ba == "13") {
                                                            return "13 - 2.472 GHz"
                                                        } else {
                                                            if (ba == "14") {
                                                                return "14 - 2.484 GHz"
                                                            } else {
                                                                if (ba == "34") {
                                                                    return "34 - 5.170 GHz"
                                                                } else {
                                                                    if (ba == "36") {
                                                                        return "36 - 5.180 GHz"
                                                                    } else {
                                                                        if (ba == "38") {
                                                                            return "38 - 5.190 GHz"
                                                                        } else {
                                                                            if (ba == "40") {
                                                                                return "40 - 5.200 GHz"
                                                                            } else {
                                                                                if (ba == "42") {
                                                                                    return "42 - 5.210 GHz"
                                                                                } else {
                                                                                    if (ba == "44") {
                                                                                        return "44 - 5.220 GHz"
                                                                                    } else {
                                                                                        if (ba == "46") {
                                                                                            return "46 - 5.230 GHz"
                                                                                        } else {
                                                                                            if (ba == "48") {
                                                                                                return "48 - 5.240 GHz"
                                                                                            } else {
                                                                                                if (ba == "52") {
                                                                                                    return "52 - 5.260 GHz"
                                                                                                } else {
                                                                                                    if (ba == "56") {
                                                                                                        return "56 - 5.280 GHz"
                                                                                                    } else {
                                                                                                        if (ba == "60") {
                                                                                                            return "60 - 5.300 GHz"
                                                                                                        } else {
                                                                                                            if (ba == "64") {
                                                                                                                return "64 - 5.320 GHz"
                                                                                                            } else {
                                                                                                                if (ba == "100") {
                                                                                                                    return "100 - 5.500 GHz"
                                                                                                                } else {
                                                                                                                    if (ba == "104") {
                                                                                                                        return "104 - 5.520 GHz"
                                                                                                                    } else {
                                                                                                                        if (ba == "108") {
                                                                                                                            return "108 - 5.540 GHz"
                                                                                                                        } else {
                                                                                                                            if (ba == "112") {
                                                                                                                                return "112 - 5.560 GHz"
                                                                                                                            } else {
                                                                                                                                if (ba == "116") {
                                                                                                                                    return "116 - 5.580 GHz"
                                                                                                                                } else {
                                                                                                                                    if (ba == "120") {
                                                                                                                                        return "120 - 5.600 GHz"
                                                                                                                                    } else {
                                                                                                                                        if (ba == "124") {
                                                                                                                                            return "124 - 5.620 GHz"
                                                                                                                                        } else {
                                                                                                                                            if (ba == "128") {
                                                                                                                                                return "128 - 5.640 GHz"
                                                                                                                                            } else {
                                                                                                                                                if (ba == "132") {
                                                                                                                                                    return "132 - 5.660 GHz"
                                                                                                                                                } else {
                                                                                                                                                    if (ba == "136") {
                                                                                                                                                        return "136 - 5.680 GHz"
                                                                                                                                                    } else {
                                                                                                                                                        if (ba == "140") {
                                                                                                                                                            return "140 - 5.700 GHz"
                                                                                                                                                        } else {
                                                                                                                                                            if (ba == "149") {
                                                                                                                                                                return "149 - 5.745 GHz"
                                                                                                                                                            } else {
                                                                                                                                                                if (ba == "153") {
                                                                                                                                                                    return "153 - 5.765 GHz"
                                                                                                                                                                } else {
                                                                                                                                                                    if (ba == "157") {
                                                                                                                                                                        return "157 - 5.785 GHz"
                                                                                                                                                                    } else {
                                                                                                                                                                        if (ba == "161") {
                                                                                                                                                                            return "161 - 5.805 GHz"
                                                                                                                                                                        } else {
                                                                                                                                                                            if (ba == "165") {
                                                                                                                                                                                return "165 - 5.825 GHz"
                                                                                                                                                                            } else {
                                                                                                                                                                                if (ba == "184") {
                                                                                                                                                                                    return "184 - 4.920 GHz"
                                                                                                                                                                                } else {
                                                                                                                                                                                    if (ba == "188") {
                                                                                                                                                                                        return "188 - 4.940 GHz"
                                                                                                                                                                                    } else {
                                                                                                                                                                                        if (ba == "192") {
                                                                                                                                                                                            return "192 - 4.960 GHz"
                                                                                                                                                                                        } else {
                                                                                                                                                                                            if (ba == "196") {
                                                                                                                                                                                                return "196 - 4.980 GHz"
                                                                                                                                                                                            } else {
                                                                                                                                                                                                return ""
                                                                                                                                                                                            }
                                                                                                                                                                                        }
                                                                                                                                                                                    }
                                                                                                                                                                                }
                                                                                                                                                                            }
                                                                                                                                                                        }
                                                                                                                                                                    }
                                                                                                                                                                }
                                                                                                                                                            }
                                                                                                                                                        }
                                                                                                                                                    }
                                                                                                                                                }
                                                                                                                                            }
                                                                                                                                        }
                                                                                                                                    }
                                                                                                                                }
                                                                                                                            }
                                                                                                                        }
                                                                                                                    }
                                                                                                                }
                                                                                                            }
                                                                                                        }
                                                                                                    }
                                                                                                }
                                                                                            }
                                                                                        }
                                                                                    }
                                                                                }
                                                                            }
                                                                        }
                                                                    }
                                                                }
                                                            }
                                                        }
                                                    }
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
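For contrast, the entire ladder collapses into a lookup table. Here's a sketch of that approach (the `channelLabel` name is mine, and the table is abbreviated to a handful of channels; the original's special case for channel 0, which pulls an “auto” label out of the DOM via jQuery, is omitted):

```javascript
// Channel number -> center frequency in GHz, taken from the
// values hardcoded in the original if-ladder (abbreviated here).
const CHANNEL_FREQ = {
    "1": "2.412", "6": "2.437", "11": "2.462", "14": "2.484",
    "36": "5.180", "149": "5.745", "196": "4.980"
};

// Returns the same "N - X.XXX GHz" label as the original,
// or "" for an unknown channel.
function channelLabel(ch) {
    const freq = CHANNEL_FREQ[ch];
    return freq ? ch + " - " + freq + " GHz" : "";
}
```

Two lines of logic instead of two hundred, and adding a channel means adding one table entry rather than one more floor on the staircase of doom.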