Planet Russell


Cryptogram: Department of Commerce Report on the Botnet Threat

Last month, the US Department of Commerce released a report on the threat of botnets and what to do about it. I note that it explicitly said that the IoT makes the threat worse, and that the solutions are largely economic.

The Departments determined that the opportunities and challenges in working toward dramatically reducing threats from automated, distributed attacks can be summarized in six principal themes.

  1. Automated, distributed attacks are a global problem. The majority of the compromised devices in recent noteworthy botnets have been geographically located outside the United States. To increase the resilience of the Internet and communications ecosystem against these threats, many of which originate outside the United States, we must continue to work closely with international partners.

  2. Effective tools exist, but are not widely used. While there remains room for improvement, the tools, processes, and practices required to significantly enhance the resilience of the Internet and communications ecosystem are widely available, and are routinely applied in selected market sectors. However, they are not part of common practices for product development and deployment in many other sectors for a variety of reasons, including (but not limited to) lack of awareness, cost avoidance, insufficient technical expertise, and lack of market incentives.

  3. Products should be secured during all stages of the lifecycle. Devices that are vulnerable at time of deployment, lack facilities to patch vulnerabilities after discovery, or remain in service after vendor support ends make assembling automated, distributed threats far too easy.

  4. Awareness and education are needed. Home users and some enterprise customers are often unaware of the role their devices could play in a botnet attack and may not fully understand the merits of available technical controls. Product developers, manufacturers, and infrastructure operators often lack the knowledge and skills necessary to deploy tools, processes, and practices that would make the ecosystem more resilient.

  5. Market incentives should be more effectively aligned. Market incentives do not currently appear to align with the goal of "dramatically reducing threats perpetrated by automated and distributed attacks." Product developers, manufacturers, and vendors are motivated to minimize cost and time to market, rather than to build in security or offer efficient security updates. Market incentives must be realigned to promote a better balance between security and convenience when developing products.

  6. Automated, distributed attacks are an ecosystem-wide challenge. No single stakeholder community can address the problem in isolation.


The Departments identified five complementary and mutually supportive goals that, if realized, would dramatically reduce the threat of automated, distributed attacks and improve the resilience and redundancy of the ecosystem. A list of suggested actions for key stakeholders reinforces each goal. The goals are:

  • Goal 1: Identify a clear pathway toward an adaptable, sustainable, and secure technology marketplace.
  • Goal 2: Promote innovation in the infrastructure for dynamic adaptation to evolving threats.
  • Goal 3: Promote innovation at the edge of the network to prevent, detect, and mitigate automated, distributed attacks.
  • Goal 4: Promote and support coalitions between the security, infrastructure, and operational technology communities domestically and around the world.
  • Goal 5: Increase awareness and education across the ecosystem.

Worse Than Failure: Reproducible Heisenbug

Illustration of Heisenberg Uncertainty Principle

Matt had just wrapped up work on a demo program for an IDE his company had been selling for the past few years. It was something many customers had requested, believing the documentation wasn't illustrative enough. Matt's program would exhibit the IDE's capabilities and also provide sample code to help others get started on their own creations.

It was now time for the testers to do their thing with the demo app. Following the QA team's instructions, Matt changed the Debug parameter in the configuration file from 4 (full debugging) to 1 (no debugging). Build and deploy completed without a hitch. Matt sent off the WAR file, feeling good about his programming aptitude and life in general.

And then his desk phone rang. The caller ID revealed it was Ibrahim, one of the testers down in QA.

Already? Matt wondered. With a mix of confusion and annoyance, he picked up the phone, assuming it was something PEBKAC-related.

"I've got no descriptors for the checkboxes on the main page," Ibrahim told him. "And the page after that has been built all skew-whiff."

"Huh?" Matt frowned. "Everything works fine on my side."

What could be different about Ibrahim's setup? The first thing Matt thought of was that he'd disabled debugging before building the WAR file for QA.

That can't be it! But it was easy enough to test.

"Hang on one sec here." Matt muted his phone, then changed the Debug parameter on his local deployment from 4 to 1. Indeed, upon refreshing, the user interface went wonky, just as Ibrahim had described. Unfortunately, with debugging off, Matt couldn't check the logs for a clue as to what was going wrong.

Back on the phone, Matt explained how he was able to reproduce the problem, then instructed Ibrahim on manually hacking the WAR file to change the Debug parameter. Ibrahim reported that with full debugging enabled, the program worked perfectly on his end.

"OK. Lemme see what I can do," Matt said, trying not to sound as hopeless as he felt.

With absolutely no hints to guide him, Matt spent hours stepping through his code to figure out what was going wrong. At long last, he isolated a misbehaving repeat-until loop. When the Debug parameter was set to 4, the program exited the loop and returned data as expected. But when Debug was set to anything less than 4, it made an extra increment of the loop counter, leading to the graphical mayhem experienced earlier.
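The failure mode Matt isolated — a loop whose exit test silently depends on the debug level — can be sketched with a hypothetical reconstruction in Python. The demo's actual language and identifiers are unknown, so everything here (the function, the counter, the comparison) is invented purely to illustrate the bug class:

```python
def count_items(n, debug=4):
    """Hypothetical sketch of the bug class described above: the loop's
    exit test checks the wrong boolean, so with debugging off the
    counter is incremented one extra time."""
    i = 0
    while True:          # emulates a repeat-until loop: body runs first
        i += 1
        if debug == 4:   # full debugging: correct exit test
            if i >= n:
                break
        else:            # debugging off: wrong comparison, off by one
            if i > n:
                break
    return i

print(count_items(5))           # 5 with full debugging
print(count_items(5, debug=1))  # 6 with debugging off
```

The point of the sketch is only that a branch on the debug flag reaches a different exit condition, which is exactly why the bug vanished whenever anyone turned debugging on to look for it.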

Horror crept down Matt's spine. This problem would affect anyone using repeat-until loops in conjunction with the IDE. Such programs were bound to fail in unexpected ways. He immediately issued a bug report, suggesting this needed to be addressed urgently.

Later that day, he received an email from one of the IDE developers. I found where it was testing the wrong boolean. Should we raise this as a defect?

"Yes! Duh!" Matt grumbled out loud, then took to typing. And can we find out where this bug crept in? All projects released since that time are compromised!!

As it turned out, the bug had been introduced to the IDE 2 years earlier. It'd been found almost immediately and fixed. Unfortunately, it'd only been fixed in one specific branch within source control—a branch that had never been merged to the trunk.


Krebs on Security: Patch Tuesday, July 2018 Edition

Microsoft and Adobe each issued security updates for their products today. Microsoft’s July patch batch includes 14 updates to fix more than 50 security flaws in Windows and associated software. Separately, Adobe has pushed out an update for its Flash Player browser plugin, as well as a monster patch bundle for Adobe Reader/Acrobat.

According to security firm Qualys, all but two of the “critical” fixes in this round of updates apply to vulnerabilities in Microsoft’s browsers — Internet Explorer and Edge. Critical patches mend software flaws that can be exploited remotely by malicious software or bad guys with little to no help from the user, save for perhaps visiting a Web site or opening a booby-trapped link.

Microsoft also patched dangerous vulnerabilities in its .NET Framework (a Windows development platform required by many third-party programs and commonly found on most versions of Windows), as well as Microsoft Office. With both of these weaknesses, an attacker could trick a victim into opening an email that contained a specially crafted Office document which loads malicious code, says Allan Liska, a threat intelligence analyst at Recorded Future.

One of the more nettlesome features of Windows 10 is that, by default, the operating system decides on its own when to install updates, very often shutting down open programs and restarting your PC in the middle of the night to do so, unless you change the defaults.

Not infrequently, Redmond ships updates that end up causing stability issues for some users, and it doesn’t hurt to wait a day or two before seeing if any major problems are reported with new updates before installing them. Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

It’s a good idea to get in the habit of backing up your computer before applying monthly updates from Microsoft. Windows has some built-in tools that can help recover from bad patches, but restoring the system to a backup image taken just before installing updates is often much less hassle, and offers added peace of mind while you’re sitting there praying for the machine to reboot successfully after patching.

As per usual on Microsoft’s Patch Tuesday, Adobe issued an update to its Flash Player browser plugin. The latest update patches at least two security vulnerabilities in the program. Microsoft’s patch bundle includes the Flash update as well.

Adobe says the Flash update addresses “critical” security holes, meaning they could be exploited by malware or miscreants to take complete, remote control over vulnerable systems. My standard advice is for readers to kick Flash to the curb, as it’s a buggy program that is a perennial favorite target of malware purveyors.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

By default, Mozilla Firefox on Windows computers with Flash installed runs Flash in a “protected mode,” which prompts the user to decide if they want to enable the plugin before Flash content runs on a Web site.

Another, perhaps less elegant, alternative to wholesale junking Flash is keeping it installed in a browser that you don’t normally use, and then only using that browser on sites that require Flash.

If you use Adobe Reader or Acrobat to manage PDF documents, you’re probably going to want to update these products soon: Adobe released updates for both today that fix more than 100 security vulnerabilities in the software titles.

Some folks may be unaware that there are other free PDF readers which aren’t quite as bloated as Adobe’s. Whether these alternative readers are more secure is another question; they certainly seem to be updated less frequently, but that may have something to do with the fact that they include far fewer features and likely less overall attack surface area.

I can’t recall the last time I had Adobe Reader installed on anything I own. My preferred PDF reader for Windows is Sumatra PDF, which is comparatively lightweight and very fast. Unfortunately, no matter how many times you change Sumatra to the default PDF reader on Windows 10, the operating system keeps defaulting to opening PDFs in Microsoft Edge.

For a detailed rundown of the individual vulnerabilities patched by Microsoft today, check out the SANS Internet Storm Center, which indexes the fixes by severity, how likely it is that each vulnerability will be exploited anytime soon, and whether specific flaws were publicly disclosed prior to today’s patch release.

According to SANS, at least three of the flaws — CVE-2018-8278, CVE-2018-8313, and CVE-2018-8314 — were previously disclosed publicly, meaning that attackers may have had a head start figuring out how to exploit these flaws for criminal gain.

As always, if you experience any problems installing or downloading these updates, please don’t hesitate to leave a comment. If past Patch Tuesday posts are any indicator, you may even find helpful responses or solutions from other readers experiencing the same issues.

Planet Debian: Ben Hutchings: Debian LTS work, June 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked 12 hours, so I have carried 3 hours over to July. Since Debian 7 "wheezy" LTS ended at the end of May, I prepared for Debian 8 "jessie" to enter LTS status.

I prepared a stable update of Linux 3.16, sent it out for review, and then released it. I rebased jessie's linux package on this, but didn't yet upload it.

Since the "jessie-backports" suite is no longer accepting updates, and there are LTS users depending on the updated kernel (Linux 4.9) there, I prepared to add it to the jessie-security suite. The source package I have prepared is similar to what was in jessie-backports, but I have renamed it to "linux-4.9" and disabled building some binary packages to avoid conflicting with the standard linux source package. I also disabled building the "udeb" packages used in the installer, since I don't expect anyone to need them and building them would require updating the "kernel-wedge" package too. I didn't upload this either, since there wasn't a new linux version in "stretch" to backport yet.


Cryptogram: Recovering Keyboard Inputs through Thermal Imaging

Researchers at the University of California, Irvine, are able to recover user passwords by way of thermal imaging. The tech is pretty straightforward, but it's interesting to think about the types of scenarios in which it might be pulled off.

Abstract: As a warm-blooded mammalian species, we humans routinely leave thermal residues on various objects with which we come in contact. This includes common input devices, such as keyboards, that are used for entering (among other things) secret information, such as passwords and PINs. Although thermal residue dissipates over time, there is always a certain time window during which thermal energy readings can be harvested from input devices to recover recently entered, and potentially sensitive, information.

To-date, there has been no systematic investigation of thermal profiles of keyboards, and thus no efforts have been made to secure them. This serves as our main motivation for constructing a means for password harvesting from keyboard thermal emanations. Specifically, we introduce Thermanator, a new post factum insider attack based on heat transfer caused by a user typing a password on a typical external keyboard. We conduct and describe a user study that collected thermal residues from 30 users entering 10 unique passwords (both weak and strong) on 4 popular commodity keyboards. Results show that entire sets of key-presses can be recovered by non-expert users as late as 30 seconds after initial password entry, while partial sets can be recovered as late as 1 minute after entry. Furthermore, we find that Hunt-and-Peck typists are particularly vulnerable. We also discuss some Thermanator mitigation strategies.

The main take-away of this work is three-fold: (1) using external keyboards to enter (already much-maligned) passwords is even less secure than previously recognized, (2) post factum (planned or impromptu) thermal imaging attacks are realistic, and finally (3) perhaps it is time to either stop using keyboards for password entry, or abandon passwords altogether.

News article.

Worse Than Failure: CodeSOD: Is the Table Empty?

Sean has a lucrative career as a consultant/contractor. As such, he spends a great deal of time in other people’s code bases, and finds things like a method with this signature:

public boolean isTableEmpty()

Already, you’re in trouble. Methods which operate directly on “tables” are a code-smell, yes, even in a data-driven application. You want to operate on business objects, and unless you’re a furniture store, tables are not business objects. You might think in those terms when building some of your lower-level components, but then you’d expect to see things like string tableName in the parameter list.

Now, maybe I’m just being opinionated. Maybe there’s a perfectly valid reason to build a method like this that I can’t imagine. Well, let’s check the implementation.

public boolean isTableEmpty() {
    boolean res = false;
    Connection conn = cpInstance.getConnection();
    try (PreparedStatement ps = conn.prepareStatement("select * from some_table")) {
        try (ResultSet rs = ps.executeQuery()) {
            if (rs.first()) {
                res = true;
            }
        }
    } catch (SQLException e) {
    } finally {
        try {
            conn.close();
        } catch (SQLException e) {
        }
    }
    return res;
}
Even if you think this method should exist, it shouldn’t exist like this. No COUNT(*) or LIMIT in the query. Using exceptions as flow control. And the best part: returning the opposite of what the method name implies. false tells us the table is empty.
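If the method must exist at all, a saner shape is easy to sketch. This is only an illustration — in Python with sqlite3, not the original Java/JDBC, and with invented names — of the two fixes the critique implies: ask the database for at most one row, and return a value that actually matches the method's name.

```python
import sqlite3

def is_table_empty(conn, table):
    # LIMIT 1 lets the database stop at the first row it finds instead
    # of fetching the whole table, and the return value matches the
    # name: True when there are no rows.
    # Note: table names cannot be bound as SQL parameters, so in real
    # code `table` should be validated against a known whitelist.
    row = conn.execute(f"SELECT 1 FROM {table} LIMIT 1").fetchone()
    return row is None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (id INTEGER)")
print(is_table_empty(conn, "some_table"))   # True
conn.execute("INSERT INTO some_table VALUES (1)")
print(is_table_empty(conn, "some_table"))   # False
```

A COUNT(*) would also work, but SELECT 1 ... LIMIT 1 avoids scanning every row on engines that can't short-circuit the count.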


Don Marti: Bug futures: business models

Recent question about futures markets on software bugs: what's the business model?

As far as I can tell, there are several available models, just as there are multiple kinds of companies that can participate in any securities or commodities market.

Cushing, Oklahoma

Oracle operator: Read bug tracker state, write futures contract state, profit. This business would take an agreed-upon share of any contract in exchange for acting as a referee. The market won't work without the oracle operator, which is needed in order to assign the correct resolution to each contract, but it's possible that a single market could trade contracts resolved by multiple oracles.

Actively managed fund: Invest in many bug futures in order to incentivize a high-level outcome, such as support for a particular use case, platform, or performance target.

Bot fund: An actively managed fund that trades automatically, using open source metrics and other metadata.

Analytics provider: Report to clients on the quality of software projects, and the market-predicted likelihood that the projects will meet the client's maintenance and improvement requirements in the future.

Stake provider: A developer participant in a bug futures market must invest to acquire a position on the fixed side of a contract. The stake provider enables low-budget developers to profit from larger contracts, by lending or by investing alongside them.

Arbitrageur: Helps to re-focus development efforts by buying the fixed side of one contract and the unfixed side of another. For example, an arbitrageur might buy the fixed side of several user-facing contracts and the unfixed side of the contract on a deeper issue whose resolution will result in a fix for them.

Arbitrageurs could also connect bug futures to other kinds of markets, such as subscriptions, token systems, or bug bounties.
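The oracle operator's job described above — read the tracker's final state, take a fee, pay out the winning side — can be made concrete with a minimal settlement sketch. Everything here is hypothetical (the names, the fee rate, the pro-rata payout rule); it is only meant to illustrate how the roles interact, not any actual Bugmark mechanism:

```python
def settle(bug_fixed, fixed_stakes, unfixed_stakes, oracle_fee=0.05):
    """Hypothetical sketch: the oracle reads the bug tracker's final
    state, takes its agreed share of the pot, and distributes the rest
    pro rata to the winning side of the contract."""
    pot = sum(fixed_stakes.values()) + sum(unfixed_stakes.values())
    fee = pot * oracle_fee
    winners = fixed_stakes if bug_fixed else unfixed_stakes
    winner_total = sum(winners.values())
    payouts = {who: (pot - fee) * stake / winner_total
               for who, stake in winners.items()}
    return payouts, fee

# Two developers back "fixed", a fund backs "unfixed"; the bug gets fixed.
payouts, fee = settle(True, {"alice": 60, "bob": 40}, {"fund": 100})
print(fee)      # 10.0
print(payouts)  # {'alice': 114.0, 'bob': 76.0}
```

Under this toy rule, a stake provider lending alice part of her 60 would simply split her payout, and an arbitrageur holding opposite sides of two contracts would run this settlement once per contract.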

Previous items in the bug futures series:

Bugmark paper

A trading market to incentivize secure software: Malvika Rao, Georg Link, Don Marti, Andy Leak & Rich Bodo (PDF) (presented at WEIS 2018)

Corporate Prediction Markets: Evidence from Google, Ford, and Firm X (PDF) by Bo Cowgill and Eric Zitzewitz.

Despite theoretically adverse conditions, we find these markets are relatively efficient, and improve upon the forecasts of experts at all three firms by as much as a 25% reduction in mean squared error.

(This paper covers a related market type, not bug futures. However some of the material about interactions of market data and corporate management could also turn out to be relevant to bug futures markets.)

Creative Commons

Pipeline monument in Cushing, Oklahoma: photo by Roy Luck for Wikimedia Commons. This file is licensed under the Creative Commons Attribution 2.0 Generic license.

Planet Debian: Reproducible builds folks: Reproducible Builds: Weekly report #167

Here’s what happened in the Reproducible Builds effort between Sunday July 1 and Saturday July 7 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Debian: Minkush Jain: KubeCon + CloudNativeCon, Copenhagen

I attended KubeCon + CloudNativeCon 2018, Europe that took place from 2nd to 4th of May. It was held in Copenhagen, Denmark. I know this post is quite late, but I still wanted to share my motivating experiences at the conference, so here it is!

I got a scholarship from The Linux Foundation, which gave me a wonderful opportunity to attend this conference. This was my first developer conference abroad and I was super-excited to attend it. I got the chance to learn more about containers, straight from the best people out there.

I attended the opening keynote sessions on 2nd May in Bella Centre, Copenhagen. The opening keynote was given by Dan Kohn, Executive Director, CNCF in an enormous hall filled with more than 4100 people. It was like everybody from the container community was present there!

The conference was very well organised considering the large scale of the event. People from all around the world were present, sharing their experience with Kubernetes.

Apart from the keynotes, I mostly attended beginner and intermediate level talks, due to the fact that some of the sessions required high technical knowledge that I didn’t possess yet.

One of the speeches that I enjoyed was given by Oliver Beattie, Head of Engineering at Monzo Bank, who talked about the Kubernetes outage that they experienced a few months ago and how they handled its consequences.

Other talks that interested me covered how Adidas is using cloud native technologies, along with the closing remarks given by the inspiring Kelsey Hightower. It was wonderful to see the growth of cloud and container technologies and their communities.

A large number of sponsor booths were present, including Red Hat, AWS, IBM, Google Cloud and Azure. They shared their workflows and the technologies they were using. I visited several booths, interacted with amazing people and got lots of stickers and goodies!

I was fortunate enough to win two raffles conducted by the sponsors! A big thank you to node.js for the Raspberry Pi kit and Mesosphere for the drone.

Raffle Winner!

Our sponsors also organised a Diversity Lunch event for the scholarship recipients. There was a great discussion on inclusion and diversity along with excellent meals.

I had face-to-face interactions with some inspiring developers and employees of tech giants. Being among the youngest to attend the conference, I had a lot to learn from everyone around me and grow my network.

On the day before the last, an all-attendee party and dinner was held in Tivoli Gardens, in the heart of the city. The evening was filled with amusement rides, beautiful gardens, and more. What more would you expect?

Event in Tivoli Gardens, Copenhagen

I would like to express gratitude to CNCF, The Linux Foundation and Wendy West for this opportunity, and for helping the community become more diverse. I look forward to attending more such events in the future!

Photographs by Cloud Native Foundation


Planet Debian: Charles Plessy: Still not going to Debconf....

I was looking forward to this year's Debconf in Taiwan, the first in Asia, and to the prospect of attending it with no jet lag, but I happen to be moving to Okinawa and changing jobs on August 1st, right in the middle of it...

Moving brings a mix of happiness and excitement about what I am about to find, and melancholy about what and whom I am about to leave. But flights to Tôkyô and Yokohama are very affordable.

Special thanks to the Tôkyô Debian study group, where I got my GPG key signed by Debian developers a long time ago :)

Planet Debian: Bálint Réczey: Run Ubuntu on Windows, even multiple releases in parallel!

Running Linux terminals on Windows takes just a few clicks these days: we can install Ubuntu, Debian and other distributions right from the Store as apps, without the old days’ hassle of dual-booting or starting virtual machines. It just works, and it works even in enterprise environments where installation policies are tightly controlled.

If you check the Linux distribution apps based on the Windows Subsystem for Linux technology you may notice that there is not only one Ubuntu app, but there are already three, Ubuntu, Ubuntu 16.04 and Ubuntu 18.04. This is no accident. It matches the traditional Ubuntu release offering where the LTS releases are supported for long periods and there is always a recommended LTS release for production:

  • Ubuntu 16.04 (code name: Xenial) was the first release really rocking on WSL and it will be updated in the Store until 16.04’s EOL in April 2021.
  • Ubuntu 18.04 (code name: Bionic) is the current LTS release (also rocking :-)) and the first one supporting even ARM64 systems on Windows. It will be updated in the Store until 18.04’s EOL in April 2023.
  • Ubuntu (without the release version) always follows the recommended release, switching over to the next one when it gets its first point release. Right now it installs Ubuntu 16.04 and will switch to 18.04.1 on 26th July, 2018.

The apps in the Store are like installation kits. Each app creates a separate root file system in which Ubuntu terminals are opened but app updates don’t change the root file system afterwards. Installing a different app in parallel creates a different root file system allowing you to have both Ubuntu LTS releases installed and running in case you need it for keeping compatibility with other external systems. You can also upgrade your Ubuntu 16.04 to 18.04 by running ‘do-release-upgrade’ and have three different systems running in parallel, separating production and sandboxes for experiments.

What amazes me in the WSL technology is not only that Linux programs running directly on Windows perform surprisingly well (benchmarks), but also the range of programs you can run unmodified without any issues and without the large memory overhead of virtual machines.

I hope you will enjoy the power of the Linux terminals on Windows at least as much as we enjoyed building the apps at Canonical, working closely with Microsoft to make it awesome!

Planet Debian: Markus Koschany: My Free Software Activities in June 2018

Here is my monthly report covering what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I advocated Phil Morrell to become Debian Maintainer with whom I have previously worked together on corsix-th. This month I sponsored his updates for scorched3d and a new package, an installer for DRM-free commercial games: basically a collection of shell scripts that create a wrapper around games from stores such as Steam and put them into a Debian package, which is then seamlessly integrated into the user’s system. Similar software includes game-data-packager, playonlinux and lutris (not yet in Debian).
  • I packaged new upstream releases of blockattack, renpy, atomix and minetest, and also backported the new Minetest version to Stretch later on.
  • I uploaded RC bug fixes from Peter de Wachter for torus-trooper, tumiki-fighters and val-and-rick and moved the packages to Git.
  • I tackled an RC bug (#897548) in yabause, a Saturn emulator.
  • I sponsored connectagram, cutemaze and tanglet updates for Innocent de Marchi.
  • Last but not least I refreshed the packaging of trophy and sauerbraten which had not seen any updates for the last couple of years.

Debian Java

  • I packaged a new upstream release of activemq and could later address #901366 thanks to a bug report by Chris Donoghue.
  • I also packaged upstream releases of bouncycastle, libpdfbox-java, libpdfbox2-java because of reported security vulnerabilities.
  • I investigated and fixed RC bugs in openjpa (#901045), osgi-foundation-ee (#893382) and ditaa (#897494, Java 10 related).
  • A snakeyaml update introduced a regression in apktool (#902666) which was only visible at runtime. Once known I could fix it.
  • I worked on Netbeans again. It can be built from source now, but there is still a runtime error (#891957) that prevents users from starting the application. The current plan is to package the latest release candidate of Netbeans 9 and move forward.

Debian LTS

This was my twenty-eighth month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 18.06.2018 until 24.06.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in jasperreports, 389-ds-base, asterisk, lava-server, libidn, php-horde-image, tomcat8, thunderbird, glusterfs, ansible, mercurial, php5, jquery, redis, redmine, libspring-java, php-horde-crypt, mupdf, binutils, jetty9 and libpdfbox-java.
  • DSA-4221-1. Issued a security update for libvncserver fixing 1 CVE.
  • DLA-1398-1. Issued a security update for php-horde-crypt fixing 2 CVE.
  • DLA-1399-1. Issued a security update for ruby-passenger fixing 2 CVE.
  • DLA-1411-1. Issued a security update for tiff fixing 5 CVE.
  • DLA-1410-1. Issued a security update for python-pysaml fixing 2 CVE.
  • DLA-1418-1. Issued a security update for bouncycastle fixing 7 CVE.


Extended Long Term Support (ELTS) is a new project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my first month and I have been paid to work 7 hours on ELTS.

  • ELA-1-1. Issued a security update for Git fixing 1 CVE.
  • ELA-8-1. Issued a security update for ruby-passenger fixing 1 CVE.
  • ELA-14-1. Backported the Linux 3.16 kernel from Jessie to Wheezy. This update also included backports of initramfs-tools and the linux-latest source package. The new kernel is available for amd64 and i386 architectures.


  • I prepared security updates for libvncserver (Stretch, DSA-4221-1, and Sid) and bouncycastle (Stretch, DSA-4233-1).

Thanks for reading and see you next time.

TED: Ghana eliminates trachoma, The Bail Project opens a fifth site: Updates from The Audacious Project

At One Acre Fund’s soil lab in Kenya, soil samples from small farms are analyzed to help the farmers select the right seeds and fertilizer to maximize their yields. Photo: Courtesy of One Acre Fund

Their ideas are big — aimed at impacting millions of lives or creating sweeping global change. Three months after the first project leaders of The Audacious Project stood on the TED stage and shared their ambitious plans, things are already starting to happen. Below, enjoy the latest news.

The drive to end trachoma

When Sightsavers and its partners started working in Ghana in 2000, about 2.8 million people in the country were at risk of contracting trachoma, an ancient disease that eventually causes blindness. But on June 13, 2018, the World Health Organization announced: Ghana has eliminated trachoma. It’s a very big deal, the first country in sub-Saharan Africa to reach this milestone. Caroline Harper and her team expect more countries to follow — their goal is to end trachoma across twelve African countries. Last month’s news, she says, is proof it can be done when a country’s ministry of health teams up with the right coalition of partners.

The launch of The Bail Project

The Bail Project is gaining national momentum — since launching in January 2018, the project has bailed out more than 1,000 people in four US cities. And it recently opened a fifth site in Louisville, Kentucky, where on any given night there are about 2,100 people and fewer than 1,800 beds in the Department of Corrections jail. The department estimates that 77 percent of those being held are there because they can’t afford to pay bail. The Bail Project aims to help as many of them as possible return to their families to await trial. Next up for Robin Steinberg and her team: Detroit, where The Bail Project will work with the Detroit Justice Center to assist residents who can’t otherwise pay their bail bonds.

Two new missions explore the twilight zone

Most people know The Twilight Zone as a vintage television show. Now, more people are getting to know it as the vast, dark midwater region of the ocean. On World Ocean Day in early June, as TED posted a talk from Heidi Sosik of Woods Hole Oceanographic Institution (WHOI), both The New York Times and Washington Post ran op-eds on why exploring the twilight zone is so critical. Both called for increasing our knowledge of the region before commercial interests can exploit it. WHOI’s far-reaching twilight zone exploration will begin in August. One mission, leaving from Rhode Island, will test DEEP-SEE, a new instrument designed to gather acoustical data and imagery. And a second, leaving from Seattle (funded by both NASA and the National Science Foundation), will study how phytoplankton and other organisms move carbon through the ocean to the twilight zone, making it a critical part of the climate system.

The satellite to curb methane

Last month, Environmental Defense Fund (EDF) released a study showing that US oil and natural gas companies are leaking 60 percent more methane than EPA estimates predicted. About 2.3 percent of overall natural gas output is lost, meaning that companies are essentially leaking $2 billion of their product. But EDF stresses the potential for this to motivate action — in fact, Shell, ExxonMobil and BP have already committed to reduction efforts. At the World Gas Conference, held in Washington D.C. in late June, EDF continued to share this message, showing how the launch of MethaneSAT will help companies and governments take action. During a panel, Fred Krupp said the satellite should be in orbit in three years. And at a booth, EDF demoed a virtual reality experience that showed just how easily methane leaks can be spotted and fixed. With headsets on, Methane CH4llenge users could play hero by stopping multiple leaks.

At the World Gas Conference in June, an attendee plays the Methane CH4llenge, spotting and fixing methane leaks. Photo: Courtesy of Environmental Defense Fund

The Woodstock for Black women’s health

T. Morgan Dixon and Vanessa Garrison of GirlTrek are laying the groundwork for next summer’s big event, the Summer of Selma. They’re on a 12-month, 50-city wellness revival that they’re calling the Road to Selma, and they’re making stops all around the country, holding Civil Rights Movement-style teach-ins for Black women. So far, they’ve been to New York, Detroit and New Orleans and are gearing up for stops in Houston, Baltimore and Kansas City. The Summer of Selma festival will be held May 24–27, 2019, and registration is expected to open later in the year.

The community health work revolution

Living Goods and Last Mile Health are on the way to their 2018 goal of equipping nearly 14,000 community health workers with mobile technology that will allow them to more effectively diagnose and treat members of their community at their doorsteps. “No one should die because they live too far from a doctor. Not in the 21st century,” said Raj Panjabi at a TIME 100 x WeWork Speaker event in June, where he highlighted how training community health workers in 30 life-saving skills has the potential to save 30 million lives by 2030. In the fall, Last Mile Health’s Community Health Academy will begin enrolling students in its first course, designed to help local leaders build community health worker programs in their countries. Reps from both Living Goods and Last Mile Health spoke on this topic at the World Health Assembly in late May, just as community health workers were applauded for circulating the vaccines that squashed the Ebola flare-up in Democratic Republic of the Congo.

Support for small-scale farmers

One Acre Fund is thinking a lot about soil and how optimizing it can help smallholder farmers boost their income and feed their families. On their blog, they gave readers a peek inside their soil analytics lab in Kakamega, Kenya, just as 3,000 samples had arrived from small farms in Rwanda to be analyzed. The goal of the analysis is two-fold: to determine the best kinds of seeds and fertilizer mixes for each farmer, and to collect data for a study on how farming practices affect soil health. This kind of research is helping Andrew Youn and his team scale and improve their overall operations. By the end of the year, they plan to serve 760,000 small-scale farmers, tracking well ahead of their goal of working with one million by 2020. This expansion is key for preventing another global food crisis — and promoting gender equality in a region where a high percentage of farmers are women, yet systems are not designed to help them thrive.

CryptogramPROPagate Code Injection Seen in the Wild

Last year, researchers wrote about a new Windows code injection technique called PROPagate. Last week, it was first seen in malware:

This technique abuses the SetWindowsSubclass function -- a process used to install or update subclass windows running on the system -- and can be used to modify the properties of windows running in the same session. This can be used to inject code and drop files while also hiding the fact it has happened, making it a useful, stealthy attack.

It's likely that the attackers have observed publically available posts on PROPagate in order to recreate the technique for their own malicious ends.

Worse Than FailureWalking on the Sun

In 1992, I worked at a shop that was all SunOS. Most people had a Sparc-1. Production boxes were the mighty Sparc-2, and secretaries had the lowly Sun 360. Somewhat typical hardware for the day.

SPARCstation 1

Sun was giving birth to their brand spanking new Solaris, and was pushing everyone to convert from SunOS. As with any OS change in a large shop, it doesn't just happen; migration planning needs to occur. All of our in-house software needed to be ported to the new Operating System.

This planning boiled down to: assign it to snoofle; let him figure it out.

This was before Sun made OpCom available to help people do their migrations.

I took the latest official code, opened an editor, grepped through the include files and compiled, for each OS. Then I went into a nine month long compile-edit-build cycle, noting the specifics of each item that required different include files/syntax/whatever. Basically, Sun had removed the Berkeley libraries when they first put out Solaris, so everything signal or messaging related had to change.

Finally, I naively thought the pain was over; it compiled. I had coalesced countless functions that had nearly identical multiple versions, deleted numerous blocks of dead code, and reduced 1.4 million LOC to about 700K. Then began the debugging cycle. That took about 3 weeks.

Then I was told not to merge it because another subteam in our group was doing a 9-month sub-project and couldn't be interrupted. Naturally, they were working in the main branch, which forced me to keep pulling and porting their code into mine several times a week, for months. Ironically, they were constantly changing dead code as part of trying to fix their own code.

You can only do this for so long before getting fed up; I'd had it and let it be known to boss+1 (who was pushing for Solaris) that this had to end. He set a date three months out, at which time I would do the merge and commit; other tasks be damned! The subteam was repeatedly informed of this drop-dead date.

So I put up with it for 3 months, then did the final merge; over 3,500 diffs. I went through them all, praying the power wouldn't cut out. After fixing a few typos and running the cursory test, I held my breath and committed. Then I told everyone to pull and merge.

It turns out that I missed 3 little bugs, but they were suffiently visible that it prevented the application from doing anything useful. The manager of the sub-team ordered me to roll it back because they were busy. I handed her the written memo from B+1 ordering me to do it on this date and told her to suck it up and give me a chance to debug it.

An hour later, it was working and committed.

I instructed everyone to pull and build, and to follow the instructions in my handout for coding going forward. Anything that broke the Solaris build would be summarily rolled back per orders from B+1.

It took a few months and hundreds of rollbacks for them to start to follow my instructions, but when they finally did, the problems ceased.

Then the managers from the other teams took my instructions and all my global edit scripts (it wasn't a perfect parser, but it at least left syntax errors if it tried to change code that was really badly formatted, so you could trivially find them and fix them very quickly).

Using my scripts and cheat sheets, my peers on the other projects managed to do their ports in just a couple of hours, and mercilessly rode me about it for the next 3 years.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianPetter Reinholdtsen: What is the most supported MIME type in Debian in 2018?

Five years ago, I measured what the most supported MIME type in Debian was, by analysing the desktop files in all packages in the archive. Since then, the DEP-11 AppStream system has been put into production, making the task a lot easier. This made me want to repeat the measurement, to see how much things changed. Here are the new numbers, for unstable only this time:

Debian Unstable:

  count MIME type
  ----- -----------------------
     56 image/jpeg
     55 image/png
     49 image/tiff
     48 image/gif
     39 image/bmp
     38 text/plain
     37 audio/mpeg
     34 application/ogg
     33 audio/x-flac
     32 audio/x-mp3
     30 audio/x-wav
     30 audio/x-vorbis+ogg
     29 image/x-portable-pixmap
     27 inode/directory
     27 image/x-portable-bitmap
     27 audio/x-mpeg
     26 application/x-ogg
     25 audio/x-mpegurl
     25 audio/ogg
     24 text/html

The list was created like this using a sid chroot: "cat /var/lib/apt/lists/*sid*_dep11_Components-amd64.yml.gz| zcat | awk '/^ - \S+\/\S+$/ {print $2 }' | sort | uniq -c | sort -nr | head -20"

It is interesting to see how image formats have passed text/plain as the most announced supported MIME type. These days, thanks to the AppStream system, if you run into a file format you do not know, and want to figure out which packages support the format, you can find the MIME type of the file using "file --mime <filename>", and then look up all packages announcing support for this format in their AppStream metadata (XML or .desktop file) using "appstreamcli what-provides mimetype <mime-type>. For example if you, like me, want to know which packages support inode/directory, you can get a list like this:

% appstreamcli what-provides mimetype inode/directory | grep Package: | sort
Package: anjuta
Package: audacious
Package: baobab
Package: cervisia
Package: chirp
Package: dolphin
Package: doublecmd-common
Package: easytag
Package: enlightenment
Package: ephoto
Package: filelight
Package: gwenview
Package: k4dirstat
Package: kaffeine
Package: kdesvn
Package: kid3
Package: kid3-qt
Package: nautilus
Package: nemo
Package: pcmanfm
Package: pcmanfm-qt
Package: qweborf
Package: ranger
Package: sirikali
Package: spacefm
Package: spacefm
Package: vifm

Using the same method, I can quickly discover that the Sketchup file format is not yet supported by any package in Debian:

% appstreamcli what-provides mimetype  application/vnd.sketchup.skp
Could not find component providing 'mimetype::application/vnd.sketchup.skp'.

Yesterday I used it to figure out which packages support the STL 3D format:

% appstreamcli what-provides mimetype  application/sla|grep Package
Package: cura
Package: meshlab
Package: printrun

PS: A new version of Cura was uploaded to Debian yesterday.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianLouis-Philippe Véronneau: Taiwan Travel Blog - Day 1

I'm going to DebConf18 later this month, and since I had some free time and I speak a somewhat understandable mandarin, I decided to take a full month of vacation in Taiwan.

I'm not sure if I'll keep blogging about this trip, but so far it's been very interesting and I felt the urge to share the beauty I've seen with the world.

This was the first proper day I spent in Taiwan. I arrived on the 8th during the afternoon, but the time I had left was all spent traveling to Hualien County (花蓮縣) were I intent to spend the rest of my time before DebConf.

Language Rant

I'm pretty pissed off at Taiwan for using traditional Chinese characters instead of simplified ones like they do in Mainland China. So yeah, even though I've been studying mandarin for a while now, I can't read much if anything at all. For those of you not familiar with mandarin, here is an example of a very common character written with simplified (后) and traditional characters (後). You don't see the resemblance between the two? Me neither.

I must say technology is making my trip much easier though. I remember a time when I had to use my pocket dictionary to lookup words and characters and it used to take me up to 5 minutes to find a single character1. That's how you end up ordering cold duck blood soup from a menu without pictures after having given up on translating it.

Now, I can simply use my smartphone and draw the character I'm looking for in my dictionary app. It's fast, it's accurate and it's much more complete than a small pocket dictionary.

Takoro National Park (太鲁阁国家公园)

Since I've seen a bunch of large cities in China already and I both dislike pollution and large amounts of people squished up in too few square meters, I rapidly decided I wasn't going to visit Taipei and would try to move out and explore one of the many national parks in Taiwan.

After looking it up, Takoro National Park in the Hualien County seemed the best option for an extended stay. It's large enough that there is a substantial tourism economy built around visiting the multiple trails of the park, there are both beginner and advanced trails you can choose from and the scenery is incredible.

Also Andrew Lee lives nearby and had a bunch of very nice advice for me, making my trip to Takoro much easier.

Swallow Gorge (燕子口)

Picture of the LiWu river in Yanzikou

The first trail I visited in the morning was Swallow Gorge. Apparently it's frequently closed because of falling rocks. Since the weather was very nice and the trail was open, I decided to start by this one.

Fun fact, at first I thought the swallow in Swallow Gorge meant swallowing, but it is swallow as in the cute bird commonly associated with spring time. The gorge is named that way because the small holes in the cliffs are used by swallows to nest. I kinda understood that when I saw a bunch of them diving and playing in the wind in front of me.

The Gorge was very pretty, but it was full of tourists and the "trail" was actually a painted line next to the road where car drives. It was also pretty short. I guess that's ok for a lot of people, but I was looking for something a little more challenging and less noisy.

Shakadang Trail (砂卡礑步道)

The second trail I visited was the Shakadang trail. The trail dates back to 1940, when the Japanese tried to use the Shakadang river for hydroelectricity.

Shakadang's river water is bright blue and extremely clear

This trail was very different from Yanzikou, being in the wild and away from cars. It was a pretty easy trail (2/5) and although part of it was paved with concrete, the more you went the wilder it got. In fact, most of the tourist gave up after the first kilometer and I had the rest of the path to myself afterwards.

Some cute purple plant growing along the river

The path is home to a variety of wild animals, plants and insects. I didn't see any wild board, but gosh damn did I saw some freakingly huge spiders. As I learnt later, Taiwan is home of the largest spiders in the world. The ones I saw (Golden silk orb-weaver, Nephila pilipes) had bodies easily 3 to 5cm long and 2cm thick, with an overall span of 20cm with their legs.

I also heard some bugs (I guess it was bugs) making a huge racket that somewhat reminded me of an old car's loose alternator belt strap on a cold winter morning.

  1. Using a Chinese dictionary is a hard thing to do since there is no alphabet. Instead, the characters are classified by the number of strokes in their radicals and then by the number of strokes in the rest of the character. 

Planet DebianIan Wienand: uwsgi; oh my!

The world of Python based web applications, WSGI, its interaction with uwsgi and various deployment methods can quickly turn into a incredible array of confusingly named acronym soup. If you jump straight into the uwsgi documentation it is almost certain you will get lost before you start!

Below tries to lay out a primer for the foundations of application deployment within devstack; a tool for creating a self-contained OpenStack environment for testing and interactive development. However, it is hopefully of more general interest for those new to some of these concepts too.


Let's start with WSGI. Fully described in PEP 333 -- Python Web Server Gateway Interface the core concept a standardised way for a Python program to be called in response to a web request. In essence, it bundles the parameters from the incoming request into known objects, and gives you can object to put data into that will get back to the requesting client. The "simplest application", taken from the PEP directly below, highlights this perfectly:

def simple_app(environ, start_response):
     """Simplest possible application object"""
     status = '200 OK'
     response_headers = [('Content-type', 'text/plain')]
     start_response(status, response_headers)
     return ['Hello world!\n']

You can start building frameworks on top of this, but yet maintain broad interoperability as you build your application. There is plenty more to it, but that's all you need to follow for now.

Using WSGI

Your WSGI based application needs to get a request from somewhere. We'll refer to the diagram below for discussions of how WSGI based applications can be deployed.

Overview of some WSGI deployment methods

In general, this is illustrating how an API end-point might be connected together to an underlying WSGI implementation written in Python ( Of course, there are going to be layers and frameworks and libraries and heavens knows what else in any real deployment. We're just concentrating on Apache integration -- the client request hits Apache first and then gets handled as described below.


Starting with 1 in the diagram above, we see CGI or "Common Gateway Interface". This is the oldest and most generic method of a web server calling an external application in response to an incoming request. The details of the request are put into environment variables and whatever process is configured to respond to that URL is fork() -ed. In essence, whatever comes back from stdout is sent back to the client and then the process is killed. The next request comes in and it starts all over again.

This can certainly be done with WSGI; above we illustrate that you'd have a framework layer that would translate the environment variables into the python environ object and connect up the processes output to gather the response.

The advantage of CGI is that it is the lowest common denominator of "call this when a request comes in". It works with anything you can exec, from shell scripts to compiled binaries. However, forking processes is expensive, and parsing the environment variables involves a lot of fiddly string processing. These become issues as you scale.


Illustrated by 2 above, it is possible to embed a Python interpreter directly into the web server and call the application from there. This is broadly how mod_python, mod_wsgi and mod_uwsgi all work.

The overheads of marshaling arguments into strings via environment variables, then unmarshaling them back to Python objects can be removed in this model. The web server handles the tricky parts of communicating with the remote client, and the module "just" needs to translate the internal structures of the request and response into the Python WSGI representation. The web server can manage the response handlers directly leading to further opportunities for performance optimisations (more persistent state, etc.).

The problem with this model is that your web server becomes part of your application. This may sound a bit silly -- of course if the web server doesn't take client requests nothing works. However, there are several situations where (as usual in computer science) a layer of abstraction can be of benefit. Being part of the web server means you have to write to its APIs and, in general, its view of the world. For example, mod_uwsgi documentation says

"This is the original module. It is solid, but incredibly ugly and does not follow a lot of apache coding convention style".


mod_python is deprecated with mod_wsgi as the replacement. These are obviously tied very closely to internal Apache concepts.

In production environments, you need things like load-balancing, high-availability and caching that all need to integrate into this model. Thus you will have to additionally ensure these various layers all integrate directly with your web server.

Since your application is the web server, any time you make small changes you essentially need to manage the whole web server; often with a complete restart. Devstack is a great example of this; where you have 5-6 different WSGI-based services running to simulate your OpenStack environment (compute service, network service, image service, block storage, etc) but you are only working on one component which you wish to iterate quickly on. Stopping everything to update one component can be tricky in both production and development.


Which brings us to uwsgi (I call this "micro-wsgi" but I don't know if it actually intended to be a μ). uwsgi is a real Swiss Army knife, and can be used in contexts that don't have to do with Python or WSGI -- which I believe is why you can get quite confused if you just start looking at it in isolation.

uwsgi lets us combine some of the advantages of being part of the web server with the advantages of abstraction. uwsgi is a complete pluggable network daemon framework, but we'll just discuss it in one context illustrated by 3.

In this model, the WSGI application runs separately to the webserver within the embedded python interpreter provided by the uwsgi daemon. uwsgi is, in parts, a web-server -- as illustrated it can talk HTTP directly if you want it to, which can be exposed directly or via a traditional proxy.

By using the proxy extension mod_proxy_uwsgi we can have the advantage of being "inside" Apache and forwarding the requests via a lightweight binary channel to the application back end. In this model, uwsgi provides a uwsgi:// service using its internal protcol on a private port. The proxy module marshals the request into small packets and forwards it to the given port. uswgi takes the incoming request, quickly unmarshals it and feeds it into the WSGI application running inside. Data is sent back via similarly fast channels as the response (note you can equally use file based Unix sockets for local only communication).

Now your application has a level of abstraction to your front end. At one extreme, you could swap out Apache for some other web server completely and feed in requests just the same. Or you can have Apache start to load-balance out requests to different backend handlers transparently.

The model works very well for multiple applications living in the same name-space. For example, in the Devstack context, it's easy with mod_proxy to have Apache doing URL matching and separate out each incoming request to its appropriate back end service; e.g.

  • http://service/identity gets routed to Keystone running at localhost:40000
  • http://service/compute gets sent to Nova at localhost:40001
  • http://service/image gets sent to glance at localhost:40002

and so on (you can see how this is exactly configured in lib/apache:write_uwsgi_config).

When a developer makes a change they simply need to restart one particular uwsgi instance with their change and the unified front-end remains untouched. In Devstack (as illustrated) the uwsgi processes are further wrapped into systemd services which facilitates easy life-cycle and log management. Of course you can imagine you start getting containers involved, then container orchestrators, then clouds-on-clouds ...


There's no right or wrong way to deploy complex web applications. But using an Apache front end, proxying requests via fast channels to isolated uwsgi processes running individual WSGI-based applications can provide both good performance and implementation flexibility.


Planet DebianJonathan McDowell: Fixing a broken ESP8266

One of the IoT platforms I’ve been playing with is the ESP8266, which is a pretty incredible little chip with dev boards available for under £4. Arduino and Micropython are both great development platforms for them, but the first board I bought (back in 2016) only had a 4Mbit flash chip. As a result I spent some time writing against the Espressif C SDK and trying to fit everything into less than 256KB so that the flash could hold 2 images and allow over the air updates. Annoyingly just as I was getting to the point of success with Richard Burton’s rBoot my device started misbehaving, even when I went back to the default boot loader:

 ets Jan  8 2013,rst cause:1, boot mode:(3,6)

load 0x40100000, len 816, room 16
tail 0
chksum 0x8d
load 0x3ffe8000, len 788, room 8
tail 12
chksum 0xcf
ho 0 tail 12 room 4
load 0x3ffe8314, len 288, room 12
tail 4
chksum 0xcf
csum 0xcf

2nd boot version : 1.2
  SPI Speed      : 40MHz
  SPI Mode       : DIO
  SPI Flash Size : 4Mbit
jump to run user1

Fatal exception (0):
epc1=0x402015a4, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000000, depc=0x00000000
Fatal exception (0):
epc1=0x402015a4, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000000, depc=0x00000000
Fatal exception (0):

(repeats indefinitely)

Various things suggested this was a bad flash. I tried a clean Micropython install, a restore of the original AT firmware backup I’d taken, and lots of different combinations of my own code/the blinkenlights demo and rBoot/Espressif’s bootloader. I made sure my 3.3v supply had enough oompf (I’d previously been cheating and using the built in FT232RL regulator, which doesn’t have quite enough when the device is fully operational, rather than in UART boot mode, such as doing an OTA flash). No joy. I gave up and moved on to one of the other ESP8266 modules I had, with a greater amount of flash. However I was curious about whether this was simply a case of the flash chip wearing out (various sites claim the cheap ones on some dev boards will die after a fairly small number of programming cycles). So I ordered some 16Mb devices - cheap enough to make it worth trying out, but also giving a useful bump in space.

They arrived this week and I set about removing the old chip and soldering on the new one (Andreas Spiess has a useful video of this, or there’s Pete Scargill’s write up). Powered it all up, ran flash_id to see that it was correctly detected as a 16Mb/2MB device and set about flashing my app onto it. Only to get:

 ets Jan  8 2013,rst cause:2, boot mode:(3,3)

load 0x40100000, len 612, room 16
tail 4
chksum 0xfd
load 0x88380000, len 565951362, room 4
flash read err, ets_unpack_flash_code

Ooops. I had better luck with a complete flash erase ( erase_flash) and then a full program of Micropython using --baud 460800 write_flash --flash_size=detect -fm dio 0 esp8266-20180511-v1.9.4.bin, which at least convinced me I’d managed to solder the new chip on correctly. Further experimention revealed I needed to pass all of the flash parameters to to get rBoot entirely happy, and include esp_init_data_default.bin (FWIW I updated everything to v2.2.1 as part of the process): write_flash --flash_size=16m -fm dio 0x0 rboot.bin 0x2000 rom0.bin \
    0x120000 rom1.bin 0x1fc000 esp_init_data_default_v08.bin

Which gives (at the default 76200 of the bootloader bit):

 ets Jan  8 2013,rst cause:1, boot mode:(3,7)

load 0x40100000, len 1328, room 16
tail 0
chksum 0x12
load 0x3ffe8000, len 604, room 8
tail 4
chksum 0x34
csum 0x34

rBoot v1.4.2 -
Flash Size:   16 Mbit
Flash Mode:   DIO
Flash Speed:  40 MHz

Booting rom 0.
rf cal sector: 507
freq trace enable 0

Given the cost of the modules it wasn’t really worth my time and energy to actually fix the broken one rather than buying a new one, but it was rewarding to be sure of the root cause. Hopefully this post at least serves to help anyone seeing the same exception messages determine that there’s a good chance their flash has died, and that a replacement may sort the problem.

Planet DebianPetter Reinholdtsen: Debian APT upgrade without enough free space on the disk...

Quite regularly, I let my Debian Sid/Unstable chroot stay untouch for a while, and when I need to update it there is not enough free space on the disk for apt to do a normal 'apt upgrade'. I normally would resolve the issue by doing 'apt install <somepackages>' to upgrade only some of the packages in one batch, until the amount of packages to download fall below the amount of free space available. Today, I had about 500 packages to upgrade, and after a while I got tired of trying to install chunks of packages manually. I concluded that I did not have the spare hours required to complete the task, and decided to see if I could automate it. I came up with this small script which I call 'apt-in-chunks':

# Upgrade packages when the disk is too full to upgrade every
# upgradable package in one lump.  Fetching packages to upgrade using
# apt, and then installing using dpkg, to avoid changing the package
# flag for manual/automatic.

set -e

ignore() {
    if [ "$1" ]; then
	grep -v "$1"

for p in $(apt list --upgradable | ignore "$@" |cut -d/ -f1 | grep -v '^Listing...'); do
    echo "Upgrading $p"
    apt clean
    apt install --download-only -y $p
    for f in /var/cache/apt/archives/*.deb; do
	if [ -e "$f" ]; then
	    dpkg -i /var/cache/apt/archives/*.deb

The script will extract the list of packages to upgrade, try to download the packages needed to upgrade one package, install the downloaded packages using dpkg. The idea is to upgrade packages without changing the APT mark for the package (ie the one recording of the package was manually requested or pulled in as a dependency). To use it, simply run it as root from the command line. If it fail, try 'apt install -f' to clean up the mess and run the script again. This might happen if the new packages conflict with one of the old packages. dpkg is unable to remove, while apt can do this.

It take one option, a package to ignore in the list of packages to upgrade. The option to ignore a package is there to be able to skip the packages that are simply too large to unpack. Today this was 'ghc', but I have run into other large packages causing similar problems earlier (like TeX).

Update 2018-07-08: Thanks to Paul Wise, I am aware of two alternative ways to handle this. The "unattended-upgrades --minimal-upgrade-steps" option will try to calculate upgrade sets for each package to upgrade, and then upgrade them in order, smallest set first. It might be a better option than the script mentioned above. Also, "aptitude upgrade" can upgrade single packages, thus avoiding the need for "dpkg -i" in the script above.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Don Martitake the YouTube advertisers bowling

What if there is a better way forward on the whole Safe Harbor controversy and Article 13?

Companies don't advertise on sites like YouTube, sites teeming with copyright infringers and nationalist extremists, because those companies are run by copyright infringers or nationalist extremists. Rather, marketing decision-makers are incentivized to play a corrupt online advertising game that rewards them for supporting infringement and extremism.

So the trick here is to help people move marketing money out of bad things (negative externalities) and toward good things (positive externalities). We know that YouTube is a brand-unsafe shitshow because Google won't advertise its own end-user-facing products and services there without a whole extra layer of brand safety protection.

Big Internet companies are set up to insulate decision-makers from the consequences of their own online asshattery, anyway. The way to affect those big Internet companies is through their advertisers. So how about a tweak to Article 13? Instead of putting the consequences of infringement on the "online content sharing service provider," put it on the brand advertised. This should help in several ways.

  • Give legit services some flexibility. If your web site's business model is anything other than "get cheap eyeballs with other people's creative work" or "get cheap eyeballs by recommending divisive bullshit" then you don't have to change a thing.

  • Incentivize sites to pay for new creative work, by making works covered by an author or artist contract a more attractive place for paid advertising than "content" uploaded by random users.

  • Make it easier for marketers who want to do the right thing, by pointing out the risks of supporting bad people.

  • Move some of the risks of online advertising away from the public and toward the people who can make a difference.

How about it?

Planet DebianMinkush Jain: Getting Started with Debian Packaging

One of my tasks in GSoC involved setting up Thunderbird extensions for the user. Some of the more popular add-ons, like ‘Lightning’ (a calendar organiser), already have a Debian package.

Another important add-on is ‘CardBook’, which is used to manage the user’s contacts based on the CardDAV and vCard standards. But it doesn’t have a package yet.

My mentor, Daniel, motivated me to create a package for it and upload it, as that would ease the installation process: it could then be installed through apt-get. This blog describes how I learned about and created a Debian package for CardBook from scratch.

Since I was new to packaging, I did extensive research on the basics of building a package from source code, and checked that the license was DFSG-compatible.

I learned from various Debian wiki guides, like ‘Packaging Intro’ and ‘Building a Package’, and from blogs.

I also studied the amd64 files included in the Lightning extension package.

The package I created can be found here.

Debian Package

Creating an empty package

I started by creating a debian directory using the dh_make command:

# Empty project folder
$ mkdir -p Debian/cardbook

# create files
$ dh_make \
> --native \
> --single \
> --packagename cardbook_1.0.0 \
> --email

Some important files, like control, rules, changelog and copyright, are initialized with it.

The list of all the files created:

$ find ./debian
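For reference, a fresh dh_make run of this kind typically leaves files along these lines under debian/ (the exact list varies with the dh_make version, and the .ex files are examples meant to be adapted or deleted):

```
debian/changelog
debian/control
debian/copyright
debian/rules
debian/source/format
debian/README.Debian
debian/*.ex          (example files to adapt or delete)
```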

I gained an understanding of the dpkg package management program in Debian and its use to install, remove and manage packages.

I built an empty package with dpkg commands. This created an empty package along with four files, namely .changes, .deb, .dsc and .tar.gz

The .dsc file contains the changes made and the signature

The .deb file is the main package file, which can be installed

The .tar.gz tarball contains the source package

The process also created the README and changelog files in /usr/share. They contain the essential notes about the package like description, author and version.

I installed the package and checked the installed package contents. My new package mentions the version, architecture and description!

$ dpkg -L cardbook

Including CardBook source files

After successfully creating an empty package, I added the actual CardBook add-on files inside the package. CardBook’s codebase is hosted here on GitLab. I included all the source files inside another directory and told the package build command which files to include in the package.

I did this by creating a file debian/install using the vi editor and listing the directories that should be installed. In this process I spent some time learning to use Linux terminal-based text editors like vi. It helped me become familiar with editing, creating new files and shortcuts in vi.
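A debian/install file is simply a list of "source destination-directory" pairs, one per line, telling debhelper what to copy into the package. A minimal illustrative sketch (the paths below are hypothetical, not CardBook's actual layout):

```
# debian/install: <source> <destination directory>
chrome usr/share/cardbook
install.rdf usr/share/cardbook
```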

Once this was done, I updated the package version in the changelog file to document the changes I had made.

$ dpkg -l | grep cardbook
ii  cardbook       1.1.0          amd64        Thunderbird add-on for address book

Changelog file after updating the package

After rebuilding it, dependencies and a detailed description can be added if necessary. The Debian control file can be edited to add the additional package requirements and dependencies.

Local Debian Repository

Without creating a local repository, CardBook could be installed with:

$ sudo dpkg -i cardbook_1.1.0.deb

To actually test the installation of the package, I decided to build a local Debian repository. Without one, the apt-get command would not locate the package, as it has not been uploaded to the Debian archive.

For configuring a local Debian repository, I copied my .deb packages into a /tmp location and generated a Packages.gz index file there.

Packages.gz in the local Debian repo

To make it work, I learned about the apt configuration and where it looks for files.

I researched ways to add my file location to the apt config. Finally, I accomplished the task by adding a *.list file with the package’s path to APT’s configuration and updating the apt cache afterwards.
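For reference, a minimal version of such a setup (all paths hypothetical) is a Packages.gz index generated next to the .deb files plus a one-line list file pointing APT at that directory:

```
# Generate the index (dpkg-scanpackages ships in the dpkg-dev package):
#   cd /tmp/debian && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

# /etc/apt/sources.list.d/cardbook-local.list:
deb [trusted=yes] file:/tmp/debian ./

# then run 'apt-get update' so APT picks up the new index
```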

Hence, the latest CardBook version could be successfully installed with apt-get install cardbook

CardBook installation through apt-get

Fixing Packaging errors and bugs

My mentor, Daniel, helped me a lot during this process and guided me on how to proceed further with the package. He told me to use Lintian to fix common packaging errors and then to use dput to finally upload the CardBook package.

Lintian is a Debian package checker which finds policy violations and bugs. It is one of the most widely used tools by Debian maintainers to automate checks against Debian policy before uploading a package.

I have uploaded the second updated version of the package to a separate branch of the repository on Salsa here, inside the Debian directory.

I installed Lintian from backports and learned to use it on a package to fix errors. I researched the abbreviations used in its errors and how to get a detailed response from lintian commands:

$ lintian -i -I --show-overrides cardbook_1.2.0.changes

When I first ran the command on the .changes file, I was surprised to see the large number of errors, warnings and notes that were displayed!

Brief errors after running Lintian on the package

Detailed Lintian errors (1)

Detailed Lintian errors (2), and many more…

I spent some days fixing errors related to Debian policy violations. I had to dig into every policy and Debian rule carefully to eradicate even a simple error. For this I referred to various sections of the Debian Policy Manual and the Debian Developer’s Reference.

I am still working on making it flawless and hope to upload it soon!

I would be grateful if people from the Debian community who use Thunderbird could help fix these errors.



Planet DebianDominique Dumont: New Software::LicenseMoreUtils Perl module


The Debian project has rather strict requirements regarding package licenses. One of these requirements is to provide a copyright file mentioning the licenses of the files included in a Debian package.

Debian also recommends providing this copyright information in a machine-readable format that contains either the whole text of the license(s) or a summary pointing to a pre-defined location on the file system (see this example).

cme and Config::Model::Dpkg::Copyright help with this task using the Software::License module. But this module lacks the following features needed to properly support the requirements of Debian packaging:

  • license summary
  • support for clauses like “GPL version 2 or (at your option) any later version”

Long story short, I’ve written Software::LicenseMoreUtils to provide these missing features. This module is a wrapper around Software::License and has the same API.

Adding license summaries for Debian only requires updating this YAML file.

This module was written for Debian while keeping other distros in mind. Debian derivatives like Ubuntu or Mint are supported. Adding license summaries for other Linux distributions is straightforward. Please submit a bug or a PR to add support for other distributions.

For more details, please see:


All the best

Planet DebianCraig Small: wordpress 4.9.7

No sooner had I patched WordPress 4.9.5 to fix the arbitrary unlink bug than I realised there is a WordPress 4.9.7 out there. This release (just out for Debian, if my Internet behaves) fixes the unlink bug found by RIPS Technologies.  However, the WordPress developers used a different method to fix it.

There will be Debian backports for WordPress that use one of these methods. It will come down to whether those older versions use hooks and how different the code is in post.php

You should update, and if you don’t like WordPress deleting or editing its own files, perhaps consider using AppArmor.


Planet DebianClint Adams: Solve for q

f 0   =  1.5875
f 0.5 =  1.5875
f 1   =  3.175
f 2   =  6.35
f 3   =  9.525
f 4   = 12.7
f 5   = 15.875
f 6   = 19.05
f 7   = 22.225
f 8   = 25.4
f 9   = 28.575
f 10  = 31.75
Posted on 2018-07-06
Tags: barks

CryptogramFriday Squid Blogging: Squid Unexpectedly Playing a Part in US/China Trade War

Chinese buyers are canceling orders to buy US squid in advance of an expected 25% tariff.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityExxonMobil Bungles Rewards Card Debut

Energy giant ExxonMobil recently sent snail mail letters to its Plenti rewards card members stating that the points program was being replaced with a new one called Exxon Mobil Rewards+. Unfortunately, the letter includes a confusing toll free number and directs customers to a parked page that tries to foist Web browser extensions on visitors.

The mailer (the first page of which is screenshotted below) urges customers to visit exxonmobilrewardsplus[dot]com, to download its mobile app, and to call “1-888-REWARD+” with any questions. It may not be immediately obvious, but that “+” sign is actually the same thing as a zero on the telephone keypad (although I’m ashamed to say I had to look that up online to be sure).

Anyone curious enough to try ending numbers other than zero will wind up at a call center advertising “free” Caribbean cruises (1) or at a pricey adult chat service dubbed “America’s hottest talk line” (6).

Worse, visiting the company’s new rewards Web site in Google Chrome prompted my browser to run a “security check,” followed by a series of popups offering to install a Chrome extension called “Browsing Safely.”

That extension changes your default search engine to Yahoo and appears to redirect all searches through a domain called lastlog[dot]in, which seems to be affiliated with an Israeli online advertising network. After adding the Browsing Safely extension to Chrome using a virtual machine, my browser was redirected to

The Google Chrome extension offered when I first visited exxonmobilrewardsplus-dot-com.

Many people on Twitter who expressed confusion about the mailer said they accidentally added an “e” to the end of “exxonmobil” and ended up getting bounced around to spammy-looking sites with ad redirects and dodgy download offers.

ExxonMobil corporate has not yet responded to requests for comment. But after about 10 minutes on hold listening to the same Muzak-like song, I was able to reach a customer service person at the confusing ExxonMobil Rewards+ phone number. That person said the Web site for the rewards program wasn’t going to be active until July 11.

“Currently the Web site is not available,” the representative said. “Please don’t try to download anything from it right now. It should be active and available next week.”

It always amazes me when major companies with oodles of cash (ExxonMobil made $20 billion last year) roll out new marketing initiatives without consulting professionals who help mitigate security and privacy issues for a living. It seems likely that happened in this case because anyone who knows a thing or two about security would strongly advise against instructing customers to visit a parked domain or one that isn’t yet fully under the company’s control.

CryptogramThe NSA's Domestic Surveillance Centers

The Intercept has a long story about the NSA's domestic interception points.

Includes some new Snowden documents.

Planet DebianJonathan Dowland: Newcastle University Historic Computing

some of our micro computers


Since first writing about my archiving activities in 2012 I've been meaning to write an update on what I've been up to, but I haven't got around to it. This, however, is notable enough to be worth writing about!

In the last few months I became chair of the Historic Computing Committee at Newcastle University. We are responsible for a huge collection of historic computing artefacts from the University's past, going back to the 1950s, which has been almost single-handedly assembled and curated over the course of decades by the late Roger Broughton, who did much of the work in his retirement.

Segment of IBM/360 mainframe


Sadly, Roger died in 2016.

Recently there has been an upsurge of interest and support for our project, partly as a result of other volunteers stepping in and partly due to the School of Computing moving to a purpose-built building and celebrating its 60th birthday.

We've managed to secure some funding from various sources to purchase proper, museum-grade storage and display cabinets. Although portions of the collection have been exhibited for one-off events, including School open days, this will be the first time that a substantial portion of the collection will be on (semi-)permanent public display.

Amstrad PPC640 portable PC


Things have been moving very quickly recently. I am very happy to announce that the initial public displays will be unveiled as part of the Great Exhibition of the North! Most of the details are still TBC, but if you are interested you can keep an eye on this A History Of Computing events page.

For more about the Historic Computing Committee, cs-history Special Interest Group and related stuff, you can follow the CS History SIG blog, which we will hopefully be updating more often going forward. For the Historic Computing Collection specifically, please see the The Roger Broughton Museum of Computing Artefacts.

Planet DebianHolger Levsen: 20180706-rise-of-the-machines

Rise of the machines

Last week I was in a crowd of 256 people watching and cheering Compressorhead; some were stage-diving. Truly awesome.

Worse Than FailureError'd: Is Null News Good News?

"The Eugene (Oregon) Register-Guard knows when it's a slow news day, null happens," Bill T. writes.


"12 months for free or a year for not hard to choose!" writes Paige S.


Rodrigo M. wrote, "GlobalProtect thinks the current version I have installed is not very good, so why not upgrade by downgrading?"


"After flying with Norwegian airways I got a mail asking to take the survey," Nathan K. wrote, "Now I apparently need to find out how to file a ticket with their sysadmins."


" name is 'Marketing' now?" wrote Anon and totally not named Marketing.


Brad W. writes, "For having 'Caterpillar,' 'Revolver,' and 'Steel Toe' in the description the shoe seems a bit wimpy."


[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!


Rondam RamblingsTrump is a personality cult

If you want proof that Donald Trump has become a cult of personality look no further than this story in the LA Times: Workers in this town may become victims of Trump's trade war, but they're behind him 'no matter what' Jimmie Coffer, a machine programmer at the nation’s largest nail-making plant, voted for Donald Trump partly because he was confident he would bring manufacturing jobs back to

Planet DebianThorsten Alteholz: My Debian Activities in June 2018

FTP master

This month I accepted 166 packages and rejected only 7 uploads. The overall number of packages that got accepted this month was 216.

Debian LTS

This was my forty-eighth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 23.75h. During that time I did LTS uploads of:

  • [DLA 1404-1] lava-server security update for one CVE
  • [DLA 1403-1] zendframework security update for one CVE
  • [DLA 1409-1] mosquitto security update for two CVEs
  • [DLA 1408-1] simplesamlphp security update for two CVEs

I also prepared a test package for slurm-llnl but have got no feedback yet *hint* *hint*.

This month has been the end of Wheezy LTS and the beginning of Jessie LTS. After asking Ansgar, I did the reconfiguration of the upload queues on seger to remove the embargoed queue for Jessie and reduce the number of supported architectures.

Further I started to work on opencv.

Unfortunately the normal locking mechanism for work on packages, claiming the package in dla-needed.txt, did not really work during the transition. As a result I worked on libidn and mercurial in parallel with others. There seems to be room for improvement for the next transition.

Last but not least I did one week of frontdesk duties.

Debian ELTS

This month was the first ELTS month.

During my allocated time I made the first CVE triage in my week of frontdesk duties, extended the check-syntax part in the ELTS security tracker and uploaded:

  • ELA-3-1 for file
  • ELA-4-1 for openssl

Other stuff

During June I continued the libosmocore transition but could not finish it. I hope I can upload all missing packages in July.

Further I continued to sponsor some glewlwyd packages for Nicolas Mora.

The DOPOM package for this month was dvbstream.

I also uploaded a new upstream version of …

CryptogramBeating Facial Recognition Software with Face Makeup

At least right now, facial recognition algorithms don't work with Juggalo makeup.

Worse Than FailureCodeSOD: To Read or Parse

When JSON started to displace XML as the default data format for the web, my initial reaction was, "Oh, thank goodness." Time passed, and people reinvented schemas for JSON and RPC APIs in JSON and wrote tools which turn JSON schemas into UIs and built databases which store BSON, which is JSON with extra steps, and… it makes you wonder what it was all for.

Then people like Mark send in some code with a subject, "WHY??!??!". It's code which handles some XML, in C#.

Now, a useful fact: C# has a rich set of APIs for handling XML, and like most XML APIs, they implement two approaches.

The simplest and most obvious is the DOM-style approach, where you load an entire XML document into memory and construct a DOM out of it. It's easy to manipulate, but for large XML documents can strain the available memory.

The other is the "reader" approach, where you treat the document as a stream, and read through the document, one element at a time. This is a bit trickier for developers, but scales better to large XML files.

So let's say that you're reading a multi-gigabyte XML file. You'd want to quit your job, obviously. But assuming you didn't, you'd want to use the "reader" approach, yes? There's just one problem: the reader approach requires you to go through the document element-by-element, and you can't skip around easily.

public void ReadXml(XmlReader reader)
{
    string xml = reader.ReadOuterXml();
    XElement element = XElement.Parse(xml);
    …
}

Someone decided to give us the "best of both worlds". They load the multi-gigabyte file using a reader, but instead of going elementwise through the document, they use ReadOuterXml to pull the entire document in as a string. Once they have the multi-gigabyte string in memory, they then feed it into the XElement.Parse method, which turns the multi-gigabyte string into a multi-gigabyte DOM structure.

You'll be shocked to learn that this code was tested with small testing files, not multi-gigabyte files, worked fine in those conditions, and thus ended up in production.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Dave HallMigrating AWS System Manager Parameter Store Secrets to a new Namespace

When starting with a new tool it is common to jump in and start doing things. Over time you learn how to do things better. Amazon's AWS System Manager (SSM) Parameter Store was like that for me. I started off polluting the global namespace with all my secrets. Over time I learned to use paths to create namespaces. This helps a lot when it comes to managing access.

Recently I've been using Parameter Store a lot. During this time I have been reminded that naming things is hard. This led to me needing to change some paths in SSM Parameter Store. Unfortunately AWS doesn't allow you to rename Parameter Store keys; you have to create new ones.

There was no way I was going to manually copy and paste all those secrets. Python (3.6) to the rescue! I wrote a script to copy the values to the new namespace. While I was at it I migrated them to use a new KMS key for encryption.

Grab the code from my gist, make it executable, pip install boto3 if you need to, then run it like so: source-tree-name target-tree-name new-kms-uuid

The script assumes all parameters are encrypted, and that the same key is used for all parameters. boto3 expects AWS credentials to be in ~/.aws or environment variables.

Once everything is verified, you can use a modified version of the script that calls ssm.delete_parameter() or do it via the console.

I hope this saves someone some time.


Planet DebianRaphaël Hertzog: My Free Software Activities in June 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

I merged a branch adding appstream related data (thanks to Matthias Klumpp). I merged multiple small contributions from a new contributor: Lev Lazinskiy submitted a fix to have the correct version string in the documentation and ensured that we could not use the login page if we are already identified (see MR 36).

Arthur Del Esposte continued his summer of code project and submitted multiple merge requests that I reviewed multiple times before they were ready to be merged. He implemented a team search feature, created a small framework to display an overview of all packages of a team.

On a more administrative level, I had to deal with many subscriptions that became immediately invalid when shut down. So I tried to replace all email subscriptions using * with alternate emails linked to the same account. When no fallback was possible, I simply deleted the subscription.

pkg-security work

I sponsored cewl 5.4.3-1 (new upstream release) and wfuzz_2.2.11-1.dsc (new upstream release), masscan 1.0.5+ds1-1 (taken over by the team, new upstream release) and wafw00f 0.9.5-1 (new upstream release). I sponsored wifite2, made the unittests run during the build and added some autopkgtest. I submitted a pull request to skip tests when some tools are unavailable.

I filed #901595 on reaver to get a fixed watch file.

Misc Debian work

I reviewed multiple merge requests on live-build (about its handling of archive keys and the associated documentation). I uploaded a new version of live-boot (20180603) with the pending changes.

I sponsored pylint-django 0.11-1 for Joseph Herlant, xlwt 1.3.0-2 (bug fix) and python-num2words_0.5.6-1~bpo9+1 (backport requested by a user).

I uploaded a new version of ftplib fixing a release critical bug (#901224: ftplib FTCBFS: uses the build architecture compiler).

I submitted two patches to git (fixing french l10n in git bisect and marking two strings for translation).

I reviewed multiple merge requests on debootstrap: make --unpack-tarball no longer downloads anything, --components not carried over with --foreign/--second-stage and enabling --merged-usr by default.


See you next month for a new summary of my activities.


Worse Than FailureClassic WTF: Common Sense Not Found

It's the Forth of July in the US, where we all take a day off and launch fireworks to celebrate the power of stack based languages. While we participate in American traditions, like eating hot dogs without buns, enjoy this classic WTF about a real 455hole. --Remy

Mike was a server admin at your typical everyday Initech. One day, project manager Bill stopped by his cube with questions from Jay, the developer of an internal Java application.

“Hello there- thanks for your time!” Bill dropped into Mike’s spare chair. “We needed your expertise on this one.”

“No problem,” Mike said, swiveling to face Bill. “What can I help with?”

Bill’s pen hovered over the yellow notepad in his lap. He frowned down at some notes already scribbled there. “The WAS HTTP server- that’s basically an Apache server, right?”

HTTP Error 455 - User is a Jackass

“Basically,” Mike answered. “Some IBM customizations, but yeah.”

“So it has a… HT Access file, or whatever it’s called?” Bill asked.

He meant .htaccess, the config file. “Sure, yeah.”

“OK.” Bill glanced up with wide-eyed innocence. “So we could put something into that file that would allow a redirect, right?”

“Um… it’s possible.” Uneasiness crept over Mike, who realized he was about to discuss a custom solution to a problem he didn’t know about, on a server he was responsible for. “What’s going on?”

“Well, Jay wants a redirect in there to send people to another server,” Bill replied.

Mike frowned in confusion. “We just stood this server up. Now he wants another domain?”

“Huh? Oh, no, it’s not our domain. It’s someone else’s.”

“OK… I’m lost,” Mike admitted. “Let’s start at the beginning. What’s the problem Jay wants to fix?”

“Well, he has this broken link in his app, and he wants to redirect people to the correct site,” Bill explained.

Mike stared, dumbfounded for several moments. “Excuse me?”

“Yeah. He has this link that points off to some external federal website- IRS, I think- and the link is broken. He wants to automatically redirect users to the correct site so they don’t get a 404 error. We started looking into it, and found that Apache has this HT Access file thingy. It looks like that’s what we need.”

“You’re kidding, right?” Mike blurted ahead of discretion.

“No. Why?” Bill’s eyes widened. “Something wrong?”

Mike swiveled around to retrieve his coffee mug, and a measure of composure. “Why doesn’t he just fix the link within the app so it points to the right URL?”

“Well, that’s what I asked him. But he thinks it’d be more convenient to redirect people.”

“If the link is updated, they won’t need to be redirected.”

“I realize that.”

Mike took a long swig. “That’s not what the .htaccess file is for. It’s meant to redirect an incoming request to a different server of your own, not someone else’s.”
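(For reference, the kind of redirect Bill was asking about is a mod_alias one-liner in .htaccess; syntactically nothing stops the target from being an external site, so Mike's objection was about sense, not capability. A hypothetical sketch, with a made-up path and target:

```apache
# .htaccess -- send one obsolete path to a new location (mod_alias).
# The path and target URL below are hypothetical.
Redirect permanent /apps/taxinfo/old-link https://www.irs.gov/
```

This only helps when the broken URL is one your server receives in the first place.)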

“Oh.” Bill scribbled this down on his notepad, then stared hard at the scribbles. Every moment of silence ratcheted Mike’s nervousness higher.

“So you’re saying we can’t do the HT Access thing?” Bill finally asked, looking up again.

“To fix a broken link?”

“Yeah!” Bill’s eyes lit up. Apparently, Mike’s clarifying question had given him new hope.

“No.” Mike crushed that hope as mercilessly as he could.

“OK, so the HT Access thing won’t work. Hmm, OK.” Bill frowned back down at his notes, falling silent again. Mike sensed, and dreaded, another inane line of questioning about to follow.

“Well, another thing Jay mentioned was a custom error page,” Bill’s next foray began. “Can we do that in Apache?”

Mike hesitated. “…Yes?”

“Great! I’ll tell him that. He can develop a custom 404 page with some Javascript in it or something to redirect people to the correct site.”


“Not the prettiest solution, I know, but Jay said he can make it work.”

Mike spoke slowly. “He’s going to create a custom 404 error page… for that broken link of his?”


“And that 404 page is supposed to display… when his broken link sends users off to some IRS web server?”


“The IRS web server, when it gets a request for a page that doesn’t exist, is gonna display Jay’s custom 404 error page. Is that what you’re telling me?”

Bill’s confidence faltered. “Um… I think so.”

Mike dropped the bomb. “How’s he gonna get that custom page onto their server?”

“Well, it’d be on our server.”

“Right! So how would that custom 404 error get displayed?”

“When the user clicks the broken link.”

“I asked how. You just answered when.”

“Well, OK, I don’t know! I’m not the developer here.” Bill’s hands rose defensively. “Jay said he could make it work.”

“He’s wrong!” Mike snapped.

“He was pretty confident.”

Mike hesitated a moment before his shoulders dropped. Facts and common sense were not to prevail that day. “OK then. Lemme know when it works.”

Bill perked up. “Really? You’ll put it on the server?”

“Sure. Just have him fill out a service request and I’ll deploy it.”

“Excellent! Thank you!” Bill jumped up with pleasant surprise, and left the cube.

A few days later, Mike was completely unsurprised to find Jay frowning into his cube. “My 404 page isn’t displaying!”

Mike created a new email addressed to Jay, then copied and pasted a link to the IRS Help and Resources page. “Sorry- you’ll have to take it up with the taxman.”
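For the record, custom error pages in Apache are configured per server with the ErrorDocument directive, and they only fire for requests that this server handles. A 404 raised by the IRS's web server can never trigger a page hosted on Initech's. A minimal sketch:

```apache
# .htaccess -- serve our own page for 404s generated by *this* server.
# It has no effect on pages missing from anyone else's server.
ErrorDocument 404 /errors/not-found.html
```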



Sociological ImagesThe Flag Fight

Every year I see the Fourth of July spark a social media fight. First, the flag swag comes out for the ritual parties and barbecues:

Then, somebody posts the U.S. flag code, especially this part:

(d) The flag should never be used as wearing apparel, bedding, or drapery.

It is interesting that flag apparel has become a quintessential dudebro look for the Fourth. Activist Abbie Hoffman was arrested for wearing a flag shirt in protest in 1968, and we still argue about whether flag burning in protest should be legal.

Are the dudebros disrespectful? Are the flag purists raining on the parade? Sociology shows us how this debate runs into deep assumptions about how we show respect for sacred things.

In 1966, the late sociologist Robert Bellah presented a now-classic essay, “Civil Religion in America.” The essay is about religion in public life, and how American politicians created a sense of shared national identity around general religious claims. Since then, sociologists and political theorists have argued about how inclusive civil religion really is (Does it include atheists or other minority groups who aren’t Christian? Lots of Americans don’t seem to think so.), but the theory is useful for highlighting how much of American political life takes on a religious tone.

While Bellah focused on religious references in speeches and texts, there is a more general point that stands out for the flag debate:

What we have, then, from the earliest years of the republic is a collection of beliefs, symbols, and rituals with respect to sacred things and institutionalized in a collectivity…

The American civil religion…borrowed selectively from the religious tradition in such a way that the average American saw no conflict between the two. In this way, the civil religion was able to build up without any bitter struggle with the church powerful symbols of national solidarity and to mobilize deep levels of personal motivation for the attainment of national goals.

It is pretty easy to see the flag as a sacred symbol—one that represents a long history of solidarity and commitment in the United States. The trick is that civil religion focuses on the content of political beliefs more than the conduct of honoring those beliefs. The rich variety of human religious experience shows us that just because people share a sacred symbol doesn’t mean they agree about how best to celebrate it. Sure, the styles of American Christianity might appreciate quiet reverence and contemplation, but other societies partied to show their piety (Bacchanalia, anyone?).

Photo Credits: Wikimedia Commons, Scott Sherrill-Mix and US Embassy Canada via Flickr CC.

Once you consider the range in how people express their deeply-held political and cultural beliefs, it gets easier to understand where they are coming from, even if you completely disagree with them. What starts as an argument about disrespect hides a deeper argument about different kinds of celebration (and, of course, whether it is appropriate to celebrate at all). Political tensions are high these days, but cases like this show how we can have more productive arguments by getting to the core of our cultural disagreements.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


TEDTED en Español: TED’s first-ever Spanish-language speaker event in NYC

Host Gerry Garbulsky opens the TED en Español event in the TEDNYC theater, New York, NY. (Photo: Dian Lofton / TED)

Thursday marked the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event featured eight speakers, a musical performance, five short films and fifteen one-minute talks given by members of the audience.

The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event, TEDxRiodelaPlata in Argentina, TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and — as of earlier this month — an original podcast created in partnership with Univision Communications.

Should we automate democracy? “Is it just me, or are there other people here that are a little bit disappointed with democracy?” asks César A. Hidalgo. Like other concerned citizens, the MIT physics professor wants to make sure we have elected governments that truly represent our values and wishes. His solution: What if scientists could create an AI that votes for you? Hidalgo envisions a system in which each voter could teach her own AI how to think like her, using quizzes, reading lists and other types of data. So once you’ve trained your AI and validated a few of the decisions it makes for you, you could leave it on autopilot, voting and advocating for you … or you could choose to approve every decision it suggests. It’s easy to poke holes in his idea, but Hidalgo believes it’s worth trying out on a small scale. His bottom line: “Democracy has a very bad user interface. If you can improve the user interface, you might be able to use it more.”

When the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully,” says Leticia Gasca. (Photo: Jasmina Tomic / TED)

How to fail mindfully. If your business failed in Ancient Greece, you’d have to stand in the town square with a basket over your head. Thankfully, we’ve come a long way — or have we? Failed-business owner Leticia Gasca doesn’t think so. Motivated by her own painful experience, she set out to create a way for others like her to convert the guilt and shame of a business venture gone bad into a catalyst for growth. Thus was born “Fuckup Nights” (FUN), a global movement and event series for sharing stories of professional failure, and The Failure Institute, a global research group that studies failure and its impact on people, businesses and communities. For Gasca, when the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully” and see endings as doorways to empathy, resilience and renewal.

From four countries to one stage. The pan-Latin-American musical ensemble LADAMA brought much more than just music to the TED en Español stage. Inviting the audience to dance with them, Venezuelan Maria Fernanda Gonzalez, Brazilian Lara Klaus, Colombian Daniela Serna and American Sara Lucas sing and dance to a medley of rhythms that range from South American to Caribbean-infused styles. Playing “Night Traveler” and “Porro Maracatu,” LADAMA transformed the stage into a place of music worth spreading.

Gastón Acurio shares stories of the power of food to change lives. (Photo: Jasmina Tomic / TED)

World change starts in your kitchen. In his pioneering work to bring Peruvian cuisine to the world, Gastón Acurio discovered the power that food has to change peoples’ lives. As ceviche started appearing in renowned restaurants worldwide, Gastón saw his home country of Peru begin to appreciate the diversity of its gastronomy and become proud of its own culture. But food hasn’t always been used to bring good to the world. With the industrial revolution and the rise of consumerism, “more people in the world are dying from obesity than hunger,” he notes, and many peoples’ lifestyles aren’t sustainable. 
By interacting with and caring about the food we eat, Gastón says, we can change our priorities as individuals and change the industries that serve us. He doesn’t yet have all the answers on how to make this a systematic movement that politicians can get behind, but world-renowned cooks are already taking these ideas into their kitchens. He tells the stories of a restaurant in Peru that supports native people by sourcing ingredients from them, a famous chef in NYC who’s fighting against the use of monocultures and an emblematic restaurant in France that has barred meat from the menu. “Cooks worldwide are convinced that we cannot wait for others to make changes and that we must jump into action,” he says. But professional cooks can’t do it all. If we want real change to happen, Gastón urges, we need home cooking to be at the center of everything.

The interconnectedness of music and life. Chilean musical director Paolo Bortolameolli wraps his views on music within his memory of crying the very first time he listened to live classical music. Sharing the emotions music evoked in him, Bortolameolli presents music as a metaphor for life — full of the expected and the unexpected. He thinks that we listen to the same songs again and again because, as humans, we like to experience life from a standpoint of expectation and stability, and he simultaneously suggests that every time we listen to a musical piece, we enliven the music, imbuing it with the potential to be not just recognized but rediscovered.

We reap what we sow — let’s sow something different. Up until the mid-’80s, the average incomes in major Latin American countries were on par with those in Korea. But now, less than a generation later, Koreans earn two to three times more than their Latin American counterparts. How can that be? The difference, says futurist Juan Enriquez, lies in a national prioritization of brainpower — and in identifying, educating and celebrating the best minds. What if in Latin America we started selecting for academic excellence the way we would for an Olympic soccer team? If Latin American countries are to thrive in the era of technology and beyond, they should look to establish their own top universities rather than letting their brightest minds thirst for nourishment, competition and achievement — and find it elsewhere, in foreign lands.

Rebeca Hwang shares her dream of a world where identities are used to bring people together, not alienate them. (Photo: Jasmina Tomic / TED)

Diversity is a superpower. Rebeca Hwang was born in Korea, raised in Argentina and educated in the United States. As someone who has spent a lifetime juggling various identities, Hwang can attest that having a blended background, while sometimes challenging, is actually a superpower. The venture capitalist shared how her fluency in many languages and cultures allows her to make connections with all kinds of people from around the globe. As the mother of two young children, Hwang hopes to pass this perspective on to her kids. She wants to teach them to embrace their unique backgrounds and to create a world where identities are used to bring people together, not alienate them.

Marine ecologist Enric Sala wants to protect the last wild places in the ocean. (Photo: Jasmina Tomic / TED)

How we’ll save our oceans. If you jumped in the ocean at any random spot, says Enric Sala, you’d have a 98 percent chance of diving into a dead zone — a barren landscape empty of large fish and other forms of marine life. As a marine ecologist and National Geographic Explorer-in-Residence, Sala has dedicated his life to surveying the world’s oceans. He proposes a radical solution to help protect the oceans by focusing on our high seas, advocating for the creation of a reserve that would include two-thirds of the world’s ocean. By safeguarding our high seas, Sala believes we will restore the ecological, economic and social benefits of the ocean — and ensure that when our grandchildren jump into any random spot in the sea, they’ll encounter an abundance of glorious marine life instead of empty space.

And to wrap it up … In an improvised rap performance with plenty of well-timed dance moves, psychologist and dance therapist César Silveyra closes the session with 15 of what he calls “nano-talks.” In a spectacular showdown of his skills, Silveyra ties together ideas from previous speakers at the event, including Enric Sala’s warnings about overfished oceans, Gastón Acurio’s Peruvian cooking revolution and even a shoutout for speaker Rebeca Hwang’s grandmother … all the while “feeling like Beyoncé.”

TEDTED en Español: el primer evento de oradores TED de habla hispana

El presentador Gerry Garbulsky da inicio al evento TED en Español en el teatro TEDNYC, Nueva York, NY (Foto: Dian Lofton/TED)

El 26 de abril tuvo lugar el primer evento de oradores de TED en Español, presentado por TED en su oficina de Nueva York. El evento, completamente en español, contó con ocho oradores, una presentación musical, cinco cortometrajes y 13 charlas de un minuto dadas por miembros de la audiencia.

El evento en Nueva York es la última incorporación a la iniciativa “TED en Español” de TED, diseñada para difundir ideas en Español a la comunidad hispana mundial. El evento fue conducido por Gerry Garbulsky, director de TED en Español (también director del mayor evento de TEDx del mundo: TEDxRiodelaPlata en Argentina.) TED en Español, además, incluye su página en, una comunidad de Facebook, un feed de Twitter, un “Boletín” semanal, un canal de YouTube y, a principios de este mes, un podcast original creado en asociación con Univision.

¿Deberíamos automatizar la democracia? “¿Soy solo yo, o hay más personas que están un poco decepcionadas con la democracia?”, pregunta César A. Hidalgo. Al igual que otros ciudadanos preocupados, el profesor e investigador de física del MIT quiere asegurarse de que hayamos elegido gobiernos que realmente representen nuestros valores y deseos. Su solución: ¿qué tal si los científicos pudieran crear una IA que votara por ti? Hidalgo visualiza un sistema en el que cada votante pueda enseñar a su propia IA, cómo pensar como ella, utilizando cuestionarios, listas de lectura y otros tipos de datos. Una vez que hayas entrenado a tu IA y validado algunas decisiones que toma por ti, puedes dejarla en piloto automático, votando y representándote… o puedes decidir aprobar cada cosa que sugiera. Es muy sencillo restarle credibilidad a su idea, pero Hidalgo cree que vale la pena probarlo a menor escala. Su conclusión: “la democracia tiene una pésima interfaz de usuario. Si se pudiera mejorar la interfaz, podríamos usarla más”.

Cuando el foco del fracaso cambia de lo que se pierde a lo que se gana, todos podemos aprender a “fallar conscientemente”, afirma Leticia Gasca (Foto: Jasmina Tomic/TED)

Cómo fallar conscientemente. Si tu negocio hubiera fallado en la Antigua Grecia, habrías tenido que pararte en la plaza del pueblo con una canasta sobre tu cabeza. Afortunadamente, hemos recorrido un largo camino… ¿o no? La dueña de un negocio fallido, Leticia Gasca, no lo cree. Motivada por su dolorosa experiencia, se dispuso a crear una forma para que otros como ella, transformaran la culpa y la vergüenza de un emprendimiento que salió mal, en un acelerador del crecimiento. En consecuencia, nació “Fuckup Nights” (FUN), una serie de eventos en diversos lugares del mundo para compartir historias de fracaso profesional; y “The Failure Institute” (el Instituto del Fracaso), un grupo de investigación, que estudia el fracaso y su impacto en las personas, empresas y comunidades. Para Gasca, cuando el foco del fracaso cambia de lo que se pierde a lo que se gana, todos podemos aprender a “fallar conscientemente” y ver los desenlaces como puertas a la empatía, la resiliencia y la renovación.

De cuatro países a un escenario. El grupo musical panlatinoamericano LADAMA trajo mucho más que música al escenario de TED en Español. La venezolana María Fernanda González, la brasilera Lara Klaus, la colombiana Daniela Serna y la estadounidense Sara Lucas cantan y bailan al son de una variedad de ritmos, que van desde estilos sudamericanos hasta fusiones caribeñas, invitando a la audiencia a bailar con ellas. Tocando “Night Traveler” y “Porro Maracatu”, LADAMA transformó el escenario en un espacio musical que vale la pena difundir.

Gastón Acurio comparte historias sobre el poder de la comida para cambiar vidas (Foto: Jasmina Tomic/TED)

El cambio mundial comienza en tu cocina. En su trabajo pionero por llevar la cocina peruana al mundo, Gastón Acurio descubrió el poder que tiene la comida para cambiar la vida de las personas. A medida que el ceviche apareció en restaurantes de renombre en todo el mundo, Gastón vio que su país natal, Perú, comenzaba a apreciar la diversidad de su gastronomía y se enorgullecía de su propia cultura. Pero la comida no siempre se ha usado para traer bien al mundo. Debido a la revolución industrial y al aumento del consumismo, “muere más cantidad de gente de obesidad que de hambre”, afirma, y el estilo de vida de muchas personas no es sostenible. Al interactuar y preocuparnos por los alimentos que comemos, dice Gastón, podemos cambiar nuestras prioridades como individuos y cambiar las industrias que nos sirven. Todavía no tiene las respuestas a cómo hacer de esto un movimiento sistemático que los políticos puedan respaldar, sin embargo, cocineros de renombre alrededor del mundo están llevando estas ideas a sus cocinas. Él cuenta historias sobre un restaurante en Perú que ayuda a los nativos obteniendo ingredientes de ellos, un chef famoso en Nueva York que lucha contra el uso de monocultivos y un restaurante emblemático en Francia que ha excluido la carne del menú. “Los cocineros alrededor del mundo estamos convencidos de que no podemos esperar a que otros hagan los cambios y que debemos ponernos en acción”, afirma. Pero los cocineros profesionales no pueden hacerlo todo. Si queremos realizar un cambio profundo, urge Gastón, necesitamos que la comida casera sea la clave.

La interconexión de la música y la vida. El director de orquesta chileno, Paolo Bortolameolli, envuelve su opinión sobre la música, alrededor de su recuerdo de haber llorado la primera vez que escuchó música clásica en vivo. Compartiendo las emociones que la música causó en él, Bortolameolli presenta la misma como una metáfora de la vida, llena de lo esperado y lo inesperado. Cree que escuchamos las mismas canciones una y otra vez porque, como humanos, nos gusta experimentar la vida desde un punto de vista de expectativa y estabilidad y, a la vez, sugiere que cada vez que escuchamos una canción, animamos la música, impregnándola con el potencial de no solo ser reconocida, sino también redescubierta.

Cosechamos lo que sembramos – sembremos algo distinto. Hasta mediados de los años 80, los ingresos en los principales países latinoamericanos estaban a la par de los de Corea. Pero ahora, menos de una generación después, los coreanos ganan entre dos y tres veces más que sus contrapartes latinoamericanos. ¿Cómo puede ser? La diferencia, afirma el futurista Juan Enríquez, radica en una priorización nacional de la capacidad intelectual y en identificar, educar y celebrar las mejores mentes. ¿Qué sucedería si en América Latina empezáramos a seleccionar la excelencia académica como lo hacemos hoy con la selección nacional de fútbol? Si los países latinoamericanos prosperan en la era de la tecnología y más, deberían buscar establecer sus propias universidades superiores en lugar de dejar que sus mentes más brillantes estén ansiosas de alimento, competencia y logros, y lo encuentren en otro lugar, en tierras extranjeras.

Rebeca Hwang comparte su sueño de un mundo donde las identidades se utilizan para unir a la gente, no para alienarlas (Foto: Jasmina Tomic/TED)

La diversidad es un superpoder. Rebeca Hwang nació en Corea, fue criada en Argentina y educada en los Estados Unidos. Como alguien que ha pasado su vida intercambiando varias identidades, Hwang afirma que tener un trasfondo variado, aunque a veces sea desafiante, es en realidad un superpoder. La inversora de riesgo compartió cómo su fluidez en muchos idiomas y culturas le permite establecer conexiones con todo tipo de personas de todo el mundo. Como madre de dos niños pequeños, Hwang espera transmitir esta perspectiva a sus hijos. Ella quiere enseñarles a abrazar sus orígenes y crear un mundo donde las identidades se utilicen para unir a las personas, no para alienarlas.

El ecologista marino Enric Sala desea proteger las últimas especies salvajes del océano (Foto: Jasmina Tomic/TED)

Cómo salvaremos nuestros océanos. Si saltas al océano en cualquier lugar, dice Enric Sala, tendrías un 98 por ciento de posibilidades de sumergirte en una zona muerta, un paisaje estéril, vacío de grandes peces y otras formas de vida marina. Como ecologista marino y explorador residente de National Geographic, Sala ha dedicado su vida a inspeccionar los océanos del mundo. Enfocándose en alta mar, propone una solución radical para ayudar a proteger los océanos, fomentando la creación de una reserva que incluiría dos tercios de los océanos del planeta. Al salvaguardar nuestra alta mar, Sala cree que restauraremos los beneficios ecológicos, económicos y sociales del océano y podremos asegurarnos de que cuando nuestros nietos salten a cualquier lugar en el mar, se encuentren con una gran cantidad de vida marina gloriosa en lugar de un espacio vacío.

Y para concluir… En una presentación improvisada de rap con muchos pasos de baile bien sincronizados, el psicólogo, rapero y bailarín César Silveyra cierra el evento. En una espectacular demostración de sus habilidades, Silveyra une las ideas de oradores anteriores del evento, incluyendo las advertencias de Enric Sala sobre la sobrepesca en los océanos, la revolución de la cocina peruana de Gastón Acurio e incluso un grito para la abuela de la oradora Rebeca Hwang… todo el tiempo “sintiéndose como Beyoncé”.

Planet DebianDirk Eddelbuettel: anytime 0.3.1

A new minor release of the anytime package is now on CRAN. This is the twelfth release, and the first in a little over a year as the package has stabilized.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub repo, for a few examples.
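As a quick illustration, a sketch of the kind of conversions meant, using the package's anytime() and anydate() converters (no outputs shown, since the POSIXct result depends on your local timezone):

```r
library(anytime)
anytime("2016-09-12 14:00")  # character input, parsed without a format string
anydate(20160912)            # numeric yyyymmdd input becomes a Date
anydate("Sep 12, 2016")      # heuristic parsing of common formats
```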

This release adds a few minor tweaks. For numeric input, the function no longer alters its argument: such values are now cloned before conversion (a common use case), so the input object is never changed. We also added two assertion helpers for dates and datetimes, a new formatting helper for the (arguably awful, but common) ‘yyyymmdd’ format, and expanded some unit tests.

Changes in anytime version 0.3.1 (2018-06-05)

  • Numeric input is now preserved rather than silently cast to the return object type (#69 fixing #68).

  • New assertion functions assertDate() and assertTime().

  • Unit tests were expanded for the new functions, for conversion from integer as well as for yyyymmdd().

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

TEDIdeas from the intersections: A night of talks from TED and Brightline

Onstage to host the event, Corey Hajim, TED’s business curator, and Cloe Shasha, TED’s speaker development director, kick off TEDNYC Intersections, a night of talks presented by TED and the Brightline Initiative. (Photo: Ryan Lash / TED)

At the intersections where we meet and collaborate, we can pool our collective wisdom to seek solutions to the world’s greatest problems. But true change begs for more than incremental steps and passive reactions — we need to galvanize transformation to create our collective future.

To celebrate the effort of bold thinkers building a better world, TED has partnered with the Brightline Initiative, a noncommercial coalition of organizations dedicated to helping leaders turn ideas into reality. In a night of talks at TED HQ in New York City — hosted by TED’s speaker development director Cloe Shasha and co-curated by business curator Corey Hajim and technology curator Alex Moura — six speakers and two performers showed us how we can effect real change. After opening remarks from Brightline’s Ricardo Vargas, the session kicked off with Stanford professor Tina Seelig.

Creativity expert Tina Seelig shares three ways we can all make our own luck. (Photo: Ryan Lash / TED)

How to cultivate more luck in your life. “Are you ready to get lucky?” asks Tina Seelig, a professor at Stanford University who focuses on creativity, entrepreneurship and innovation. While luck may seem to be brought on by chance alone, it turns out that there are ways you can enhance it — no matter how lucky or unlucky you think you are. Seelig shares three simple ways you can help luck to bend a little more in your direction: Take small risks that bring you outside your comfort zone; find every opportunity to show appreciation when others help you; and find ways to look at bad or crazy ideas with a new perspective. “The winds of luck are always there,” Seelig says, and by using these three tactics, you can build a bigger and bigger sail to catch them.

A new mantra: let’s fail mindfully. We celebrate bold entrepreneurs whose ingenuity led them to success — but how do we treat those who have failed? Leticia Gasca, founder and director of the Failure Institute, thinks we need to change the way we talk about business failure. After the devastating closing of her own startup, Gasca wiped the experience from her résumé and her mind. But she later realized that by hiding her failure, she was missing out on a valuable opportunity to connect. In an effort to embrace failure as an experience to learn from, Gasca co-created the Failure Institute, which includes international Fuck-Up Nights — spaces for vulnerability and connection over shared experiences of failure. Now, she advocates for a more holistic culture around failure. The goal of failing mindfully, Gasca says, is to “be aware of the consequences of the failed business,” and “to be aware of the lessons learned and the responsibility to share those learnings with the world.” This shift in the way we address failure can help make us better entrepreneurs, better people, and yes — better failures.

A police officer for 25 years, Tracie Keesee imagines a future where communities and police co-produce public safety in local communities. (Photo: Ryan Lash / TED)

Preserving dignity, guaranteeing justice. We all want to be safe, and our safety is intertwined, says Tracie Keesee, cofounder of the Center for Policing Equity. Sharing lessons she’s learned from 25 years as a police officer, Keesee reflects on the challenges — and opportunities — we all have for creating safer communities together. Policies like “Stop, Question and Frisk” set police and neighborhoods as adversaries, creating alienation, specifically among African Americans; instead, Keesee shares a vision for how the police and the neighborhoods they serve can come together to co-produce public safety. One example: the New York City Police Department’s “Build the Block Program,” which helps community members interact with police officers to share their experiences. The co-production of justice also includes implicit bias training for officers — so they can better understand how the biases we all carry impact their decision-making. By ending the “us vs. them” narrative, Keesee says, we can move forward together.

We can all be influencers. ​Success was once defined by power, but today it’s tied to influence, or “the ability to have an effect on a person or outcome,” says behavioral scientist Jon Levy. It rests on two building blocks: who you’re connected to and how much they trust you. In 2010, Levy created “Influencers” dinners, gathering a dozen high-profile people (who are strangers to each other) at his apartment. But how to get them to trust him and the rest of the group? He asks his guests to cook the meal and clean up. “I had a hunch this was working,” Levy recalls, “when one day I walked into my home and 12-time NBA All-Star Isiah Thomas was washing my dishes, while singer Regina Spektor was making guac with the Science Guy himself, Bill Nye.” From the dinners have emerged friendships, professional relationships and support for social causes. He believes we can cultivate our own spheres of influence at a scale that works for us. “If I can encourage you to do anything, it’s to bring together people you admire,” says Levy. “There’s almost no greater joy in life.”

Yelle and GrandMarnier rock the TED stage with electro-pop and a pair of bright yellow jumpsuits. (Photo: Ryan Lash / TED)

The intersection of music and dance. All the way from France, Yelle and GrandMarnier grace the TEDNYC stage with two electro-pop hits, “Interpassion” and “Ba$$in.” Both songs groove with robotic beats, Yelle’s hypnotic voice, kaleidoscopic rhythms and hypersonic sounds that rouse the audience to stand up, let loose and dance in the aisles.

How to be a great ally. We’re taught to believe that working hard leads directly to getting what you deserve — but sadly, this isn’t the case for many people. Gender, race, ethnicity, religion, disability, sexual orientation, class and geography — all of these can affect our opportunities for success, says writer and advocate Melinda Epler, and it’s up to all of us to do better as allies. She shares three simple ways to start uplifting others in the workplace: do no harm (listen, apologize for mistakes and never stop learning); advocate for underrepresented people in small ways (intervene if you see them being interrupted); and change the trajectory of a life by mentoring or sponsoring someone through their career. “There is no magic wand that corrects diversity and inclusion,” Epler says. “Change happens one person at a time, one act at a time, one word at a time.”

AJ Jacobs explains the powerful benefits of gratitude — and takes us on his quest to thank everyone who made his morning cup of coffee. (Photo: Ryan Lash / TED)

Lessons from the Trail of Gratitude. Author AJ Jacobs embarked on a quest with a deceptively simple idea at its heart: to personally thank every person who helped make his morning cup of coffee. “This quest took me around the world,” Jacobs says. “I discovered that my coffee would not be possible without hundreds of people I take for granted.” His project was inspired by a desire to overcome the brain’s innate “negative bias” — the psychological tendency to focus on the bad over the good — which is most effectively combated with gratitude. Jacobs ended up thanking everyone from his barista and the inventor of his coffee cup lid to the Colombian farmers who grew the coffee beans and the steelworkers in Indiana who made their pickup truck — and more than a thousand others in between. Along the way, he learned a series of perspective-altering lessons about globalization, the importance of human connection and more, which are detailed in his new TED Book, Thanks a Thousand: A Gratitude Journey. “It allowed me to focus on the hundreds of things that go right every day, as opposed to the three or four that go wrong,” Jacobs says of his project. “And it reminded me of the astounding interconnectedness of our world.”

Planet DebianAna Beatriz Guerrero Lopez: Introducing debos, a versatile images generator

In Debian and derivative systems, there are many ways to build images. The simplest tool of choice is often debootstrap. It works by downloading the .deb files from a mirror and unpacking them into a directory which can eventually be chrooted into.

More often than not, we want to customize this image: install some extra packages, run a script, add some files, etc.

debos is a tool that makes these kinds of tasks easier. debos works with recipe files in YAML that list, sequentially, the actions you want to perform on your image and, finally, the output formats to produce.

Unlike debootstrap and other tools, debos doesn't need to be run as root to perform actions that require root privileges in the image. debos uses fakemachine, a library that sets up qemu-system, allowing you to work in the image with root privileges and to create images for all the architectures supported by qemu-user. For this to work, make sure your user has permission to use /dev/kvm.

Let's see how debos works with a simple example. If we wanted to create a customized arm64 image for Debian Stretch, we would follow these steps:

  • debootstrap the image
  • install the packages we need
  • set up our preferred hostname
  • run a script creating a user
  • copy a file adding the user to sudoers
  • create a tarball with the final image

This would translate into a debos recipe like this one:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

(The files used in this example are available from this git repository)

We run debos on the recipe file:

$ debos simple.yaml

The result will be a tarball named debian-stretch-arm64.tgz. If you check the top two lines of the recipe, you can see that it defaults to the arm64 architecture and the Debian stretch suite. We can override these defaults when running debos:

$ debos -t suite:"buster" -t architecture:"amd64" simple.yaml

This time the result will be a tarball named debian-buster-amd64.tgz.

The recipe allows some customization depending on the parameters. For example, we could install packages depending on the target architecture, installing python-libsoc only on armhf and arm64:

- action: apt
  recommends: false
  packages:
    - adduser
    - sudo
{{- if eq $architecture "armhf" "arm64" }}
    - python-libsoc
{{- end }}

What if, in addition to a tarball, we would like to create a filesystem image? We can do this by adding two more actions to our example: a first action creating a partitioned image with the selected filesystem, and a second one deploying the root filesystem onto that image:

- action: image-partition
  imagename: {{ $ext4 }}
  imagesize: 1GB
  partitiontype: msdos
  mountpoints:
    - mountpoint: /
      partition: root
  partitions:
    - name: root
      fs: ext4
      start: 0%
      end: 100%
      flags: [ boot ]

- action: filesystem-deploy
  description: Deploying filesystem onto image

{{ $ext4 }} should be defined at the top of the file as follows:

{{ $ext4 := or .ext4 (printf "debian-%s-%s.ext4" $suite $architecture) }}

We could even make this step optional, having the recipe create only the tarball by default and build the filesystem image only when an option is passed to debos:

$ debos -t type:"full" full.yaml

The final debos recipe will look like this:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $type := or .type "min" }}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}
{{ $ext4 := or .ext4 (printf "debian-%s-%s.ext4" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo
{{- if eq $architecture "armhf" "arm64" }}
      - python-libsoc
{{- end }}

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

{{ if eq $type "full" }}
  - action: image-partition
    imagename: {{ $ext4 }}
    imagesize: 1GB
    partitiontype: msdos
    mountpoints:
      - mountpoint: /
        partition: root
    partitions:
      - name: root
        fs: ext4
        start: 0%
        end: 100%
        flags: [ boot ]

  - action: filesystem-deploy
    description: Deploying filesystem onto image
{{ end }}

debos also provides some other actions that haven't been covered in the example above:

  • download downloads a single file from the Internet
  • raw directly writes a file to the output image at a given offset
  • unpack unpacks files from an archive into the filesystem
  • ostree-commit creates an OSTree commit from the rootfs
  • ostree-deploy deploys an OSTree branch to the image

The example in this blog post is simple and short on purpose. Combining the actions presented above, you could also include a kernel and install a bootloader to make a bootable image. Upstream is planning to add more examples soon to the debos recipes repository.

debos is a project from Sjoerd Simons at Collabora. It's still missing some features, but it's actively being developed and there are big plans for the future!

CryptogramCalifornia Passes New Privacy Law

The California legislature unanimously passed the strongest data privacy law in the nation. This is great news, but I have a lot of reservations. The Internet tech companies pressed to get this law passed out of self-defense. A ballot initiative was already going to be voted on in November, one with even stronger data privacy protections. The author of that initiative agreed to pull it if the legislature passed something similar, and that's why it did. This law doesn't take effect until 2020, and that gives the legislature a lot of time to amend the law before it actually protects anyone's privacy. And a conventional law is much easier to amend than a ballot initiative. Just as the California legislature gutted its net neutrality law in committee at the behest of the telcos, I expect it to do the same with this law at the behest of the Internet giants.

So: tentative hooray, I guess.

Planet DebianDaniel Silverstone: Docker Compose

I glanced back over my shoulder to see the Director approaching. Zhe stood next to me, watched me intently for a few moments, before turning and looking out at the scape. The water was preternaturally calm, above it only clear blue. A number of dark, almost formless, shapes were slowly moving back and forth beneath the surface.

"Is everything in readiness?" zhe queried, sounding both impatient and resigned at the same time. "And will it work?" zhe added. My predecessor, and zir predecessor before zem, had attempted to reach the same goal now set for myself.

"I believe so" I responded, sounding perhaps slightly more confident than I felt. "All the preparations have been made, everything is in accordance with what has been written". The director nodded, zir face pinched, with worry writ across it.

I closed my eyes, took a deep breath, opened them, raised my hand and focussed on the scape, until it seemed to me that my hand was almost floating on the water. With all of my strength of will I formed the incantation, repeating it over and over in my mind until I was sure that I was ready. I released it into the scape and dropped my arm.

The water began to churn, the blue above darkening rapidly, becoming streaked with grey. The shapes beneath the water picked up speed and started to grow, before resolving to what appeared to be stylised Earth whales. Huge arcs of electricity speared the water, a screaming, crashing, wall of sound rolled over us as we watched, a foundation rose up from the depths on the backs of the whale-like shapes wherever the lightning struck.

Chunks of goodness-knows-what rained down from the grey streaked morass, thumping into place seamlessly onto the foundations, slowly building what I had envisioned. I started to allow myself to feel hope, things were going well, each tower of the final solution was taking form, becoming the slick and clean visions of function which I had painstakingly selected from among the masses of clamoring options.

Now and then, the whale-like shapes would surface momentarily near one of the towers, stringing connections like bunting across the water, until the final design was achieved. My shoulders tightened and I raised my hand once more. As I did so, the waters settled, the grey bled out from the blue, and the scape became calm and the towers shone, each in its place, each looking exactly as it should.

Chanting the second incantation under my breath, over and over, until it seemed seared into my very bones, I released it into the scape and watched it flow over the towers, each one ringing out as the command reached it, until all the towers sang, producing a resonant and consonant chord which rose of its own accord, seeming to summon creatures from the very waters in which the towers stood.

The creatures approached the towers, reached up as one, touched the doors, and screamed in horror as their arms caught aflame. In moments each and every creature was reduced to ashes, somehow fundamentally unable to make use of the incredible forms I had wrought. The Director sighed heavily, turned, and made to leave. The towers I had sweated over the design of for months stood proud, beautiful, worthless.

I also turned, made my way out of the realisation suite, and with a sigh hit the scape-purge button on the outer wall. It was over. The grand design was flawed. Nothing I created in this manner would be likely to work in the scape and so the most important moment of my life was lost to ruin, just as my predecessor, and zir predecessor before zem.

Returning to my chambers, I snatched up the book from my workbench. The whale-like creature winked at me from the cover, grinning, as though it knew what I had tried to do and relished my failure. I cast it into the waste chute and went back to my drafting table to design new towers, towers which might be compatible with the creatures which were needed to inhabit them and breathe life into their very structure, towers which would involve no grinning whales.

Worse Than FailureFlobble

The Inner Platform Effect, third only after booleans and dates, is one of the most complicated blunders that so-called developers (who think that they know what they're doing) commit to Make Things Better.™ Combine that with multiple inheritance run amok and a smartass junior developer who thinks documentation and method naming are good places to be cute, and you get today's submission.

A cat attacking an impossible object illusion to get some tuna from their human

Chops, an experienced C++ developer somewhere in Europe, was working on their flagship product. It had been built slowly over 15 years by a core of 2-3 main developers and an accompanying rotating cast of enthusiastic but inexperienced C++ developers. The principal developer had been one of those juniors himself at the start of development. When he finally left, an awful lot of knowledge walked out the door with him.

Enormous amounts of what should have been standard tools were homegrown. Homegrown reference counting was a particular bugbear, being thread dangerous as it was - memory leaks abounded. The whole thing ran across a network, and there were a half-dozen ways any one part could communicate with another. One such way was a "system event". A new message object was created and then just launched into the underlying messaging framework, in the hopes that it would magically get to whoever was interested, so long as that other party had registered an interest (not always the case).

A new system event was needed, and a trawl was made for anyone who knew anything about them. <Crickets> Nobody had any idea how they worked, or how to make a new one. The documentation was raked over, but it was found to mostly be people complaining that there was no documentation. The code suffered from inheritance fever. In a sensible system, there would be only one message type, and one would simply tag it appropriately with an identifier before inserting the data of interest.

In this system, there was an abstract base message type, and every specific message type had to inherit from it, implement some of the functions and override some others. Unfortunately, each time it seemed to be a different set of functions being implemented and a different set being overridden. Some were clearly cut-and-paste jobs, copying others and carrying their mistakes forward. Some were made out of several pieces of others; cut, pasted, and compiled until the warnings were disabled and the compiler stopped complaining.

Sometimes, when developing abstract base types that were intended to be inherited from to create a concrete class for a new purpose, those early developers had created a simple, barebones concrete example implementation. A reference implementation, with "Example" in the name, that could be used as a starting point, with comments, making it clear what was necessary and what was optional. No such example class could be found for this.

Weeks of effort went into reverse-engineering the required messaging functionality, based on a few semi-related examples. Slowly, the shape of the mutant inside became apparent. Simple, do-nothing message objects were created and tested. Each time they failed, the logs were pored over, breakpoints were added, networks were watched, tracing the point of failure and learning something new.

Finally, the new message object was finished. It worked. There was still some voodoo coding in it; magic incantations that were not understood (the inheritance chain was more than five levels deep, with multiple diamonds, and one class being inherited from six times), but it worked, although nobody was certain why.

During the post development documentation phase, Mister Chops was hunting down every existing message object. Each would need reviewing and examination at some point, with the benefit of the very expensive reverse engineering. He came across one with an odd name; it wasn't used anywhere, so hadn't been touched since it was first committed. Nobody had ever had a reason to look at it. The prefix of the name was as expected, but the suffix - the part that told you at a glance what kind of message it was - was "Flobble". Chops opened it up.

It was a barebones example of a concrete implementation of the abstract base class, with useful explanatory comments on how to use/extend it, and how it worked. Back at the start, some developer, instead of naming the example class "Example" as was customary, or naming it anything at all that would have made it clear what it was, had named it "Flobble". It sat there for a decade, while people struggled to understand these objects over and over, and finally reverse engineered it at *significant* expense. Because some whimsical developer a decade previously had decided to be funny.


Planet DebianJulien Danjou: How I stopped merging broken code

It's been a while since I moved all my projects to GitHub. It's convenient to host Git projects, and the collaboration workflow is smooth.

I love pull requests for merging code. I review them, I send them, I merge them. The fact that you can plug them into a continuous integration system is great and makes sure that you don't merge code that will break your software. I usually have Travis CI set up, running my unit tests and code-style checks.

The problem with the GitHub workflow is that it allows merging untested code.


Yes, it does. If you think that your pull requests, all decorated in green, are ready to be merged, you're wrong.

This might not be as good as you think

You see, pull requests on GitHub are marked as valid as soon as the continuous integration system passes and indicates that everything is fine. However, if the target branch (say, master) is updated while the pull request is open, nothing forces that pull request to be retested against the new master branch. You may think the code in the pull request still works, while that might have become false.

Master moved; the pull request is not up to date, though it's still marked as passing integration.

So it might be that what went into your master branch now breaks this not-yet-merged pull request, and you have no clue. You'll trust GitHub, press that green merge button, and break your software.

If the pull request has not been updated with the latest version of its target branch, it might break your integration.
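To make the failure mode concrete, here is a toy Python illustration (entirely made up for this note, not GitHub's actual mechanics): a change on master and a pull request that each pass their own tests, yet whose combination is broken, with no textual merge conflict anywhere.

```python
# Toy illustration of a "semantic conflict": both sides are green on their
# own branch, but the merged result fails.

def master_version(data):
    # master was updated after the PR branched: results are now dicts
    return {"items": data}

def pr_version(result):
    # the PR was written and CI-tested when results were still plain lists
    return len(result) == 2

# CI tested the PR against the master it branched from: green.
old_master_result = ["a", "b"]
assert pr_version(old_master_result)

# After merging, the PR's code sees the new master's output: broken.
new_master_result = master_version(["a", "b"])
assert not pr_version(new_master_result)  # len of a one-key dict is 1, not 2
print("PR was green, merged result is broken")
```

No line of either change touches the other, so Git merges them silently; only rerunning the tests on the merged state reveals the breakage.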

The good news is that this is solvable with the strict workflow that Mergify provides. There's a nice explanation and example in Mergify's blog post You are merging untested code that I advise you to read. What Mergify provides here is a way to serialize the merge of pull requests while making sure that they are always updated with the latest version of their target branch. It makes sure that there's no way to merge broken code.
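The idea behind that strict workflow can be sketched in a few lines of Python. This is my own pseudo-implementation with invented helper callbacks (`update`, `run_ci`, `merge`), not Mergify's API or configuration:

```python
from collections import deque

def strict_merge_queue(pull_requests, update, run_ci, merge):
    """Serialize merges: bring each PR up to date with the target branch,
    retest it against that fresh state, and only then merge it."""
    queue = deque(pull_requests)
    merged = []
    while queue:
        pr = queue.popleft()
        update(pr)       # refresh the PR with the latest target branch
        if run_ci(pr):   # retest against what the target branch looks like NOW
            merge(pr)
            merged.append(pr)
    return merged

# Toy scenario: master exposes an API version; one PR bumps it, another PR
# was written (and originally tested green) against the old version.
master = {"api_version": 1}
prs = [
    {"name": "bump-api", "needs_api": 1, "sets_api": 2},
    {"name": "old-client", "needs_api": 1},
]

def update(pr):
    pr["seen_api"] = master["api_version"]

def run_ci(pr):
    return pr["needs_api"] == pr["seen_api"]

def merge(pr):
    master["api_version"] = pr.get("sets_api", master["api_version"])

merged = strict_merge_queue(prs, update, run_ci, merge)
print([pr["name"] for pr in merged])  # prints ['bump-api']
```

Because "old-client" is retested only after "bump-api" has landed, its stale green status is caught before the merge button ever gets pressed.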

That's a workflow I've now adopted and automated on all my repositories, and we've been using such a workflow for Gnocchi for more than a year, with great success. Once you start using it, it becomes impossible to go back!

Don MartiNudgestock 2018 transcript

(This is a cleaned-up and lightly edited version of my talk from Nudgestock 2018.)

First I have to give everybody a disclaimer. This is 100% off message. I work for Mozilla. I am NOT speaking for Mozilla here.

If you follow Rory, you have probably heard a lot about signaling in advertising, so I'm going to go over this material pretty quickly. Why does Homo economicus read magazine advertising but hang up on cold calls? To put it another way, why is every car commercial the same? You could shoot the "car driving down the windy road" commercial with any car. All that the car commercial tells you is: if it was a waste of your time to test drive our car, then it would have been a waste of our money to make this little movie about it.

There's a whole literature of economics and math about signaling involving deceptive senders and honest senders. With this paper, Gardete and Bart show that when the sender wants to really get a message across, counter-intuitively the best thing for the sender to do is deprive themselves of some information about the receiver. If you're in the audience and you know what the sender knows about you, then you can't tell are they honestly expressing their intentions in the market, or are they just telling you what you want to hear? Anyone who used to read Computer Shopper magazine for the ads didn't read for specific information about all the parts that you might put into your computer. You read it to find out which manufacturers are adopting which standards so you don't buy a motherboard that won't support the video card that you might want to upgrade to next year.

There are three sets of papers in the signaling literature. There are papers with pure math, where you devise a kind of mathematical game of buyers and sellers and see how that game works out. There are papers that take users into an experimental setting: Ambler and Hollier took 540 users and showed them expensive-looking and cheap-looking versions of advertising that conveyed the same information. Finally, there's the kind of research that looks at spending across different product categories; one such study found that a product category's ratio of advertising to sales really depends on how much extra user experience it takes to evaluate that type of product.

The feedback loop here is that when brands have signaling power, then that means market power for the publishers that carry their advertising, which means advertising rates tend to go up, which means the publishers can afford to make obviously expensive content. And when you attach advertising to obviously expensive content, that means more signaling power. It's kind of a loop that builds more and more value for the advertiser.

Some people compare this to the signaling a bank does when it builds a monstrous stone building to keep your money. For what a bank actually does, a stone building doesn't keep money any better than a metal or a concrete building would; it just shows that the bank has put its name on a big, costly stone building, so if it turned out to be deceptive, failure would be more costly for it. That's the pure signaling model. But the other thing we can see when we compare this kind of classic signal-carrying advertising to online advertising, the kind of ads targeted to you based on who you are, is: what's up with the norms enforcers?

Rory has his blue checkmark on Twitter which means he doesn't see Twitter ads. I'm less Internet Famous, so I still get the advertising on Twitter. A lot of the ads that I get are deceptive issue ads. This is one. A company that's getting sued for lead paint related issues is trying to convince residents of California that government inspectors are coming to their houses to declare them a nuisance. This is bogus and it's the kind of thing that if it appeared in the newspaper that everyone got to see then journalists and public interest lawyers, and everyone else who enforces the norms on how we communicate, would call it out. But in a targeted ad medium this kind of deceptive advertising can target me directly.

So let me show a little simulation here. What we're looking at is deceptive and honest sellers making sales. When a deceptive seller makes a sale, that's a red line; when an honest seller makes a sale, that's a green line. The little blue squares are norms enforcers, and the only thing that makes a norms enforcer different from a regular customer in this game is that when a deceptive seller contacts a norms enforcer, the deceptive seller pays a penalty higher than the profit they would have made from a sale. So with honest sellers and deceptive sellers evolving and competing in this primordial soup of customers, what ends up happening to the deceptive sellers that try to do a broad reach and hit a bunch of different customers is, well, you saw it: they hit the norms enforcers, the blue squares lit up. Advertisers who are deceptive and try to reach a bunch of different people end up getting squeezed out in this version of the game. An honest advertiser, like this little square down here, can reach over the whole board because they don't pay the penalty for reaching the norms enforcers.
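The talk doesn't give the exact rules of the game, so here is a rough Python sketch of the effect described. All parameters (population size, profits, penalties, reach) are my own guesses, not the talk's; the point is only the asymmetry between honest and deceptive sellers.

```python
import random

# Sketch of the game described above: a deceptive seller pays a penalty
# whenever its ad reaches a norms enforcer; an honest seller never does.

def payoff(reach, deceptive, n_customers=100, n_enforcers=10,
           sale_profit=1.0, enforcer_penalty=5.0, rng=random):
    """Total payoff for one seller contacting `reach` distinct random people."""
    population = ["enforcer"] * n_enforcers + \
                 ["customer"] * (n_customers - n_enforcers)
    total = 0.0
    for contact in rng.sample(population, reach):
        if deceptive and contact == "enforcer":
            total -= enforcer_penalty  # the enforcer calls out the deceptive ad
        else:
            total += sale_profit       # a sale
    return total

rng = random.Random(42)
trials = 200
honest = sum(payoff(80, deceptive=False, rng=rng) for _ in range(trials)) / trials
deceptive = sum(payoff(80, deceptive=True, rng=rng) for _ in range(trials)) / trials
print(f"honest seller, broad reach:    {honest:.1f}")
print(f"deceptive seller, broad reach: {deceptive:.1f}")
# At reach 80, a deceptive seller hits about 80 * 10/100 = 8 enforcers on
# average, so its expected payoff is roughly 72 - 8*5 = 32, far below the
# honest seller's 80: broad reach only pays if you're honest.
```

With these made-up numbers, broad reach is strictly profitable for honest sellers, while deceptive sellers do better the more narrowly they can target around the enforcers — which is the worry about targeted ad media.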

So what does this really mean for the real web? On the World Wide Web, have we inadvertently built a game that gives an unfair advantage to deceptive sellers? If somebody can take advantage of all the the user profiling information that's available out there, and say, "oh I believe that these people are rural, low-income, unlikely to be finance journalists, therefore I'm going to hit them with the predatory finance ads," does that cause users to pay less attention to the medium?

Online advertising effectiveness has declined since the launch of the first banner advertisement in 1994. That's certainly not news. This is a slide that appeared in Mary Meeker's famous Internet Trends presentation, and as you can see, blue is the percentage of ad spending and grey is the percentage of people's time. So TV is 36% of the time and 36% of the money. Desktop web: 18% and 20%, about right.

What's going on with print? Print is 9% of the money for 4% of the time. Now you might say this is just inertia, that this year people are finally cutting back on print ad spending because people spend less time on print, and it'll eventually catch up. But I went back and plotted the same slide from the same presentation going back to 2011, with time plotted across the bottom and money plotted on the y-axis, and what do we see about print? Print is on a whole different trend line: much more value to the advertiser per unit of time spent than the other ad media. My hypothesis is that targeting breaks signaling, and this means an opportunity.

Targeting means that when you see an ad coming in targeted to you it's more like a cold call. It doesn't carry credible information about the seller's intention in the market.

From the point of view of who has an incentive to support signal-carrying ad media instead, the people who have an interest in that signal-for-attention bargain, in that positive feedback loop, are of course the publishers; high-reputation brands that want to be able to send that signal; writers, photographers, and editors who get paid by those publishers; and people who benefit from the positive externalities of signal-carrying ads that support news and cultural works.

So if the signaling model is such a big thing then why are there so many targeted ads still out there?


Let's have a look at, just to pick an example, the Facebook advertising policy. As you know, the Facebook advertising platform will let you micro target individuals extremely specifically. You can pick out seven people in Florida, you can pick out everyone who's looking for an apartment who doesn't have a certain ethnic affinity, that kind of thing. But the one thing you're not allowed to do with Facebook targeting is put anything in your ad that might indicate how you're targeting it. The policy says:

ads must not contain content that asserts or implies personal attributes

You can't say, I know you're male or female, I know your sexual orientation, I know what you do for a living. The ad copy has to be generic even if the targeting can be extremely specific. You can't even say "other": you can't say "meet other singles," because that implies the advertiser knows the reader is single. Facebook will let you target people with depression, but you can't reveal that you know that about them. Another good example is Target. They target individuals who they believe to be pregnant, but they'll pad out those ads for baby stuff with ads for other types of products so as not to creep everybody out.

Back to our shared interest in the signal-for-attention bargain. Pretty much everybody has an interest in that original positive feedback loop: higher reputation for brands, and reputation-driven publishers that will build high-quality content for us. Writers and photographers have an interest in getting paid, and people who are shopping for goods are the ones who want the signal the most. All that stands on the opposite side are behavioral tricks to conceal targeting. Now, I'm not going to treat this as a privacy issue. I know that there are privacy issues here, but that is really not my department. Besides, Facebook just announced a dating site, so they're going to breed privacy preferences out of their user base anyway.

Can the web as an advertising medium be redesigned to make it work better for carrying signal? We know from the existence of print that this type of signal carrying ad medium can exist. Print is an existence proof of signal carrying advertising. We also know that building that kind of an ad medium can't be that hard because print was built when people were breathing fumes from molten lead all day.

The prize for building a signal-carrying ad medium is all the cultural works that you get when somebody like Kurt Vonnegut can quit his job as manager of a car dealership and write for Collier's magazine full-time. The book collecting the resulting stories is still on sale. And of course local news. Democracy depends on the vital flow of information of public interest. Some people say that the problem with news and information on the web is that it's all been made free, and if people would just subscribe we could fix the system. But honestly, if free was the problem, then Walter Cronkite would have destroyed the media business in 1962. It's a market design problem and a signaling problem, not just a problem of who has to pay for what.

And the web browsers got a bunch of things wrong in the 1990s. There are certain patterns of information flow that the browser facilitated, like third-party tracking, where browsers enable some companies to follow your activity from site to site, and data leakage. Things that just don't work according to the way that people expect. Most people don't want their activity on one site to follow them over to another site, and the original batch of web browsers got that terribly wrong. The good news is web browsers are getting it right, and web browsers are under tremendous pressure now to do so. As a product the web browser is pretty much complete, working, and generic. The whole point of a web browser is that it shows web sites the same as all the other web browsers do, so there's less and less reason for a user to want to switch web browsers. But everybody who is trying to get you to install a web browser needs for there to be a reason, so the opportunity for browsers is to align with those interests of users that the browser wasn't able to pick up on previously.

At Mozilla some user researchers recently did a study comparing users with no ad blocker installed and users within the first few weeks of installing an ad blocker. Anybody want to guess at the increase in engagement? How much more time those ad blocker users spend with that same browser than the non ad blocker users? Anybody shout out a number. All right, 28%. From the point of view of the browser those kinds of numbers, moving user engagement in a way that helps that browser meet its goals, that's something that the browser can't ignore. So that means we're going from the old web game, where everyone tries to win by collecting as much data on people as they can without their permission, to a new game in which the browser, high reputation publishers, and high reputation brands are all aligned in trying to build enough trust to work on information that users choose to share.

I know when I say information that users choose to share you're going to think about all these GDPR dialogs, and I've seen these too, and there are just tons of companies on these. To be honest, looking at some of these company names it looks like most of them were made up by guys from Florida who communicate primarily by finger guns. Users should not have to micromanage their consent for all this data collection activity any more than email users should have to go in and read their SMTP headers to filter spam. And really if you think about what brands are, it's offloading information about a product buying decision onto the reputation coprocessor in the user's brain. It's kind of like taking a computational task and instead of running it on the CPU in your data center, where you have to pay the power and cooling bills for it, you offload it and run it on the GPU on the client. It'll run faster, it'll run better, and the audience is maintaining that reputation state.

The future is here, it's just not very evenly distributed, as William Gibson said. This picture is the cyberpunk of the 1990s. Today all of that stuff he's carrying, his video camera, his laptop, his scanner, all that stuff's on a phone and everybody has it.

Today, the privacy sensitive users, the ones who are already working based on sharing data with permission, they're out there. But they're in niches. If you have a relationship with those people, then now is an opportunity to connect with them, figure out how to build that signal carrying advertising game, and create a reputation based advertising model for the web. Thank you very much.

CryptogramTraffic Analysis of the LTE Mobile Standard

Interesting research in using traffic analysis to learn things about encrypted traffic. It's hard to know how critical these vulnerabilities are. They're very hard to close without wasting a huge amount of bandwidth.

The active attacks are more interesting.

EDITED TO ADD (7/3): More information.

I have been thinking about this, and now believe the attacks are more serious than I previously wrote.

Planet DebianAthos Ribeiro: Towards Debian Unstable builds on Debian Stable OBS

This is the sixth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian Unstable builds on OBS

Lately, I have been working towards triggering Debian Unstable builds with Debian OBS packages. As reported before, we can already build packages for both Debian 8 and 9 based on the example project configurations shipped with the package in Debian Stable and with the project configuration files publicly available on the OBS SUSE instance.

While trying to build packages against Debian Unstable I have been hitting the following issue:

The OBS scheduler reads the project configuration and starts downloading dependencies. The dependencies get downloaded, but the build is never dispatched (the package stays in a “blocked” state). The downloaded dependencies then get cleaned up and the scheduler starts the downloads again. OBS enters an infinite loop there.

This only happens for builds on sid (unstable) and buster (testing).

We realized that the OBS version packaged in Debian 9 (the one we are currently using) does not support Debian source packages built with dpkg >= 1.19. At first I started applying this patch on the OBS Debian package, but after reporting the issue to the Debian OBS maintainers, they pointed me to the obs-build package in the Debian stable backports repositories, which included the mentioned patch.

While the backports package included the patch needed to support source packages built with newer versions of dpkg, we still get the same issue with unstable and testing builds: the scheduler downloads the dependencies, hangs for a while, but the build is never dispatched (the package stays in a “blocked” state). After a while, the dependencies get cleaned up and the scheduler starts the downloads again.

The bug has been quite hard to debug since the OBS logs do not provide feedback on the problem we have been facing. To debug the problem, we tried to trigger local builds with osc. First, I (successfully) triggered a few local builds against Debian 8 and 9 to make sure the command would work. Then we proceeded to trigger builds against Debian Unstable.

The first issue we faced was that the osc package in Debian stable cannot handle builds against source packages built with new dpkg versions. We fixed that by patching osc/util/ (we just substituted the file with the latest file in the osc development version). After applying the patch, we got the same results we’d get when trying to build the package remotely, but with debug flags on, we could get a better understanding of the problem:

BuildService API error: CPIO archive is incomplete (see .errors file)

The .errors file would just contain a list of dependencies which were missing in the CPIO archive.

If we kept retrying, OBS would keep caching more and more dependencies, until the build succeeded at some point.

We now know that the issue lies with the Download on Demand feature.

We then tried a local build in a fresh OBS instance (no cached packages) using the --disable-cpio-bulk-download osc build option, which makes OBS download each dependency individually instead of doing so in bulk. To our surprise, the builds succeeded on our first attempt.

Finally, we traced the issue all the way down to the OBS API call which is triggered when OBS needs to download missing dependencies. For some reason, the number of parameters (the number of dependencies to be downloaded) affects the final response of the API call. When trying to download too many packages, the CPIO archive is not built correctly and OBS builds fail.

At the moment, we are still investigating why such calls fail with too many parameters and why they only fail for Debian Testing and Unstable repositories.

Next steps (A TODO list to keep on the radar)

  • Fix OBS builds on Debian Testing and Unstable
  • Write a patch for Debian’s osc so it can build Debian packages with control.tar.xz
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)


Cory DoctorowMark Zuckerberg and his empire of oily rags

Surveillance capitalism sucks: it improves the scattershot, low-performance success-rate of untargeted advertising (well below 1 percent) and doubles or triples it (to well below 1 percent!).

But surveillance capitalism is still dangerous: all those dossiers on the personal lives of whole populations can be used for blackmail, identity theft and political manipulation. As I explain in my new Locus column, Cory Doctorow: Zuck’s Empire of Oily Rags, Facebook’s secret is that they’ve found a way to turn a profit on an incredibly low-yield resource — like figuring out how to make low-grade crude out of the oil left over from oily rags.

But because the margins on surveillance data are so poor, the business is only sustainable if it fails to take the kinds of prudent precautions that would make it safe to warehouse these unimaginably gigantic piles of oily rags.

It’s as though Mark Zuckerberg woke up one morning and realized that the oily rags he’d been accumulating in his garage could be refined for an extremely low-grade, low-value crude oil. No one would pay very much for this oil, but there were a lot of oily rags, and provided no one asked him to pay for the inevitable horrific fires that would result from filling the world’s garages with oily rags, he could turn a tidy profit.

A decade later, everything is on fire and we’re trying to tell Zuck and his friends that they’re going to need to pay for the damage and install the kinds of fire-suppression gear that anyone storing oily rags should have invested in from the beginning, and the commercial surveillance industry is absolutely unwilling to contemplate anything of the sort.

That’s because dossiers on billions of people hold the power to wreak almost unimaginable harm, and yet, each dossier brings in just a few dollars a year. For commercial surveillance to be cost effective, it has to socialize all the risks associated with mass surveillance and privatize all the gains.

There’s an old-fashioned word for this: corruption. In corrupt systems, a few bad actors cost everyone else billions in order to bring in millions – the savings a factory can realize from dumping pollution in the water supply are much smaller than the costs we all bear from being poisoned by effluent. But the costs are widely diffused while the gains are tightly concentrated, so the beneficiaries of corruption can always outspend their victims to stay clear.

Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters.

Cory Doctorow: Zuck’s Empire of Oily Rags [Cory Doctorow/Locus]

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #166

Here’s what happened in the Reproducible Builds effort between Sunday June 24 and Saturday June 30 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope versions 97 and 98 were uploaded to Debian unstable by Chris Lamb. They included contributions already covered in previous weeks as well as new ones from:

Chris Lamb also updated the SSL certificate for


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianSune Vuorela: 80bit x87 FPU

Once again, I got surprised by the 80 bit x87 FPU stuff.

First time was around a decade ago. Back then, it was something along the lines of a sort function like:

struct ValueSorter {
    bool operator()(const Value& first, const Value& second) const {
        double valueFirst = first.amount() * first.value();
        double valueSecond = second.amount() * second.value();
        return valueFirst < valueSecond;
    }
};

With some values, first would be smaller than second, and second smaller than first. All depending on which one got truncated to 64 bits, and which one came directly from the 80 bit FPU.

This time, the 80 bit version, when cast to integers, was 1 smaller than the 64 bit version.
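The underlying mechanism is double rounding: a product is computed at one precision but compared (or cast) at another. A minimal sketch of the effect, using Python (which has no x87 extended precision; exact rationals via Fraction stand in for the wider 80-bit intermediate):

```python
from fractions import Fraction

x = 1 / 3                        # nearest 64-bit double to 1/3
product_64 = 3 * x               # rounded back to 64 bits: ties-to-even gives exactly 1.0
product_wide = 3 * Fraction(x)   # exact product, standing in for the wider intermediate

# The 64-bit result rounds up to 1.0, but the wider value is strictly
# below 1 -- so a comparison like "product < 1" (or a cast to int)
# answers differently depending on which precision you happen to get.
assert product_64 == 1.0
assert product_wide < 1
```

The same flip is what breaks the strict weak ordering in the sort comparator above: whether each side's product was spilled to 64 bits or kept in a register decides the outcome.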

Oh. The joys of x86 CPUs.

TEDCuring cancer one nanoparticle at a time, and more news from TED speakers

As usual, the TED community is hard at work — here are some highlights:

A new drug-delivering nanoparticle. Paula Hammond, the head of the Department of Chemical Engineering at MIT, is part of a research team that has developed a new nanoparticle designed to treat a kind of brain tumor called glioblastoma multiforme. The nanoparticles deliver drugs to the brain that work in two ways — to destroy the DNA of tumor cells, and to impede the reparation of those cells. The researchers were able to shrink tumors and stop them from growing back in mice — and there’s hope this technology can be used for human applications in the future. (Watch Hammond’s TED Talk).

Reflections on grief, loss and love. Amy Krouse Rosenthal penned a poignant, humorous and heart-rending love letter to her husband — published in The New York Times ten days before her death — that resonated deeply with readers across the world. In the year since, Jason Rosenthal established a foundation in her name to fund ovarian cancer research and childhood literacy initiatives. Following the anniversary of Amy’s death, Rosenthal responded to her letter in a moving reflection on mourning and the gifts of generosity she left in her wake. “We did our best to live in the moment until we had no more moments left,” he wrote for The New York Times. “Amy continues to open doors for me, to affect my choices, to send me off into the world to make the most of it. Recently I gave a TED Talk on the end of life and my grieving process that I hope will help others.” (Watch Rosenthal’s TED Talk.)

Why we need to change our perceptions of teenagers. Cognitive neuroscientist Sarah-Jayne Blakemore urges us to reconsider the way we understand and treat teenagers, especially in school settings. (She wrote a book about the secret life of the teenage brain in March.) According to the latest research, teenagers shed 17% of their grey matter in the prefrontal cortex between childhood and adulthood, which, as Blakemore says, means that traditionally “bad” behaviors like sleeping in late and moodiness are a result of cognitive changes, not laziness or abrasiveness. (Watch Blakemore’s TED Talk.)

Half empty or half full? Research by Dan Gilbert indicates that our decisions may be more faulty than we think — and that we may be predisposed to seeing problems even when they aren’t there. In a recent paper Gilbert co-authored, researchers found that our judgment doesn’t follow fixed rules, but rather, our decisions are more relative. In one experiment, participants were asked to look at dots along a color spectrum from blue to purple, and note which dots were blue; at first, the dots were shown in equal measure, but when blue dots were shown less frequently, participants began marking dots they previously considered purple as blue (this video does a good job explaining). In another experiment, participants were more likely to mark ethical papers as unethical, and nonthreatening faces as threatening, when the previously-set negative stimulus was shown less frequently. This behavior — dubbed “prevalence-induced concept change” — has broad implications; the paper suggests it may explain why social problems never seem to go away, regardless of how much work we do to fix them. (Watch Gilbert’s TED Talk.)

Terrifying insights from the world of parasites. Ed Yong likes to write about the creepy and uncanny of the natural world. In his latest piece for The Atlantic, Yong offered a deeper view into the bizarre habits and powers of parasitic worms. Based on research by Nicolle Demandt and Benedikt Saus from the University of Munster, Yong described how some tapeworms capitalize on the way fish shoals guide and react to each other’s behaviors and movements. Studying stickleback fish, Demandt and Saus realized parasite-informed decisions of infected sticklebacks can influence the behavior of uninfected fish, too. This means that if enough infected fish are led to dangerous situations by the controlling powers of the tapeworms, uninfected fish will be impacted by those decisions — without ever being infected themselves. (Read more of Yong’s work and watch his TED Talk.)

A new documentary on corruption within West African football. Ghanaian investigative journalist Anas Aremeyaw Anas joined forces with BBC Africa to produce an illuminating and hard-hitting documentary exposing fraud and corruption in West Africa’s football industry. In an investigation spanning two years, almost 100 officials were recorded accepting cash “gifts” from a slew of undercover reporters from Anas’ team posing as business people and investors. The documentary has already sent shock-waves throughout Ghana — including FIFA bans and resignations from football officials across the country. (Watch the full documentary and Anas’ TED Talk.)


Worse Than FailureCodeSOD: An Eventful Career Continues

You may remember Sandra from her rather inglorious start at Initrovent. She didn't intend to continue working for Karl for very long, but she also didn't run out the door screaming. Perhaps she should have, but if she had, we wouldn't have this code.

Initrovent was an event-planning company, and thus needed to manage events, shows, and spaces. They wrote their own exotic suite of software to manage that task.

This code predates their current source control system, and thus it lacks any way to blame the person responsible. Karl, however, was happy to point out that he used to do Sandra's job, and he knew a thing or two about programming. "My fingerprints are on pretty much every line of code," he was proud to say.

if($showType == 'unassigned' || $showType == 'unassigned' || $showType == 'new') { ... }

For a taster, here's one that just leaves me puzzling. Were it a long list, I could more easily see how the same value might appear multiple times. A thirty line conditional would be more of a WTF, but I can at least understand it. There are only three options, two of them are duplicates, and they're right next to each other.

What if you wanted to conditionally enable debugging messages? Try this approach on for size.

foreach($current_open as $key => $value) {
    if ($value['HostOrganization']['ticket_reference'] == '400220') {
        //debug($value);
    }
}

What a lovely use of magic numbers. I also like the mix of PascalCase and snake_case keys. But see, if there's any unfilled reservation for a ticket reference number of 400220, we'll print out a debugging message… if the debug statement isn't commented out, anyway.

With that in mind, let's think about a real-world problem. For a certain set of events, you don't want to send emails to the client. The planner wants to send those emails manually. Who knows why? It doesn't matter. This would be a trivial task, yes? Simply chuck a flag on the database table- manual_emails and add a code branch. You could do that, yes, but remember how we controlled the printing of debugging messages before. You know how they actually did this:

$hackSkipEventIds = array('55084514-0864-46b6-95aa-6748525ee4db');
if (in_array($eventId, $hackSkipEventIds)) {
    // Before we implement #<redacted>, we prefer to skip all roommate
    // notifications in certain events, and just let the planner send
    // manual emails.
    return;
}

Look how extensible this solution is: if you ever need to disable emails for more events, you can just extend this array. There's no need to add a UI or anything!
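For contrast, the flag-based approach is barely more work. A rough sketch (in Python rather than the site's PHP, with a hypothetical manual_emails column, since the real schema isn't shown):

```python
# Sketch: consult a per-event manual_emails flag (hypothetical boolean
# column) instead of hardcoding event IDs in the source.
def should_send_email(event):
    """event is a row from the events table, here represented as a dict."""
    return not event.get("manual_emails", False)

# Planner wants to send emails by hand for this event: no automatic send.
assert should_send_email({"id": "55084514", "manual_emails": True}) is False
# Default behaviour for every other event.
assert should_send_email({"id": "12345678"}) is True
```

The planner flips the flag in the database; nobody edits and redeploys source code per event.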


Planet DebianSteve Kemp: Another golang port, this time a toy virtual machine.

I don't understand why my toy virtual machine has as much interest as it does. It is a simple project that compiles "assembly language" into a series of bytecodes, and then allows them to be executed.

Since I recently started messing around with interpreters more generally I figured I should revisit it. Initially I rewrote the part that interprets the bytecodes in golang, which was simple, but now I've rewritten the compiler too.

Much like the previous work with interpreters this uses a lexer and an evaluator to handle things properly - in the original implementation the compiler used a series of regular expressions to parse the input files. Oops.

Anyway the end result is that I can compile a source file to bytecodes, execute bytecodes, or do both at once:

I made a couple of minor tweaks in the port, because I wanted extra facilities. Rather than implement an opcode "STRING_LENGTH" I copied the idea of traps - so a program can call back to the interpreter to run some operations:

int 0x00  -> Set register 0 with the length of the string in register 0.

int 0x01  -> Set register 0 with the contents of reading a string from the user
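The appeal of traps is that new operations live in the interpreter rather than in the instruction set. A minimal sketch of the idea in Python (the post's implementation is in golang; the names here are made up):

```python
# Sketch of the trap mechanism: "int N" looks up a handler in the
# interpreter's trap table instead of requiring a dedicated opcode.
def trap_strlen(vm):
    # int 0x00: set register 0 to the length of the string in register 0
    vm.registers[0] = len(vm.registers[0])

class VM:
    def __init__(self):
        self.registers = [0] * 16
        self.traps = {0x00: trap_strlen}  # trap number -> handler

    def interrupt(self, n):
        # executing the bytecode "int n" calls back into the interpreter here
        self.traps[n](self)

vm = VM()
vm.registers[0] = "hello"
vm.interrupt(0x00)
assert vm.registers[0] == 5
```

Adding a new complex operation is then a one-line entry in the trap table, with the logic written in the host language.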


This notion of traps should allow complex operations to be implemented easily, in golang. I don't think I have the patience to do too much more, but it stands as a simple example of a "compiler" or an interpreter.

I think this program is the longest I've written. Remember how verbose assembly language is?

Otherwise: Helsinki Pride happened, Oiva learned to say his name (maybe?), and I finished reading all the James Bond novels (which were very different to the films, and have aged badly on the whole).

Planet DebianSteinar H. Gunderson: Modern OpenGL

New project, new version of OpenGL—4.5 will be my hard minimum this time. Sorry, macOS, you brought this on yourself.

First impressions: Direct state access makes things a bit less soul-sucking. Immutable textures are not really a problem when you design for them to begin with, as opposed to retrofitting them. But you still need ~150 lines of code to compile a shader and render a fullscreen quad to another texture. :-/ VAOs, you are not my friend.

Next time, maybe Vulkan? Except the amount of stuff to get that first quad on screen seems even worse there.

Don MartiWorse is better, again?

Are there parallels between the rise of Worse Is Better in software and the success of the "uncreative counterrevolution" in advertising? (for more on that second one: John Hegarty: Creativity is receding from marketing and data is to blame) The winning strategy in software is to sacrifice consistency and correctness for simplicity. (probably because of network effects, principal-agent problems, and market failures.) And it seems like advertising has similar trade-offs between

  • Signal

  • Measurability (How well can we measure this project's effect on sales?)

  • Message (Is it persuasive and on brand?)

Just as it's rational for software decision-makers to choose simplicity, it can be rational for marketing decision-makers to choose measurability over signal and message. (This is probably why there is a brand crisis going on—short-term CMOs are better off when they choose brand-unsafe tactics, sacrificing Message.)

As we're now figuring out how to use market-based tools to fix market failures in software, where can we use better market design to fix market failures in advertising? Maybe this is where it actually makes sense to use #blockchain: give people whose decisions can affect #brandEquity some kind of #skinInTheGame?

Against privacy defeatism: why browsers can still stop fingerprinting

How to get away with financial fraud

Google invests $22M in feature phone operating system KaiOS

Inside the investor revolt that’s trying to take down Mark Zuckerberg

Ryan Wallman: Marketers must loosen their grip on the creative process

Open source sustainability

K2’s Media Transparency Report Still Rocks The Ad Industry Two Years After Its Release

Mark Ritson: How ‘influencers’ made my arse a work of art

Ad fraud one of the most profitable criminal enterprises in the world, researcher says

Cover story: Adtech won’t fix ad fraud because it is too lucrative, say specialists

Sir John Hegarty: Great advertising elevates brands to a part of culture …


Planet DebianDirk Eddelbuettel: nanotime 0.2.1

A new minor version of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it now uses a more rigorous S4-based approach thanks to a rewrite by Leonardo Silvestri.

This release brings three different enhancements / fixes that robustify usage. No new features were added.

Changes in version 0.2.1 (2018-07-01)

  • Added attribute-preserving comparison (Leonardo in #33).

  • Added two integer64 casts in constructors (Dirk in #36).

  • Added two checks for empty arguments (Dirk in #37).

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Valerie AuroraBryan Cantrill has been accused of verbal abuse by at least seven people

It sounds like Bryan Cantrill is thinking about organizing another computer conference. When he did that in 2016, I wrote a blog post about why I wouldn’t attend, because, based on my experience as Bryan’s former co-worker, I believed that Bryan Cantrill would probably say cruel and humiliating things to people who attended.

I understand that some people still supported Bryan and his conference after they read that post. After all, Bryan is so intelligent and funny and accomplished, and it’s a “he said, she said” situation, and if you can’t take the heat get out of the kitchen, etc. etc.

What’s changed since then? Well, at least six other people spoke up publicly about their own experiences with Bryan, many of which seem worse than mine. Then #metoo happened and we learned how many people a powerful person can abuse before any of their victims speak up, and why they stay quiet: worry about their careers being destroyed, being bankrupted by a lawsuit, or being called a liar and worse. If you’re still supporting Bryan, I invite you to read this story about Jeffrey Tambor verbally abusing Jessica Walter on the set of Arrested Development, and re-examine why you are supporting someone who has been verbally abusive to so many people.

Here are six short quotes from other people speaking about their experiences with Bryan Cantrill:

“Having been a Joyent ‘customer’ and working to porting an application to run on SmartOS was like being a personal punching bag for Bryan.”

“I worked at Joyent from 2010 through 2013. Valerie’s experience comports with my own. This warning is brave and wise.”

“All that you say is true, and if anything, toned down from reality. Bryan is a truly horrible human being.”

“I know for sure Bryan’s behavior prevented or at the very least delayed other developers from reaching their potential in the kernel group. Unfortunately the lack of moral and ethical leadership in Solaris allowed this to go on for far too long.”

“Sun was such a toxic environment for so many people and it is very brave of you to share your experience. After six years in this oppressive environment, my confidence was all but destroyed.”

“Having known Bryan from the days of being a junior engineer…he has always been a narcissistic f_ck that proudly leaves a wake of destruction rising up on the carcasses of his perceived foes (real and imagined). His brilliance comes at too high of a cost.”

This is what six people are willing to say publicly about how Bryan treated them. If you think that isn’t a lot, please take the time to read more about #metoo and consider how Bryan’s position of power would discourage people from coming forward with their stories of verbal abuse. If you do believe that Bryan has abused these people, consider what message you are sending to others by continuing to follow him on social media or otherwise validating his behavior.

If you have been abused by Bryan, I have a request: please do not contact me to tell me your story privately, unless you want help making your story public in some way. I’m exhausted and it doesn’t do any good to tell me—I’m already convinced he’s awful. Here’s what I can say: There are dozens of you, and you have remarkably similar stories.

I’ll be heavily moderating comments on this post and in particular won’t approve anything criticizing victims of abuse for speaking up. If your comment gets stuck in the spam filter, please email me at and I’ll post it for you.

Planet DebianJunichi Uekawa: It's been 10 years since I changed Company and Job.

It's been 10 years since I changed Company and Job. If you ask me now I think it was a successful move but not without issues. I think it's a high risk move to change company and job and location at the same time, you should change one of them. I changed job and company and marital status at the same time, that was too high risk.

Planet DebianPaul Wise: FLOSS Activities June 2018





  • fossjobs: merge pull requests
  • Debian: LDAP support request
  • Debian mentors: fix disk space issue
  • Debian wiki: clean up temp files, whitelist domains, whitelist email addresses, unblacklist IP addresses, disable accounts with bouncing email



The apt-cacher-ng bugs, leptonlib backport and git-repair feature request were sponsored by my employer. All other work was done on a volunteer basis.


Planet DebianElana Hashman: Report on the Debian Bug Squashing Party

Last weekend, six folks (one new contributor and five existing contributors) attended the bug squashing party I hosted in Brooklyn. We got a lot done, and attendees demanded that we hold another one soon!

So when's the next one?

We agreed that we'd like to see the NYC Debian User Group hold two more BSPs in the next year: one in October or November of 2018, and another after the Buster freeze in early 2019, to squash RC bugs. Stay tuned for more details; you may want to join the NYC Debian User Group mailing list.

If you know of an organization that would be willing to sponsor the next NYC BSP (with space, food, and/or dollars), or you're willing to host the next BSP, please reach out.

What did folks work on?

We had a list of bugs we collected on an etherpad, which I have now mirrored to gobby ( Attendees updated the etherpad with their comments and progress. Here's what each of the participants reported.

Elana (me)

  • I filed bugs against two upstream repos (pomegranate, leiningen) to update dependencies, which are breaking the Debian builds due to the libwagon upgrade.
  • I uploaded new versions of clojure and clojure1.8 to fix a bug in how alternatives were managed: despite /usr/share/java/clojure.jar being owned by the libclojure${ver}-java binary package, the alternatives group was being managed by the postinst script in the clojure${ver} package, a holdover from when the library was not split from the CLI. Unfortunately, the upgrade is still not passing piuparts in testing; while I thoroughly tested local upgrades and they seemed to work okay, I'm hoping the failures didn't break anyone on upgrade. I'll take a look at further refining the preinst script to address the piuparts failure this week.
  • I fixed the Vcs error on libjava-jdbc-clojure and uploaded new version 0.7.0-2. I also added autopkgtests to the package.
  • I fixed the Vcs warning on clojure-maven-plugin and uploaded new version 1.7.1-2.
  • I helped dkg with setting up and running autopkgtests.


  • was hateful in #901327 (tigervnc)
  • uploaded a fix for #902279 (youtube-dl) to DELAYED/3-day
  • sent a patch for #902318 (monkeysphere)
  • was less hateful in #899060 (monkeysphere)


Editor's note: Lincoln was a new Debian contributor and made lots of progress! Thanks for joining us—we're thrilled with your contributions 🎉 Here's his report.

  • Got my environment setup to work on packages again \o/
  • Played with #901318 for a while but didn't make much progress; it involves a long discussion, so it's a bad fit for a newcomer
  • Was indeed more successful in Python land. Opened the following PRs
  • Now working on uploading the patches to python-pyscss & python-pytest-benchmark packages removing the dependency on python-pathlib.


  • worked on debugging/diagnosing enigmail in preparation for making it DFSG-free again (see #901556)
  • got pointers from Lincoln about understanding flow control in asynchronous javascript for debugging the failing autopkgtest suites
  • got pointers from Elana on emulating the autopkgtest infrastructure so I have a better chance of replicating the failures seen on that platform


  • Moved python-requests-oauthlib to salsa
  • Updated it to 1.0 (new upstream release), pending a couple of final checks.


  • Worked with Lincoln on both bugs
  • Opened #902323 about removing python-pathlib
  • Working on new pymssql upstream release / restoring it to unstable

By the numbers

All in all, we completed 6 uploads, worked on 8 bugs, filed 3 bugs, submitted 3 patches or pull requests, and closed 2 bugs. Go us! Thanks to everyone for contributing to a very productive effort.

See you all next time.

Planet DebianChris Lamb: Free software activities in June 2018

Here is my monthly update covering what I have been doing in the free software world during June 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:


Patches contributed

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 7 hours on its sister Extended LTS project. In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, responding to user questions, etc.
  • A fair amount of initial setup and administration to accommodate the introduction of the new "Extended LTS" initiative, as well as the transition of LTS from supporting Debian wheezy to jessie:
    • Fixing various shared scripts, including adding pushing to the remote repository for ELAs [...] and updating hard-coded wheezy references [...]. I also added instructions on exactly how to use the kernel offered by Extended LTS [...].
    • Updating, expanding and testing my personal scripts and workflow to also work for the new "Extended" initiative.
  • Provided some help on updating the Mercurial packages. [...]
  • Began work on updating/syncing the ca-certificates packages in both LTS and Extended LTS.
  • Issued DLA 1395-1 to fix two remote code execution vulnerabilities in php-horde-image, the image processing library for the Horde groupware tool. The original fix applied upstream has a regression in that it ignores the "force aspect ratio" option, which I have fixed upstream.
  • Issued ELA 9-1 to correct an arbitrary file write vulnerability in the archiver plugin for the Plexus compiler system — a specially-crafted .zip file could overwrite any file on disk, leading to a privilege escalation.
  • During the overlap time between the support of wheezy and jessie I took the opportunity to address a number of vulnerabilities in all suites for the Redis key-value database, including CVE-2018-12326, CVE-2018-11218 & CVE-2018-11219 (via #902410 & #901495).


  • redis:
    • 4.0.9-3 — Make /var/log/redis, etc. owned by the adm group. (#900496)
    • 4.0.10-1 — New upstream security release (#901495). I also backported the packages to stretch via stretch-backports.
    • Proposed 3.2.6-3+deb9u2 for inclusion in the next Debian stable release to address an issue in the systemd .service file. (#901811, #850534 & #880474)
  • lastpass-cli (1.3.1-1) — New upstream release, taking over maintainership and completely overhauling the packaging. (#898940, #858991 & #842875)
  • python-django:
    • 1.11.13-2 — Fix compatibility with Python 3.7. (#902761)
    • 2.1~beta1-1 — New upstream release (to experimental).
  • installation-birthday (11) — Fix an issue in calculating the age of the system by always preferring the oldest mtime we can find. (#901005)
  • bfs (1.2.2-1) — New upstream release.
  • libfiu (0.96-4) — Apply upstream patch to make the build more robust with --as-needed. (#902363)
  • I also sponsored an upload of yaml-mode (0.0.13-1) for Nicholas Steeves.

Debian bugs filed

  • cryptsetup-initramfs: "ERROR: Couldn't find sysfs hierarchy". (#902183)
  • git-buildpackage: Assumes capable UTF-8 locale. (#901586)
  • kitty: Render and ship HTML versions of asciidoc. (#902621)
  • redis: Use the system Lua to avoid an embedded code copy. (#901669)

Planet DebianPetter Reinholdtsen: The world's only stone power plant?

So far, at least hydro-electric power, coal power, wind power, solar power, and wood power are well known. Until a few days ago, I had never heard of stone power. Then I learned about a quarry in a mountain in Bremanger in Norway, where the Bremanger Quarry company is extracting stone and dumping it into a shaft leading to its shipping harbour. The downward movement in this shaft is used to produce electricity. In short, it is using falling rocks instead of falling water to produce electricity, and according to its own statements it is producing more power than it is using, selling the surplus electricity to the Norwegian power grid. I find the concept truly amazing. Is this the world's only stone power plant?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.8.600.0.0

armadillo image

A new RcppArmadillo release 0.8.600.0.0, based on the new Armadillo release 8.600.0 from this week, just arrived on CRAN.

It follows our (and Conrad’s) bi-monthly release schedule. We have made interim and release candidate versions available via the GitHub repo (and as usual thoroughly tested them) but this is the real release cycle. A matching Debian release will be prepared in due course.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 479 other packages on CRAN.

A high-level summary of changes follows (which omits the two rc releases leading up to 8.600.0). Conrad did his usual impressive load of upstream changes, but we are also grateful for the RcppArmadillo fixes added by Keith O’Hara and Santiago Olivella.

Changes in RcppArmadillo version 0.8.600.0.0 (2018-06-28)

  • Upgraded to Armadillo release 8.600.0 (Sabretooth Rugrat)

    • added hess() for Hessenberg decomposition

    • added .row(), .rows(), .col(), .cols() to subcube views

    • expanded .shed_rows() and .shed_cols() to handle cubes

    • expanded .insert_rows() and .insert_cols() to handle cubes

    • expanded subcube views to allow non-contiguous access to slices

    • improved tuning of sparse matrix element access operators

    • faster handling of tridiagonal matrices by solve()

    • faster multiplication of matrices with differing element types when using OpenMP

Changes in RcppArmadillo version 0.8.500.1.1 (2018-05-17) [GH only]

  • Upgraded to Armadillo release 8.500.1 (Caffeine Raider)

    • bug fix for banded matrices
  • Added slam to Suggests: as it is used in two unit test functions [CRAN requests]

  • The RcppArmadillo.package.skeleton() function now works with example_code=FALSE when pkgKitten is present (Santiago Olivella in #231 fixing #229)

  • The LAPACK tests now cover band matrix solvers (Keith O'Hara in #230).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianGunnar Wolf: Want to set up a Tor node in Mexico? Hardware available

Hi friends,

Thanks to the work I have been carrying out with the "Derechos Digitales" NGO, I have received ten Raspberry Pi 3B computers, to help the growth of Tor nodes in Latin America.

The nodes can be intermediate (relays) or exit nodes. Most of us will only be able to connect relays, but if you have the possibility to set up an exit node, that's better than good!

Both can be set up in any non-filtered Internet connection that gives a publicly reachable IP address. I have to note that, although we haven't done a full ISP survey in Mexico (and it would be a very important thing to do — If you are interested in helping with that, please contact me!), I can tell you that connections via Telmex (be it via their home service, Infinitum, or their corporate brand, Uninet) are not good because the ISP filters most of the Tor Directory Authorities.

What do you need to do? Basically, mail me, sending a copy to Ignacio, the person working at this NGO who managed to send me said computers. Oh, and of course: you have to be (physically) in Mexico.

I have ten computers ready to give out to whoever wants some. I am willing and even interested in giving you the needed tech support to do this. Who says "me"?

CryptogramFriday Squid Blogging: Fried Squid with Turmeric

Good-looking recipe.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramConservation of Threat

Here's some interesting research about how we perceive threats. Basically, as the environment becomes safer, we manufacture new threats. From an essay about the research:

To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task -- to look at a series of computer-generated faces and decide which ones seem "threatening." The faces had been carefully designed by researchers to range from very intimidating to very harmless.

As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of "threatening" to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered "threats" depended on how many threats they had seen lately.

This has a lot of implications in security systems where humans have to make judgments about threat and risk: TSA agents, police noticing "suspicious" activities, "see something say something" campaigns, and so on.

The academic paper.

Planet DebianNeil Williams: Automation & Risk

First of two posts reproducing some existing content for a wider audience due to delays in removing viewing restrictions on the originals. The first is a bit long... Those familiar with LAVA may choose to skip forward to Core elements of automation support.

A summary of this document was presented by Steve McIntyre at Linaro Connect 2018 in Hong Kong. A video of that presentation and the slides created from this document are available online:

Although the content is based on several years of experience with LAVA, the core elements are likely to be transferable to many other validation, CI and QA tasks.

I recognise that this document may be useful to others, so this blog post is under CC BY-SA 3.0: See also

Automation & Risk


Linaro created the LAVA (Linaro Automated Validation Architecture) project in 2010 to automate testing of software using real hardware. Over the seven years of automation in Linaro so far, LAVA has also spread into other labs across the world. Millions of test jobs have been run, across over one hundred different types of devices, ARM, x86 and emulated. Varied primary boot methods have been used alone or in combination, including U-Boot, UEFI, Fastboot, IoT, PXE. The Linaro lab itself has supported over 150 devices, covering more than 40 different device types. Major developments within LAVA include MultiNode and VLAN support.

As a result of this data, the LAVA team have identified a series of automated testing failures which can be traced to decisions made during hardware design or firmware development. The hardest part of the development of LAVA has always been integrating new device types, arising from issues with hardware design and firmware implementations. There are a range of issues with automating new hardware, and the experience of the LAVA lab and software teams has highlighted areas where decisions at the hardware design stage have delayed deployment of automation or made the task of triage of automation failures much harder than necessary.

This document is a summary of our experience with full background and examples. The aim is to provide background information about why common failures occur, and recommendations on how to design hardware and firmware to reduce problems in the future. We describe some device design features as hard requirements to enable successful automation, and some which are guaranteed to block automation. Specific examples are used, naming particular devices and companies and linking to specific stories. For a generic summary of the data, see Automation and hardware design.

What is LAVA?

LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing, although extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

LAVA is a collection of participating components in an evolving architecture. LAVA aims to make systematic, automatic and manual quality control more approachable for projects of all sizes.

LAVA is designed for validation during development - testing whether the code that engineers are producing “works”, in whatever sense that means. Depending on context, this could be many things, for example:

  • testing whether changes in the Linux kernel compile and boot
  • testing whether the code produced by gcc is smaller or faster
  • testing whether a kernel scheduler change reduces power consumption for a certain workload etc.

LAVA is good for automated validation. LAVA tests the Linux kernel on a range of supported boards every day. LAVA tests proposed Android changes in gerrit before they are landed, and does the same for other projects like gcc. Linaro runs a central validation lab in Cambridge, containing racks full of computers supplied by Linaro members and the necessary infrastructure to control them (servers, serial console servers, network switches etc.)

LAVA is good for providing developers with the ability to run customised tests on a variety of different types of hardware, some of which may be difficult to obtain or integrate. Although LAVA has support for emulation (based on QEMU), LAVA is best at providing test support for real hardware devices.

LAVA is principally aimed at testing changes made by developers across multiple hardware platforms to aid portability and encourage multi-platform development. Systems which are already platform independent or which have been optimised for production may not necessarily be able to be tested in LAVA or may provide no overall gain.

What is LAVA not?

LAVA is designed for Continuous Integration not management of a board farm.

LAVA is not a set of tests - it is infrastructure to enable users to run their own tests. LAVA concentrates on providing a range of deployment methods and a range of boot methods. Once the login is complete, the test consists of whatever scripts the test writer chooses to execute in that environment.

LAVA is not a test lab - it is the software that can be used in a test lab to control test devices.

LAVA is not a complete CI system - it is software that can form part of a CI loop. LAVA supports data extraction to make it easier to produce a frontend which is directly relevant to particular groups of developers.

LAVA is not a build farm - other tools need to be used to prepare binaries which can be passed to the device using LAVA.

LAVA is not a production test environment for hardware - LAVA is focused on developers and may require changes to the device or the software to enable automation. These changes are often unsuitable for production units. LAVA also expects that most devices will remain available for repeated testing rather than testing the software with a changing set of hardware.

The history of automated bootloader testing

Many attempts have been made to automate bootloader testing and the rest of this document covers the issues in detail. However, it is useful to cover some of the history in this introduction, particularly as that relates to ideas like SDMux - the SD card multiplexer which should allow automated testing of bootloaders like U-Boot on devices where the bootloader is deployed to an SD card. The problem of SDMux details the requirements to provide access to SD card filesystems to and from the dispatcher and the device. Requirements include: ethernet, no reliance on USB, removable media, cable connections, unique serial numbers, introspection and interrogation, avoiding feature creep, scalable design, power control, maintained software and mounting holes. Despite many offers of hardware, no suitable hardware has been found and testing of U-Boot on SD cards is not currently possible in automation. The identification of the requirements for a supportable SDMux unit is closely related to these device requirements.

Core elements of automation support


The ability to deploy exactly the same software to the same board(s) and run exactly the same tests many times in a row, getting exactly the same results each time.

For automation to work, all device functions which need to be used in automation must always produce the same results on each device of a specific device type, irrespective of any previous operations on that device, given the same starting hardware configuration.

There is no way to automate a device which behaves unpredictably.
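The reproducibility requirement can be checked mechanically: submit the same job several times and compare the outcomes. The sketch below is illustrative only, assuming a caller-supplied `run_job` callable that stands in for whatever submits one test job and returns a comparable result (a verdict or a digest of the output); it is not part of LAVA itself.

```python
from collections import Counter
from typing import Callable, Hashable, List

def check_reproducibility(run_job: Callable[[], Hashable], repeats: int = 5) -> dict:
    """Run the same job several times and summarise the distinct outcomes.

    `run_job` is a hypothetical stand-in for submitting one test job on a
    fixed device with fixed software; it returns a comparable result such
    as a pass/fail verdict or a hash of the captured output.
    """
    outcomes: List[Hashable] = [run_job() for _ in range(repeats)]
    counts = Counter(outcomes)
    return {
        # A device is only automatable if every run agrees.
        "reproducible": len(counts) == 1,
        "outcomes": dict(counts),
    }
```

A device type that yields `reproducible: False` on identical inputs is exactly the "unpredictable" case described above, and the `outcomes` breakdown gives a first hint of how often it diverges.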


The ability to run a wide range of test jobs, stressing different parts of the overall deployment, with a variety of tests and always getting a Complete test job. There must be no infrastructure failures and there should be limited variability in the time taken to run the test jobs to avoid the need for excessive Timeouts.

The same hardware configuration and infrastructure must always behave in precisely the same way. The same commands and operations to the device must always generate the same behaviour.


The device must support deployment of files and booting of the device without any need for a human to monitor or interact with the process. The need to press buttons is undesirable but can be managed in some cases by using relays. However, every extra layer of complexity reduces the overall reliability of the automation process and the need for buttons should be limited or eliminated wherever possible. If a device uses LEDs to indicate the success or failure of operations, such LEDs must only be indicative. The device must support full control of that process using only commands and operations which do not rely on observation.


All methods used to automate a device must have minimal footprint in terms of load on the workers, complexity of scripting support and infrastructure requirements. This is a complex area and can trivially impact on both reliability and reproducibility as well as making it much more difficult to debug problems which do arise. Admins must also consider the complexity of combining multiple different devices which each require multiple layers of support.

Remote power control

Devices MUST support automated resets, either by the removal of all power supplied to the DUT or by a full reboot or other reset which clears all previous state of the DUT.

Every boot must reliably start, without interaction, directly from the first application of power without the limitation of needing to press buttons or requiring other interaction. Relays and other arrangements can be used at the cost of increasing the overall complexity of the solution, so should be avoided wherever possible.
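The power-control requirement reduces to a small, scriptable operation: hard off, wait for residual charge to drain, then on. This sketch assumes a hypothetical `pdu` object with `off(port)`/`on(port)` methods; the real control mechanism (a PDU daemon, a relay board, IPMI) is site-specific and not specified by LAVA.

```python
import time

class PowerCycler:
    """Reset a DUT by removing and restoring power.

    `pdu` is any object exposing `off(port)` and `on(port)`; this is an
    assumed interface, not a real LAVA API.
    """
    def __init__(self, pdu, port: int, settle_seconds: float = 5.0):
        self.pdu = pdu
        self.port = port
        self.settle_seconds = settle_seconds

    def hard_reset(self) -> None:
        # Remove all power so no previous state survives on the DUT.
        self.pdu.off(self.port)
        # Allow capacitors to drain; too short a delay can leave the
        # board in a half-reset state that breaks reproducibility.
        time.sleep(self.settle_seconds)
        self.pdu.on(self.port)
```

After `hard_reset()`, the automation expects the device to boot unattended, with no button presses or other interaction, as described above.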

Networking support

Ethernet - all devices using Ethernet interfaces in LAVA must have a unique MAC address on each interface. The MAC address must be persistent across reboots. No assumptions should be made about fixed IP addresses, address ranges or pre-defined routes. If more than one interface is available, the boot process must be configurable to always use the same interface every time the device is booted. WiFi is not currently supported as a method of deploying files to devices.
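MAC uniqueness across a whole lab is easy to audit from an inventory. This is a minimal sketch under the assumption that the lab keeps a mapping of device name to MAC addresses; the inventory format is hypothetical, not a LAVA data structure.

```python
from collections import Counter
from typing import Dict, List

def find_duplicate_macs(inventory: Dict[str, List[str]]) -> List[str]:
    """Return MAC addresses that appear more than once across the lab.

    `inventory` maps a device name to the list of MAC addresses on its
    interfaces. Addresses are normalised first so that case or separator
    differences (aa-bb-... vs AA:BB:...) cannot hide a clash.
    """
    def normalise(mac: str) -> str:
        return mac.lower().replace("-", ":")

    counts = Counter(
        normalise(mac) for macs in inventory.values() for mac in macs
    )
    return sorted(mac for mac, n in counts.items() if n > 1)
```

Running such a check whenever a device is added catches the classic failure mode of development boards shipping with identical (or randomised-per-boot) MAC addresses before it corrupts test results.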

Serial console support

LAVA expects to automate devices by interacting with the serial port immediately after power is applied to the device. The bootloader must interact with the serial port. If a serial port is not available on the device, suitable additional hardware must be provided before integration can begin. All messages about the boot process must be visible using the serial port and the serial port should remain usable for the duration of all test jobs on the device.
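Serial-console automation boils down to watching the boot output for an ordered sequence of expected messages. The sketch below is a simplification over an iterable of already-decoded log lines; a real implementation (LAVA uses pexpect-style matching) works on a live stream with wall-clock timeouts rather than the line budget used here.

```python
from typing import Iterable, Sequence

def boot_messages_seen(log_lines: Iterable[str],
                       expected: Sequence[str],
                       max_lines: int = 1000) -> bool:
    """Check that the expected boot messages appear, in order, on the console.

    `expected` is the ordered list of substrings the automation waits for
    (bootloader banner, kernel start, login prompt). The `max_lines`
    budget is a stand-in for the timeouts a live system would use.
    """
    remaining = list(expected)
    for i, line in enumerate(log_lines):
        if i >= max_lines or not remaining:
            break
        if remaining[0] in line:
            remaining.pop(0)  # matched; wait for the next message
    return not remaining
```

If the bootloader does not talk to the serial port at all, no sequence of expected messages can ever match, which is why serial output from first power-on is a hard requirement.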


Devices supporting primary SSH connections have persistent deployments and this has implications, some positive, some negative - depending on your use case.

  • Fixed OS - the operating system (OS) you get is the OS of the device and this must not be changed or upgraded.
  • Package interference - if another user installs a conflicting package, your test can fail.
  • Process interference - another process could restart (or crash) a daemon upon which your test relies, so your test will fail.
  • Contention - another job could obtain a lock on a constrained resource, e.g. dpkg or apt, causing your test to fail.
  • Reusable scripts - scripts and utilities your test leaves behind can be reused (or can interfere) with subsequent tests.
  • Lack of reproducibility - an artifact from a previous test can make it impossible to rely on the results of a subsequent test, leading to wasted effort with false positives and false negatives.
  • Maintenance - using persistent filesystems in a test action results in the overlay files being left in that filesystem. Depending on the size of the test definition repositories, this could result in an inevitable increase in used storage becoming a problem on the machine hosting the persistent location. Changes made by the test action can also require intermittent maintenance of the persistent location.

Only use persistent deployments when essential and always take great care to avoid interfering with other tests. Users who deliberately or frequently interfere with other tests can have their submit privilege revoked.

The dangers of simplistic testing

Connect and test

Seems simple enough - it doesn’t seem as if you need to deploy a new kernel or rootfs every time, no need to power off or reboot between tests. Just connect and run stuff. After all, you already have a way to manually deploy stuff to the board. The biggest problem with this method is Persistence as above - LAVA keeps the LAVA components separated from each other but tests frequently need to install support which will persist after the test, write files which can interfere with other tests or break the manual deployment in unexpected ways when things go wrong. The second problem within this fallacy is simply the power drain of leaving the devices constantly powered on. In manual testing, you would apply power at the start of your day and power off at the end. In automated testing, these devices would be on all day, every day, because test jobs could be submitted at any time.

ssh instead of serial

This is an over-simplification which will lead to new and unusual bugs and is only a short step on from connect & test with many of the same problems. A core strength of LAVA is demonstrating differences between types of devices by controlling the boot process. By the time the system has booted to the point where sshd is running, many of those differences have been swallowed up in the boot process.

Test everything at the same time

Issues here include:

Breaking the basic scientific method of test one thing at a time

The single system contains multiple components, like the kernel and the rootfs and the bootloader. Each one of those components can fail in ways which can only be picked up when some later component produces a completely misleading and unexpected error message.


Simply deploying the entire system for every single test job wastes inordinate amounts of time when you do finally identify that the problem is a configuration setting in the bootloader or a missing module for the kernel.


The larger the deployment, the more complex the boot and the tests become. Many LAVA devices are prototypes and development boards, not production servers. These devices will fail in unpredictable places from time to time. Testing a kernel build multiple times is much more likely to give you consistent averages for duration, performance and other measurements than if the kernel is only tested as part of a complete system.

Automated recovery - deploying an entire system can go wrong; whether from an interrupted copy or a broken build, the consequences can mean that the device simply does not boot any longer.

Every component involved in your test must allow for automated recovery

This means that the boot process must support being interrupted before that component starts to load. With a suitably configured bootloader, it is straightforward to test kernel builds with fully automated recovery on most devices. Deploying a new build of the bootloader itself is much more problematic. Few devices have the necessary management interfaces with support for secondary console access or additional network interfaces which respond very early in boot. It is possible to chainload some bootloaders, allowing the known working bootloader to be preserved.
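The recovery logic described here can be sketched as a small control flow: try the build under test a bounded number of times, then fall back to the preserved known-good boot path. The two callables are hypothetical stand-ins for whatever interrupts the bootloader and starts a given build on your hardware.

```python
from typing import Callable

def boot_with_recovery(boot_candidate: Callable[[], bool],
                       boot_known_good: Callable[[], bool],
                       attempts: int = 2) -> str:
    """Boot the build under test, recovering automatically on failure.

    Each callable returns True on a successful boot. Returns which path
    finally booted; raises if even the known-good path fails, which is
    the point where manual admin intervention becomes unavoidable.
    """
    for _ in range(attempts):
        if boot_candidate():
            return "candidate"
    # Chainload the preserved, known-working bootloader rather than
    # leaving the board unresponsive ("bricked").
    if boot_known_good():
        return "known-good"
    raise RuntimeError("device requires manual admin intervention")
```

The crucial precondition, as the text notes, is that the boot process can be interrupted *before* the component under test starts to load; without that hook, no amount of retry logic can recover the device.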

I already have builds

This may be true, however, automation puts extra demands on what those builds are capable of supporting. When testing manually, there are any number of times when a human will decide that something needs to be entered, tweaked, modified, removed or ignored which the automated system needs to be able to understand. Examples include /etc/resolv.conf and customised tools.

Automation can do everything

It is not possible to automate every test method. Some kinds of tests and some kinds of devices have critical elements that do not work well with automation. These are not problems in LAVA; these are design limitations of the kind of test and the device itself. Your preferred test plan may be infeasible to automate and some level of compromise will be required.

Users are all admins too

This will come back to bite! However, there are other ways in which this can occur even after administrators have restricted users to limited access. Test jobs (including hacking sessions) have full access to the device as root. Users, therefore, can modify the device during a test job and it depends on the device hardware support and device configuration as to what may happen next. Some devices store bootloader configuration in files which are accessible from userspace after boot. Some devices lack a management interface that can intervene when a device fails to boot. Put these two together and admins can face a situation where a test job has corrupted, overridden or modified the bootloader configuration such that the device no longer boots without intervention. Some operating systems require a debug setting to be enabled before the device will be visible to the automation (e.g. the Android Debug Bridge). It is trivial for a user to mistakenly deploy a default or production system which does not have this modification.


LAVA is aimed at kernel and system development and testing across a wide variety of hardware platforms. By the time the test has got to the level of automating a GUI, there have been multiple layers of abstraction between the hardware, the kernel, the core system and the components being tested. Following the core principle of testing one element at a time, this means that such tests quickly become platform-independent. This reduces the usefulness of the LAVA systems, moving the test into scope for other CI systems which consider all devices as equivalent slaves. The overhead of LAVA can become an unnecessary burden.

CI needs a timely response - it takes time for a LAVA device to be re-deployed with a system which has already been tested. In order to test a component of the system which is independent of the hardware, kernel or core system, a lot of time is consumed before the “test” itself actually begins. LAVA can support testing pre-deployed systems but this severely restricts the usefulness of such devices for actual kernel or hardware testing.

Automation may need to rely on insecure access. Production builds (hardware and software) take steps to prevent systems being released with known login identities or keys, backdoors and other security holes. Automation relies on at least one of these access methods being exposed, typically a way to access the device as the root or admin user. User identities for login must be declared in the submission and be the same across multiple devices of the same type. These access methods must also be exposed consistently and without requiring any manual intervention or confirmation. For example, mobile devices must be deployed with systems which enable debug access which all production builds will need to block.

Automation relies on remote power control - battery powered devices can be a significant problem in this area. On the one hand, testing can be expected to involve tests of battery performance, low power conditions and recharge support. However, testing will also involve broken builds and failed deployments where the only recourse is to hard reset the device by killing power. With a battery in the loop, this becomes very complex, sometimes involving complex electrical bodges to the hardware to allow the battery to be switched out of the circuit. These changes can themselves change the performance of the battery control circuitry. For example, some devices fail to maintain charge in the battery when held in particular states artificially, so the battery gradually discharges despite being connected to mains power. Devices which have no battery can still be a challenge as some are able to draw power over the serial circuitry or USB attachments, again interfering with the ability of the automation to recover the device from being “bricked”, i.e. unresponsive to the control methods used by the automation and requiring manual admin intervention.

Automation relies on unique identification - all devices in an automation lab must be uniquely identifiable at all times, in all modes and all active power states. Too many components and devices within labs fail to allow for the problems of scale. Details like serial numbers, MAC addresses, IP addresses and bootloader timeouts must be configurable and persistent once configured.

LAVA is not a complete CI solution - even including the hardware support available from some LAVA instances, there are a lot more tools required outside of LAVA before a CI loop will actually work. The triggers from your development workflow to the build farm (which is not LAVA) and the submission to LAVA from that build farm are completely separate and outside the scope of this documentation. LAVA can help with the extraction of the results into information for the developers, but LAVA output is generic and most teams will benefit from some “frontend” which extracts the data from LAVA and generates relevant output for particular development teams.

Features of CI


How often is the loop to be triggered?

Set up some test builds and test jobs and run through a variety of use cases to get an idea of how long it takes to get from the commit hook to the results being available to what will become your frontend.

Investigate where the hardware involved in each stage can be improved and analyse what kind of hardware upgrades may be useful.

Reassess the entire loop design and look at splitting the testing if the loop cannot be optimised to the time limits required by the team. The loop exists to serve the team but the expectations of the team may need to be managed compared to the cost of hardware upgrades or finite time limits.


How many branches, variants, configurations and tests are actually needed?

Scale has a direct impact on the affordability and feasibility of the final loop and frontend. Ensure that the build infrastructure can handle the total number of variants, not just at build time but for storage. Developers will need access to the files which demonstrate a particular bug or regression.
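As a rough illustration, a back-of-envelope calculation shows how quickly variant storage grows; all the numbers below are invented for the example, not measurements:

```python
def storage_gb(branches, configs_per_branch, builds_per_day,
               artifact_gb, retention_days):
    """Total artifact storage needed if every variant of every daily
    build is kept for the whole retention period."""
    variants = branches * configs_per_branch
    return variants * builds_per_day * artifact_gb * retention_days

# e.g. 4 branches x 6 configurations, 10 builds/day,
# 0.5 GB per artifact, kept for 30 days
print(storage_gb(4, 6, 10, 0.5, 30))  # -> 3600.0
```

Even these modest assumptions produce several terabytes of artifacts, which is why retention policy needs deciding alongside the build matrix.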

Scale also brings benefits: with more results available, genuine regressions can be distinguished from one-off anomalies, which can then be safely ignored.

Identify how many test devices, LAVA instances and Jenkins slaves are needed. (As a hint, start small and design the frontend so that more can be added later.)


The development of a custom interface is not a small task

Capturing the requirements for the interface may involve lengthy discussions across the development team. Where there are irreconcilable differences, a second frontend may become necessary, potentially pulling the same data and presenting it in a radically different manner.

Include discussions on how or whether to push notifications to the development team. Take time to consider the frequency of notification messages and how to limit the content to only the essential data.
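One common way to limit notification volume is to notify only on state transitions (a test breaking or being fixed) rather than on every run. A minimal sketch of that idea; the test names and result format below are invented for illustration:

```python
def filter_notifications(results):
    """Emit a notification only when a test's state changes
    (pass -> fail or fail -> pass), instead of on every single run."""
    notices = []
    last = {}
    for test, passed in results:
        if test in last and last[test] != passed:
            notices.append((test, "fixed" if passed else "broken"))
        last[test] = passed
    return notices

runs = [("boot", True), ("boot", True), ("boot", False), ("boot", True)]
print(filter_notifications(runs))  # -> [('boot', 'broken'), ('boot', 'fixed')]
```

Four runs produce only two messages, and each one carries actionable information.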

Bisect support can flow naturally from a carefully designed loop. Bisect requires that a simple boolean test can be generated, built and executed across a set of commits. If the frontend implements only a single test (for example, does the kernel boot?) then it can be easy to identify how to provide bisect support. Tests which produce hundreds of results need to be slimmed down to a single pass/fail criterion for the bisect to work.
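A minimal sketch of that reduction, assuming results arrive as a name-to-outcome mapping (an invented format for illustration, not LAVA's actual output):

```python
def bisect_verdict(results, required):
    """Collapse a full result set into the single pass/fail answer
    a bisection needs: pass only if every required case passed."""
    return all(results.get(name) == "pass" for name in required)

results = {"boot": "pass", "udev": "pass", "net": "fail"}
print(bisect_verdict(results, ["boot"]))         # -> True
print(bisect_verdict(results, ["boot", "net"]))  # -> False
```

The choice of which results count as "required" is exactly the design decision the frontend has to make before bisection becomes possible.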


This may take the longest of all elements of the final loop

Just what results do the developers actually want and can those results be delivered? There may be requirements to aggregate results across many LAVA instances, with comparisons based on metadata from the original build as well as the LAVA test.

What level of detail is relevant?

Different results for different members of the team or different teams?

Is the data to be summarised and if so, how?


A frontend has the potential to become complex and need long term maintenance and development

Device requirements

At the hardware design stage, there are considerations for the final software relating to how the final hardware is to be tested.


All units of all devices must uniquely identify to the host machine as distinct from all other devices which may be connected at the same time. This particularly covers serial connections but also any storage devices which are exported, network devices and any other method of connectivity.

Example - the WaRP7 integration has been delayed because the USB mass storage does not export a filesystem with a unique identifier, so when two devices are connected, there is no way to distinguish which filesystem relates to which device.

All unique identifiers must be isolated from the software to be deployed onto the device. The automation framework will rely on these identifiers to distinguish one device from up to a dozen identical devices on the same machine. There must be no method of updating or modifying these identifiers using normal deployment / flashing tools. It must not be possible for test software to corrupt the identifiers which are fundamental to how the device is identified amongst the others on the same machine.

All unique identifiers must be stable across multiple reboots and test jobs. Randomly generated identifiers are never suitable.

If the device uses a single FTDI chip which offers a single UART device, then the unique serial number of that UART will typically be a permanent part of the chip. However, a similar FTDI chip which provides two or more UARTs over the same cable would not have serial numbers programmed into the chip but would require a separate piece of flash or other storage into which those serial numbers can be programmed. If that storage is not designed into the hardware, the device will not be capable of providing the required uniqueness.

Example - the WaRP7 exports two UARTs over a single cable but fails to give unique identifiers to either connection, so connecting a second device disconnects the first device when the new tty device replaces the existing one.
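A lab health check can catch such problems before they waste triage time. A minimal sketch, assuming the admin can already enumerate attached devices and their reported serial numbers; the device names and serials below are invented:

```python
from collections import Counter

def check_serials(devices):
    """Flag devices whose serial number is missing or shared with
    another device - either makes them indistinguishable to automation."""
    counts = Counter(serial for _, serial in devices if serial)
    return [name for name, serial in devices
            if not serial or counts[serial] > 1]

# hypothetical lab inventory: two boards share a serial, one has none
attached = [("warp7-01", "ABC123"), ("warp7-02", "ABC123"), ("hikey-01", None)]
print(check_serials(attached))  # -> ['warp7-01', 'warp7-02', 'hikey-01']
```

Running a check like this whenever a device is added keeps the uniqueness requirement enforceable rather than aspirational.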

If the device uses one or more physical ethernet connector(s) then the MAC address for each interface must not be generated randomly at boot. Each MAC address needs to be:

  • persistent - each reboot must always use the same MAC address for each interface.
  • unique - every device of this type must use a unique MAC address for each interface.
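One illustrative way to satisfy both properties is to derive a locally-administered MAC address from the board's serial number, so the address is unique per board and identical on every boot. This is just one possible scheme, not a requirement of any framework:

```python
import hashlib

def stable_mac(serial, iface=0):
    """Derive a persistent, locally-administered unicast MAC address
    from a board serial number. The same serial and interface index
    always yield the same address; different interfaces differ."""
    digest = hashlib.sha256(f"{serial}-{iface}".encode()).digest()
    octets = bytearray(digest[:6])
    octets[0] = (octets[0] | 0x02) & 0xFE  # locally administered, unicast
    return ":".join(f"{b:02x}" for b in octets)

print(stable_mac("ABC123"))     # identical on every boot
print(stable_mac("ABC123", 1))  # second interface gets a different MAC
```

Firmware that does something equivalent at first boot, then stores the result, also meets the requirement; random generation on every boot never does.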

If the device uses fastboot, then the fastboot serial number must be unique so that the device can be uniquely identified and added to the correct container. Additionally, the fastboot serial number must not be modifiable except by the admins.

Example - the initial HiKey 960 integration was delayed because the firmware changed the fastboot serial number to a random value every time the device was rebooted.


Automation requires more than one device to be deployed - the current minimum is five devices. One device is permanently assigned to the staging environment to ensure that future code changes retain the correct support. In the early stages, this device will be assigned to one of the developers to integrate the device into LAVA. The devices will be deployed onto machines which have many other devices already running test jobs. The new device must not interfere with those devices and this makes some of the device requirements stricter than may be expected.

  • The aim of automation is to create a homogeneous test platform using heterogeneous devices and scalable infrastructure.

  • Do not complicate things.

  • Avoid extra customised hardware

    Relays, hardware modifications and mezzanine boards all increase complexity

    Examples - the X15 needed two relay connections; the 96boards devices initially needed a mezzanine board whose design was rushed, causing months of serial disconnection issues.

  • More complexity raises failure risk nonlinearly

    Example - The lack of onboard serial meant that the 96boards devices could not be tested in isolation from the problematic mezzanine board. Numerous 96boards devices were deemed to be broken when the real fault lay with intermittent failures in the mezzanine. Removing and reconnecting a mezzanine had a high risk of damaging the mezzanine or the device. Once 96boards devices moved to direct connection of FTDI cables into the connector formerly used by the mezzanine, serial disconnection problems disappeared. The more custom hardware has to be designed / connected to a device to support automation, the more difficult it is to debug issues within that infrastructure.

  • Avoid unreliable protocols and connections

    Example - WiFi is not a reliable deployment method, especially inside a large lab with lots of competing signals and devices.

  • This document is not demanding enterprise or server grade support in devices.

    However, automation cannot scale with unreliable components.

    Example - HiKey 6220 and the serial mezzanine board caused massively complex problems when scaled up in LKFT.

  • Server support typically includes automation requirements as a subset:

    RAS, performance, efficiency, scalability, reliability, connectivity and uniqueness

  • Automation racks have similar requirements to data centres.

  • Things need to work reliably at scale

Scale issues also affect the infrastructure which supports the devices as well as the required reliability of the instance as a whole. It can be difficult to scale up from initial development to automation at scale. Numerous tools and utilities prove to be uncooperative, unreliable or poorly isolated from other processes. One result can be that the requirements of automation look more like the expectations of server-type hardware than of mobile hardware. The reality at scale is that server-type hardware has already had fixes implemented for scalability issues whereas many mobile devices only get tested as standalone units.

Connectivity and deployment methods

  • All test software is presumed broken until proven otherwise
  • All infrastructure and device integration support must be proven to be stable before tests can be reliable
  • All devices must provide at least one method of replacing the current software with the test software, at a level lower than you're testing.

The simplest method to automate is TFTP over physical ethernet, e.g. U-Boot or UEFI PXE. This also puts the least load on the device and automation hardware when delivering large images.

Manually writing software to SD is not suitable for automation. This tends to rule out many proposed methods for testing modified builds or configurations of firmware in automation.

See for more information on how the requirements of automation affect the hardware design requirements to provide access to SD card filesystems to and from the dispatcher and the device.

Some deployment methods require tools which must be constrained within an LXC. These include but are not limited to:

  • fastboot - due to a common need to have different versions installed for different hardware devices

    Example - every fastboot device suffers from this problem: any running fastboot process will inspect the entire list of USB devices and attempt to connect to each one, locking out any other fastboot process running at the same time, which then sees no devices at all.

  • IoT deployment - some deployment tools require patches for specific devices or use tools which are too complex for use on the dispatcher.

    Example - the TI CC3220 IoT device needs a patched build of OpenOCD, the WaRP7 needs a custom flashing tool compiled from a github repository.

Wherever possible, existing deployment methods and common tools are strongly encouraged. New tools are not likely to be as reliable as the existing tools.

Deployments must not make permanent changes to the boot sequence or configuration.

Testing of OS installers may require modifying the installer so that it does not install an updated bootloader or modify the bootloader configuration. The automation needs to control whether the next reboot boots the newly deployed system or starts the next test job. For example, when a test job has been cancelled, the device needs to be immediately ready to run a different test job.


Automation requires driving the device over serial instead of via a touchscreen or other human interface device. This changes the way that the test is executed and can require the use of specialised software on the device to translate text based commands into graphical inputs.

It is possible to test video output in automation but it is not currently possible to drive automation through video input. This includes BIOS-type firmware interaction. UEFI can be used to automatically execute a bootloader like Grub which does support automation over serial. UEFI implementations which use graphical menus cannot be supported interactively.


The objective is to have automation support which runs test jobs reliably. Reproducible failures are easy to fix but intermittent faults easily consume months of engineering time and need to be designed out wherever possible. Reliable testing means only 3 or 4 test job failures per week due to hardware or infrastructure bugs across an entire test lab (or instance). This can involve thousands of test jobs across multiple devices. Some instances may have dozens of identical devices, but they must still stay within the same failure rate.
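To make that budget concrete, the implied infrastructure failure rate can be computed directly; the weekly job volume here is an assumed figure, not a measurement from any particular lab:

```python
def max_failure_rate(failures_per_week, jobs_per_week):
    """Infrastructure failure rate implied by a weekly failure budget."""
    return failures_per_week / jobs_per_week

# 4 tolerated infrastructure failures across 5000 jobs in a week
print(f"{max_failure_rate(4, 5000):.2%}")  # -> 0.08%
```

A budget below a tenth of a percent is far stricter than most ad-hoc device setups achieve, which is why the requirements in this document look demanding.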

All devices need to reach the minimum standard of reliability, or they are not fit for automation. Some of these criteria might seem rigid, but they are not exclusive to servers or enterprise devices. To be useful, mobile and IoT devices need to meet the same standards, even though the software involved and the deployment methods might be different, because the Continuous Integration strategy remains the same for all devices. The problem is the same, regardless of the underlying considerations.

A developer makes a change; that change triggers a build; that build triggers a test; that test reports back to the developer whether that change worked or had unexpected side effects.

  • False positives and false negatives are expensive in terms of wasted engineering time.
  • False positives can arise when not enough of the software is fully tested, or if the testing is not rigorous enough to spot all problems.
  • False negatives arise when the test itself is unreliable, either because of the test software or the test hardware.

This becomes more noticeable when considering automated bisections which are very powerful in tracking the causes of potential bugs before the product gets released. Every test job must give a reliable result or the bisection will not reliably identify the correct change.
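A simplified model makes the cost visible: a bisection over N commits takes about log2(N) steps, and if each job reports the true result with probability p, every step is correct with probability roughly p raised to the number of steps (assuming independent failures and that any one wrong result derails the bisection):

```python
import math

def bisect_confidence(per_job_reliability, commit_range):
    """Probability that every step of a bisection over `commit_range`
    commits reports the true result, under the simplifying assumptions
    of independent failures and no retries."""
    steps = math.ceil(math.log2(commit_range))
    return per_job_reliability ** steps

print(round(bisect_confidence(0.99, 1024), 3))  # 10 steps at 99% -> 0.904
print(round(bisect_confidence(0.90, 1024), 3))  # 10 steps at 90% -> 0.349
```

At 90% per-job reliability, two out of three bisections over a thousand commits point at the wrong change; the model is crude, but it shows why per-job reliability dominates.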

Automation and Risk

Linaro kernel functional test framework (LKFT)

We have seen with LKFT that complexity has a non-linear relationship with the reliability of any automation process. This section aims to set out some guidelines and recommendations on just what is acceptable in the tools needed to automate testing on a device. These guidelines are based on our joint lab and software team experiences with a wide variety of hardware and software.

Adding or modifying any tool has a risk of automation failure

Risk increases non-linearly with complexity. Some of this risk can be mitigated by testing the modified code and the complete system.

Dependencies installed count as code in terms of the risks of automation failure

This is a key lesson learnt from our experiences with LAVA V1. We added a remote worker method, which was necessary at the time to improve scalability. But it massively increased the risk of automation failure simply due to the extra complexity that came with the chosen design. These failures did not just show up in the test jobs which actively used the extra features and tools; they caused problems for all jobs running on the system.

The ability in LAVA V2 to use containers for isolation is a key feature

For the majority of use cases, the small extension of the runtime of the test to set up and use a container is negligible. The extra reliability is more than worth the extra cost.

Persistent containers are themselves a risk to automation

Just as with any persistent change to the system.

Pre-installing dependencies in a persistent container does not necessarily lower the overall risk of failure. It merely substitutes one element of risk for another.

All code changes need to be tested

In unit tests and in functional tests. There is a dividing line: if something is installed as a dependency of LAVA, then when it goes wrong, LAVA engineers will be pressured into fixing that dependency's code whether or not we have any particular experience of the language, codebase or use case. Moving that code into a container shifts that burden and also makes triage much easier, by allowing debug builds or options to be substituted easily.

Complexity also increases the difficulty of debugging, again in a nonlinear fashion

A LAVA dependency needs a higher bar in terms of ease of triage.

Complexity cannot be easily measured

Although there are factors which contribute.


Large programs which appear as a single monolith are harder to debug than the UNIX model of one utility joined with other utilities to perform a wider task. (This applies to LAVA itself as much as any one dependency - again, a lesson from V1.)

Feature creep

Continually adding features beyond the original scope makes complex programs worse. A smaller codebase will tend to be simpler to triage than a large codebase, even if that codebase is not monolithic.

Targeted utilities are less risky than large environments

A program which supports protocol after protocol after protocol will be more difficult to maintain than 3 separate programs for each protocol. This only gets worse when the use case for that program only requires the use of one of the many protocols supported by the program. The fact that the other protocols are supported increases the complexity of the program beyond what the use case actually merits.

Metrics in this area are impossible

The risks are nonlinear, the failures are typically intermittent. Even obtaining or applying metrics takes up huge amounts of engineering time.

Mismatches in expectations

The use case of automation rarely matches up with the more widely tested use case of the upstream developers. We aren't testing the code flows typically tested by the upstream developers, so we find different bugs, raising the level of risk. Generally, the simpler it is to deploy a device in automation, the closer the test flow will be to the developer flow.

Most programs are written for the single developer model

Some very widely used programs are written to scale but this is difficult to determine without experience of trying to run it at scale.

Some programs do require special consideration

QEMU would fail most of the guidelines above, but there are mitigating factors:

  • Programs which can be easily restricted to well understood use cases lower the risk of failure. Not all use cases of the same program need to be covered.
  • Programs which have excellent community and especially in-house support also lower the risk of failure. (Having QEMU experts in Linaro is a massive boost for having QEMU as a dispatcher dependency.)

Unfamiliar languages increase the difficulty of triage

This may affect dependencies in unexpected ways. A program which has lots of bindings into a range of other languages becomes entangled in transitions and bugs in those other languages. This commonly delays the availability of the latest version which may have a critical fix for one use case but which fails to function at all in what may seem to be an unrelated manner.

The dependency chain of the program itself increases the risk of failure in precisely the same manner as the program

In terms of maintenance, this can include the build dependencies of the program as those affect delivery / availability of LAVA in distributions like Debian.

Adding code to only one dispatcher amongst many increases the risk of failure on the instance as a whole

By having an untested element which is at variance to the rest of the system.

Conditional dependencies increase the risk

Optional components can be supported, but they increase the testing burden by extending the matrix of installations.

Presence of the code in Debian main can reduce the risk of failure

This does not outweigh other considerations - there are plenty of packages in Debian (some complex, some not) which would be an unacceptable risk as a dependency of the dispatcher, fastboot for one. A small python utility from github can be a substantially lower risk than a larger program from Debian which has unused functionality.

Sometimes, "complex" simply means "buggy" or "badly designed"

fastboot is not actually a complex piece of code, but we have learnt that it does not currently scale. This is a result of the disparity between the development model and the automation use case. Disparities like that actually equate to complexity in terms of triage and maintenance. If fastboot were more complex at the codebase level, it might actually be a lower risk than it is currently.

Linaro as a whole does have a clear objective of harmonising the ecosystem

Adding yet another variant of existing support is at odds with the overall objective of the company. Many of the tools required in automation have no direct effect on the distinguishing factors for consumers. Adding another one "just because" is not a good reason to increase the risk of automation failure. Just as with standards.

Having the code on the dispatcher impedes development of that code

Bug fixes will take longer to be applied because the fix needs to go through a distribution or other packaging process managed by the lab admins. Applying a targeted fix inside an LXC is useful for proving that the fix works.

Not all programs can work in an LXC

LAVA also provides ways to test using those programs by deploying the code onto a test device. e.g. the V2 support for fastmodels involves only deploying the fastmodel inside a LAVA Test Shell on a test device, e.g. x86 or mustang or Juno.

Speed of running a test job in LAVA is important for CI

The goal of speed must give way to the requirement for reliability of automation

Resubmitting a test job due to a reliability failure is more harmful to the CI process than letting tests take longer to execute without such failures. Test jobs which run quickly are easier to parallelize by adding more test hardware.
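A simple expected-value model illustrates the trade-off, assuming failed jobs are resubmitted until one succeeds and ignoring the (substantial) human triage cost each failure also carries:

```python
def expected_runtime(job_minutes, success_probability):
    """Expected wall-clock time per useful result when failed jobs
    are resubmitted until one succeeds (geometric distribution)."""
    return job_minutes / success_probability

# a slower but reliable job beats a faster, flaky one
print(expected_runtime(30, 0.99))  # ~30.3 minutes
print(expected_runtime(20, 0.60))  # ~33.3 minutes
```

The 20-minute job that fails 40% of the time is slower on average than the 30-minute job that almost always works, before even counting the triage time the failures consume.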

Modifying software on the device

Not all parts of the software stack can be replaced automatically, typically the firmware and/or bootloader will need to be considered carefully. The boot sequence will have important effects on what kind of testing can be done automatically. Automation relies on being able to predict the behaviour of the device, interrupt that default behaviour and then execute the test. For most devices, everything which executes on the device prior to the first point at which the boot sequence can be interrupted can be considered as part of the primary boot software. None of these elements can be safely replaced or modified in automation.

The objective is to deploy the device such that as much of the software stack as possible can be replaced, whilst preserving the predictable behaviour of all devices of this type so that the next test job always gets a working, clean device in a known state.

Primary boot software

For many devices, this is the bootloader, e.g. U-Boot, UEFI or fastboot.

Some devices include support for a baseboard management controller (BMC) which allows the bootloader and other firmware to be updated even if the device is bricked. The BMC software itself must then be considered the primary boot software; it cannot be safely replaced.

All testing of the primary boot software will need to be done by developers using local devices. SDMux was an idea which only fitted one specific set of hardware, the problem of testing the primary boot software is a hydra. Adding customised hardware to try to sidestep the primary boot software always increases the complexity and failure rates of the devices.

It is possible to divide the pool of devices into some which only ever use known versions of the primary boot software controlled by admins and other devices which support modifying the primary boot software. However, this causes extra work when processing the results, submitting the test jobs and administering the devices.

A secondary problem here is that it is increasingly common for the methods of updating this software to be esoteric, hacky, restricted and even proprietary.

  • Click-through licences to obtain the tools

  • Greedy tools which hog everything in /dev/bus/usb

  • NIH tools which are almost the same as existing tools but add vendor-specific "functionality"

  • GUI tools

  • Changing jumpers or DIP switches,

    Often in inaccessible locations which require removal of other ancillary hardware

  • Random, untrusted, compiled vendor software running as root

  • The need to press and hold buttons and watch for changes in LED status.

We've seen all of these - in various combinations - just in 2017, as methods of getting devices into a mode where the primary boot software can be updated.

Copyright 2018 Neil Williams

Available under CC BY-SA 3.0:

Planet DebianAna Beatriz Guerrero Lopez: Introducing debos, a versatile images generator

In Debian and derivative systems, there are many ways to build images. The simplest tool of choice is often debootstrap. It works by downloading the .deb files from a mirror and unpacking them into a directory which can eventually be chrooted into.

More often than not, we want to make some customization on this image: install some extra packages, run a script, add some files, etc.

debos is a tool to make these kinds of trivial tasks easier. debos works using YAML recipe files listing, in sequence, the actions you want to perform on your image and, finally, the output formats to produce.

Unlike debootstrap and other tools, debos doesn't need to be run as root to perform actions that require root privileges in the images. debos uses fakemachine, a library that sets up qemu-system, allowing you to work in the image with root privileges and to create images for all the architectures supported by qemu user mode. However, for this to work, make sure your user has permission to use /dev/kvm.
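A quick way to check the /dev/kvm requirement before running debos is sketched below; the suggestion to join the 'kvm' group follows the usual Debian convention, but your distribution may differ:

```python
import os

def kvm_usable(dev="/dev/kvm"):
    """Return True if the current user can open the KVM device
    read/write, which fakemachine needs for accelerated emulation."""
    return os.path.exists(dev) and os.access(dev, os.R_OK | os.W_OK)

if not kvm_usable():
    print("no KVM access - on Debian, adding your user to the 'kvm'")
    print("group (and logging in again) is usually enough")
```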

Let's see how debos works with a simple example. If we wanted to create a customized arm64 image for Debian Stretch, we would follow these steps:

  • debootstrap the image
  • install the packages we need
  • set up our preferred hostname
  • run a script creating a user
  • copy a file adding the user to sudoers
  • create a tarball with the final image

This would translate into a debos recipe like this one:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

(The files used in this example are available from this git repository)

We run debos on the recipe file:

$ debos simple.yaml

The result will be a tarball named debian-stretch-arm64.tgz. If you check the top two lines of the recipe, you can see that the recipe defaults to architecture arm64 and Debian stretch. We can override these defaults when running debos:

$ debos -t suite:"buster" -t architecture:"amd64" simple.yaml

This time the result will be a tarball named debian-buster-amd64.tgz.

The recipe allows some customization depending on the parameters. We could install packages depending on the target architecture, for example, installing python-libsoc in armhf and arm64:

- action: apt
  recommends: false
  packages:
    - adduser
    - sudo
{{- if eq $architecture "armhf" "arm64" }}
    - python-libsoc
{{- end }}

What happens if in addition to a tarball we would like to create a filesystem image? This could be done adding two more actions to our example, a first action creating the image partition with the selected filesystem and a second one deploying the image in the filesystem:

- action: image-partition
  imagename: {{ $ext4 }}
  imagesize: 1GB
  partitiontype: msdos
  mountpoints:
    - mountpoint: /
      partition: root
  partitions:
    - name: root
      fs: ext4
      start: 0%
      end: 100%
      flags: [ boot ]

- action: filesystem-deploy
  description: Deploying filesystem onto image

{{ $ext4 }} should be defined at the top of the file as follows:

{{ $ext4 := or .image (printf "debian-%s-%s.ext4" $suite $architecture) }}

We could even make this step optional, so that by default the recipe only creates the tarball and builds the filesystem image only when an option is passed to debos:

$ debos -t type:"full" full.yaml

The final debos recipe will look like this:

{{- $architecture := or .architecture "arm64" -}}
{{- $suite := or .suite "stretch" -}}
{{ $type := or .type "min" }}
{{ $image := or .image (printf "debian-%s-%s.tgz" $suite $architecture) }}
{{ $ext4 := or .image (printf "debian-%s-%s.ext4" $suite $architecture) }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    suite: {{ $suite }}
    components:
      - main
    variant: minbase

  - action: apt
    recommends: false
    packages:
      - adduser
      - sudo
{{- if eq $architecture "armhf" "arm64" }}
      - python-libsoc
{{- end }}

  - action: run
    description: Set hostname
    chroot: true
    command: echo debian-{{ $suite }}-{{ $architecture }} > /etc/hostname

  - action: run
    chroot: true
    script: scripts/

  - action: overlay
    description: Add sudo configuration
    source: overlays/sudo

  - action: pack
    file: {{ $image }}
    compression: gz

{{ if eq $type "full" }}
  - action: image-partition
    imagename: {{ $ext4 }}
    imagesize: 1GB
    partitiontype: msdos
    mountpoints:
      - mountpoint: /
        partition: root
    partitions:
      - name: root
        fs: ext4
        start: 0%
        end: 100%
        flags: [ boot ]

  - action: filesystem-deploy
    description: Deploying filesystem onto image
{{ end }}

debos also provides some other actions that haven't been covered in the example above:

  • download allows downloading a single file from the internet
  • raw can directly write a file to the output image at a given offset
  • unpack can be used to unpack files from an archive into the filesystem
  • ostree-commit creates an OSTree commit from a rootfs
  • ostree-deploy deploys an OSTree branch to the image

The example in this blog post is simple and short on purpose. Combining the actions presented above, you could also include a kernel and install a bootloader to make a bootable image. Upstream is planning to add more examples soon to the debos recipes repository.

debos is a project from Sjoerd Simons at Collabora. It's still missing some features, but it's actively being developed and there are big plans for the future!

Worse Than FailureError'd: Testing English in Production

Philip G. writes, "I found this gem when I was on the 'Windows USB/DVD Download Tool' page (yes, I know Rufus is better) and I decided to increment the number in the URL."


"Using a snowman emoji as a delimiter...yeah, I guess you could do that," writes George.


Seb wrote, "These signup incentives are just a little too variable for my tastes..."


"Wow. Vodafone UK really isn't selling the battery life of the Samsung Galaxy J3...or maybe they're just being honest?" Steve M. writes.


"Nice to see the Acer website here in South Africa being up front about their attempts at upselling," wrote Gabriel S.


"Thank you $wargaming_company_title$ for your friendly notice, I'll spend my $wot_gold_amount$ in $wot_gold_suggestion$.", Tassu writes.



Google AdsenseAdSense now understands Telugu

Today, we’re excited to announce the addition of Telugu, a language spoken by over 70 million people in India and many other countries around the world, to the family of AdSense supported languages. With this launch, publishers can now monetize their Telugu content, and advertisers can connect with a Telugu-speaking audience through relevant ads.

To start monetizing your Telugu content website with Google AdSense:

  1. Check the AdSense program policies and make sure your website is compliant.
  2. Sign up for an AdSense account.
  3. Add the AdSense code to start displaying relevant ads to your users.

Welcome to AdSense! Sign up now.

Posted by:
The AdSense Internationalization Team

Planet Debianbisco: Fourth GSoC Report

As announced in the last report, I started looking into SSO solutions and evaluated and tested them. At the beginning my focus was on SAML integration, but I soon realized that OAuth2 would be more important.

I started with installing Lemonldap-NG. LL-NG is a WebSSO solution written in Perl that uses ModPerl or FastCGI for delivering web content. There is a Debian package in stable, so the installation was no problem at all. The configuration was a bit harder, as LL-NG has a complex architecture with different vhosts. But after some fiddling I managed to connect the installation to our test LDAP instance and was able to authenticate against the LL-NG portal. Then I started to research how to integrate an OAuth2 client. For the tests I had on the one hand a GitLab installation that I tried to connect to the OAuth2 providers using the omniauth-oauth2-generic strategy. To have more fine-grained control over the OAuth2 client configuration, I also used the Python requests-oauthlib module and modified the web app example from their documentation to my needs. After some fiddling and a bit of back and forth on the lemonldap-ng mailing list, I managed to get both test clients to authenticate against LL-NG.

Lemonldap-NG Screenshot

The second solution I tested was Keycloak, an identity and access management solution written in Java by Red Hat. There is no Debian package, but nonetheless it was very easy to get it running. It is enough to install jre-default from the package repositories and then run the standalone script from the extracted Keycloak folder. Because Keycloak only listens on localhost and I didn’t want to get into configuring the Java web server stuff, I installed nginx and configured it as a proxy. In Keycloak, too, the first step was to configure the LDAP backend. When I was able to successfully log in using my LDAP credentials, I looked into configuring an OAuth2 client, which wasn’t that hard either.

Keycloak Screenshot

The third solution I looked into was Glewlwyd, written by babelouest. There is a Debian package in buster, so I added the buster sources, set up apt pinning, and installed the needed packages. Glewlwyd is a system service that listens on localhost:4593, so I also used nginx in this case. The configuration for the LDAP backend is done in the configuration file, which on Debian is /etc/glewlwyd/glewlwyd-debian.conf. Glewlwyd provides a web interface for managing users and clients, and it is possible to store all the values in LDAP.

Glewlwyd Screenshot

The next steps will be to test the last candidate, Ipsilon, and to check all the solutions for some important features, like multiple backends and exporting of configurable attributes. Last but not least, I want to create a table to have an overview of all the features and drawbacks of the solutions. All the evaluations are public in a salsa repository.

I also carried on doing some work on nacho, though most of the issues that have to be fixed are rather small. I regularly stumble upon texts about Python or Django, like for example the Django NewbieMistakes, and try to read all of them and use that to improve my work.

Rondam RamblingsI have no words

So I'll let Lili Loofbourow speak for me.

Planet DebianLaura Arjona Reina: Debian and free software personal misc news

Many of them probably are worth a blog post each, but it seems I cannot find the time or motivation to craft nice blog posts for now, so here’s a quick update of some of the things that happened and happen in my digital life:

  • Debian Jessie became LTS and I still didn’t upgrade my home server to stable. Well, I could tell myself that now I have 2 more years to try to find the time (thanks LTS team!) and that the machine just works (and that’s probably the reason for not finding the motivation to upgrade it or to put time into it; thanks Debian and the software projects of the services I run there!), but I have to find a way to give some love to my home server during this summer, otherwise I probably won’t be able to do it until next summer.


  • has been down for several weeks, and I’m afraid it probably won’t come back. This means my personal account in GNU Social is not working, and the Debian one ( is not working either. I would like to find another good instance in which to create both accounts (I would like to self-host, but it’s not realistic, e.g. see the above point). I’m considering both the GNU Social and Mastodon networks, but I still need to do some research on uptimes, number of users, the workforce behind the instances, who’s there, etc. Meanwhile, my few social network updates are posted in as always, and for Debian news you can follow (it provides an RSS feed), or When I resurrect @debian in the fediverse I’ll publicise it, and I hope followers find us again.


  • We recently migrated the Debian website from CVS to git: I am very happy and thankful to all the people who helped make it possible. I think that most of the contributors adapted well to the changes (also because keeping the existing workflows was a priority), but if you feel lost or want to comment on anything, just tell us. We don’t want to lose anybody, and we’re happy to welcome and help anybody who wants to get involved.


  • Alioth’s shutdown and the Debian website migration triggered a lot of reviews in the website content (updating links and paragraphs, updating translations…) and scripts. Please be patient and help if you can (e.g. contact your language team, or have a look at the list of bugs: tagged or the bulk list). I will try to do remote “DebCamp18” work and attack some of them, but I’m also considering organising or attending a BSP in September/October. We’ll see.


  • In the Spanish translation team, I am very happy that we have several regular contributors, translating and/or reviewing. In the last months I did less translation work than I would like, but I try not to lose pace, and I hope to put more time into translations and reviews during this summer, at least on the website and in package descriptions.


  • One more year, I’m not coming to DebConf. This year my schedule/situation was clear from long ago, so it’s been easier to just accept that I cannot go, and continue being involved somehow. It’s sad not being able to celebrate the migration with web team mates, but I hope they celebrate anyway! I am a bit behind with DebConf publicity work but I will try to catch up soon, and for DebConf itself I will try to do the microblogging coverage as former years, and also participate in the IRC and watching the streaming, thanks timezones and siesta, I guess 😉


  • Since January I am enjoying my new phone (the Galaxy S III broke, and I bought a BQ Aquaris U+) with Lineage OS 14.x and F-Droid. I keep having a look every few days at the F-Droid tab that shows the news and updated apps, and the activity and life of the projects is amazing. A non-exhaustive list of the free software apps that I use: AdAway, Number Guesser (I play this with my son), Conversations, Daily Dozen, DavDroid, F-Droid, Fennec F-Droid, Hacker’s Keyboard, K-9 Mail, KDE Connect, Kontalk, LabCoat, Document Reader, LibreOffice Viewer (old but it works), Memetastic, NewPipe, OSMAnd~, PassAndroid, Periodical, Puma, Quasseldroid, QuickDic, RadioDroid, Reader for Pepper and Carrot, Red Moon, RedReader, Ring, Slight Backup, Termux. Some other apps that I don’t use all the time but find nice to have are AFWall+, Atomic (for when my Quassel server is down), Call Recorder, Pain Diary, and Yalp Store. My son decided not to play games on phones/tablets, so we removed Anuto TD, Apple Finger, Seafood Berserker, Shattered Pixel Dungeon and Turo (I appreciate the games but only play sometimes, if another person plays too, just to share the game). My only non-free apps: the one that gives me the time I need to wait at the bus stop, Wallapop (a second-hand, person-to-person buy/sell app), and Whatsapp. I have no Google services on the phone and no location services available for those apps, but I enter the bus stop number or the postal code by hand, and they work.


  • I am very very happy with my Lenovo X230 laptop, its keyboard and everything. It runs Debian stable for now, and Plasma Desktop. I only have 2 issues with it: (1) hibernation, and (2) the smart card reader. About the hibernation: sometimes, when on battery, I close the lid and it seems it does not hibernate well, because when I open the lid again it does not come back: the power button blinks slowly, and pressing it, typing something, or moving the touchpad has no effect. The only ‘solution’ is to long-press the power button so it abruptly shuts down (or take the battery out, with the same effect). After that, I turn it on again and the filesystem complains about the unexpected shutdown, but it boots correctly. About the smart card reader: I have a C3PO LTC31 smart card reader, and when I connect it via USB to use my GPG smart card, I need to restart the pcsc service manually to be able to use it. If I don’t do that, the smart card is not recognised (Thunderbird or whatever program asks me repeatedly to insert the card). I’m not sure why that is, and whether it’s related to my setup or to this particular reader. I have another reader (another model) at work, but always forget to switch them to make tests. Anyway, I can live with it until I find time to research more.

There are probably more things that I forget, but this post became too long already. Bye!



Planet DebianJonathan McDowell: Thoughts on the acquisition of GitHub by Microsoft

Back at the start of 2010, I attended in Wellington. One of the events I attended was sponsored by GitHub, who bought me beer in a fine Wellington bar (that was very proud of having an almost complete collection of BrewDog beers, including some Tactical Nuclear Penguin). I proceeded to tell them that I really didn’t understand their business model and that one of the great things about git was the very fact it was decentralised and we didn’t need to host things in one place any more. I don’t think they were offended, and the announcement Microsoft are acquiring GitHub for $7.5 billion proves that they had a much better idea about this stuff than me.

The acquisition announcement seems to have caused an exodus. GitLab reported over 13,000 projects being migrated in a single hour. IRC and Twitter were full of people throwing up their hands and saying it was terrible. Why is this? The fear factor seemed to come from who was doing the acquiring: Microsoft. The big, bad ‘Linux is a cancer’ folk. I saw a similar, though more muted, reaction when LinkedIn were acquired.

This extremely negative reaction to Microsoft seems bizarre to me these days. I’m well aware of their past, and their anti-competitive practices (dating back to MS-DOS vs DR-DOS). I’ve no doubt their current embrace of Free Software is ultimately driven by business decisions rather than a sudden fit of altruism. But I do think their current behaviour is something we could never have foreseen 15+ years ago. Did you ever think Microsoft would be a contributor to the Linux kernel? Is it fair to maintain such animosity? Not for me to say, I guess, but I think that some of it is that both GitHub and LinkedIn were services that people were already uneasy about using, and the acquisition was the straw that broke the camel’s back.

What are the issues with GitHub? I previously wrote about the GitHub TOS changes, stating I didn’t think it was necessary to fear the TOS changes, but that the centralised nature of the service was potentially something to be wary of. joeyh talked about this as long ago as 2011, discussing the aspects of the service other than the source code hosting that were only API accessible, or in some other way more restricted than a git clone away. It’s fair criticism; the extra features offered by GitHub are very much tied to their service. And yet I don’t recall the same complaints about SourceForge, long the home of choice for Free Software projects. Its problems seem to be more around a dated interface, being slow to enable distributed VCSes, and the addition of advertising. People left because there were much better options, not because of ideological differences.

Let’s look at the advantages GitHub had (and still has) to offer. I held off on setting up a GitHub account for a long time. I didn’t see the need; I self-hosted my Git repositories. I had the ability to setup mailing lists if I needed them (and my projects generally aren’t popular enough that they did). But I succumbed in 2015. Why? I think it was probably as part of helping to run an OpenHatch workshop, trying to get people involved in Free software. That may sound ironic, but helping out with those workshops helped show me the benefit of the workflow GitHub offers. The whole fork / branch / work / submit a pull request approach really helps lower the barrier to entry for people getting started out. Suddenly fixing an annoying spelling mistake isn’t a huge thing; it’s easy to work in your own private playground and then make that work available to upstream and to anyone else who might be interested.

For small projects without active mailing lists that’s huge. Even for big projects that can be a huge win. And it’s not just useful to new contributors. It lowers the barrier for me to be a patch ‘n run contributor. Now that’s not necessarily appealing to some projects, because they’d rather get community involvement. And I get that, but I just don’t have the time to be active in all the projects I feel I can offer something to. Part of that ease is the power of git, the fact that a clone is a first class repo, capable of standing alone or being merged back into the parent. But another part is the interface GitHub created, and they should get some credit for that. It’s one of those things that once you’re presented with it it makes sense, but no one had done it quite as slickly up to that point. Submissions via mailing lists are much more likely to get lost in the archives compared to being able to see a list of all outstanding pull requests on GitHub, and the associated discussion. And subscribe only to that discussion rather than everything.

GitHub also seemed to appear at the right time. It, like SourceForge, enabled easy discovery of projects. Crucially it did this at a point when web frameworks were taking off and a whole range of developers who had not previously pulled large chunks of code from other projects were suddenly doing so. And writing frameworks or plugins themselves and feeling in the mood to share them. GitHub has somehow managed to hit critical mass such that lots of code that I’m sure would otherwise never have seen the light of day is available to all. Perhaps the key was that repos were lightweight setups under usernames, unlike the heavier SourceForge approach of needing a complete project setup per codebase you wanted to push. Although it’s not my primary platform, I engage with GitHub for my own code because the barrier is low; it’s a couple of clicks on the website and then I just push to it like my other remote repos.

I seem to be coming across as a bit of a GitHub apologist here, which isn’t my intention. I just think the knee-jerk anti GitHub reaction has been fascinating to observe. I signed up to GitLab around the same time as GitHub, but I’m not under any illusions that their hosted service is significantly different from GitHub in terms of having my data hosted by a third party. Nothing that’s up on either site is only up there, and everything that is is publicly available anyway. I understand that as third parties they can change my access at any point in time, and so I haven’t built any infrastructure that assumes their continued existence. That said, why would I not take advantage of their facilities when they happen to be of use to me?

I don’t expect my use of GitHub to significantly change now they’ve been acquired.

Planet DebianDaniel Stender: Dynamic inventories for Ansible using Python

Ansible not only accepts static machine inventories represented in an inventory file, it is also capable of leveraging dynamic inventories. To use that mechanism, the only thing needed is a program or script which creates the particular machines needed for a certain project and returns their addresses as a JSON object, representing an inventory just like an inventory file does. This makes it possible to create specially crafted tools to set up the number of cloud machines needed for an Ansible project, and the mechanism theoretically is open to any programming language. Instead of selecting an inventory file with the option -i like with ansible-playbook, just give the name of the program you’ve set up, and Ansible executes it and evaluates the inventory which is given back.
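
To illustrate the contract, a dynamic inventory can be as small as a script that prints the expected JSON structure. Here's a minimal sketch (the group names and addresses are made up for illustration) that could be passed to Ansible with -i:

```python
#!/usr/bin/env python
# Minimal dynamic inventory: print a JSON object mapping group names
# to host lists. Ansible invokes the script with --list to fetch the
# whole inventory and with --host <name> for per-host variables.
import json
import sys

def inventory():
    # Hypothetical static data; a real script would query a cloud API.
    return {
        "load-balancer": {"hosts": ["192.0.2.10"]},
        "wordpress-nodes": {"hosts": ["192.0.2.11", "192.0.2.12"]},
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--host":
        # No per-host variables in this sketch.
        print(json.dumps({}))
    else:
        print(json.dumps(inventory()))
```

Anything that emits this JSON shape on stdout works; the language and the data source behind it are entirely up to you.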

Here’s a little example of a dynamic inventory for Ansible written in Python. The script uses the python-digitalocean library (packaged in Debian) to launch a couple of DigitalOcean droplets for a particular Ansible project:

#!/usr/bin/env python
import os
import sys
import json
import digitalocean
import ConfigParser

# read the settings from inventory.cfg next to this script
config = ConfigParser.ConfigParser()
config.read(os.path.dirname(os.path.abspath(__file__)) + '/inventory.cfg')
nom = config.get('digitalocean', 'number_of_machines')
keyid = config.get('digitalocean', 'key-id')
try:
    token = os.environ['DO_ACCESS_TOKEN']
except KeyError:
    token = config.get('digitalocean', 'access-token')

manager = digitalocean.Manager(token=token)

def get_droplets():
    # return the tagged droplets, or False if none have been created yet
    droplets = manager.get_all_droplets(tag_name='ansible-demo')
    if not droplets:
        return False
    elif len(droplets) != int(nom):
        print "The number of already set up 'ansible-demo' droplets differs"
        sys.exit(1)
    elif len(droplets) == int(nom):
        return droplets

key = manager.get_ssh_key(keyid)
tag = digitalocean.Tag(token=token, name='ansible-demo')
tag.create()

def create_droplet(name):
    droplet = digitalocean.Droplet(token=token,
                                   name=name,
                                   region='fra1',
                                   image='debian-8-x64',
                                   size_slug='512mb',
                                   ssh_keys=[key])
    droplet.create()
    tag.add_droplets([droplet.id])
    return True

if get_droplets() is False:
    # one load-balancer node, the rest are wordpress nodes
    create_droplet('load-balancer')
    for node in range(int(nom))[1:]:
        create_droplet('wordpress-' + str(node))

droplets = get_droplets()
inventory = {}
hosts = {}
machines = []
for droplet in droplets:
    if 'load-balancer' in droplet.name:
        machines.append(droplet.ip_address)
hosts['hosts'] = machines
inventory['load-balancer'] = hosts
hosts = {}
machines = []
for droplet in droplets:
    if 'wordpress' in droplet.name:
        machines.append(droplet.ip_address)
hosts['hosts'] = machines
inventory['wordpress-nodes'] = hosts

print json.dumps(inventory)

It’s a simple, basic script to demonstrate how you can craft something for your own needs to leverage dynamic inventories for Ansible. The parameters of the droplets, like the size (512mb), the image (debian-8-x64), and the region (fra1), are hard coded and can easily be changed if wanted. The other things needed, like the total number of wanted machines, the access token for the DigitalOcean API, and the ID of the public SSH key which is going to be applied to the virtual machines, are evaluated using a simple configuration file (inventory.cfg):

[digitalocean]
access-token = 09c43afcbdf4788c611d5a02b5397e5b37bc54c04371851
number_of_machines = 4
key-id = 21699531

The script of course can be executed independently of Ansible. The first time you execute it, it creates the number of machines wanted (always consisting of one load-balancer node and, given that the total number of machines is four, three wordpress nodes), and gives back the IP addresses of the newly created machines, grouped into inventory groups:

$ ./ 
{"wordpress-nodes": {"hosts": ["", "", ""]}, "load-balancer": {"hosts": [""]}}


Any consecutive execution of this script recognizes that the wanted machines have already been created, and just returns the same inventory one more time:

$ ./ 
{"wordpress-nodes": {"hosts": ["", "", ""]}, "load-balancer": {"hosts": [""]}}

If you delete the droplets then, and run the script again, a new set of machines gets created:

$ for i in $(doctl compute droplet list | awk '/ansible-demo/{print $(1)}'); do doctl compute droplet delete $i; done
$ ./ 
{"wordpress-nodes": {"hosts": ["", "", ""]}, "load-balancer": {"hosts": [""]}}

As you can see, the JSON object[1] which is given back represents an Ansible inventory. The same inventory represented in a file would have this form:



Like I said, you can use this “one-trick pony” Python script instead of an inventory file: just give its name, and the Ansible CLI tool runs it and works on the inventory which is given back:

$ ansible wordpress-nodes -i ./ -m ping -u root --private-key=~/.ssh/id_digitalocean | SUCCESS => {
    "changed": false, 
    "ping": "pong"
} | SUCCESS => {
    "changed": false, 
    "ping": "pong"
} | SUCCESS => {
    "changed": false, 
    "ping": "pong"

Note: the script doesn’t yet support a waiter mechanism; it completes as soon as there are IP addresses available. It can always take a little while until the newly created machines are completely created, booted, and accessible via SSH, so there could be errors about the hosts not being accessible. In that case, just wait a few seconds and run the Ansible command again.
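
A minimal waiter could poll TCP port 22 on each returned address before the script prints the inventory. Here's a rough sketch of such a helper (the function name and the timeout values are my own, not part of the script above):

```python
# Hypothetical waiter: block until every host accepts a TCP connection
# on port 22, or give up after a deadline. Values are illustrative.
import socket
import time

def wait_for_ssh(hosts, timeout=120, interval=5):
    """Return True once all hosts answer on port 22, False on timeout."""
    deadline = time.time() + timeout
    pending = set(hosts)
    while pending and time.time() < deadline:
        for host in list(pending):
            try:
                sock = socket.create_connection((host, 22), timeout=3)
                sock.close()
                pending.discard(host)
            except (OSError, socket.error):
                pass  # not reachable yet, retry on the next pass
        if pending:
            time.sleep(interval)
    return not pending
```

Calling something like this on the collected addresses just before the final json.dumps would make the inventory usable immediately after the droplets are created.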

  1. For the exact structure of the JSON object I’m drawing from: [return]

Planet DebianShirish Agarwal: Abuse of childhood

The blog post is in homage to any abuse victims, and more directly to parents and children being separated by policies formed by a Government whose chief is supposed to be ‘The leader of the free world’. I sat on the blog post for almost a week, even though I got it proof-read by two women, Miss S and Miss K, to see if there was anything wrongful about the post. Both women gave me their blessings, as it’s something to be shared.

I am writing this blog post from my house, in a safe environment, having chai (tea), listening to some of my favorite songs, far from the trauma some children are going through.

I have been disturbed by the news of families, and especially young children, being separated from their own families because of state policy. I was pretty hesitant to write this post, as we are told to only share our strengths and not our weaknesses or the traumas of the past. I partly want to share so that people who might be on the fence about whether separating families is a good idea or not might have something more to ponder. The blog post is not limited to the ongoing and proposed U.S. policy called Separations, but to all and any situations involving young children and abuse.

The first experience was when my cousin sister and her family came to visit me and mum. We often do not get relatives or entertain them due to water shortage issues. It’s such a common issue all over India that nobody bats an eye over it; we will probably talk about it in some other blog post if need be.

The sister who came has two daughters. The older one knew me and mum, and knew that both of us have a penchant for pulling legs but at the same time like to spoil Didi and her. All of us are foodies, so we have a grand time. The younger one, though, didn’t know us, and we were unknown to her. In playfulness, we said we would keep the bigger sister with us, and she was afraid. She clung to her sister like anything. Even though we tried to pacify her, she wasn’t at ease with us till the time she was safely tucked in with her sister in the family car along with her mum and dad.

While this is a small incident, it triggered a memory kept hidden for over 35+ years. I was perhaps 4-5 years old, being brought up by a separated working mum who had a typical government 9-5 job. My grandparents (mother’s side) used to try and run the household in her absence, my grandmother doing all the household chores, my grandfather helping here and there, while all outside responsibilities were his.

In this, there was the task of putting me in school. Mum probably talked to some of her colleagues, or somebody or other suggested St. Francis, a Catholic missionary school named after one of the many saints named Saint Francis. It is and was a school nearby. There was a young man who used to do odd jobs around the house and was trusted by all, a fan of Amitabh Bachchan, who is/was responsible for my love for first-day first shows of his movies. A genuinely nice elderly-brother kind of person with whom I have had a lot of beautiful memories of childhood.

Anyways, his job was to transport me back and forth to the school, which he did without fail. The trouble started for me in school. I do not know the reason till date; maybe I was a bawler or whatever, but I was kept in a dark, dank toilet for a year (minus the holidays). The first time I went to the dark, foreboding place, I probably shat and vomited, for which I was beaten quite a bit. I learnt that if I were sent to the dark room, I had to put my knickers somewhere up top where they wouldn’t get dirty, so I would not get beaten. Sometimes I was also made to clean my vomit or shit, which made the whole thing worse. The only nice remembrance I had was the last hour before school was over, as I was taken out of the toilet, made presentable, and made to sit near the window-sill from where I could see trains running by. I dunno whether it was just the smell of free, fresh air plus seeing trains and freedom that somehow got mixed, and a train-lover was born.

I don’t know why I didn’t ever tell my mum or anybody else about the abuse happening to me. Most probably because the teacher had threatened me with something or other. Somehow the year ended and I was failed. The only thing mother and my grandparents probably saw and felt was that I had grown a bit thinner.

Either due to mother’s intuition or because I had failed, I was made to change schools. While I was terrified of the change, because I thought there was something wrong with me and things would be worse, it was actually the opposite. While corporal punishment was still the norm, there wasn’t any abuse, unlike in the school before. In the eleven years I spent in the school, there was only one time that I was given toilet duty, and that too because I had done something naughty like pulling a girl’s hair or something like that, and it was with one or two students next to me. Rather than clean the toilets, we ended up playing with water.

I told part of my experience to mum about a year, year and a half after I was in the new school, half-expecting something untoward to happen as the teacher had said. The only thing I remember from that conversation was the shock registering on her face. I didn’t tell her about the vomit and shit part, as I was embarrassed about it. I had nightmares about it till I was in my teens, when with treks and everything I understood that even darkness can be a friend, just like light is.

For the next 13-odd years, till I asked her to stop checking on me, she used to come to school every few months, talk to teachers, and talk with classmates. The same happened in college, till I asked her to stop checking, as I used to feel embarrassed when other classmates gossiped.

It was only years later, when I began working, that I understood what she was doing all along. She was just making sure I was OK.

The fact that it took me 30+ years to share this story/experience with the world at large also tells me that somewhere I still feel a bit scarred, on the soul.

If you are feeling any sympathy or empathy towards me, while I’m thankful for it, it would be much better directed towards those who are in a precariously vulnerable situation like I was. It doesn’t matter what politics you believe in or peddle: separating children from their parents is immoral for any being, forget even a human being. Even in the animal world, we see how predators only attack those young whose fathers and mothers are not around to protect them.

As in any story/experience/tale, there are lessons or takeaways that I hope most parents, especially Indian or Asiatic parents at large, teach their young ones:

1. Change the rule of ‘Respect all elders and obey them no matter what’ to ‘Respect everybody, including yourself’. This will boost children’s self-confidence a bit and also help them share any issues that happen to them.

2. If somebody threatens you or threatens the family, immediately inform us (i.e. the parents).

3. The third one is perhaps the most difficult: ‘telling the truth without worrying about consequences’. In Indian families we learn about ‘secrets’ and ‘modifying the truth’ from our parents and elders. That needs to somehow change.

4. A few years ago, Aamir Khan (a film actor), along with people specializing in working with children, talked and shared about ‘good touch, bad touch’ as a prevention method; maybe somebody could do something similar for these kinds of violence.

At the end, I recently came across an article and also Terminal.

Planet DebianAntoine Beaupré: My free software activities, June 2018

It's been a while since I've done a report here! Since I need to do one for LTS, I figured I would also catch you up on the work I've done in the last three months. Maybe I'll make that my new process: quarterly reports would reduce the overhead on my side with little loss to you, my precious (few? many?) readers.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

I omitted doing a report in May because I didn't spend a significant number of hours, so this also covers a handful of hours of work in May.

May and June were strange months to work on LTS, as we made the transition between wheezy and jessie. I have worked on all three LTS releases now, and I must have been absent during the last transition, because I felt this one was a little confusing to go through. Maybe it's because I was on frontdesk duty during that time...

For a week or two it was unclear if we should have worked on wheezy, jessie, or both, or even how to work on either. I documented which packages needed an update from wheezy to jessie and proposed a process for the transition period. This generated a good discussion, but I am not sure we resolved the problems we had this time around in the long term. I also sent patches to the security team in the hope they would land in jessie before it turns into LTS, but most of those ended up being postponed to LTS.

Most of my work this month was spent porting the Mercurial fixes from wheezy to jessie. Technically, the patches were ported from upstream 4.3 and led to some pretty interesting results in the test suite, which fails to build from source non-reproducibly. Because I couldn't figure out how to fix this in the allotted time, I uploaded the package to my usual test location in the hope someone else picks it up. The test package fixes 6 issues (CVE-2018-1000132, CVE-2017-9462, CVE-2017-17458 and three issues without a CVE).

I also worked on cups in a similar way, sending a test package to the security team for 2 issues (CVE-2017-18190, CVE-2017-18248). Same for DokuWiki, where I sent a patch for a single issue (CVE-2017-18123). Those have yet to be published, however, and I will hopefully wrap that up in July.

Because I was looking for work, I ended up doing meta-work as well. I made a prototype that would use the embedded-code-copies file to populate data/CVE/list with related packages, as a way to address a problem we have in LTS triage, where packages that were renamed between suites do not get correctly added to the tracker. It ended up being rejected because the changes were too invasive, but it led to Brian May suggesting another approach; we'll see where that goes.

I've also looked at splitting up that dreaded data/CVE/list, but my results were negative: it looks like git already copes well with the large file, so splitting it would bring little benefit. While a split-up list might be easier on editors, it would be a massive change and was eventually refused by the security team.

Other free software work

With my last report dating back to February, this will naturally be a little imprecise, as three months have passed. But let's see...


I wrote eight articles in the last three months, an average of nearly three a month. I was aiming for an average of one or two a week, so I didn't reach my goal. My last article, about Kubecon, generated a lot of feedback, probably the best I have ever received. It seems I struck a chord with a lot of people, so that certainly feels nice.

Linkchecker

Usual maintenance work, but we at last finally got access to the Linkchecker organization on GitHub, which meant a bit of reorganizing. The only bit missing now is the PyPI namespace, but that should also come soon. The code of conduct and contribution guides were finally merged after we clarified project membership. This gives us issue templates which should help us deal with the constant flow of issues that come in every day.

The biggest concern I have with the project now is the C parser and the outdated Windows executable. The latter has been removed from the website so hopefully Windows users won't report old bugs (although that means we won't gain new Windows users at all) and the former might be fixed by a port to BeautifulSoup.

Email over SSH

I did a lot of work to switch away from SMTP and IMAP to synchronise my workstation and laptops with my mailserver. Having the privilege of running my own server has its perks: I have SSH access to my mail spool, which brings the opportunity for interesting optimizations.

The first is called rsendmail. Inspired by work from Don Armstrong and David Bremner, rsendmail is a Python program I wrote from scratch to deliver email over a pipe, securely. I do not trust the sendmail command: its behavior can vary a lot between platforms (e.g. allowing flushing the mail queue or printing it) and I wanted to reduce the attack surface. It works with another program I wrote called sshsendmail which connects to it over a pipe. It integrates well into "dumb" MTAs like nullmailer, but I also use it with the popular Postfix without problems.
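
The pipe-based design is simple to picture: the local side just writes the raw message to a command's standard input and lets SSH carry the pipe to the server. Here is a minimal sketch of that idea; the exact rsendmail invocation shown in the docstring is a hypothetical illustration, not its documented interface:

```python
import subprocess

def deliver(message: bytes, transport: list) -> bytes:
    """Pipe a raw RFC 5322 message to a delivery command and return its
    stdout.  Over SSH, transport would be something like
    ["ssh", "mailserver", "rsendmail", "alice@example.com"]
    (hypothetical invocation for illustration); locally, any argv
    works the same way."""
    done = subprocess.run(transport, input=message,
                          capture_output=True, check=True)
    return done.stdout
```

Because the remote side is a single fixed command rather than a full sendmail emulation, the attack surface stays small, which is the point of the exercise.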

The second is to switch from OfflineIMAP to Syncmaildir (SMD). The latter allows synchronization over SSH only. The migration was a little difficult but I very much like the results: SMD is faster than OfflineIMAP and works transparently in the background.

I really like to use SSH for email. I used to have my email password stored all over the place: in my Postfix config, in my email clients' memory, it was a mess. With the new configuration, things just work unattended and email feels like a solved problem, at least the synchronization aspects of it.

Emacs

As often happens, I've done some work on my Emacs configuration. I switched to a new Solarized theme, the bbatsov version, which has support for light and dark modes and generally better colors. I had problems with the cursor which unfortunately remain unfixed.

I learned about and used the Emacs iPython Notebook project (EIN) and filed a feature request to replicate the "restart and run" behavior of the web interface. Otherwise it's really nice to have a decent editor to work on Python notebooks, and I have used this to work on the terminal emulators series and the related source code.

I have also tried to complete my conversion to Magit, a pretty nice wrapper around git for Emacs. Some of my usual git shortcuts have good replacements, but not all. For example, those are equivalent:

  • vc-annotate (C-x C-v g): magit-blame
  • vc-diff (C-x C-v =): magit-diff-buffer-file

Those do not have a direct equivalent:

  • vc-next-action (C-x C-q, or F6): anarcat/magit-commit-buffer, see below
  • vc-git-grep (F8): no replacement

I wrote my own replacement for "diff and commit this file" as the following function:

(defun anarcat/magit-commit-buffer ()
  "commit the changes in the current buffer on the fly

This is different than `magit-commit' because it calls `git
commit' without going through the staging area AKA index
first. This is a replacement for `vc-next-action'.

Tip: setting the git configuration parameter `commit.verbose' to
2 will show the diff in the changelog buffer for review. See
`git-config(1)' for more information.

An alternative implementation was attempted with `magit-commit':

  (let ((magit-commit-ask-to-stage nil))
    (magit-commit (list \"commit\" \"--\"
                        (file-relative-name buffer-file-name)))))

But it seems `magit-commit' asserts that we want to stage content
and will fail with: `(user-error \"Nothing staged\")'. This is
why this function calls `magit-run-git-with-editor' directly."
  (interactive)
  (magit-run-git-with-editor (list "commit" "--" (file-relative-name buffer-file-name))))

It's not very pretty, but it works... Mostly. Sometimes the magit-diff buffer becomes out of sync, but the --verbose output in the commitlog buffer still works.

I've also looked at git-annex integration. The magit-annex package did not work well for me: the file listing is really too slow. So I found the git-annex.el package, but did not try it out yet.

While working on all of this, I fell into a different rabbit hole: I found it inconvenient to "pastebin" stuff from Emacs, as it would involve selecting a region, piping it to pastebinit and copy-pasting the URL found in the *Messages* buffer. So I wrote this first prototype:

(defun pastebinit (begin end)
  "pass the region to pastebinit and add output to killring

TODO: prompt for possible pastebins (pastebinit -l) with prefix arg

Note that there's a `nopaste.el' project which already does this,
which we should use instead."
  (interactive "r")
  (message "use nopaste.el instead")
  (let ((proc (make-process :filter #'pastebinit--handle
                            :command '("pastebinit")
                            :connection-type 'pipe
                            :buffer nil
                            :name "pastebinit")))
    (process-send-region proc begin end)
    (process-send-eof proc)))

(defun pastebinit--handle (proc string)
  "handle output from pastebinit asynchronously"
  (let ((url (car (split-string string))))
    (kill-new url)
    (message "paste uploaded and URL added to kill ring: %s" url)))

It was my first foray into asynchronous process operations in Emacs: difficult and confusing, but it mostly worked. Those who know me know what's coming next, however: I found not only one, but two libraries for pastebins in Emacs: nopaste and (after patching nopaste to add asynchronous support and customize support, of course) debpaste.el. I'm not sure where that will go: there is a proposal to add nopaste to Debian that was discussed a while back, and I made a detailed report there.


Monkeysign

I made a minor release of Monkeysign to cover for CVE-2018-12020 and its GPG sigspoof vulnerability. I am not sure where to take this project anymore, and I opened a discussion to possibly retire the project completely. Feedback welcome.

ikiwiki

I wrote a new ikiwiki plugin called bootstrap to fix table markup to match what the Bootstrap theme expects. This was particularly important for the previous blog post which uses tables a lot. This was surprisingly easy and might be useful to tweak other stuff in the theme.

Random stuff

  • I wrote up a review of security of APT packages when compared with the TUF project, in TufDerivedImprovements
  • contributed to about 20 different repositories on GitHub, too numerous to list here

Krebs on SecurityPlant Your Flag, Mark Your Territory

Many people, particularly older folks, proudly declare they avoid using the Web to manage various accounts tied to their personal and financial data — including everything from utilities and mobile phones to retirement benefits and online banking services. The reasoning behind this strategy is as simple as it is alluring: What’s not put online can’t be hacked. But increasingly, adherents to this mantra are finding out the hard way that if you don’t plant your flag online, fraudsters and identity thieves may do it for you.

The crux of the problem is that while most types of customer accounts these days can be managed online, the process of tying one’s account number to a specific email address and/or mobile device typically involves supplying personal data that can easily be found or purchased online — such as Social Security numbers, birthdays and addresses.

Some examples of how being a modern-day Luddite can backfire are well-documented, such as when scammers create online accounts in someone’s name at the Internal Revenue Service, the U.S. Postal Service or the Social Security Administration.

Other examples may be far less obvious. Consider the case of a consumer who receives their home telephone service as part of a bundle through their broadband Internet service provider (ISP). Failing to set up a corresponding online account to manage one’s telecommunications services can provide a powerful gateway for fraudsters.

Carrie Kerskie is president of Griffon Force LLC, a company in Naples, Fla. that helps identity theft victims recover from fraud incidents. Kerskie recalled a recent case in which thieves purchased pricey items from a local jewelry store in the name of an elderly client who’d previously bought items at that location as gifts for his late wife.

In that incident, the perpetrator presented a MasterCard Black Card in the victim’s name along with a fake ID created in the victim’s name (but with the thief’s photo). When the jewelry store called the number on file to verify the transactions, the call came through to the impostor’s cell phone right there in the store.

Kerskie said a follow-up investigation revealed that the client had never set up an account at his ISP (Comcast) to manage it online. Multiple calls with the ISP’s customer support people revealed that someone had recently called Comcast pretending to be the 86-year-old client and established an online account.

“The victim never set up his account online, and the bad guy called Comcast and gave the victim’s name, address and Social Security number along with an email address,” Kerskie said. “Once that was set up, the bad guy logged in to the account and forwarded the victim’s calls to another number.”

Incredibly, Kerskie said, the fraudster immediately called Comcast to ask about the reason for the sudden account changes.

“While I was on the phone with Comcast, the customer rep told me to hold on a minute, that she’d just received a communication from the victim,” Kerskie recalled. “I told the rep that the client was sitting right beside me at the time, and that the call wasn’t from him. The minute we changed the call forwarding options, the fraudster called customer service to ask why the account had been changed.”

Two to three days after Kerskie helped the client clean up fraud with the Comcast account, she got a frantic call from the client’s daughter, who said she’d been trying her dad’s mobile phone but that he hadn’t answered in days. They soon discovered that dear old dad was just fine, but that he’d also neglected to set up an online account at his mobile phone provider.

“The bad guy had called in to the mobile carrier, provided his personal details, and established an online account,” Kerskie said. “Once they did that, they were able to transfer his phone service to a new device.”


Many people naively believe that if they never set up their bank or retirement accounts for online access then cyber thieves can’t get access either. But Kerskie said she recently had a client who had almost a quarter of a million dollars taken from his bank account precisely because he declined to link his bank account to an online identity.

“What we found is that the attacker linked the client’s bank account to an American Express Gift card, but in order to do that the bad guy had to know the exact amount of the microdeposit that AMEX placed in his account,” Kerskie said. “So the bad guy called the 800 number for the victim’s bank, provided the client’s name, date of birth, and Social Security number, and then gave them an email address he controlled. In this case, had the client established an online account previously, he would have received a message asking to confirm the fraudulent transaction.”

After tying the victim’s bank account to a prepaid card, the fraudster began slowly withdrawing funds in $5,000 increments. All told, thieves managed to siphon almost $170,000 over a six month period. The victim’s accounts were being managed by a trusted acquaintance, but the withdrawals didn’t raise alarms because they were roughly in line with withdrawal amounts the victim had made previously.

“But because the victim didn’t notify the bank within 60 days of the fraudulent transactions as required by law, the bank only had to refund the last 60 days worth of fraudulent transactions,” Kerskie said. “We were ultimately able to help him recover most of it, but that was a whole other ordeal.”

Kerskie said many companies try to fight fraud on accounts belonging to customers who haven’t set up a corresponding online account by sending a letter via snail mail to those customers when account changes are made.

“But not everyone does that and if the thief who’s taking advantage of the situation is smart, he’ll simply set up an online account and change the billing address, so the customer never gets that notice,” Kerskie said.


Kerskie said it’s a good idea for people with older relatives to help those individuals ensure they have set up and manage online identities for their various accounts — even if those relatives never intend to access any of the accounts online. Helping those relatives place a security freeze on their credit files with the four major credit bureaus (and with another, little known bureau that many mobile providers rely upon for credit checks) can go a long way toward preventing new account fraud.

Adding two-factor authentication (whenever it is available) and/or establishing a customer-specific personal identification number (PIN) also can help secure online access. For those who can’t be convinced to use a password manager, even writing down all of the account details and passwords on a slip of paper can be helpful, provided the document is secured in a safe place.

This process is doubly important, Kerskie said, for parents and relatives who have just lost a spouse.

“When someone passes away, there’s often an obituary in the paper that offers a great deal of information about the deceased and any surviving family members,” she said. “And the bad guys absolutely love obits.”

Eschewing accounts on popular social media platforms also can have consequences, mainly because most people have enough information about themselves online that anyone can create an account in their name and start messaging friends and family members with various fraud schemes.

“I always tell people if you don’t want to set up an online account for social media that’s fine, but make sure you tell your friends and family, ‘If you ever get a social media request from me, just ignore it because I’ll never do that,'” Kerskie advised.

In summary, plant your flag online or — as Kerskie puts it — “mark your territory” — before fraudsters do it for you. And consider helping less Internet-savvy friends and family members to do the same.

“It can save a lot of headache,” she said. “The sad reality is that criminals very often only need to answer two or three questions to commit fraud in your name, whereas victims typically need to spend hours of their time and answer dozens of questions to undo the resulting fraud.”

Planet DebianDaniel Kahn Gillmor: Protecting Software Updates

In my work at the ACLU, we fight for civil rights and civil liberties. This includes the ability to communicate privately, free from surveillance or censorship, and to control your own information. These are principles that I think most free software developers would agree with. In that vein, we just released a guide to securing software update channels in collaboration with students from NYU Law School.

The guide focuses specifically on what people and organizations that distribute software can do to ensure that their software update processes and mechanisms are actually things that their users can reliably trust. The goal is to make these channels trustworthy, even in the face of attempts by government agencies to force software vendors to ship malware to their users.

Why software updates specifically? Every well-engineered system on today's Internet will have a software update mechanism, since there are inevitably bugs that need fixing, or new features added to improve the system for the users. But update channels also represent a risk: they are an unclosable hole that enables installation of arbitrary software, often at the deepest, most-privileged level of the machine. This makes them a tempting target for anyone who wants to force the user to run malware, whether that's a criminal organization, a corporate or political rival, or a government surveillance agency.

I'm pleased to say that Debian has already implemented many of the technical recommendations we describe, including leading the way on reproducible builds. But as individual developers we might also be targeted, as lamby points out, and it's worth thinking about how you'd defend your users from such a situation.

As an organization, it would be great to see Debian continue to expand its protections for its users by holding ourselves even more accountable in our software update mechanisms than we already do. In particular, I'd love to see work on binary transparency, similar to what Mozilla has been doing, but that ensures that the archive signing keys (which our users trust) can't be abused/misused/compromised without public exposure, and that allows for easy monitoring and investigation of what binaries we are actually publishing.

In addition to technical measures, if you think you might ever get a government request to compromise your users, please make sure you are in touch with a lawyer who has your back, who knows how to challenge requests in court, and who understands why software update channels should not be used for deliberately shipping malware. If you're facing such a situation, and you're in the USA and you don't have a lawyer yet yourself, you can reach out to the lawyers at my workplace, the ACLU's Speech, Privacy, and Technology Project, for help.

Protecting software update channels is the right thing for our users, and for free software -- Debian's priorities. So please take a look at the guidance, think about how it might affect you or the people that you work with, and start a conversation about what you can do to defend these systems that everyone is obliged to trust for today's communications.

TEDAn ambitious plan to explore our oceans, and more news from TED speakers


The past few weeks have brimmed over with TED-related news. Below, some highlights.

Exploring the ocean like never before. A school of ocean-loving TED speakers have teamed up to launch OceanX, an international initiative dedicated to discovering more of our oceans in an effort to “inspire a human connection to the sea.” The coalition is supported by Bridgewater Associates’ Ray Dalio, along with luminaries like ocean explorer Sylvia Earle and filmmaker James Cameron, and partners such as BBC Studios, the American Museum of Natural History and the National Geographic Society. The coalition is now looking for ideas for scientific research missions in 2019, exploring the Norwegian Sea and the Indian Ocean. Dalio’s son Mark leads the media arm of the venture; from virtual reality demonstrations in classrooms to film and TV releases like the BBC show Blue Planet II and its follow-up film Oceans: Our Blue Planet, OceanX plans to build an engaged global community that seeks to “enjoy, understand and protect our oceans.” (Watch Dalio’s TED Talk, Earle’s TED Talk and Cameron’s TED Talk.)

The Ebola vaccine that’s saving lives. In response to the recent Ebola outbreak in the Democratic Republic of the Congo, GAVI — the Vaccine Alliance, led by Seth Berkley — has deployed thousands of experimental vaccines in an outbreak control strategy. The vaccines were produced as part of a partnership between GAVI and Merck, a pharmaceutical company, committed to proactively developing and producing vaccines in case of a future Ebola epidemic. In his TED Talk, Berkley spoke of the drastic dangers of global disease and the preventative measures necessary to ensure we are prepared for future outbreaks. (Watch his TED Talk and read our in-depth interview with Berkley.)

A fascinating new study on the halo effect. Does knowing someone’s political leanings change how you gauge their skills? Cognitive neuroscientist Tali Sharot and lawyer Cass R. Sunstein shared insights from their latest research answering that question in The New York Times. Alongside a team from University College London and Harvard Law School, Sharot conducted an experiment testing whether knowing someone’s political leanings affected how much we would trust them in non-political aspects of their lives. The study found that people were more willing to trust someone who had the same political beliefs as them — even in completely unrelated fields, like dentistry or architecture. These findings have wide-reaching implications and can further our understanding of the social and political landscape. (Watch Sharot’s TED Talk on optimism bias.)

A new essay anthology on rape culture. Roxane Gay’s newest book, Not That Bad: Dispatches from Rape Culture, was released in May to critical and commercial acclaim. The essay collection, edited and introduced by Gay, features first-person narratives on the realities and effects of harassment, assault and rape. With essays from 29 contributors, including actors Gabrielle Union and Amy Jo Burns, and writers Claire Schwartz and Lynn Melnick, Not That Bad offers feminist insights into the national and global dialogue on sexual violence. (Watch Gay’s TED Talk.)

One million pairs of 3D-printed sneakers. At TED2015, Carbon founder and CEO Joseph DeSimone displayed the latest 3D printing technology, explaining its seemingly endless applications for reshaping the future of manufacturing. Now, Carbon has partnered with Adidas for a bold new vision to 3D-print 100,000 pairs of sneakers by the end of 2018, with plans to ramp up production to millions. The company’s “Digital Light Synthesis” technique, which uses light and oxygen to fabricate materials from pools of resin, significantly streamlines manufacturing from traditional 3D-printing processes — a technology Adidas considers “revolutionary.” (Watch DeSimone’s TED Talk.)

CryptogramManipulative Social Media Practices

The Norwegian Consumer Council just published an excellent report on the deceptive practices tech companies use to trick people into giving up their privacy.

From the executive summary:

Facebook and Google have privacy intrusive defaults, where users who want the privacy friendly option have to go through a significantly longer process. They even obscure some of these settings so that the user cannot know that the more privacy intrusive option was preselected.

The popups from Facebook, Google and Windows 10 have design, symbols and wording that nudge users away from the privacy friendly choices. Choices are worded to compel users to make certain choices, while key information is omitted or downplayed. None of them lets the user freely postpone decisions. Also, Facebook and Google threaten users with loss of functionality or deletion of the user account if the user does not choose the privacy intrusive option.


The combination of privacy intrusive defaults and the use of dark patterns, nudge users of Facebook and Google, and to a lesser degree Windows 10, toward the least privacy friendly options to a degree that we consider unethical. We question whether this is in accordance with the principles of data protection by default and data protection by design, and if consent given under these circumstances can be said to be explicit, informed and freely given.

I am a big fan of the Norwegian Consumer Council. They've published some excellent research.

Worse Than FailureCodeSOD: Foggy about Security

Maverick StClare’s company recently adopted a new, SaaS solution for resource planning. Like most such solutions, it was pushed from above without regard to how people actually worked, and thus required the users to enter highly structured data into free-form, validation-free, text fields. That was dumb, so someone asked Maverick: “Hey, could you maybe write a program to enter the data for us?”

Well, you’ll be shocked to learn that there was no API, but the web pages themselves all looked pretty simple and the design implied they hadn’t changed since IE4, so Maverick decided to take a crack at writing a scraper. Step one: log in. Easy, right? Maverick fired up a trace on the HTTPS traffic and sniffed the requests. He was happy to see that his password wasn’t sent in plain text. He was less happy to see that it wasn’t sent using any of the standard HTTP authentication mechanisms, and it certainly wasn’t hashed using any algorithm he recognized. He dug into the code, and found this:

function Foggy(svInput) {
  // Any changes must be duplicated in the server-side version of this function.
  var svOutput = "";
  var ivRnd;
  var i;
  var ivLength = svInput.length;

  if (ivLength == 0 || ivLength > 158) {
        svInput = svInput.replace(/"/g,"&qt;");
        return svInput;
  }

  for (i = 0; i < ivLength; i++) {
        ivRnd = Math.floor(Math.random() * 3);
        if (svInput.charCodeAt(i) == 32 || svInput.charCodeAt(i) == 34 || svInput.charCodeAt(i) == 62)
          ivRnd = 1;
        if (svInput.charCodeAt(i) == 33 || svInput.charCodeAt(i) == 58 || svInput.charCodeAt(i) == 59 || svInput.charCodeAt(i) + ivRnd > 255)
          ivRnd = 0;
        svOutput += String.fromCharCode(ivRnd+97);
        svOutput += String.fromCharCode(svInput.charCodeAt(i)+ivRnd);
  }

  for (i = 0; i < Math.floor(Math.random() * 8) + 8; i++) {
        ivRnd = Math.floor(Math.random() * 26);
        svOutput += String.fromCharCode(ivRnd+97);
  }

  svOutput += String.fromCharCode(svInput.length + 96);
  return svOutput;
}
I… have so many questions. Why do they only replace quotes if the string is empty or greater than 158 characters? Why are there random numbers involved in their “hashing” algorithm? I’m foggy about this whole thing, indeed. And ah, protip: security through obscurity works better when nobody can see how you obfuscated things. All I can say is: “aWcjaacvc0b!cVahcgc0b!cHaubdcmb/gmzyrcoqhp”.
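
For what it's worth, the "obscurity" unravels in a few lines: the trailing character leaks the real length, and each plaintext character is stored as a pair whose first letter encodes its own random offset. A quick decoder sketch (mine, not from the site in question):

```python
def unfog(sv: str) -> str:
    """Reverse Foggy's pair encoding: each input character became
    (chr(97 + r), chr(code + r)), followed by random padding and a
    final length marker chr(length + 96)."""
    length = ord(sv[-1]) - 96                    # real length hides in the last char
    out = []
    for i in range(length):
        r = ord(sv[2 * i]) - 97                  # recover the per-character offset
        out.append(chr(ord(sv[2 * i + 1]) - r))  # and undo it
    return "".join(out)

print(unfog("aWcjaacvc0b!cVahcgc0b!cHaubdcmb/gmzyrcoqhp"))
```

Run it on the closing string above to see the author's verdict spelled out.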


Mark ShuttleworthFraud alert – scams using my name and picture

I have recently become aware of a fraudulent investment scam which falsely claims that I have launched new software known as a QProfit System, promoted by Jerry Douglas. I've seen several phishing sites promoting it, as well as pop-up ads on Facebook.

I can’t comment on whether or not Jerry Douglas promotes a QProfit system and whether or not it’s fraud. But I can tell you categorically that there are many scams like this, and that this investment has absolutely nothing to do with me. I haven’t developed this software and I have no desire to defraud the South African government or anyone else. I’m doing what I can to get the fraudulent sites taken down. But please take heed and don’t fall for these scams.


Planet DebianBits from Debian: Debian Perl Sprint 2018

Three members of the Debian Perl team met in Hamburg between May 16 and May 20 2018 as part of the Mini-DebConf Hamburg to continue perl development work for Buster and to work on QA tasks across our 3500+ packages.

The participants had a good time and met other Debian friends. The sprint was productive:

  • 21 bugs were filed or worked on, many uploads were accepted.
  • The transition to Perl 5.28 was prepared, and versioned provides were again worked on.
  • Several cleanup tasks were performed, especially around the move from Alioth to Salsa in documentation, website, and wiki.
  • For src:perl, autopkgtests were enabled, and work on Versioned Provides has been resumed.

The full report was posted to the relevant Debian mailing lists.

The participants would like to thank the Mini-DebConf Hamburg organizers for providing the framework for our sprint, and all donors to the Debian project who helped to cover a large part of our expenses.

Planet DebianJonas Meurer: debian cryptsetup sprint report

Cryptsetup sprint report

The Cryptsetup team – consisting of Guilhem and Jonas – met from June 15 to 17 in order to work on the Debian cryptsetup packages. We ended up working on the packages for three days (and nights): we refactored the whole initramfs integration, the SysVinit init scripts and the package build process, and discussed numerous potential improvements as well as new features. The whole sprint was great fun and we thoroughly enjoyed sitting next to each other, able to discuss design questions and implementation details in person instead of through clunky internet communication channels. Besides, we had very nice and interesting chats, contacted other Debian folks from the Frankfurt area and met with jfs on Friday evening.

Splitting cryptsetup into cryptsetup-run and cryptsetup-initramfs

First we split the cryptsetup initramfs integration into a separate package, cryptsetup-initramfs. The package that contains other Debian-specific features like SysVinit scripts, keyscripts, etc. is now called cryptsetup-run, and cryptsetup itself is a mere metapackage depending on both split-off packages. So from now on, people can install cryptsetup-run if they don't need the cryptsetup initramfs integration. Once Buster is released we intend to rename cryptsetup-run to cryptsetup, which then will no longer have a strict dependency on cryptsetup-initramfs. This transition over two releases is necessary to avoid unexpected breakage on (dist-)upgrades. Meanwhile cryptsetup-initramfs ships a hook that, upon generation of a new initramfs image, detects which devices need to be unlocked early in the boot process and, in case it doesn't find any, suggests that the user remove the package.

The package split allows us to define more fine-grained dependencies: since there are valid use cases for wanting the cryptsetup binaries and scripts but not the initramfs integration (in particular, on systems without an encrypted root device), cryptsetup ≤2:2.0.2-1 was merely recommending initramfs-tools and busybox, while cryptsetup-initramfs now has hard dependencies on these packages.

We also updated the packages to latest upstream release and uploaded 2:2.0.3-1 on Friday shortly before 15:00 UTC. Due to the cryptsetup → cryptsetup-{run,initramfs} package split we hit the NEW queue, and it was manually approved by an ftpmaster… a mere 2h later. Kudos to them! That allowed us to continue with subsequent uploads during the following days, which was beyond our expectations for this sprint :-)

Extensive refactoring work

Afterwards we started working on and merging some heavy refactoring commits that touched almost all parts of the packages. First was a refactoring of the whole cryptsetup initramfs implementation, which downsized both the cryptroot hook and script dramatically (to less than half their former size). The logic to detect crypto disks was changed from parsing /etc/fstab to parsing /proc/mounts, and the sysfs(5) block hierarchy is now used to detect dm-crypt device dependencies. A lot of code duplication between the initramfs script and the SysVinit init script was removed by moving common functions into a shared shell include file that is sourced by both. To complete the package refactoring, we also overhauled the build process by migrating it to the latest Debhelper 11 style. debian/rules was likewise downsized to less than half its size, and as an extra benefit we now run the upstream build-time testsuite during the package build.
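To give an idea of what the sysfs-based dependency detection looks like, here is a simplified sketch (illustrative only; the function name and layout are made up, and the real cryptroot hook is considerably more involved). Each block device directory under /sys/class/block has a slaves/ subdirectory listing the devices it sits on top of:

```shell
# Simplified sketch of resolving a dm-crypt device's underlying block
# devices via the sysfs "slaves" hierarchy. In real sysfs the slaves/
# entries are symlinks to other block devices; here we only need names.
list_slaves() {
    # $1: a sysfs block device directory, e.g. /sys/class/block/dm-0
    for s in "$1"/slaves/*; do
        [ -e "$s" ] && basename "$s"
    done
}

# Demo against a mock sysfs tree, so the sketch is self-contained:
mock=$(mktemp -d)
mkdir -p "$mock/dm-0/slaves/sda5"
result=$(list_slaves "$mock/dm-0")
echo "$result"    # prints "sda5"
rm -rf "$mock"
```

Walking slaves/ recursively yields the full device stack (e.g. dm-crypt on LVM on MD), which is what the hook needs in order to know what to unlock early at boot.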

Some git statistics say more than a thousand words:

$ git --no-pager diff --ignore-space-change --shortstat debian/2%2.0.2-1..debian/2%2.0.3-2 -- ./debian/
 92 files changed, 2247 insertions(+), 3180 deletions(-)
$ find ./debian -type f \! -path ./debian/changelog -print0 | xargs -r0 cat | wc -l
$ find ./debian -type f \! -path ./debian/changelog -printf x | wc -c

On CVE-2016-4484

Since 2:1.7.3-2, our initramfs boot script slept for a full minute when the number of failed unlocking attempts exceeded the configured value (the tries crypttab(5) option, which defaults to 3). This was added in order to defeat local brute-force attacks and to mitigate one aspect of CVE-2016-4484; back then Jonas wrote a blog post covering that story. Starting with 2:2.0.3-2 we changed this behavior, and the script now sleeps for one second after each unsuccessful unlocking attempt. The new value should provide a better user experience while still offering protection against local brute-force attacks for very fast password hashing functions. The other aspect mentioned in the security advisory — namely the fact that the initramfs boot process drops to a root (rescue/debug) shell after the user fails to unlock the root device too many times — was not addressed at the time, and still isn't. initramfs-tools has a boot parameter panic=<sec> to disable the debug shell, and while setting this is beyond the scope of cryptsetup, we're planning to ask the initramfs-tools maintainers to change the default. (Of course setting panic=<sec> alone doesn't gain much, and one would need to lock down the full boot chain, including the BIOS and boot loader.)
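The change is easiest to see as a sketch (illustrative shell only, not the actual initramfs script; attempt_unlock is a stand-in for the real passphrase prompt):

```shell
# Sketch of the new behavior: sleep one second after *each* failed
# attempt, instead of a single 60-second sleep once all tries are
# exhausted (the pre-2:2.0.3-2 behavior).
MAX_TRIES=3          # the crypttab(5) "tries" option, default 3

attempt_unlock() {
    # Stand-in for the real cryptsetup/askpass call; always fails here.
    false
}

tries=0
while [ "$tries" -lt "$MAX_TRIES" ]; do
    tries=$((tries + 1))
    if attempt_unlock; then
        echo "device unlocked after $tries attempt(s)"
        break
    fi
    sleep 1          # was: a one-minute sleep after the final failure
done
```

Spreading the delay over each attempt keeps an interactive user waiting at most a second after a typo, while an attacker hammering the prompt still pays a cost per guess.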

New features (work started)

Apart from the refactoring work we started/continued work on several new features:

  • We started to integrate luksSuspend support into system suspend. The idea is to luksSuspend all dm-crypt devices before suspending the machine, in order to protect the storage while in suspend mode. In theory, this seemed as simple as creating a minimal chroot in a ramfs with the tools required to unlock (luksResume) the disks after the machine resumes, running luksSuspend from that chroot, putting the machine into suspend mode, and running luksResume after it resumes. Unfortunately it turned out to be far more complicated, due to unpredictable race conditions between luksSuspend and machine suspend, so we ended up spending quite some time debugging (and understanding) the issue. In the end it seems that the final sync() before machine suspend causes races in some cases, as the dm-crypt device to be synced to is already luksSuspended. We sent a request for help to the dm-crypt mailing list but unfortunately haven't received a helpful response yet.
  • In order to get internationalization support for the messages and password prompts in the initramfs scripts, we patched gettext and locale support into initramfs-tools.
  • We started some preliminary work on adding beep support to the cryptsetup initramfs and SysVinit scripts for better accessibility.

The above features are not available in the current Debian package yet, but we hope they will be included in a future release.

Bugs and Documentation

We also squashed quite a few longstanding bugs and improved the crypttab(5) documentation. In total, we squashed 18 bugs during the sprint, the oldest one dating from June 2013.

On the need for better QA

In addition to the many crypttab(5) options, we also support a huge variety of block device stacks, such as LUKS-LVM2-MD combined in every way one can possibly imagine. That support is a Debian addition, and hence something we, the cryptsetup package maintainers, have to develop and maintain ourselves. The many possibilities imply corner cases (it's no surprise that complex or unusual setups can break in subtle ways), which motivated us to completely refactor the Debian-specific code so that it becomes easier to maintain.

While our final upload squashed 18 bugs, it also introduced new ones, in particular two rather serious regressions that slipped through our tests. We have thorough tests for the most usual setups, as well as for some complex stacks we hand-crafted in order to catch corner cases, but this approach doesn't scale to covering the full spectrum of user setups: even with minimal sid installations, the disk images would simply take far too much space! Ideally we would have an automated test suite, with each test deploying a new transient sid VM with a particular setup. As the current and past regressions show, that's a behind-the-scenes area we should work on. (In fact that's an effort we have already started, but didn't touch during the sprint for lack of time.)

More to come

There are some more things on our list that we didn't find time to work on. Apart from the unfinished new features mentioned above, these are mainly the LUKS nuke feature that Kali Linux ships and the lack of crypttab(5) keyscript support in systemd.


In our eyes, the sprint was both a great success and great fun. We definitely want to repeat it sometime soon in order to continue working on the open tasks and further improve the Debian cryptsetup packages. There's still plenty of work to be done. We thank the Debian project and its generous donors for funding Guilhem's travel expenses.

Guilhem and Jonas, June 25th 2018

CryptogramIEEE Statement on Strong Encryption vs. Backdoors

The IEEE came out in favor of strong encryption:

IEEE supports the use of unfettered strong encryption to protect confidentiality and integrity of data and communications. We oppose efforts by governments to restrict the use of strong encryption and/or to mandate exceptional access mechanisms such as "backdoors" or "key escrow schemes" in order to facilitate government access to encrypted data. Governments have legitimate law enforcement and national security interests. IEEE believes that mandating the intentional creation of backdoors or escrow schemes -- no matter how well intentioned -- does not serve those interests well and will lead to the creation of vulnerabilities that would result in unforeseen effects as well as some predictable negative consequences.

The full statement is here.

Worse Than FailureRepresentative Line: Got Your Number

You have a string. It contains numbers. You want to turn those numbers into all “0”s, presumably to anonymize them. You’re also an utter incompetent. What do you do?

You already know what they do. Jane’s co-worker encountered this solution, and she tells us that the language was “Visual BASIC, Profanity”.

Private Function ReplaceNumbersWithZeros(ByVal strText As String) As String
     ReplaceNumbersWithZeros = Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(strText, "1", "0"), "2", "0"), "3", "0"), "4", "0"), "5", "0"), "6", "0"), "7", "0"), "8", "0"), "9", "0")
End Function

Jane adds:

My co-worker found this function while researching some legacy code. Shortly after this discovery, it took us 15 minutes to talk him down off the ledge…and we’re on the ground floor.
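For the record, the same transformation is a one-liner in most environments. In POSIX shell, for instance (a hypothetical equivalent, not code from the article):

```shell
# Map every digit to "0" in a single pass with tr; unlike the nested
# Replace() calls above, this also covers "0" itself (a harmless no-op).
replace_numbers_with_zeros() {
    printf '%s' "$1" | tr '0123456789' '0000000000'
}

replace_numbers_with_zeros "Call 555-1234"   # -> Call 000-0000
```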


Planet DebianLouis-Philippe Véronneau: Montreal's Debian & Stuff June Edition

Hello world!

This is me inviting you to the next Montreal Debian & Stuff. This one will take place at Koumbit's offices in Montreal on June 30th from 10:00 to 17:00 EST.

The idea behind 'Debian & Stuff' is to have informal gatherings of the local Debian community to work on Debian-related stuff - or not. Everyone is welcome to drop by and chat with us, hack on a nice project or just hang out!

We've been trying to have monthly meetings of the Debian community in Montreal since April, so this will be the third event in a row.

Chances are we'll take a break in July because of DebConf, but I hope this will become a regular thing!


Planet DebianPetter Reinholdtsen: Add-on to control the projector from within Kodi

My movie-playing setup involves Kodi, OpenELEC (probably soon to be replaced with LibreELEC) and an Infocus IN76 video projector. My projector can be controlled via both an infrared remote control and an RS-232 serial line. The vendor of my projector, InFocus, was sensible enough to document the serial protocol in its user manual, so it is easily available, and I used it some years ago to write a small script to control the projector. For a while now, I have longed for a setup where the projector is controlled by Kodi, for example in such a way that when the screen saver goes on, the projector is turned off, and when the screen saver exits, the projector is turned on again.

A few days ago, with very good help from parts of my family, I managed to find a Kodi add-on for controlling an Epson projector, and got in touch with its author to see if we could join forces and make an add-on with support for several projectors. To my pleasure, he was receptive to the idea, and we set out to add InFocus support to his add-on and make it suitable for the official Kodi add-on repository.

The add-on is now working (for me, at least), with a few minor adjustments. The most important change I made relative to the master branch in the GitHub repository is embedding the pyserial module in the add-on. The long-term solution is to make a "script"-type pyserial module for Kodi that can be pulled in as a dependency. But until that is in place, I embed it.

The add-on can be configured to turn the projector on when Kodi starts and off when Kodi stops, as well as to turn the projector off when the screen saver starts and back on when the screen saver stops. It can also be told to set the projector source when turning on the projector.

If this sounds interesting to you, check out the project's GitHub repository. Perhaps you can send patches to support your projector too? As soon as we find time to wrap up the latest changes, it should be available for easy installation on any Kodi instance.

For future improvements, I would like to add projector model detection and the ability to adjust the brightness level of the projector from within Kodi. We also need to figure out how to handle the cooling period of the projector: my projector refuses to turn on for 60 seconds after it has been turned off, which is not handled well by the add-on at the moment.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Krebs on SecurityHow to Avoid Card Skimmers at the Pump

Previous stories here on the proliferation of card-skimming devices hidden inside fuel pumps have offered a multitude of security tips for readers looking to minimize their chances of becoming the next victim, such as favoring filling stations that use security cameras and tamper-evident tape on their pumps. But according to police in San Antonio, Texas, there are far more reliable ways to avoid getting skimmed at a fuel station.

San Antonio, like most major U.S. cities, is grappling with a surge in pump skimming scams. So far in 2018, the San Antonio Police Department (SAPD) has found more than 100 skimming devices in area fuel pumps, and that figure already eclipses the total number of skimmers found in the area in 2017. The skimmers are hidden inside of the pumps, and there are often few if any outward signs that a pump has been compromised.

In virtually all cases investigated by the SAPD, the incidents occurred at filling stations using older-model pumps that have not yet been upgraded with physical and digital security features which make it far more difficult for skimmer thieves to tamper with fuel pumps and siphon customer card data (and PINs from debit card users).

Lt. Marcus Booth is the financial crimes unit director for the SAPD. Booth said most filling stations in San Antonio and elsewhere use legacy pumps that have a vertical card reader and a flat, membrane-based keypad. In addition, access to the insides of these older pumps frequently is secured via a master key that opens not only all pumps at a given station, but in many cases all pumps of a given model made by the same manufacturer.

Older model fuel pumps like this one feature a flat, membrane-based keypad and vertical card reader. Image: SAPD.

In contrast, Booth said, newer and more secure pumps typically feature a horizontal card acceptance slot along with a raised metallic keypad — much like a traditional payphone keypad and referred to in the fuel industry as a “full travel” keypad:

Newer, more tamper-resistant fuel pumps include raised metallic keypads (known in the industry as “full travel” keypads), horizontal card readers and custom locks for each pump.

Booth said the SAPD has yet to see a skimming incident involving newer pump models like the one pictured directly above.

“Here in San Antonio, many of these stations with these older keypads and card slots were getting hit all the time, sometimes weekly,” he said. “But as soon as those went over to newer gear, we’ve seen zero problems.”

According to Booth, the newer pumps include not only custom keys for each pump, but also tamper protections that physically shut down a pump if the machine is improperly accessed. What’s more, these more advanced pumps do a better job of compartmentalizing individual components, very often enclosing the electronics that serve the card reader and keypad in separately secured metal cages.

“Pretty much all these full travel metallic keypads are encrypted, and if you disconnect them they disable themselves and can only be re-enabled by technician,” Booth told KrebsOnSecurity. “Also, if the pump is opened improperly, it disables itself. These two specific items: The card reader or the pad, if you pull power to them they’re dead, and then they can only be re-enabled by an authorized technician.”

Newer pumps may also include more modern mobile payment options — such as Apple Pay — which allow customers to pay for fuel without ever sharing their credit or debit card account details with the fuel station, although many stations with pumps that advertise this capability have not yet enabled it.

One reason that pump skimmers seem to be more pervasive is that authorities across the country are doing a better job of working with banks and federal investigators to identify fuel stations that appear to be compromised. The flip side is that thieves are generally opportunistic, and tend to focus on targeting systems that offer the least resistance and the lowest-hanging fruit.

Unfortunately, there is still a ton of low-hanging fruit, and these newer and more secure pump systems remain the exception rather than the rule, Booth said. In December 2016, Visa delayed by three years a deadline for fuel station owners to install payment terminals at the pump that are capable of handling more secure chip-based cards. The chip card technology standard, also known as EMV (short for Europay, MasterCard and Visa) makes credit and debit cards far more expensive and difficult for thieves to clone.

Under previous credit card association rules, station owners that didn’t have chip-ready readers in place by Oct. 2017 would have been on the hook to absorb 100 percent of the costs of fraud associated with transactions in which the customer presented a chip-based card yet was not asked or able to dip the chip (currently, card-issuing banks eat most of the fraud costs from fuel skimming). Currently, fuel stations have until Oct. 1, 2020 to meet the liability shift deadline.

Some pump skimming devices are capable of stealing debit card PINs as well, so it's a good idea to avoid paying with a debit card at the pump. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

This advice often runs counter to the messaging pushed by fuel station owners themselves, many of whom offer lower prices for cash or debit card transactions. That’s because credit card transactions typically are more expensive to process.

In summary, if you have the choice, look for fuel pumps with raised keypads and horizontal card slots. And keep in mind that it may not be the best idea to frequent a particular filling station simply because it offers the lowest prices: Doing so could leave you with hidden costs down the road.

If you enjoyed this story, check out my series on all things skimmer-related: All About Skimmers. Looking for more information on fuel pump skimming? Have a look at some of these stories.

Sociological ImagesThe Half-Dozen Headline

Want to help fight fake news and manage political panics? We have to learn to talk about numbers.

While teaching basic statistics to sociology undergraduates, one of the biggest trends I noticed was students who thought they hated math experiencing a brain shutdown when it was time to interpret their results. I felt the same way when I started in this field, and so I am a big advocate for working hard to bridge the gap between numeracy and literacy. You don’t have to be a statistical wizard to make your reporting clear to readers.

Sociology is a great field to do this, because we are used to going out into the world and finding all kinds of cultural tropes (like pointlessly gendered products!). My new favorite trope is the Half-Dozen Headline. You can spot them in the wild, or through Google News with a search for “half dozen.” Every time I read one of these headlines, my brain echoes with “half of a dozen is six.”

Sometimes, six is a lot:

Sometimes, six is not:

(at least, not relative to past administrations)

Sometimes, well, we just don’t know:

Is this five deaths (nearly six)? Is a rate of about two deaths a year in a Walmart parking lot high? If people already struggle to interpret raw numbers, wrapping your findings in fuzzy language only makes the problem worse.

Spotting Half-Dozen Headlines is a great introductory exercise for classes in social statistics, public policy, journalism, or other fields that use applied data analysis. If you find a favorite Half-Dozen Headline, be sure to send it our way!

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


CryptogramBypassing Passcodes in iOS

Last week, a story was going around explaining how to brute-force an iOS passcode. Basically, the trick was to plug an external keyboard into the phone and try every PIN at once:

We reported Friday on Hickey's findings, which claimed to be able to send all combinations of a user's possible passcode in one go, by enumerating each code from 0000 to 9999, and concatenating the results in one string with no spaces. He explained that because this doesn't give the software any breaks, the keyboard input routine takes priority over the device's data-erasing feature.

I didn't write about it, because it seemed too good to be true. A few days later, Apple pushed back on the findings -- and it seems that it doesn't work.

This isn't to say that no one can break into an iPhone. We know that companies like Cellebrite and Grayshift are renting/selling iPhone unlock tools to law enforcement -- which means governments and criminals can do the same thing -- and that Apple is releasing a new feature called "restricted mode" that may make those hacks obsolete.

Grayshift is claiming that its technology will still work.

Former Apple security engineer Braden Thomas, who now works for a company called Grayshift, warned customers who had bought his GrayKey iPhone unlocking tool that iOS 11.3 would make it a bit harder for cops to get evidence and data out of seized iPhones. A change in the beta didn't break GrayKey, but would require cops to use GrayKey on phones within a week of them being last unlocked.

"Starting with iOS 11.3, iOS saves the last time a device has been unlocked (either with biometrics or passcode) or was connected to an accessory or computer. If a full seven days (168 hours) elapse [sic] since the last time iOS saved one of these events, the Lightning port is entirely disabled," Thomas wrote in a blog post published in a customer-only portal, which Motherboard obtained. "You cannot use it to sync or to connect to accessories. It is basically just a charging port at this point. This is termed USB Restricted Mode and it affects all devices that support iOS 11.3."

Whether that's real or marketing, we don't know.

TEDApply now to be a TED2019 Fellow

The TED Fellows program is turning ten years old next year, and we are looking for our most ambitious class yet. We select people from every discipline and every country to be Fellows, and we give them support to scale their dreams and scale their impact.

Apply to be a TED Fellow by August 26.

Who are TED Fellows? Fellows are individuals with original work, a record of achievement in their field and exceptional potential. They are also courageous, collaborative people dedicated to improving life where they work.

How do we help you dream bigger? The Fellows program is robust, long-term and, we think, unlike any other Fellowship out there. From our open application process to our rigorous support systems, we have designed a program that maximizes innovation and collaboration.

Fellows get career coaching and speaker training as well as mentorship and public relations guidance. Fellows also give a talk at a TED Conference, a huge opportunity to share their work with a wide, new audience. And perhaps most important, Fellows join the community of 450+ other Fellows who inspire one another and collaborate on new projects.

What have Fellows done after joining the program? In our nearly 10-year history, the Fellows program has sparked remarkable cultural change and reached millions of people. With the support of TED, Fellows have conserved large swaths of our planet, protecting many species in the process. They’ve made headway in understanding complex diseases like Parkinson’s, cancer and malaria. They’ve created art that shines a light on injustice and made music that celebrates our history. They’ve made huge strides in robotics and 4-D printing and launched new startups. They’ve passed laws and have gone on to win Oscars, Grammys and MacArthur “genius” grants. And in the process, Fellows have improved conditions on our planet for countless communities and inspired others to pursue their own unconventional projects.

Our application is straightforward. It’s open to everyone (no one is appointed a Fellow; everyone has to apply), and we encourage you to apply even if you’re not sure you’re qualified. We have a way of picking winners before they know it.

The online application can take as little as 20 minutes. It asks for general biographical information, short essays on your work and three references. We don’t have an upper age limit, but you must be 18 or older to apply. If you’re selected, you will be part of our 10-year anniversary class, and you will need to reserve April 13 through April 20, 2019, for TED2019 and our own very special pre-conference.

So dream bigger. Apply to be a TED Fellow today.

For more information on the TED Fellows:


Follow: @TEDFellow



TED12 books from favorite TEDWomen speakers, for your summer reading list

We all have a story to tell. And in my work as curator of the TEDWomen conference, I’ve had the pleasure of providing a platform to some of the best stories and storytellers out there. Beyond their TED Talk, of course, many TEDWomen speakers are also accomplished authors — and if you liked them on the TED stage, odds are you will enjoy spending more time with them in the pages of their books.

All of the women and men listed here have given talks at TEDWomen, though some talks are related to their books and some aren’t. See what connects with you and enjoy your summer!


Luvvie Ajayi‘s 2017 TEDWomen talk has already amassed over 2.2 million views online! In it, she talks about how she wants to leave this world better than she found it and in order to do that, she says we all have to get more comfortable saying the sometimes uncomfortable things that need to be said. What’s great about Luvvie is that she delivers her commentary with a sly side eye that pokes fun at everyone, including herself.

In her book, I’m Judging You: The Do-Better Manual — written in the form of an Emily Post-type guidebook for modern manners — Luvvie doles out criticism and advice with equal amounts of wit, charm and humor that’s often laugh-out-loud funny. As Shonda Rhimes noted in her review, “This truth-riot of a book gives us everything from hilarious lectures on the bad behavior all around us to razor sharp essays on media and culture. With I’m Judging You, Luvvie brilliantly puts the world on notice that she is not here for your foolishness — or mine.”


At the first TEDWomen in 2010, Madeleine Albright talked to me about what it was like to be a woman and a diplomat. In her new book, entitled Fascism: A Warning, the former secretary of state writes about the history of fascism and the clash that took place between two ideologies of governing: fascism and democracy. She argues that “fascism not only endured the 20th century, but now presents a more virulent threat to peace and justice than at any time since the end of World War II.”

“At a moment when the question ‘Is this how it begins?’ haunts Western democracies,” the Economist notes in its review, “[Albright] writes with rare authority.”


Sometimes a talk perfectly captures the zeitgeist, and that was the case with Gretchen Carlson last November at TEDWomen. At the time, the #MeToo movement, founded in 2007 by Tarana Burke, was seeing a huge surge online, thanks to signal-boosting from Alyssa Milano and more women with stories to share.

Carlson took to the stage to talk about her personal experience with sexual harassment at Fox News, her historic lawsuit and the lessons she’d learned and related in her just-released book, Be Fierce. In her talk, she identifies three specific things we can all do to create safer places to work. “We will no longer be underestimated, intimidated or set back,” Carlson says. “We will stand up and speak up and have our voices heard. We will be the women we were meant to be.” In her book, she writes in detail about how we can stop harassment and take our power back.


John Cary is an architect who thinks deeply about diversity in design — and how the field’s lack of diversity leads to thoughtless, compassionless spaces in the modern world. As he said in his 2017 TEDWomen talk, “well-designed spaces are not just a matter of taste or a questions of aesthetics. They literally shape our ideas about who we are in the world and what we deserve.”

For years, as the executive director of Public Architecture, John has advocated for the term “public interest design” to become part of the architect’s lexicon, in much the same way as it is in fields like law and health care. In his new book, Design for Good, John presents 20 building projects from around the world that exemplify how good design can improve communities, the environment, and the lives of the people who live with it.


In her thought-provoking 2016 TEDWomen talk, professor Brittney Cooper examined racism through the lens of time — showing how moments of joy, connection and well-being had been lost to people of color because of delays in social progress.

Last summer, I recommended Brittney’s book on the lives and thoughts of intellectual Black women in history who had been left out of textbooks. And this year, Brittney is back with another book, one that is more personal and also very timely in this election year in which women are figuring out what a truly intersectional feminist movement looks like.

As my friend Jane Fonda wrote in a recent blog post, in order to build truly multi-racial coalitions, white people need to do the work to truly understand race and racism. For white feminists in particular, the work starts by listening to the perspectives of women of color. Brittney’s book, Eloquent Rage: A Black Feminist Discovers Her Superpower, offers just that opportunity. Brittney’s sharp observations from high school (at a predominantly white school), college (at Howard University) and as a 30-something professional make the political personal. As she told the Washington Post, “When we figure out politics at a personal level, then perhaps it wouldn’t be so hard to figure it out at the more structural level.”


Susan David is a Harvard Medical School psychologist who studies how we process our emotions. In a deeply moving talk at TEDWomen 2017, Susan suggested that the way we deal with our emotions shapes everything that matters: our actions, careers, relationships, health and happiness. “I’m not anti-happiness. I like being happy. I’m a pretty happy person,” she says. “But when we push aside normal emotions to embrace false positivity, we lose our capacity to develop skills to deal with the world as it is, not as we wish it to be.”

In her book, Emotional Agility, Susan shares strategies for the radical acceptance of all of our emotions. How do we not let our self-doubts, failings, shame, fear, or anger hold us back?

“We own our emotions,” she says. “They don’t own us.”


Dr. Musimbi Kanyoro is president and CEO of Global Fund for Women, one of the world’s leading publicly supported foundations for gender equality. In her TEDWomen talk last year, she introduced us to the Maragoli concept of “isirika” — a pragmatic way of life that embraces the mutual responsibility to care for one another — something she sees women practicing all over the world.

In All the Women in My Family Sing, Musimbi is one of 69 women of color who have contributed prose and poetry to this “moving anthology” that “illuminates the struggles, traditions, and life views of women at the dawn of the 21st century. The authors grapple with identity, belonging, self-esteem, and sexuality, among other topics.” Contributors range in age from 16 to 77 and represent African-American, Native American, Asian-American, Muslim, Cameroonian, Kenyan, Liberian, Mexican-American, Korean, Chinese-American and LGBTQI experiences.


In her 2017 TEDWomen talk, author Anjali Kumar shared some of what she learned in researching her new book, Stalking God: My Unorthodox Search for Something to Believe In. A few years ago, Anjali — a pragmatic lawyer for Google who, like more than 56 million of her fellow Americans, describes herself as not religious — set off on a mission to find God.

Spoiler alert: She failed. But along the way, she learned a lot about spirituality, humanity and what binds us all together as human beings.

In her humorous and thoughtful book, Anjali writes about her search for answers to life’s most fundamental questions and finding a path to spirituality in our fragmented world. The good news is that we have a lot more in common than we might think.


New York Times best-selling author Peggy Orenstein is out with a new collection of essays titled Don’t Call Me Princess: Girls, Women, Sex and Life. Peggy combines a unique blend of investigative reporting, personal revelation and unexpected humor in her many books, including Schoolgirls and the book that was the subject of her 2016 TEDWomen talk, Girls & Sex.

Don’t Call Me Princess “offers a crucial evaluation of where we stand today as women — in our work lives, sex lives, as mothers, as partners — illuminating both how far we’ve come and how far we still have to go.” Don’t miss it.


Caroline Paul began her remarkable career as the first female firefighter in San Francisco. She wrote about that in her first book, Fighting Fires. In the 20 years since, she’s written many more books, including her most recent, You Are Mighty: A Guide to Changing the World.

This well-timed book offers advice and inspiration to young activists. She writes about the experiences of young people — from famous kids like Malala Yousafzai and Claudette Colvin to everyday kids — who stood up for what they thought was right and made a difference in their communities. Paul offers loads of tactics for young people to use in their own activism — and proves you’re never too young to change the world.


I first encountered Cleo Wade's delightful, heartfelt words of wisdom like most people, on Instagram. Cleo has over 350,000 followers on her popular feed that features short poems, bits of wisdom and pics. Cleo has been called the poet of her generation, everybody's BFF and the millennial Oprah. In her new poetry collection, Heart Talk: Poetic Wisdom for a Better Life, the poet, artist and activist shares some of the Instagram notes she wrote "while sitting in her apartment, poems about loving, being and healing" and "the type of good ol'-fashioned heartfelt advice I would share with you if we were sitting in my home at my kitchen table."


In 1994, the Rwandan Civil War forced six-year-old Clemantine Wamariya and her fifteen-year-old sister from their home in Kigali, leaving their parents and everything they knew behind. In her 2017 TEDWomen talk, Clemantine shared some of her experiences over the next six years growing up while living in refugee camps and migrating through seven African countries.

In her new memoir, The Girl Who Smiled Beads: A Story of War and What Comes After, Clemantine recounts her harrowing story of hunger, imprisonment, and not knowing whether her parents were alive or dead. At the age of 12, she moved to Chicago and was raised in part by an American family. It’s an incredible, poignant story and one that is so important during this time when many are denying the humanity of people who are victims of war and civil unrest. For her part, Clemantine remains hopeful. “There are a lot of great people everywhere,” she told the Washington Post. “And there are also a lot of not-so-great people. It’s all over the world. But when we stepped out of the airplane, we had people waiting for us — smiling, saying, ‘Welcome to America.’ People were happy. Many countries were not happy to have us. Right now there are people at the airport still holding those banners.”


I also want to mention that registration for TEDWomen 2018 is open now! Space is limited and I don’t want you to miss out. This year, TEDWomen will be held Nov. 28–30 in Palm Springs, California. The theme is Showing Up.

The time for silent acceptance of the status quo is over. Women around the world are taking matters into their own hands, showing up for each other and themselves to shape the future we all want to see. We'll explore the many aspects of this year's theme through curated TED Talks, community dinners and activities.

Join us!

— Pat

Worse Than Failure: CodeSOD: External SQL

"Externalize your strings" is generally good advice. Maybe you pull them up into constants, maybe you move them into a resource file, but putting a barrier between your code and the strings you output makes everything more flexible.
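In Java, the constants flavor of this advice might look like the following minimal sketch. The class and message names are invented for illustration; the point is simply that code which displays text refers to a named constant rather than hard-coding the wording inline.

```java
// A tiny illustration of "externalize your strings": user-facing text
// lives in named constants, so the wording can change in one place
// without touching the code that uses it. All names here are hypothetical.
public final class Messages {
    public static final String GREETING = "Welcome back, %s!";
    public static final String LOGIN_FAILED = "Invalid username or password.";

    private Messages() {
        // utility class; no instances
    }

    // Formats the greeting for a given user name.
    public static String greeting(String name) {
        return String.format(GREETING, name);
    }
}
```

Moving the same constants into a resource file is one step further down the same road, and that is exactly the road this article's code takes, just with a very different kind of string.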

But what about strings that aren't output? Things like, oh… database queries? We want to be cautious about embedding SQL directly into our application code, but our SQL code often is our business logic, so it makes sense to inline it. Most data access layers end up trying to abstract the details of SQL behind method calls, whether it's just a simple repository or an advanced ORM approach.

Sean found a… unique approach to resolving this tension in some Java code he inherited. He saw lots of references to keys in a hash-map, keys like user or pw or insert_account_table or select_all_transaction_table. But where did these keys get defined?

Like all good strings, they were externalized into a file called sql.txt. A simple regex-based parser loaded the data and created the dictionary. Now, any module which wanted to query the database had a map of any query they could possibly want to run. Just chuck 'em into a PreparedStatement object and you're ready to go.
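The loader described above might be sketched roughly like this. Sean's actual parser isn't shown, so the class name, method, and regex here are assumptions, but the mechanism is the one the article describes: split each "key = value" line of sql.txt on the first equals sign and stuff the result into a map.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A hypothetical reconstruction of the regex-based sql.txt loader.
// Every non-matching line (blanks, junk) is silently skipped, and
// whatever follows " = " becomes the query text, verbatim.
public class SqlTextLoader {
    // Captures a word-character key, then everything after the '='.
    private static final Pattern LINE = Pattern.compile("^(\\w+)\\s*=\\s*(.*)$");

    public static Map<String, String> parse(String fileContents) {
        Map<String, String> queries = new HashMap<>();
        for (String line : fileContents.split("\\r?\\n")) {
            Matcher m = LINE.matcher(line.trim());
            if (m.matches()) {
                queries.put(m.group(1), m.group(2));
            }
        }
        return queries;
    }
}
```

Note that a scheme like this treats the database credentials (user, pw) and the SQL itself as the same kind of thing: just another entry in the map. Java's built-in java.util.Properties would have parsed this file format for free, which makes the hand-rolled regex parser its own small bonus WTF.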

Here, in its entirety, is the sql.txt file.

user = root
pw = password
db_name = lrc_mydb

create_account_table = create table if not exists account_table(username varchar(45) not null, password text not null, last_name text, first_name text, mid_name text, suffix_name text, primary key (username))
create_course_table = create table if not exists course_table (course_abbr char(45) not null unique, course_name text, primary key(course_abbr))
create_student_table = create table if not exists student_table (username varchar(45) not null, registration_date date, year_lvl char(45), photolink longblob, freetime time, course_abbr char(45) not null, status char(45) not null, balance double not null, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade, foreign key fk_course_abbr(course_abbr) references course_table(course_abbr) on update cascade on delete cascade, primary key(username))
create_admin_table = create table if not exists admin_table (username varchar(45) not null, delete_priv boolean, settle_priv boolean, db_access boolean, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade, primary key(username))
create_reservation_table = create table if not exists reservation_table (username varchar(45) not null, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade, primary key(username))
create_service_table = create table if not exists service_table (service_id int not null auto_increment, service_name text, amount double, page_requirement boolean, primary key (service_id))
create_pc_table = create table if not exists pc_table (pc_id char(45) not null, ip_address varchar(45), primary key (pc_id))
create_transaction_table = create table if not exists transaction_table (transaction_id int not null auto_increment, date_rendered date, amount_paid double unsigned not null,cost_payable double, username varchar(45) not null, service_id int not null, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade, foreign key fk_service_id(service_id) references service_table(service_id) on update cascade on delete cascade, primary key (transaction_id))
create_pc_usage_table = create table if not exists pc_usage_table (transaction_id int not null, pc_id char(45) not null, login_time time, logout_time time, foreign key fk_pc_id(pc_id) references pc_table(pc_id) on update cascade on delete cascade, foreign key fk_transaction_id(transaction_id) references transaction_table(transaction_id) on update cascade on delete cascade, primary key(transaction_id))
create_pasa_hour_table = create table if not exists pasa_hour_table (transaction_id int not null auto_increment, date_rendered date, sender varchar(45) not null, amount_time time, current_free_sender time, deducted_free_sender time, receiver varchar(45) not null, current_free_receiver time, added_free_receiver time, primary key(transaction_id))
create_receipt_table = create table if not exists receipt_table (dates date, receipt_id varchar(45) not null, transaction_id int not null, username varchar(45) not null, amount_paid double, amount_change double, foreign key fk_transaction_id(transaction_id) references transaction_table(transaction_id) on update cascade on delete cascade, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade)
create_cash_flow_table = create table if not exists cash_flow_table (dates date, cash_in double, cash_close double, cash_out double, primary key(dates))
create_free_pc_usage_table = create table if not exists free_pc_usage_table (transaction_id int not null, foreign key fk_transaction_id(transaction_id) references transaction_table(transaction_id) on update cascade on delete cascade, primary key(transaction_id))
create_diagnostic_table = create table if not exists diagnostic_table (sem_id int not null auto_increment , date_start date, date_end date, sem_num enum('first', 'second', 'mid year'), freetime time, time_penalty double, balance_penalty double, primary key(sem_id))
create_pasa_balance_table = create table if not exists pasa_balance_table (transaction_id int not null auto_increment, date_rendered date, sender varchar(45) not null, amount double, current_balance_sender double, deducted_balance_sender double, receiver varchar(45) not null, current_balance_receiver double, added_balance_receiver double, primary key(transaction_id))

insert_account_table = insert into account_table values (?, password(?), ?, ?, ?, ?)
insert_course_table = insert into course_table values (?, ?)
insert_student_table = insert into student_table values (?, now(), ?, ?, ?, ?, ?, ?)
insert_admin_table = insert into admin_table values (?, ?, ?, ?)
insert_reservation_table = insert into reservation_table values (?)
insert_service_table = insert into service_table (service_name, amount, page_requirement) values (?, ?, ?)
insert_pc_table = insert into pc_table values (?, ?)
insert_transaction_table = insert into transaction_table (date_rendered, amount_paid, cost_payable, username, service_id) values (now(), ?, ?, ?, ?)
insert_pc_usage_table = insert into pc_usage_table values (?, ?, ?, ?)
insert_pasa_hour_table = insert into pasa_hour_table (date_rendered, sender, amount_time, current_free_sender, deducted_free_sender, receiver, current_free_receiver, added_free_receiver) values (curdate(), ?, ?, ?, ?, ?, ?, ?)
insert_free_pc_usage_table = insert into free_pc_usage_table values (?)
insert_cash_flow_table = insert into cash_flow_table values (curdate(), ?, ?, ?)
insert_receipt_table = insert into receipt_table values (curdate(), ?, ?, ?, ?, ?)
insert_diagnostic_table = insert into diagnostic_table (date_start, date_end, sem_num, freetime, time_penalty, balance_penalty) values (?, ?, ?, ?, ?, ?)
insert_pasa_balance_table = insert into pasa_balance_table (date_rendered, sender, amount, current_balance_sender, deducted_balance_sender, receiver, current_balance_receiver, added_balance_receiver) values (curdate(), ?, ?, ?, ?, ?, ?, ?)

delete_reservation_table = delete from reservation_table where username = ?
delete_course_table = delete from course_table where course_abbr = ?
delete_user_assoc_to_course = delete account_table, student_table from student_table inner join account_table on account_table.username = student_table.username where student_table.course_abbr = ?
delete_service_table = delete from service_table where service_name = ?
delete_user_student = delete account_table, student_table from student_table inner join account_table on account_table.username = student_table.username where student_table.username = ?
delete_user_staff = delete account_table, admin_table from admin_table inner join account_table on account_table.username = admin_table.username where admin_table.username = ?

select_total_cost = select sum(cost_payable - amount_paid) from transaction_table where username = ? and cost_payable > amount_paid
select_time_penalty = select time_penalty from diagnostic_table where sem_id = ?
select_balance_penalty = select balance_penalty from diagnostic_table where sem_id = ?
select_balance = select balance from student_table where username = ?
select_accountabilities = select sum(cost_payable - amount_paid) from transaction_table where username = ? and cost_payable > amount_paid
select_count_service_table = select count(*) from service_table
select_count_course_table = select count(*) from course_table
select_course_count = select count(course_abbr) from student_table where course_abbr = ?
select_course_abbr = select course_abbr from course_table where course_name = ?
select_degree_name_abbr = select * from course_table
select_service_name = select * from service_table
select_service_name1 = select service_name from service_table where service_id = ?
select_services_amount = select * from service_table
select_username = select * from account_table where username = (?) and password = password(?)
select_user = select * from account_table where username = (?)
select_reserved_user = select * from reservation_table where username = (?)
select_existing_course = select * from course_table where course_abbr = (?)
select_existing_service = select * from service_table where service_name = (?)
select_existing_transaction_id = select transaction_id from transaction_table where transaction_id = ?
select_user_is_active = select status from student_table where username = ?
select_page_requirement = select page_requirement from service_table where service_name = ?
select_user_details = select account_table.username as 'Username', concat(account_table.last_name, ', ', account_table.first_name, ' ', account_table.suffix_name, ' ', account_table.mid_name) as 'Name',  student_table.course_abbr as 'Degree Program', student_table.year_lvl as 'Year Level', student_table.freetime as 'Free Time' from account_table inner join student_table on account_table.username = student_table.username where student_table.username = ?
select_amount_service = select amount from service_table where service_name = ?
select_id_service = select * from service_table where service_name = ?
select_freetime = select student_table.freetime from student_table inner join transaction_table on student_table.username = transaction_table.username where transaction_table.transaction_id = ?
select_timediff = select timediff(time(?), timediff(time(logout_time), time(login_time))) as 'timedifference' from pc_usage_table where transaction_id = ?
select_trans_user = select username from transaction_table where transaction_id = ?
select_pc_id1 = select pc_id from pc_table where ip_address = ?
select_timedifference = select timediff(time(?), timediff(curtime(), time(?))) as 'timedifference' from pc_usage_table where transaction_id = ?
select_logout_time = select logout_time from pc_usage_table where transaction_id = ?
select_login_time = select login_time from pc_usage_table where transaction_id = ?
select_now = select curtime()
select_time_consumed = select timediff(time(logout_time), time(login_time)) as 'timedifference' from pc_usage_table where time_to_sec(timediff(time(logout_time), time(login_time))) < time_to_sec(time(?)) and transaction_id = ?
select_freetime_user = select freetime from student_table where username = ?
select_cost_transaction = select cost_payable from transaction_table where transaction_id = ?
select_amount_transaction = select amount_paid from transaction_table where transaction_id = ?
select_pc_id_from_trans = select pc_id from pc_usage_table where transaction_id = ?
select_pc_id2 = select pc_table.pc_id from pc_table
select_transactions_with_accountabilities = select transaction_id from transaction_table where username = ? and amount_paid < cost_payable
select_picture = select photolink from student_table where username = ?
select_diagnostic_table2 = select * from diagnostic_table where sem_id = ?
select_diagnostic_table = select * from diagnostic_table order by diagnostic_table.date_end desc limit 1

select_filtered_username = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table inner join student_table on account_table.username = student_table.username where account_table.username like (?) and student_table.username like (?) group by username
select_filtered_lastname = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where account_table.last_name like ? group by username
select_filtered_firstname = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where account_table.first_name like ? group by username
select_filtered_yearlvl = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where student_table.year_lvl like ? group by username
select_filtered_degprog = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where student_table.course_abbr like ? group by username

select_filtered_username2 = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.username like ? group by transaction_id
select_filtered_servicename = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where service_table.service_name like ? group by transaction_id
select_filtered_date = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.date_rendered like ? group by Transaction_id

select_all = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.freetime as 'Free Time', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username
select_filtered_active = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.freetime as 'Free Time', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where student_table.status = 'active'
select_filtered_inactive = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.freetime as 'Free Time', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where student_table.status = 'inactive'

select_online_pc =
select_reserved_pc = select reservation_table.username as 'Username' from reservation_table
select_staff_table = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', admin_table.delete_priv as 'Delete Privilege', admin_table.settle_priv as 'Settle Privilege', admin_table.db_access as 'Database Access' from account_table inner join admin_table on account_table.username = admin_table.username
select_degree_table = select course_table.course_name as 'Degree Program', course_table.course_abbr as 'Abbreviation' from course_table
select_service_table = select service_name as 'Service Name', amount as 'Amount' from service_table
select_pasa_hour = select pasa_hour_table.date_rendered as 'Date', pasa_hour_table.amount_time as 'Amount Time', concat(pasa_hour_table.sender, '     ( ', pasa_hour_table.current_free_sender, '  -  ', pasa_hour_table.deducted_free_sender, ' )') as 'Sender (Current - Deducted)', concat(pasa_hour_table.receiver, '     ( ', pasa_hour_table.current_free_receiver, '  -  ', pasa_hour_table.added_free_receiver, ' )') as 'Receiver (Current - Added)' from pasa_hour_table
select_pasa_bal = select date_rendered as 'Date', amount as 'Amount Time', concat(sender, '     ( ', current_balance_sender, '  -  ', deducted_balance_sender, ' )') as 'Sender (Current - Deducted)', concat(receiver, '     ( ', current_balance_receiver, '  -  ', added_balance_receiver, ' )') as 'Receiver (Current - Added)' from pasa_balance_table

select_transaction_table = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.date_rendered = curdate()
select_all_transaction_table = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id
select_paid_transaction_table = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.cost_payable <= transaction_table.amount_paid
select_unpaid_transaction_table = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.cost_payable > transaction_table.amount_paid

select_usage_daily = select distinct a.pc_id as 'PC Number', (select count(b.pc_id) from pc_usage_table b where b.pc_id = a.pc_id && b.transaction_id in (select transaction_id from transaction_table where date_rendered = ?)) as 'Total # of Transactions', (select count(distinct c.username) from transaction_table c where c.transaction_id in (select d.transaction_id from pc_usage_table d where d.pc_id = a.pc_id) && c.transaction_id in (select transaction_id from transaction_table where date_rendered = ?)) as 'Total # of Users' from pc_usage_table a join transaction_table e on a.transaction_id = e.transaction_id where date_rendered = ?
select_usage_monthly = select distinct a.pc_id as 'PC Number', (select count(b.pc_id) from pc_usage_table b where b.pc_id = a.pc_id && b.transaction_id in (select transaction_id from transaction_table where year(date_rendered) = ? and monthname(date_rendered) = ?)) as 'Total # of Transactions', (select count(distinct c.username) from transaction_table c where c.transaction_id in (select d.transaction_id from pc_usage_table d where d.pc_id = a.pc_id) && c.transaction_id in (select transaction_id from transaction_table where year(date_rendered) = ? and monthname(date_rendered) = ?)) as 'Total # of Users' from pc_usage_table a join transaction_table e on a.transaction_id = e.transaction_id where year(e.date_rendered) = ? and monthname(date_rendered) = ?
select_usage_annual = select distinct a.pc_id as 'PC Number', (select count(b.pc_id) from pc_usage_table b where b.pc_id = a.pc_id && b.transaction_id in (select transaction_id from transaction_table where year(date_rendered) = ?)) as 'Total # of Transactions', (select count(distinct c.username) from transaction_table c where c.transaction_id in (select d.transaction_id from pc_usage_table d where d.pc_id = a.pc_id) && c.transaction_id in (select transaction_id from transaction_table where year(date_rendered) = ?)) as 'Total # of Users' from pc_usage_table a join transaction_table e on a.transaction_id = e.transaction_id where year(e.date_rendered) = ?
select_usage_semestral = select distinct a.pc_id as 'PC Number', (select count(b.pc_id) from pc_usage_table b where b.pc_id = a.pc_id && b.transaction_id in (select transaction_id from transaction_table where date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)))) as 'Total # of Transactions', (select count(distinct c.username) from transaction_table c where c.transaction_id in (select d.transaction_id from pc_usage_table d where d.pc_id = a.pc_id) && c.transaction_id in (select transaction_id from transaction_table where transaction_table.date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)))) as 'Total # of Users' from pc_usage_table a join transaction_table e on a.transaction_id = e.transaction_id where e.date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?))

select_student_daily = select account_table.username, concat(account_table.last_name, ', ', account_table.first_name, ', ', account_table.suffix_name, ', ', account_table.mid_name), student_table.course_abbr from account_table inner join student_table on student_table.username = account_table.username where account_table.username in (select transaction_table.username from transaction_table inner join pc_usage_table on transaction_table.transaction_id = pc_usage_table.transaction_id where year(transaction_table.date_rendered) = ? and transaction_table.date_rendered = ?)
select_student_monthly = select account_table.username as 'Student Number', concat(account_table.last_name, ', ', account_table.first_name, ', ', account_table.suffix_name, ', ', account_table.mid_name) as 'Name', student_table.course_abbr as 'Degree Program' from account_table inner join student_table on student_table.username = account_table.username where account_table.username in (select transaction_table.username from transaction_table inner join pc_usage_table on transaction_table.transaction_id = pc_usage_table.transaction_id where year(transaction_table.date_rendered) = ? and monthname(transaction_table.date_rendered) = ?)
select_student_annual = select account_table.username as 'Student Number', concat(account_table.last_name, ', ', account_table.first_name, ', ', account_table.suffix_name, ', ', account_table.mid_name) as 'Name', student_table.course_abbr as 'Degree Program' from account_table inner join student_table on student_table.username = account_table.username where account_table.username in (select transaction_table.username from transaction_table inner join pc_usage_table on transaction_table.transaction_id = pc_usage_table.transaction_id where year(transaction_table.date_rendered) = ?)
select_student_semestral = select account_table.username as 'Student Number', concat(account_table.last_name, ', ', account_table.first_name, ', ', account_table.suffix_name, ', ', account_table.mid_name) as 'Name', student_table.course_abbr as 'Degree Program' from account_table inner join student_table on student_table.username = account_table.username where account_table.username in (select transaction_table.username from transaction_table inner join pc_usage_table on transaction_table.transaction_id = pc_usage_table.transaction_id where transaction_table.date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)))

select_transaction_daily = select service_table.service_name as 'Service Name', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where transaction_table.date_rendered = ? group by transaction_table.service_id
select_transaction_monthly = select service_table.service_name as 'Service Name', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where year(transaction_table.date_rendered) = ? and monthname(transaction_table.date_rendered) = ? group by transaction_table.service_id
select_transaction_annual = select service_table.service_name as 'Service Name', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where year(transaction_table.date_rendered) = ? group by transaction_table.service_id
select_transaction_semestral = select service_table.service_name as 'Service Name', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where transaction_table.date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) group by transaction_table.service_id

select_latest_trans = select transaction_table.date_rendered as 'Date', service_table.service_name as 'Service Name', substring(transaction_table.amount_paid,1,5) as 'Cash Rendered', substring(transaction_table.cost_payable,1,5) as "Cost Payable" from transaction_table inner join service_table on service_table.service_id = transaction_table.service_id where transaction_table.username = ? order by transaction_table.transaction_id desc limit 5
select_trans_by_user = select service_table.service_name as 'Service Name', sum(transaction_table.amount_paid) as 'Amount Paid', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where transaction_table.username = ? and transaction_table.amount_paid < transaction_table.cost_payable group by transaction_table.service_id

update_activate_student = update student_table set status = 'active' where username = ?
update_deactivate_student = update student_table set status = 'inactive' where username = ?
update_profile_pic = update student_table set photolink = ? where username = ?
update_amount = update transaction_table set amount_paid = ? where transaction_id = ?
update_cash_close = update cash_flow_table set cash_close = cash_close + ? where dates = curdate()
update_balance = update student_table set balance = ? where username = ?
update_logout_expand = update pc_usage_table set logout_time = ? where transaction_id = ?
update_cost_transaction = update transaction_table set cost_payable = (select cost_payable + ? where transaction_id = ?) where transaction_id = ?
update_cost_transaction_plain = update transaction_table set cost_payable = ? where transaction_id = ?
update_amount_transaction = update transaction_table set amount_paid = (select amount_paid + ? where transaction_id = ?) where transaction_id = ?
update_pasa_hour_table = update pasa_hour_table set deducted_free_sender = ?, added_free_receiver = ? where transaction_id = ?
update_pasa_balance_table = update pasa_balance_table set deducted_balance_sender = ?, added_balance_receiver = ? where transaction_id = ?
update_receiver_time = update student_table set freetime = (select addtime(freetime,time(?)) where username = ?) where username = ?
update_sender_time = update student_table set freetime = (select timediff(freetime,time(?)) where username = ?) where username = ?
update_logout_pending = update pc_usage_table set logout_time = (select addtime(time(login_time), time(?))) where transaction_id = ?
update_logout_time = update pc_usage_table set logout_time = curtime() where transaction_id = ?
update_logout_time_with_reference = update pc_usage_table set logout_time = ? where transaction_id = ?
update_user_time = update student_table set freetime = ? where username = ?
update_reset_pw = update account_table set password = password(?) where username = ?
update_all_status = update student_table set status = 'inactive'
update_course_table = update course_table set course_abbr = ?, course_name = ? where course_abbr = ?
update_user_password = update account_table set password = password(?) where username = ? and password = password(?)
update_account_table = update account_table set username = ?, last_name = ?, first_name = ?, mid_name = ?, suffix_name = ? where username = ?
update_admin_table = update admin_table set username = ?, delete_priv = ?, settle_priv = ?, db_access = ? where username = ?
update_student_table = update student_table set username = ?, year_lvl = ?, course_abbr = ?, status = ? where username = ?
update_service_table = update service_table set service_name = ?, amount = ?, page_requirement = ? where service_name = ?
[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Cory Doctorow: Podcast: Let’s get better at demanding better from tech

Here’s my reading (MP3) of Let’s get better at demanding better from tech, a Locus Magazine column about the need to enlist moral, ethical technologists in the fight for a better technological future. It was written before the death of EFF co-founder John Perry Barlow, whose life’s work was devoted to this proposition, and before the Google uprising over Project Maven, in which technologists killed millions of dollars in military contracts by refusing to build AI systems for the Pentagon’s drones.


Worse Than Failure: Sponsor Post: Error Logging vs. Crash Reporting

A lot of developers confuse error and crash reporting tools with traditional logging, and it’s easy to conflate the two without understanding each in more detail.

Dedicated logging tools give you a running history of events that have happened in your application. Dedicated error and crash reporting tools focus on the issues users face that occur when your app is in production, and record the diagnostic details surrounding the problem that happened to the user, so you can fix it with greater speed and accuracy.

Most error logging activities within software teams remain just that: a log of errors that are never actioned or fixed.

Traditionally speaking, when a user reports an issue, you might find yourself hunting around in log files searching for what happened so you can debug it successfully.

Having an error reporting tool running silently in production means not only do users not need to report issues, as they are identified automatically, but each one is displayed in a dashboard, ranked by severity. Teams are able to get down to the root cause of an issue in seconds, not hours.

Full diagnostic details about the issue are presented to the developer immediately. Information such as OS, browser, machine, a detailed stack trace, a history of events leading up to the issue and even which individual users have encountered the specific issue are all made available.
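To make the distinction concrete, here is a minimal sketch (plain Python, illustrative only; the function names and payload fields are assumptions, not any particular vendor’s API): instead of appending strings to a log, a crash reporter assembles a structured diagnostic payload at the moment of failure.

```python
import platform
import sys
import traceback

def build_crash_report(exc_type, exc_value, exc_tb):
    """Collect what a crash reporter would ship: the error, a full
    stack trace, and environment details, all captured at the moment
    of failure rather than scattered through a log file."""
    return {
        "error": f"{exc_type.__name__}: {exc_value}",
        "stack_trace": "".join(
            traceback.format_exception(exc_type, exc_value, exc_tb)
        ),
        "os": platform.platform(),
        "runtime": f"Python {platform.python_version()}",
    }

def install_reporter(send):
    """Run silently in production: hook unhandled exceptions and hand
    the structured report to `send` (e.g. an HTTP call to a dashboard)."""
    def hook(exc_type, exc_value, exc_tb):
        send(build_crash_report(exc_type, exc_value, exc_tb))
    sys.excepthook = hook
```

The point of the sketch is only the shape of the data: one structured record per incident, ready to be grouped and ranked, rather than a line buried in a log file.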

In short, when trying to solve issues in your applications, you immediately see the needle, without bothering with the haystack.

Error monitoring tools are designed to give you answers quickly. Once you experience how they fit into the software development workflow and work alongside your logging, you won’t want to manage your application errors in any other way.

So next time you’re struggling to resolve problems in your apps - Think Raygun.

Your life as a developer will be made so much easier.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Worse Than Failure: A Hard SQL Error


Padma was the new guy on the team, and that sucked. When you're the new guy, but you're not new to the field, there's this maddening combination of factors that can make onboarding rough: a combination of not knowing the product well enough to be efficient, but knowing your craft well enough to expect efficiency. After all, if you're a new intern, you can throw back general-purpose tutorials and feel like you're learning new things at least. When you're a senior trying to make sense of your new company's dizzying array of under-documented products? The only way to get that knowledge is by dragging people who are already efficient away from what they're doing to ask.

By the start of week 2, however, Padma knew enough to get his hands dirty with some smaller bug-fixes. By the end of it, he'd begun browsing the company bug tracker looking for more work on his own. That's when he came across this bug report that seemed rather urgent:

Error: Can't connect to local MySQL server

It had been in the tracker for a month. That could mean a lot of things, all of them opaque when you're new enough not to know anyone. Was it impossible to reproduce? Was it one of those reports thrown in by someone who liked to tamper with their test environment and blame things breaking on the coders? Was their survey product just low priority enough that they hadn't gotten around to fixing it? Which client was this for?

It took Padma a few hours to dig into it enough to get to the root of the problem. The repository for their survey product was stored in their private GitHub, one of dozens of repositories with opaque names. He found the codename of the product, "Santiago," by reading older tickets filed against the same product, before someone had renamed the tag to "Survey Deluxe." There was a branch for every client, an empty Master branch, and a Development branch as the default; he reached back out to the reporter for the name of the client so he could pull up their branch. Of course they had a "clientname" branch, a "clientname-new," and a "clientname3.0," but after comparing merge histories, he eventually discovered the production code: in a totally different branch, after they had merged two clients' environments together for a joint venture. Of course.

But finally, he had the problem reproduced in his local dev environment. After an hour of digging through folders, he found the responsible code:

<h2 id="survey">Surveys</h2>
        <div style="margin-left:10px;">
        <ul class="submenu">
                <li><a href="survey1.php">Survey #1</a></li>
                <li><a href="survey2.php">Survey #2</a><span style="color:red">Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)</span></li>

"But ... why?!" Padma growled at the screen.

"Oh, is that Santiago?" asked his neighbor, leaning over to see his screen. "Yeah, they requested a one-for-one conversion from their previous product. Warts and all. Seems they thought that was the name of the survey, and it was important that it be in red so they could find it easily enough."

Padma stared at the code in disbelief. After a long moment, he closed the editor and the browser, deleted the code from his hard drive, and closed the ticket "won't fix."

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Cryptogram: Secure Speculative Execution

We're starting to see research into designing speculative execution systems that avoid Spectre- and Meltdown-like security problems. Here's one.

I don't know if this particular design is secure. My guess is that we're going to see several iterations of design and attack before we settle on something that works. But it's good to see the research results emerge.

News article.


Valerie Aurora: Yesterday’s joke protest sign just became today’s reality

Tomorrow I’m going to a protest against the forcible separation of immigrant children from their families. When I started thinking about what sign to make, I remembered my sign for the first Women’s March protest, the day after Trump took office in January 2017. It said: “Trump hates kids and puppies… for real!!!”

My protest sign for the 2017 Women’s March

While I expected a lot of terrifying things to happen over the next few years, I never, never thought that Trump would deliberately tear thousands of children away from their families and put them in concentration camps. I knew he hated children; I didn’t know he hated children (specifically, brown children) so much that he’d hold them hostage to force Congress to pass his racist legislation. I did not expect him and his party to try to sell cages full of weeping little boys as future gang members. I did not expect 55% of Republican voters to support splitting up families and putting them in camps. I’m smiling at the cute dog in that photo; now the entire concept of that sign seems impossibly naive and inappropriate, much less my expression in that photo. I apologize for this sign and my joking attitude.

I remember being terrified during the months between Trump’s election and his inauguration. I couldn’t sleep; I put together a go-bag; I bought three weeks worth of food and water and stored them in the closet. I read a dozen books on fascism and failed democracies. I even built a spreadsheet tracking signs of fascism so I’d know when to leave the country.

I came up with the concept of that sign as a way to increase people’s disgust for Trump; what kind of pathetic low-life creep hates kids AND puppies? But I still didn’t get how bad things truly were; I thought Trump hated kids in the sense that he didn’t want any of them around him and wouldn’t lift a finger to help them. I didn’t understand that he—and many people in his administration—took actual pleasure in knowing they were building camps full of crying, desperate, terrified kids who may never be reunited with their parents. In January 2017, I thought I understood the evil of this administration and of a significant percentage of the people in this country; actually, I way underestimated it.

At that protest, several people asked me if Trump really hated puppies, but not one person asked me if Trump really hated kids. In retrospect, this seems ominous, not funny.

I’m going to think very carefully before creating any more “joke” protest signs. Today’s “joke” could easily be tomorrow’s reality.


Cryptogram: Friday Squid Blogging: Capturing the Giant Squid on Video

In this 2013 TED talk, oceanographer Edith Widder explains how her team captured the giant squid on video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on Security: Supreme Court: Police Need Warrant for Mobile Location Data

The U.S. Supreme Court today ruled that the government needs to obtain a court-ordered warrant to gather location data on mobile device users. The decision is a major development for privacy rights, but experts say it may have limited bearing on the selling of real-time customer location data by the wireless carriers to third-party companies.

Image: Wikipedia.

At issue is Carpenter v. United States, which challenged a legal theory the Supreme Court outlined more than 40 years ago known as the “third-party doctrine.” The doctrine holds that people who voluntarily give information to third parties — such as banks, phone companies, email providers or Internet service providers (ISPs) — have “no reasonable expectation of privacy.”

That framework in recent years has been interpreted to allow police and federal investigators to obtain information — such as mobile location data — from third parties without a warrant. But in a 5-4 ruling issued today that flies in the face of the third-party doctrine, the Supreme Court cited “seismic shifts in digital technology” allowing wireless carriers to collect “deeply revealing” information about mobile users that should be protected by the 4th Amendment to the U.S. Constitution, which is intended to shield Americans against unreasonable searches and seizures by the government.

Amy Howe, a reporter for SCOTUSblog, writes that the decision means police will generally need to get a warrant to obtain cell-site location information, a record of the cell towers (or other sites) with which a cellphone connected.

The ruling is no doubt a big win for privacy advocates, but many readers have been asking whether this case has any bearing on the sharing or selling of real-time customer location data by the mobile providers to third party companies. Last month, The New York Times revealed that a company called Securus Technologies had been selling this highly sensitive real-time location information to local police forces across the United States, thanks to agreements the company had in place with the major mobile providers.

It soon emerged that Securus was getting its location data second-hand through a company called 3Cinteractive, which in turn was reselling data from California-based “location aggregator” LocationSmart. Roughly two weeks after The Times’ scoop, KrebsOnSecurity broke the news that anyone could look up the real time location data for virtually any phone number assigned by the major carriers, using a buggy try-before-you-buy demo page that LocationSmart had made available online for years to showcase its technology.

Since those scandals broke, LocationSmart disabled its promiscuous demo page. More importantly, AT&T, Sprint, T-Mobile and Verizon all have said they are now in the process of terminating agreements with third parties to share this real-time location data.

Still, there is no law preventing the mobile providers from hashing out new deals to sell this data going forward, and many readers here have expressed concerns that the carriers can and eventually will do exactly that.

So the question is: Does today’s Supreme Court ruling have any bearing whatsoever on mobile providers sharing location data with private companies?

According to SCOTUSblog’s Howe, the answer is probably “no.”

“[Justice] Roberts emphasized that today’s ruling ‘is a narrow one’ that applies only to cell-site location records,” Howe writes. “He took pains to point out that the ruling did not ‘express a view on matters not before us’ – such as obtaining cell-site location records in real time, or getting information about all of the phones that connected to a particular tower at a particular time. He acknowledged that law-enforcement officials might still be able to obtain cell-site location records without a warrant in emergencies, to deal with ‘bomb threats, active shootings, and child abductions.'”

However, today’s decision by the high court may have implications for companies like Securus which have marketed the ability to provide real-time mobile location data to law enforcement officials, according to Jennifer Lynch, a senior staff attorney with the Electronic Frontier Foundation, a nonprofit digital rights advocacy group.

“The court clearly recognizes the ‘deeply revealing nature’ of location data and recognizes we have a privacy interest in this kind of information, even when it’s collected by a third party (the phone companies),” Lynch wrote in an email to KrebsOnSecurity. “I think Carpenter would have implications for the Securus context where the phone companies were sharing location data with non-government third parties that were then, themselves, making that data available to the government.”

Lynch said that in those circumstances, there is a strong argument the government would need to get a warrant to access the data (even if the information didn’t come directly from the phone company).

“However, Carpenter’s impact in other contexts — specifically in contexts where the government is not involved — is much less clear,” she added. “Currently, there aren’t any federal laws that would prevent phone companies from sharing data with non-government third parties, and the Fourth Amendment would not apply in that context.”

And there’s the rub: There is nothing in the current law that prevents mobile companies from sharing real-time location data with other commercial entities. For that reality to change, Congress would need to act. For more on the prospects of that happening and how we wound up here, check out my May 26 story, Why is Your Location Data No Longer Private?

The full Supreme Court opinion in Carpenter v. United States is available here (PDF).

Cryptogram: The Effects of Iran's Telegram Ban

The Center for Human Rights in Iran has released a report outlining the effects of that country's ban on Telegram, a secure messaging app used by about half of the country.

The ban will disrupt the most important, uncensored platform for information and communication in Iran, one that is used extensively by activists, independent and citizen journalists, dissidents and international media. It will also impact electoral politics in Iran, as centrist, reformist and other relatively moderate political groups that are allowed to participate in Iran's elections have been heavily and successfully using Telegram to promote their candidates and electoral lists during elections. State-controlled domestic apps and media will not provide these groups with such a platform, even as they continue to do so for conservative and hardline political forces in the country, significantly aiding the latter.

From a Wired article:

Researchers found that the ban has had broad effects, hindering and chilling individual speech, forcing political campaigns to turn to state-sponsored media tools, limiting journalists and activists, curtailing international interactions, and eroding businesses that grew their infrastructure and reach off of Telegram.

It's interesting that the analysis doesn't really center around the security properties of Telegram, but more around its ubiquity as a messaging platform in the country.

Cryptogram: Domain Name Stealing at Gunpoint

I missed this story when it came around last year: someone tried to steal a domain name at gunpoint. He was just sentenced to 20 years in jail.

Worse Than Failure: Error'd: Be Patient!...OK?

"I used to feel nervous when making payments online, but now I feel 'Close' about it," writes Jeff K.


"Looks like me and Microsoft have different ideas of what 75% means," Gary S. wrote.


George writes, "Try this one at home! Head to, search for 'documents for opening account' and enjoy 8 solid pages of ...this."


"I'm confused if the developers knew the difference between Javascript and Java. This has to be a troll...right?" wrote JM.


Tom S. writes, "Saw this in the Friendo app, but what I didn't spot was an Ok button. "


"I look at this and wonder if someone could deny a vacation request because of a conflict of 0.000014 days with another member of staff," writes Rob.



Sam Varghese: Recycling Trump: Old news passed off as investigative reporting

Over the last three weeks, viewers of the Australian Broadcasting Corporation’s Four Corners program have been treated to what is the ultimate waste of time: a recapping of all that has gone on in the United States during the investigation into alleged Russian collusion with the Trump campaign in the 2016 presidential campaign.

There was nothing new in the nearly three hours of programming on what is the ABC’s prime investigative program. It only served as a vanity outlet for Sarah Ferguson, rated as one of the network’s better reporters, but after this, and her unnecessary Hillary Clinton interview, she appears to be someone who is interested in big-noting herself.

Exactly why Ferguson and a crew spent what must have been four to six weeks in the US, London and Moscow to put to air material that has been beaten to death by the US and other Western media is a mystery. Had Ferguson managed to unearth one nugget of information that has gone unnoticed so far, one would not be inclined to complain.

But this same ABC has been crying itself hoarse for the last few months over cuts to its budget and trumpeting its news credentials – and then it produces garbage like the three episodes of the Russia-Trump series or whatever it was called.

As an aside, the investigation has been going on for more than a year now, with special counsel Robert Mueller, a former FBI director, having been appointed on May 17, 2017. The American media have had a field day, and every time there is a fresh development, there are shrieks all around that this is the straw that breaks the camel’s back. But it all turns out to be an illusion in the end.

Every little detail of the process of electing Donald Trump has been covered and dissected over and over and over again. And yet Ferguson thought it a good idea to run three hours of this garbage.

Apart from the fact that this is something akin to the behaviour of a dog that revisits its own vomit, Ferguson also paraded some very dodgy individuals to bolster her program.

One was James Clapper, the director of national intelligence during the Obama presidency. Clapper is a man who has committed perjury by lying to the US Congress under oath. Clapper also leaked information about the infamous anti-Trump dossier to CNN’s Jake Tapper and then was rewarded with a contract at CNN.

Clapper does not have the best of reputations when it comes to integrity. To call him a shady character would not be a stretch. Now Ferguson may have needed to speak to him once, because he was the DNI under Obama. But she did not need to have him appear every now and then, remarking on this and that. He added no weight to an already weak program.

Another person Ferguson gave plenty of air time to was Luke Harding, a reporter with the Guardian. Harding is known for a few things: plagiarising others’ reports while he was stationed in Moscow and writing a book about Edward Snowden without having met any of the principal players in the matter. Once again, a person of dubious character.

One would also have to ask: why does the camera focus on the reporter so much? Is she the story? Or is it a way to puff herself up and appear so important that she cannot be out of sight of the lens lest the story break down? It is a curse of modern journalism, this narcissism, and Ferguson suffers from it badly.

This is the second worthless program Ferguson has produced in recent times; the first was her puff interview with Hillary Clinton.

Maybe she is gearing up to take on some kind of job in the US. Wouldn’t surprise me if public money was being used to paint the meretricious as the magnificent.

Cryptogram: Algeria Shut Down the Internet to Prevent Students from Cheating on Exams

Algeria shut the Internet down nationwide to prevent high-school students from cheating on their exams.

The solution in New South Wales, Australia was to ban smartphones.

EDITED TO ADD (6/22): Slashdot thread.


Worse Than Failure: Wait Low Down

As mentioned previously I’ve been doing a bit of coding for microcontrollers lately. Coming from the world of desktop and web programming, it’s downright revelatory. With no other code running, and no operating system, I can use every cycle on a 16MHz chip, which suddenly seems blazing fast. You might have to worry about hardware interrupts- in fact I had to swap serial connection libraries out because the one we were using misused interrupts and threw off the timing of my process.

And boy, timing is amazing when you’re the only thing running on the CPU. I was controlling some LEDs and if I just went in a smooth ramp from one brightness level to the other, the output would be ugly steps instead of a smooth fade. I had to use a technique called temporal dithering, which is a fancy way of saying “flicker really quickly” and in this case depended on accurate, sub-microsecond timing. This is all new to me.
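As a rough illustration of the idea (plain Python, not the actual microcontroller code): to display a brightness that falls between two hardware levels, you accumulate the fractional part and bump the output up a level whenever it overflows, so the time-average converges on the target.

```python
def temporal_dither(target, frames):
    """Approximate a fractional brightness `target` (e.g. 3.25) by
    flickering between the two nearest integer levels, so that the
    average over `frames` outputs converges on the target."""
    base = int(target)
    frac = target - base
    acc = 0.0
    levels = []
    for _ in range(frames):
        acc += frac                  # carry the unrepresentable remainder
        if acc >= 1.0:
            levels.append(base + 1)  # overflow: emit the higher level
            acc -= 1.0
        else:
            levels.append(base)
    return levels
```

For example, `temporal_dither(3.25, 8)` emits level 4 on every fourth frame and level 3 otherwise, averaging exactly 3.25; run fast enough, the eye sees the in-between brightness rather than the flicker.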

Speaking of sub-microsecond timing, or "subus", let's check out Jindra S’s submission. This code also runs on a microcontroller, and for… “performance” or “clock accuracy” is assembly inlined into C.

/*********************** FUNCTION v_Angie_WaitSubus *******************************//**
@brief Busy waits for a defined number of cycles.
The number of needed sys clk cycles depends on the number of flash wait states,
but due to the caching, the flash wait states are not relevant for STM32F4.
4 cycles per u32_Cnt
*/
__asm void  v_Angie_WaitSubus( uint32_t u32_Cnt )
{
loop
    subs r0, #1
    cbz  r0, loop_exit
    b    loop
loop_exit
    bx   lr
}

Now, this assembly isn’t the most readable thing, but the equivalent C code is pretty easy to follow: while(--u32_Cnt); In other words, this is your typical busy-loop. Since this code is the only code running on the chip, no problem right? Well, check out this one:

/*********************** FUNCTION v_Angie_IRQWaitSubus *******************************//**
@brief Busy waits for a defined number of cycles.
The number of needed sys clk cycles depends on the number of flash wait states,
but due to the caching, the flash wait states are not relevant for STM32F4.
4 cycles per u32_Cnt
*/
__asm void  v_Angie_IRQWaitSubus( uint32_t u32_Cnt )
{
IRQloop
    subs r0, #1
    cbz  r0, IRQloop_exit
    b    IRQloop
IRQloop_exit
    bx   lr
}

What do you know, it’s the same exact code, but called IRQWaitSubus, implying it’s meant to be called inside of an interrupt handler. The details can get fiendishly complicated, but for those who aren’t looking at low-level code on the regular, interrupts are the low-level cousin of event handlers. It allows a piece of hardware (or software, in multiprocessing systems) to notify the CPU that something interesting has happened, and the CPU can then execute some of your code to react to it. Like any other event handler, interrupt handlers should be fast, so they can update the program state and then allow normal execution to continue.

What you emphatically do not do is wait inside of an interrupt handler. That’s bad. Not a full-on WTF, but… bad.

There are at least three more variations of this function, with slightly different names, scattered across different modules, all of which represent the same simple busy loop.
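The "4 cycles per u32_Cnt" figure in the comment block is what such a busy-wait lives and dies by: converting a desired delay into a loop count is simple arithmetic (sketched here in Python for clarity; the 168 MHz figure in the example is just a typical STM32F4 clock, not from the article).

```python
def busy_wait_count(sys_clk_hz, delay_us, cycles_per_iter=4):
    """Loop count for a busy-wait delay, assuming each iteration of the
    subs/cbz/b loop costs `cycles_per_iter` system-clock cycles (the
    '4 cycles per u32_Cnt' figure from the comment block)."""
    return round(sys_clk_hz * delay_us / 1_000_000 / cycles_per_iter)
```

At a 168 MHz system clock, a 10 µs wait works out to `busy_wait_count(168_000_000, 10)` = 420 iterations; the catch, as the story shows, is that the count is only as accurate as the oscillator feeding it.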

Ugly, sure, but where’s the WTF? Well, among other things, this board needed to output precisely timed signals, like, say, a 500Hz square wave with a 20% duty cycle. The on-board CPU clock was a simple oscillator which would drift- over time, with changes in temperature, etc. Also, interrupts could claim CPU cycles, throwing off the waits. So Jindra’s company had placed this code onto some STM32F4 ARM microcontrollers, shipped it into the field, and discovered that outside of their climate-controlled offices, stuff started to fail.
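For a sense of the arithmetic involved: driving that 500Hz, 20% duty-cycle wave from a dedicated hardware timer comes down to two values, a period and a compare point. A sketch of the calculation only (the 16 MHz timer clock in the example is an assumption for illustration, and real register names vary by chip):

```python
def pwm_settings(timer_clk_hz, freq_hz, duty):
    """Compute the auto-reload (period) and compare-match values a
    hardware PWM timer needs to emit a square wave of `freq_hz` with
    the given duty cycle, given the timer's input clock."""
    period = round(timer_clk_hz / freq_hz)  # ticks per output cycle
    compare = round(period * duty)          # ticks the output stays high
    return period, compare
```

With a 16 MHz timer clock, `pwm_settings(16_000_000, 500, 0.2)` gives a period of 32000 ticks with the output high for the first 6400; the timer hardware then holds that timing regardless of what the CPU is busy doing.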

The code fix was simple- the STM32-series of processors had a hardware timer which could provide precise timing. Switching to that approach not only made the system more accurate- it also meant that Jindra could throw away hundreds of lines of code which was complicated, buggy, and littered with inline assembly for no particular reason. There was just one problem: the devices with the bad software were already in the field. Angry customers were already upset over how unreliable the system was. And short of going on site to reflash the microcontrollers or shipping fresh replacements, the company was left with only one recourse:
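To get a feel for why the hardware-timer approach wins, here’s a sketch (in Python, purely for illustration) of how the timer register values for such a signal would be worked out. The 84 MHz timer clock and the prescaler of 84 are hypothetical values, not the actual product’s configuration — real numbers depend on the specific STM32 part and its clock tree.

```python
# Sketch: computing STM32-style timer settings for a 500 Hz square wave
# with a 20% duty cycle. TIMER_CLOCK_HZ and the prescaler are assumed
# values for illustration only.
TIMER_CLOCK_HZ = 84_000_000

def timer_settings(target_hz, duty, prescaler=84):
    """Return (prescaler, auto_reload, compare) register values.

    The timer counts at TIMER_CLOCK_HZ / prescaler ticks per second,
    rolls over every (auto_reload + 1) ticks, and the output stays high
    while the count is below `compare`.
    """
    tick_hz = TIMER_CLOCK_HZ // prescaler   # 1 MHz tick with these numbers
    period_ticks = tick_hz // target_hz     # ticks per output cycle
    auto_reload = period_ticks - 1          # hardware counts 0..ARR inclusive
    compare = int(period_ticks * duty)      # high-time in ticks
    return prescaler, auto_reload, compare

psc, arr, ccr = timer_settings(500, 0.20)
print(psc, arr, ccr)  # 84 1999 400
```

Once those three registers are loaded, the timer peripheral generates the waveform entirely in hardware: no busy loops, no sensitivity to interrupt load, and the output stays correct no matter what the CPU is doing.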

They announced Rev 2 of their product, which offered higher rates of reliability and better performance, and only cost 2% more!


CryptogramAre Free Societies at a Disadvantage in National Cybersecurity

Jack Goldsmith and Stuart Russell just published an interesting paper, making the case that free and democratic nations are at a structural disadvantage in nation-on-nation cyberattack and defense. From a blog post:

It seeks to explain why the United States is struggling to deal with the "soft" cyber operations that have been so prevalent in recent years: cyberespionage and cybertheft, often followed by strategic publication; information operations and propaganda; and relatively low-level cyber disruptions such as denial-of-service and ransomware attacks. The main explanation is that constituent elements of U.S. society -- a commitment to free speech, privacy and the rule of law; innovative technology firms; relatively unregulated markets; and deep digital sophistication -- create asymmetric vulnerabilities that foreign adversaries, especially authoritarian ones, can exploit. These asymmetrical vulnerabilities might explain why the United States so often appears to be on the losing end of recent cyber operations and why U.S. attempts to develop and implement policies to enhance defense, resiliency, response or deterrence in the cyber realm have been ineffective.

I have long thought this to be true. There are defensive cybersecurity measures that a totalitarian country can take that a free, open, democratic country cannot. And there are attacks against a free, open, democratic country that just don't matter to a totalitarian country. That makes us more vulnerable. (I don't mean to imply -- and neither do Russell and Goldsmith -- that this disadvantage implies that free societies are overall worse, but it is an asymmetry that we should be aware of.)

I do worry that these disadvantages will someday become intolerable. Dan Geer often said that "the price of freedom is the probability of crime." We are willing to pay this price because it isn't that high. As technology makes individual and small-group actors more powerful, this price will get higher. Will there be a point in the future where free and open societies will no longer be able to survive? I honestly don't know.

EDITED TO ADD (6/21): Jack Goldsmith also wrote this.


TEDTEDx talk under review

Updated June 20, 2018: An independently organized TEDx event recently posted, and subsequently removed, a talk from the TEDx YouTube channel that the event organizer titled: “Why our perception of pedophilia has to change.”

In the TEDx talk, a speaker described pedophilia as a condition some people are born with, and suggested that if we recognize it as such, we can do more to prevent those people from acting on their instincts.

TEDx events are organized independently from the main annual TED conference, with some 3,500 events held every year in more than 100 countries. Our nonprofit TED organization does not control TEDx events’ content.

This talk and its removal was recently brought to our attention. After reviewing the talk, we believe it cites research in ways that are open to serious misinterpretation. This led some viewers to interpret the talk as an argument in favor of an illegal and harmful practice.

Furthermore, after contacting the organizer to understand why it had been taken down, we learned that the speaker herself requested it be removed from the internet because she had serious concerns about her own safety in its wake.

Our policy is and always has been to remove speakers’ talks when they request we do so. That is why we support this TEDx organizer’s decision to respect this speaker’s wishes and keep the talk offline.

We will continue to take down any illegal copies of the talk posted on the Internet.

Original, posted June 19, 2018: An independently organized TEDx event recently posted, and subsequently removed, a talk from the TEDx YouTube channel that the event organizer had titled: “Why our perception of pedophilia has to change.”
We were not aware of this organizer’s actions, but understand now that their decision to remove the talk was at the speaker’s request for her safety.
In our review of the talk in question, we at TED believe it cites research open to serious misinterpretation.
TED does not support or advocate for pedophilia.
We are now reviewing the talk to determine how to move forward.
Until we can review this talk for potential harm to viewers, we are taking down any illegal copies of the talk posted on the Internet.  

CryptogramPerverse Vulnerability from Interaction between 2-Factor Authentication and iOS AutoFill

Apple is rolling out an iOS security usability feature called Security code AutoFill. The basic idea is that the OS scans incoming SMS messages for security codes and suggests them in AutoFill, so that people can use them without having to memorize or type them.

Sounds like a really good idea, but Andreas Gutmann points out an application where this could become a vulnerability: when authenticating transactions:

Transaction authentication, as opposed to user authentication, is used to attest the correctness of the intention of an action rather than just the identity of a user. It is most widely known from online banking, where it is an essential tool to defend against sophisticated attacks. For example, an adversary can try to trick a victim into transferring money to a different account than the one intended. To achieve this the adversary might use social engineering techniques such as phishing and vishing and/or tools such as Man-in-the-Browser malware.

Transaction authentication is used to defend against these adversaries. Different methods exist but in the one of relevance here -- which is among the most common methods currently used -- the bank will summarise the salient information of any transaction request, augment this summary with a TAN tailored to that information, and send this data to the registered phone number via SMS. The user, or bank customer in this case, should verify the summary and, if this summary matches with his or her intentions, copy the TAN from the SMS message into the webpage.

This new iOS feature creates problems for the use of SMS in transaction authentication. Applied to 2FA, the user would no longer need to open and read the SMS from which the code has already been conveniently extracted and presented. Unless this feature can reliably distinguish between OTPs in 2FA and TANs in transaction authentication, we can expect that users will also have their TANs extracted and presented without context of the salient information, e.g. amount and destination of the transaction. Yet, precisely the verification of this salient information is essential for security. Examples of where this scenario could apply include a Man-in-the-Middle attack on the user accessing online banking from their mobile browser, or where a malicious website or app on the user's phone accesses the bank's legitimate online banking service.

This is an interesting interaction between two security systems. Security code AutoFill eliminates the need for the user to view the SMS or memorize the one-time code. Transaction authentication assumes the user read and approved the additional information in the SMS message before using the one-time code.

Worse Than FailureThe Wizard Algorithm

Password requirements can be complicated. Some minimum and maximum number of characters, alpha and numeric characters, special characters, upper and lower case, change frequency, uniqueness over the last n passwords and different rules for different systems. It's enough to make you revert to a PostIt in your desk drawer to keep track of it all. Some companies have brillant employees who feel that they can do better, and so they create a way to figure out the password for any given computer - so you need to neither remember nor even know it.


History does not show who created the wizard algorithm, or when, or what they were smoking at the time.

Barry W. has the misfortune of being a Windows administrator at a company that believes in coming up with their own unique way of doing things, because they can make it better than the way that everyone else is doing it. It's a small organization, in a sleepy part of a small country. And yet, the IT department prides itself on its highly secure practices.

Take the password of the local administrator account, for instance. It's the Windows equivalent of root, so you'd better use a long and complex password. The IT team won't use software to automate and keep track of passwords, so to make things extremely secure, there's a different password for every server.

Here's where the wizard algorithm comes in.

To determine the password, all you need is the server's hostname and its IP address.

For example, take the server PRD-APP2-SERV4 which has the IP address

Convert the hostname to upper case and discard any hyphens, yielding PRDAPP2SERV4.

Take the middle two octets of the IP address. If either is a single digit, pad it out to double digits. So becomes which yields 8010. Now take the last character of the host name; if that's a digit, discard it and take the last letter, otherwise just take the last letter, which gives us V. Now take the second and third letters of the hostname and concatenate them to the 8010 and then stick that V on the end. This gives us 8010RDV. Now take the fourth and fifth letters, and add them to the end, which makes 8010RDVAP. And there's your password! Easy.

It had been that way for as long as anyone could remember, until the day someone decided to enable password complexity on the domain. From then on, you had to do all of the above, and then add @!#%&$?@! to the end of the password. How would you know whether a server has a password using the old method or the new one? Why, by a spreadsheet available on the firm-wide-accessible file system, of course! Oh, by the way, there is no server management software.

Critics might say the wizard algorithm has certain disadvantages, like the fact that two people, given the same hostname and IP address, often come up with different results. Apparently, writing a script to figure it out for you never dawned on anyone.
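For what it’s worth, the algorithm as described is trivial to script. Here’s a sketch in Python — the hostname is the article’s own example, while the IP address is hypothetical (the original was lost), chosen so its middle octets produce the “8010” from the walkthrough:

```python
# Sketch of the "wizard algorithm" as described above, so nobody has to
# do it in their head under pressure. The IP address below is hypothetical.
def wizard_password(hostname, ip):
    name = hostname.upper().replace("-", "")
    # Middle two octets of the IP, each zero-padded to two digits.
    octets = ip.split(".")
    middle = "".join(o.zfill(2) for o in octets[1:3])
    # Last character of the name; if it's a digit, take the last letter instead.
    last = name[-1]
    if last.isdigit():
        last = [c for c in name if c.isalpha()][-1]
    # middle octets + 2nd/3rd letters + last letter + 4th/5th letters
    return middle + name[1:3] + last + name[3:5]

print(wizard_password("PRD-APP2-SERV4", "10.80.10.7"))  # 8010RDVAP
```

Of course, a script like this only papers over the real problem: the password is derivable from two pieces of public information, which is to say it isn’t really a secret at all.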

Or the fact that when a server has lost contact with the domain and you're trying to log on locally and the phone's ringing and everyone's pressuring you to get it resolved, the last thing you want to be doing is math puzzles.

But at least it's better than the standard way people normally do it!



Krebs on SecurityAT&T, Sprint, Verizon to Stop Sharing Customer Location Data With Third Parties

In the wake of a scandal involving third-party companies leaking or selling precise, real-time location data on virtually all Americans who own a mobile phone, AT&T, Sprint and Verizon now say they are terminating location data sharing agreements with third parties.

At issue are companies known in the wireless industry as “location aggregators,” entities that manage requests for real-time customer location data for a variety of purposes, such as roadside assistance and emergency response. These aggregators are supposed to obtain customer consent before divulging such information, but several recent incidents show that this third-party trust model is fundamentally broken.

On May 10, 2018, The New York Times broke the story that a little-known data broker named Securus was selling local police forces around the country the ability to look up the precise location of any cell phone across all of the major U.S. mobile networks.

Then it emerged that Securus had been hacked, its database of hundreds of law enforcement officer usernames and passwords plundered. We also learned that Securus’ data was ultimately obtained from a company called 3Cinteractive, which in turn obtained its data through a California-based location tracking firm called LocationSmart.

On May 17, KrebsOnSecurity broke the news of research by Carnegie Mellon University PhD student Robert Xiao, who discovered that a LocationSmart try-before-you-buy opt-in demo of the company’s technology was wide open — allowing real-time lookups from anyone on anyone’s mobile device — without any sort of authentication, consent or authorization.

LocationSmart disabled its demo page shortly after that story. By that time, Sen. Ron Wyden (D-Ore.) had already sent letters to AT&T, Sprint, T-Mobile and Verizon, asking them to detail any agreements to share real-time customer location data with third-party data aggregation firms.

AT&T, T-Mobile and Verizon all said they had terminated data-sharing agreements with Securus. In a written response (PDF) to Sen. Wyden, Sprint declined to share any information about third parties with which it may share customer location data, and it was the only one of the four carriers that didn’t say it was terminating any data-sharing agreements.

T-Mobile and Verizon each said they share real-time customer data with two companies — LocationSmart and another firm called Zumigo, noting that these companies in turn provide services to a total of approximately 75 other customers.

Verizon emphasized that Zumigo — unlike LocationSmart — has never offered any kind of mobile location information demo service via its site. Nevertheless, Verizon said it had decided to terminate its current location aggregation arrangements with both LocationSmart and Zumigo.

“Verizon has notified these location aggregators that it intends to terminate their ability to access and use our customers’ location data as soon as possible,” wrote Karen Zacharia, Verizon’s chief privacy officer. “We recognize that location information can provide many pro-consumer benefits. But our review of our location aggregator program has led to a number of internal questions about how best to protect our customers’ data. We will not enter into new location aggregation arrangements unless and until we are comfortable that we can adequately protect our customers’ location data through technological advancements and/or other practices.”

In its response (PDF), AT&T made no mention of any other company besides Securus. AT&T indicated it had no intention to stop sharing real-time location data with third parties, stating that “without an aggregator, there would be no practical and efficient method to facilitate requests across different carriers.”

Sen. Wyden issued a statement today calling on all wireless companies to follow Verizon’s lead.

“Verizon deserves credit for taking quick action to protect its customers’ privacy and security,” Wyden said. “After my investigation and follow-up reports revealed that middlemen are selling Americans’ location to the highest bidder without their consent, or making it available on insecure web portals, Verizon did the responsible thing and promptly announced it was cutting these companies off. In contrast, AT&T, T-Mobile, and Sprint seem content to continuing to sell their customers’ private information to these shady middle men, Americans’ privacy be damned.”

Update, 5:20 p.m. ET: Shortly after Verizon’s letter became public, AT&T and Sprint have now said they, too, will start terminating agreements to share customer location data with third parties.

“Based on our current internal review, Sprint is beginning the process of terminating its current contracts with data aggregators to whom we provide location data,” the company said in an emailed statement. “This will take some time in order to unwind services to consumers, such as roadside assistance and fraud prevention services. Sprint previously suspended all data sharing with LocationSmart on May 25, 2018. We are taking this further step to ensure that any instances of unauthorized location data sharing for purposes not approved by Sprint can be identified and prevented if location data is shared inappropriately by a participating company.”

AT&T today also issued a statement: “Our top priority is to protect our customers’ information, and, to that end, we will be ending our work with aggregators for these services as soon as practical in a way that preserves important, potential lifesaving services like emergency roadside assistance.”

KrebsOnSecurity asked T-Mobile if the company planned to follow suit, and was referred to a tweet today from T-Mobile CEO John Legere, who wrote: “I’ve personally evaluated this issue & have pledged that T-Mobile will not sell customer location data to shady middlemen.” In a follow-up statement shared by T-Mobile, the company said, “We ended all transmission of customer data to Securus and we are terminating our location aggregator agreements.”

Wyden’s letter asked the carriers to detail any arrangements they may have to validate that location aggregators are in fact gaining customer consent before divulging the information. Both Sprint and T-Mobile said location aggregators were contractually obligated to obtain customer consent before sharing the data, but they provided few details about any programs in place to review claims and evidence that an aggregator has obtained consent.

AT&T and Verizon each said they have processes for periodically auditing consent practices by the location aggregators, but that Securus’ unauthorized use of the data somehow flew under the radar.

AT&T noted that it began its relationship with LocationSmart in October 2012 (back when it was known by another name, “Locaid”).  Under that agreement, LocationSmart’s customer 3Cinteractive would share location information with prison officials through prison telecommunications provider Securus, which operates a prison inmate calling service.

But AT&T said after Locaid was granted that access, Securus began abusing it to sell an unauthorized “on-demand service” that allowed police departments to learn the real-time location data of any customer of the four major providers.

“We now understand that, despite AT&T’s requirements to obtain customer consent, Securus did not in fact obtain customer consent before collecting customers’ location information for its on-demand service,” wrote Timothy P. McKone, executive vice president of federal relations at AT&T. “Instead, Securus evidently relied upon law enforcement’s representation that it had appropriate legal authority to obtain customer location data, such as a warrant, court order, or other authorizing document as a proxy for customer consent.”

McKone’s letter downplays the severity of the Securus incident, saying that the on-demand location requests “comprised a tiny fraction — less than two tenths of one percent — of the total requests Securus submitted for the approved inmate calling service. AT&T has no reason to believe that there are other instances of unauthorized access to AT&T customer location data.”

Blake Reid, an associate clinical professor at the University of Colorado School of Law, said the entire mobile location-sharing debacle shows the futility of transitive trust.

“The carriers basically have arrangements with these location aggregators that contractually say, ‘You agree not to use this access we provide you without getting customer consent’,” Reid said. “Then that aggregator has a relationship with another aggregator, and so on. So what we then have is this long chain of trust where no one has ever consented to the provision of the location information, and yet it ends up getting disclosed anyhow.”

Curious how we got here and what Congress or federal regulators might do about the current situation? Check out last month’s story, Why Is Your Location Data No Longer Private.

Update, 5:20 p.m. ET: Updated headline and story to reflect statements from AT&T and Sprint that they are winding down customer location data-sharing agreements with third party companies.

Update, June 20, 2:23 p.m. ET: Added clarification from T-Mobile.

Sociological Images“Uncomfortable with Cages”: When Framing Fails

By now, you’ve probably heard about the family separation and detention policies at the U.S. border. The facts are horrifying.

Recent media coverage has led to a flurry of outrage and debate about the origins of this policy. It is a lot to take in, but this case also got me thinking about an important lesson from sociology for following politics in 2018: we’re not powerless in the face of “fake news.”

Photo Credit: Fibonacci Blue, Flickr CC

Political sociologists talk a lot about framing—the way movements and leaders select different interpretations of an issue to define and promote their position. Frames are powerful interpretive tools, and sociologists have shown how framing matters for everything from welfare reform and nuclear power advocacy to pro-life and labor movements.

One of the big assumptions in framing theory is that leaders coordinate. There might be competition to establish a message at first, but actors on the same side have to get together fairly quickly to present a clean, easy to understand “package” of ideas to people in order to make political change.

The trick is that it is easy to get cynical about framing, to think that only powerful people get to define the terms of debate. We assume that a slick, well-funded media campaign will win out, and any counter-frames will get pushed to the side. But the recent uproar over border separation policies shows how framing can be a very messy process. Over just a few days, these are a few of the frames coming from administration officials and border authorities:

We don’t know how this issue is going to turn out, but many of these frames have been met with skepticism, more outrage, and plenty of counter-evidence. Calling out these frames alone is not enough; it will take mobilization, activism, lobbying, and legislation to change these policies. Nevertheless, this is an important reminder that framing is a social process, and, especially in an age of social media, it is easier than ever to disrupt a political narrative before it has the chance to get organized.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Worse Than FailureCodeSOD: A Unique Specification

One of the skills I think programmers should develop is not directly programming related: you should be comfortable reading RFCs. If, for example, you want to know what actually constitutes an email address, you may want to brush up on your BNF grammars. Reading and understanding an RFC is its own skill, and while I wouldn’t suggest getting in the habit of reading RFCs for fun, it’s something you should do from time to time.

To build the skill, I recommend picking a simple one, like UUIDs. There’s a lot of information encoded in a UUID, and five different ways to define UUIDs- though usually we use type 1 (timestamp-based) and type 4 (random). Even if you haven’t gone through and read the spec, you already know the most important fact about UUIDs: they’re unique. They’re universally unique in fact, and you can use them as identifiers. You shouldn’t have a collision happen within the lifetime of the universe, unless someone does something incredibly wrong.

Dexen encountered a database full of collisions on UUIDs. Duplicates were scattered all over the place. Since we’re not well past the heat-death of the universe, the obvious answer is that someone did something entirely wrong.

use Ramsey\Uuid\Uuid;
$model->uuid = Uuid::uuid5(Uuid::NAMESPACE_DNS, sprintf('%s.%s.%s.%s', 
    rand(0, time()), time(), 
    static::class, config('modelutils.namespace')))->toString();

This block of PHP code uses the type-5 UUID, which allows you to generate the UUID based on a name. Given a namespace, usually a domain name, it runs it through SHA-1 to generate the required bytes, allowing you to create specific UUIDs as needed. In this case, Dexen’s predecessor was generating a “domain name”-ish string by combining: a random number from 0 to seconds after the epoch, the number of seconds after the epoch, the name of the class, and a config key. So this developer wasn’t creating UUIDs with a specific, predictable input (the point of UUID-5), but was mixing a little from the UUID-1 time-based generation, and the UUID-4 random-based generation, but without the cryptographically secure source of randomness.

Thus, collisions. Since these UUIDs didn’t need to be sortable (no need for UUID-1), Dexen changed the generation to UUID-4.
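The distinction is easy to demonstrate with Python’s standard uuid module (a sketch, not the original PHP code): name-based UUID-5 is fully deterministic, which is exactly why feeding it a weakly random name invites collisions, while UUID-4 draws from the platform’s secure randomness.

```python
import uuid

# uuid5 is deterministic: the same namespace and name always produce the
# same UUID. That's the point of name-based UUIDs, and exactly why
# building the "name" from weak randomness invites collisions.
a = uuid.uuid5(uuid.NAMESPACE_DNS, "")
b = uuid.uuid5(uuid.NAMESPACE_DNS, "")
assert a == b

# uuid4 uses a cryptographically secure random source, which is what you
# want when the identifier only needs to be unique.
c = uuid.uuid4()
d = uuid.uuid4()
assert c != d
```

In other words: if your input to uuid5 can repeat, your UUIDs will repeat, by design.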


Don Martiblood donation: no good deed goes unpunished

I have been infected with the Ebola virus.

I have had sex with another man in the past year.

I am taking Coumadin®.

Actually, none of those three statements is true. And Facebook knows it.

The American Red Cross has given Facebook this highly personal information about me, by adding my contact info to an "American Red Cross Blood Donors" Facebook Custom Audience. If any of that stuff were true, I wouldn't have been allowed to give blood.

When I heard back from the American Red Cross about this personal data problem, they told me that they don't share my health information with Facebook.

That's not how it works. I'm listed in the Custom Audience as a blood donor. Anyway, too late. Facebook has the info now.

So, which of its promises about how it uses people's personal information is Facebook going to break next?

And is some creepy tech bro right now making a killer pitch to Paul Graham about a business plan to "disrupt" the health insurance market using blood donor information?

I should not have to care about this, and I don't have time to. I don't even have time to attempt a funny remark about the whole Facebook board member Peter Thiel craving blood thing.


Rondam RamblingsDamn straight there's a moral equivalence here

Germany, 1945: The United States of America, 2018: It's true, the kid in the second picture is not being sent to the gas chambers (yet).  But here's the thing: she doesn't know that!  This kid is two years old.  All she knows is that her mother is being taken away, and she may or may not ever see her again. The government of the United States of America has run completely off the

Krebs on SecurityGoogle to Fix Location Data Leak in Google Home, Chromecast

Google in the coming weeks is expected to fix a location privacy leak in two of its most popular consumer products. New research shows that Web sites can run a simple script in the background that collects precise location data on people who have a Google Home or Chromecast device installed anywhere on their local network.

Craig Young, a researcher with security firm Tripwire, said he discovered an authentication weakness that leaks incredibly accurate location information about users of both the smart speaker and home assistant Google Home, and Chromecast, a small electronic device that makes it simple to stream TV shows, movies and games to a digital television or monitor.

Young said the attack works by asking the Google device for a list of nearby wireless networks and then sending that list to Google’s geolocation lookup services.

“An attacker can be completely remote as long as they can get the victim to open a link while connected to the same Wi-Fi or wired network as a Google Chromecast or Home device,” Young told KrebsOnSecurity. “The only real limitation is that the link needs to remain open for about a minute before the attacker has a location. The attack content could be contained within malicious advertisements or even a tweet.”

It is common for Web sites to keep a record of the numeric Internet Protocol (IP) address of all visitors, and those addresses can be used in combination with online geolocation tools to glean information about each visitor’s hometown or region. But this type of location information is often quite imprecise. In many cases, IP geolocation offers only a general idea of where the IP address may be based geographically.

This is typically not the case with Google’s geolocation data, which includes comprehensive maps of wireless network names around the world, linking each individual Wi-Fi network to a corresponding physical location. Armed with this data, Google can very often determine a user’s location to within a few feet (particularly in densely populated areas), by triangulating the user between several nearby mapped Wi-Fi access points. [Side note: Anyone who’d like to see this in action need only to turn off location data and remove the SIM card from a smart phone and see how well navigation apps like Google’s Waze can still figure out where you are].

“The difference between this and a basic IP geolocation is the level of precision,” Young said. “For example, if I geolocate my IP address right now, I get a location that is roughly 2 miles from my current location at work. For my home Internet connection, the IP geolocation is only accurate to about 3 miles. With my attack demo however, I’ve been consistently getting locations within about 10 meters of the device.”

Young said a demo he created (a video of which is below) is accurate enough that he can tell roughly how far apart his device in the kitchen is from another device in the basement.

“I’ve only tested this in three environments so far, but in each case the location corresponds to the right street address,” Young said. “The Wi-Fi based geolocation works by triangulating a position based on signal strengths to Wi-Fi access points with known locations based on reporting from people’s phones.”

Beyond leaking a Chromecast or Google Home user’s precise geographic location, this bug could help scammers make phishing and extortion attacks appear more realistic. Common scams like fake FBI or IRS warnings or threats to release compromising photos or expose some secret to friends and family could abuse Google’s location data to lend credibility to the fake warnings, Young notes.

“The implications of this are quite broad including the possibility for more effective blackmail or extortion campaigns,” he said. “Threats to release compromising photos or expose some secret to friends and family could use this to lend credibility to the warnings and increase their odds of success.”

When Young first reached out to Google in May about his findings, the company replied by closing his bug report with a “Status: Won’t Fix (Intended Behavior)” message. But after being contacted by KrebsOnSecurity, Google changed its tune, saying it planned to ship an update to address the privacy leak in both devices. Currently, that update is slated to be released in mid-July 2018.

According to Tripwire, the location data leak stems from poor authentication by Google Home and Chromecast devices, which rarely require authentication for connections received on a local network.

“We must assume that any data accessible on the local network without credentials is also accessible to hostile adversaries,” Young wrote in a blog post about his findings. “This means that all requests must be authenticated and all unauthenticated responses should be as generic as possible. Until we reach that point, consumers should separate their devices as best as is possible and be mindful of what web sites or apps are loaded while on the same network as their connected gadgets.”

Earlier this year, KrebsOnSecurity posted some basic rules for securing your various “Internet of Things” (IoT) devices. That primer lacked one piece of advice that is a bit more technical but which can help mitigate security or privacy issues that come with using IoT systems: Creating your own “Intranet of Things,” by segregating IoT devices from the rest of your local network so that they reside on a completely different network from the devices you use to browse the Internet and store files.

“A much easier solution is to add another router on the network specifically for connected devices,” Young wrote. “By connecting the WAN port of the new router to an open LAN port on the existing router, attacker code running on the main network will not have a path to abuse those connected devices. Although this does not by default prevent attacks from the IoT devices to the main network, it is likely that most naïve attacks would fail to even recognize that there is another network to attack.”
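As Young notes, the second router does not by default block traffic from the IoT side toward the main network. On routers that expose the firewall directly, that restriction can be added with two filter rules. This is an untested sketch, not config from the article: the interface names and LAN subnet are assumptions that will differ on your hardware.

```shell
# Hypothetical iptables sketch of the same segregation.
# Assumptions: eth0 = upstream toward the main router, eth1 = the IoT
# segment, 192.168.1.0/24 = the main LAN. Adjust for your network.

# Drop anything the IoT segment initiates toward the main LAN...
iptables -A FORWARD -i eth1 -d 192.168.1.0/24 -j DROP
# ...but still let IoT devices reach the internet.
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
```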

For more on setting up a multi-router solution to mitigating threats from IoT devices, check out this in-depth post on the subject from security researcher and blogger Steve Gibson.


Update, June 19, 6:24 p.m. ET: The authentication problems that Tripwire found are hardly unique to Google’s products, according to extensive research released today by artist and programmer Brannon Dorsey. Check out the full story on Dorsey’s research here.

MECooperative Learning

This post is about my latest idea for learning about computers. I posted it to my local LUG mailing list and received no responses. But I still think it’s a great idea and that I just need to find the right way to launch it.

I think it would be good to try cooperative learning about Computer Science online. The idea is that everyone would join an IRC channel at a suitable time with virtual machine software configured, try out new FOSS software at the same time, and exchange ideas about it via IRC. It would be fairly informal and people could come and go as they wish; the session would probably go for about 4 hours, but if people wanted to continue longer then no-one would stop them.

I’ve got some under-utilised KVM servers that I could use to provide test VMs for network software; my original idea was to use those for members of my local LUG, but that doesn’t scale well. If a larger group of people is to be involved, they would have to run their own virtual machines, use physical hardware, or use trial accounts from VM companies.

The general idea would be for two broad categories of sessions, ones where an expert provides a training session (assigning tasks to students and providing suggestions when they get stuck) and ones where the coordinator has no particular expertise and everyone just learns together (like “let’s all download a random BSD Unix and see how it compares to Linux”).

As this would be IRC based, there would be no impediment to people from other regions being involved, apart from the fact that it might start at 1AM their time (e.g. 6PM on the east coast of Australia is 1AM on the west coast of the US). For most people the best times for such education would be evenings on week nights, which greatly limits the geographic spread.

While the aims of this would mostly be things that relate to Linux, I would be happy to coordinate a session on ReactOS as well. I’m thinking of running training sessions on etbemon, DNS, Postfix, BTRFS, ZFS, and SE Linux.

I’m thinking of coordinating learning sessions about DragonflyBSD (particularly HAMMER2), ReactOS, Haiku, and Ceph. If people are interested in DragonflyBSD then we should do that one first as in a week or so I’ll probably have learned what I want to learn and moved on (but not become enough of an expert to run a training session).

One of the benefits of this idea is to help in motivation. If you are on your own playing with something new like a different Unix OS in a VM you will be tempted to take a break and watch YouTube or something when you get stuck. If there are a dozen other people also working on it then you will have help in solving problems and an incentive to keep at it while help is available.

So the issues to be discussed are:

  1. What communication method to use? IRC? What server?
  2. What time/date for the first session?
  3. What topic for the first session? DragonflyBSD?
  4. How do we announce recurring meetings? A mailing list?
  5. What else should we set up to facilitate training? A wiki for notes?

Finally, while I list things I’m interested in learning and teaching, this isn’t just about me. If this becomes successful then I expect that there will be some topics that don’t interest me and some sessions at times when I have other things to do (like work). I’m sure people can have fun without me. If anyone has already established something like this then I’d be happy to join that instead of starting my own; my aim is not to run another hobbyist/professional group but to learn things and teach things.

There is a Wikipedia page about Cooperative Learning. While that’s interesting, I don’t think it has much relevance to what I’m trying to do. The Wikipedia article has some good information on the benefits of cooperative education and situations where it doesn’t work well. My idea is to have a self-selecting group of people who choose it because of their own personal goals in terms of fun and learning. So it doesn’t have to work for everyone, just for enough people to have a good group.

CryptogramRidiculously Insecure Smart Lock

Tapplock sells an "unbreakable" Internet-connected lock that you can open with your fingerprint. It turns out that:

  1. The lock broadcasts its Bluetooth MAC address in the clear, and you can calculate the unlock key from it.

  2. Any Tapplock account can unlock every lock.

  3. You can open the lock with a screwdriver.

Regarding the third flaw, the manufacturer has responded that "...the lock is invincible to the people who do not have a screwdriver."

You can't make this stuff up.

EDITED TO ADD: The quote at the end is from a different smart lock manufacturer. Apologies for that.

Worse Than FailureCodeSOD: The Sanity Check

I've been automating deployments at work, and for Reasons™, this is happening entirely in BASH. Those Reasons™ are that the client wants to use Salt, but doesn't want to give us access to their Salt environment. Some of our deployment targets are microcontrollers, so Salt isn't even an option.

While I know the shell well enough, I'm getting comfortable with more complicated scripts than I usually write, along with tools like xargs which may be the second best shell command ever invented. yes is the best, obviously.

The key point is that the shell, coupled with the so-called "Unix Philosophy" is an incredibly powerful tool. Even if you already know that it's powerful, it's even more powerful than you think it is.

How powerful? Well, how about ripping apart the fundamental rules of mathematics? An anonymous submitter found this prelude at the start of every shell script in their organization.

#/usr/bin/env bash

declare -r ZERO=$(true; echo ${?})
declare -r DIGITZERO=0

function sanity_check() {
    function err_msg() {
        echo -e "\033[31m[ERR]:\033[0m ${@}"
    }

    if [ ${ZERO} -ne ${DIGITZERO} ]; then
        err_msg "The laws of physics doesn't apply to this server."
        err_msg "Real value ${ZERO} is not equal to ${DIGITZERO}."
        exit 1
    fi
}

sanity_check

true, like yes, is one of those absurdly simple tools: it's a program that completes successfully (returning a 0 exit status back to the shell). The ${?} expression contains the last exit status. Thus, the variable $ZERO will contain… 0. Which should then be equal to 0.
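The behavior is easy to confirm in a terminal; this condenses the script's round-trip into a few lines:

```shell
# ${?} (or just $?) expands to the exit status of the last command.
true
echo $?          # prints 0, because true always succeeds

# The script's round-trip: capture that status in a variable...
ZERO=$(true; echo $?)
# ...then compare it to a literal 0. The two can only ever be equal.
[ "$ZERO" -eq 0 ] && echo "sanity intact"
```

Short of sabotaging $PATH so that true is no longer true, there is no way for this comparison to fail.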

Now, maybe BASH isn't BASH anymore. Maybe true has been patched to fail. Maybe, maybe, maybe, but honestly, I'm wondering whose sanity is actually being checked in the sanity_check?

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet Linux AustraliaJames Morris: Linux Security BoF at Open Source Summit Japan

This is a reminder for folks attending OSS Japan this week that I’ll be leading a Linux Security BoF session on Wednesday at 6pm.

If you’ve been working on a Linux security project, feel welcome to discuss it with the group. We will have a whiteboard and projector. This is also a good opportunity to raise topics for discussion, and to ask questions about Linux security.

See you then!

Valerie AuroraIn praise of the 30-hour work week

I’ve been working about 30 hours a week for the last two and a half years. I’m happier, healthier, and wealthier than when I was working 40, 50, or 60 hours a week as a full-time salaried software engineer (that means I was only paid for 40 hours a week). If you are a salaried professional in the U.S. who works 40 hours a week or more, there’s a pretty good chance you could also be working fewer hours, possibly even for more money. In this post, I’ll explain some of the myths and the realities that promote overwork. If you’re already convinced that you’d like to work fewer hours, you can skip straight to how you can start taking steps to work less.

A little about me: After college, I worked for about 8 years as a full-time salaried software engineer. Like many software engineers, I often worked 50 or 60 hour weeks while being paid for 40 hours a week. I hit the glass ceiling at age 29 and started working part-time hourly as a software consultant. I loved the hours but hated the instability and was about to lose my health insurance benefits (this was before the ACA passed). Then a colleague offered me a job at his storage startup, working 20 hours a week, salaried, with benefits. I thought, “You can do that???” and negotiated a 30 hour salaried job with benefits with my dream employer. I worked full-time again for about 5 years after that, and put in more 60 hour weeks while co-founding a non-profit. After shutting the non-profit down, I took 3 months off to recover. For the last two and a half years, I’ve worked for myself as a diversity and inclusion in tech consultant. I rarely work more than 30 hours a week and last year I made more money than any other year of my life.

Now, if I told my 25-year-old self this, she’d probably refuse to believe me. When I was 25, I believed my extra hours and hard work would be rewarded, that I’d be able to work 50 or 60 hours a week forever, and that I’d never enjoy anything as much as working. Needless to say, I no longer believe any of those things.

Myths about working overtime

Here are a few of the myths I used to believe about working overtime:

Myth: I can be productive for more than 8 hours a day on a sustained basis

How many hours a day can I productively write code? This will vary for everyone, but the number I hear most often is 4 hours a day 5 days a week, which is my max. I slowly learned that if I wrote code longer than that, my productivity steeply declined. After 8 hours, I was just adding bugs that I’d have to fix the next day. For the other 4 hours, I was better off dealing with email, writing papers, submitting expenses, reading books, or taking a walk (during which I’d usually figure out what I needed to do next in my program). After 8 hours, my brain is useless for anything requiring focus or discipline. I can do more work for short bursts occasionally when I’m motivated, but it takes a toll on my health and I need extra time off to recover.

I know other people can do focused productive work for more than 8 hours a day; congrats! However, keep in mind that I know plenty of people who thought they could work more than 8 hours a day, and then discovered they’d given themselves major stress-related health problems—repetitive stress injury, ulcers, heart trouble—or ignored existing health problems until they got so bad they started interfering with their work. This includes several extremely successful people who only need to sleep 5 hours a night and were using the extra time that gave them to do more work. The human body can only take so much stress.

Myth: My employer will reward me for working extra hours

Turns out, software engineering isn’t graded on effort, like kindergarten class. I remember the first year of my career when I worked my usual overtime and did not get a promotion or a raise; the company was slowly going out of business and it didn’t matter how many hours I worked—I wasn’t getting a raise. Given that my code quality fell off after 4 hours and went negative after 8 hours, it was a waste of time to work overtime anyway. At the same time, I always felt a lot of pressure to appear to be working for more than 40 hours a week, such that 40 hours became the unofficial minimum. The end result was a lot of programmers in the office late at night doing things other than coding: playing games, reading the internet, talking with each other. Which is great when you have no friends outside work, no family nearby, and no hobbies; less great when you do.

Overall, my general impression of the reward structure for software engineers is that people who fit people’s preconceptions of what a programmer looks like and who aggressively self-promote are more likely to get raises and promotions than people who produce more value. (Note that aggressive self-promotion is often punished in women of all races, people of color, disabled folks, immigrants, etc.)

Myth: People who work 40 hours or less are lazy

I was raised with fairly typical American middle-class beliefs about work: work is virtuous, if people don’t have jobs it’s because of some personal failing of theirs, etc. I started to change my mind when I read about Venezuelan medical doctors who were unable to buy shoes during an economic recession. Medical school is hard; I couldn’t believe all of those doctors were lazy! In my first full-time job, I had a co-worker who spent 40 hours a week in the office, but never did any real work. Then I realized that many of the hardest working people I knew were mothers who worked in the home for no pay at all. Nowadays I understand that I can’t judge someone’s moral character by the number of hours of labor they do (or are paid for) each week.

The kind of laziness that does concern me comes from abuse: people using coercion to extract an unfair amount of value from other people’s labor. This includes many abusive spouses, most billionaires, and many politicians. I’m not worried about people who want to work 40 hours a week or fewer so they can spend more time with their kids or crocheting or traveling; they aren’t the problem.

Myth: I work more than 40 hours because I’d be unhappy otherwise

When I was 25, I couldn’t imagine wanting to do other things with the time I was spending on work. With hindsight, I can see that’s because I was socially isolated and didn’t know how to deal with my anxiety other than by working. If I tried to stop working, I would very quickly run out of things to do that I enjoyed, and would end up writing some more code or answering some more work email just to have some positive feelings. It took years and years of therapy, building up my social circle, and developing hobbies before I had enough enjoyable things to do other than work.

Working for pay gives a lot of people joy and that is perfectly fine! It’s when you have few other ways to feel happy that overwork begins to be a problem.

Myth: The way to fix my anxiety is to work more hours

The worse the social safety net is in your country, the more anxious you probably are about your future: Will you have a place to live? Food to eat? Medical care? Clothes for your kids? We often respond to anxiety by shutting down any higher thought and focusing on what is in front of us. For many of us in this situation, the obvious answer seems to be “work more hours.” Now, if you are being paid for working more hours, this makes some sense: money contributes to security. But if you’re not, those extra hours bring no concrete reward. You are just hoping that your employer will take the extra work into consideration when deciding whether to give you a raise or end your employment. Unfortunately, in my experience, the best way to get a raise or keep your job is to be as similar to your management as possible.

If you can take the time to work with your anxiety and pull back and look at the larger picture, you’ll often find better ways to use those extra hours to improve your personal safety net. Just a few off the top of my head: building your professional network, improving your resume, learning new skills, helping friends, caring for your family, meditating, taking care of your health, and talking to a therapist about your anxiety. The future is uncertain and only partially under your control; nothing can change that fundamental truth. Consider carefully whether working unpaid hours is the best way to increase your safety.

Myth: The extra hours are helping me learn skills that will pay off later

Maybe it’s just me, but I can only learn new stuff for a few hours a day. Judging by the recommended course loads at universities, most people can’t actively learn new stuff more than 40 hours a week. If I’ve been working for more than 8 hours, all I can do is repeat things I’ve already learned (like stepping through a program in a debugger). Creative thought and breakthroughs are pretty thin on the ground after 8 hours of hard work. The only skills I’m sure I learned from working more than 40 hours a week are: how to keep going through hunger, how to ignore pain in my body, how to keep going through boredom, how to stay awake, and how to sublimate my healthy normal human desires. Oh, and which office snack foods are least nauseating at 2am.

Myth: Companies won’t hire salaried professionals part-time

Some won’t, some will. Very few companies will spontaneously offer part-time salaried work for a position that usually requires full-time, but if you have negotiating power and you’re persistent, you will be surprised how often you can get part-time work. Negotiating power usually increases as you become a more desirable employee; if you can’t swing part-time now, keep working on your career and you may be able to get it in the future.

Myth: I can only get benefits if I work full-time

Whether a company can offer the benefits available to full-time employees to part-time employees is up to their internal policies combined with local law. Human beings create policies and laws and they can be changed. Small companies are generally more flexible about policies than large companies. Some companies offer part-time positions as a competitive advantage in hiring. Again, having more negotiating power will help here. Companies are more likely to change their policies or make exceptions if they really really want your services.

Myth: My career will inevitably suffer if I work part-time

There are absolutely some career goals that can only be achieved by working full-time. But working part-time can also help your career. You can use your extra time to learn new skills, or improve your education. You can work on unpaid projects that improve your portfolio. You can extend your professional network. You can get career coaching. You can start your own business. You can write books. You can speak at conferences. Many things are possible.

Real barriers to working fewer hours

Under capitalism, in the absence of enforced laws against working more than a certain number of hours a week, the number of hours a week employees work will grow until the employer is no longer getting a marginal benefit out of each additional hour. That means if the employer will get any additional value out of an hour above and beyond the costs of working that hour, they’ll require the employee to work that hour. This happens without regard for the cost to the employee or their dependents in terms of health, happiness, or quality of life.

In the U.S. and many other countries, we often act like the 40-hour working week is some kind of natural law, when the laws surrounding it were actually the result of a long, desperately fought battle between labor and capital extending over many decades. Even so, what laws we do have limiting the amount of labor an employer can demand from an employee have many loopholes, and often go unenforced. Wage theft—employers stealing wages from employees through a variety of means, including unpaid overtime—accounts for more money stolen in the U.S. than all robberies.

Due to loopholes and lax enforcement, many salaried professionals end up in a situation where all the people they are competing with for jobs or promotions are working far more than 40 hours a week. Those people don’t have to be working efficiently for more than 40 hours a week for this to be of benefit to their employers, they just have to be creating more value than they are costing during those hours of work. Some notorious areas of high competition and high hours include professors on the tenure track, lawyers on the partner track, and software engineers working in competitive fields.

In particular, software engineers working for venture capital-funded startups in fields with lots of competitors are under a lot of pressure to produce more work more quickly, since timing is such an important element of success in the fields that venture capital invests in. The result is a lot of software engineers who burn themselves out working too many hours for startups for less total compensation than they’d make working at Microsoft or IBM, despite whatever stock options they were offered to make up for lower salaries and benefits. This is because (a) most startups fail, (b) most software engineers either don’t vest their stock options before they quit, or quit before the company goes public and can’t afford to buy the options during the short (usually 90-day) exercise window after they quit.

No individual actions or decisions by a single worker can change these kinds of competitive pressures, and if your goal is to succeed in one of these highly competitive, poorly governed areas, you’ll probably have to work more than 40 hours a week. Overall, unchecked capitalism leads to a Red Queen’s race, in which individual workers have to work as hard as they can just to keep up with their competition (and those who can’t, die). I don’t want to live in this world, which is why I support laws limiting working hours and requiring pay, government-paid parental and family leave, a universal basic income, and the unions and political parties that fight for and win these protections.

Tips for working fewer hours

These tips for working fewer hours are aimed primarily at software engineers in the U.S. who have some job mobility, and more generally for salaried professionals in the U.S. Some of these tips may be useful for other folks as well.

See a career counselor or career coach. Most of us are woefully unprepared to guide and shape our career paths. A career counselor can help you figure out what you value, what your goals should be, and how to achieve them, while taking into account your whole self (including family, friends, and hobbies). A career counselor will help you with the mechanics of actually working fewer hours: negotiating down your current job, finding a new job, starting your own business, etc. To find a career counselor, ask your friends for recommendations or search online review sites.

Go to therapy. If you’re voluntarily overworking, you’ve internalized a lot of ideas about what a good person is or how to be happy that are actually about how to make employers wealthier. Even if you are your own employer, you’ll still need to work these out. You’re also likely to be dealing with anxiety or unresolved problems in your life by escaping to work. You’ll need to learn new values, new ideas, and new coping mechanisms before you can work fewer hours. I’ve written about how to find therapy here. You might also want to read up on workaholics. The short version is: there is some reason you are currently overworking, and you’ll need to address that before you can stop overworking.

Find other things to do with your time. Spend more time with your kids, develop new hobbies or pick up old ones, learn a sport, watch movies, volunteer, write a novel – the options are endless. Learn to identify the voice in your head that says you shouldn’t be wasting your time on that and tell it to mind its own business.

Search for more efficient ways to make money. In general, hourly wage labor is going to have a very hard limit on how much money you can make per hour, even in highly paid positions. Work with your career counselor to figure out how to make more money per hour of labor. Often this looks like teaching, reviewing, or selling a product or service with low marginal cost.

Talk to a financial advisor. Reducing hours often means at least some period of lower income, even if your income ends up higher after that. If like many people you are living paycheck-to-paycheck, you’ll need help. A professional financial advisor can help you figure out how to get through this period and make better financial decisions in general. [Added 19-June-2018]

Finally, we can help normalize working fewer hours a week just by talking about it and, if it is safe for us, actually asking for fewer hours of work. We can also support unions, elect politicians who promise to pass legislation protecting workers, promote universal basic income, support improvements in the social safety net, and raise awareness of what working conditions are like without these protections.


Planet Linux AustraliaMichael Still: Rejected talk proposal: Design at scale: OpenStack versus Kubernetes


This proposal was submitted for pyconau 2018. It wasn’t accepted, but given I’d put the effort into writing up the proposal I’ll post it here in case it’s useful some other time. The oblique references to OpenStack are because pycon had an “anonymous” review system in 2018, and I was avoiding saying things which directly identified me as the author.

OpenStack and Kubernetes solve very similar problems. Yet they approach those problems in very different ways. What can we learn from the different approaches taken? The differences aren’t just technical though, there are some interesting social differences too.

OpenStack and Kubernetes solve very similar problems – at their most basic level they both want to place workloads on large clusters of machines, and ensure that those placement decisions are as close to optimal as possible. The two projects even have similar approaches to the fundamentals – they are both orchestration systems at their core, seeking to help existing technologies run at scale instead of inventing their own hypervisors or container run times.

Yet they have very different approaches to how to perform these tasks. OpenStack takes a heavily centralised and monolithic approach to orchestration, whilst Kubernetes has a less stateful and more laissez faire approach. Some of that is about early technical choices and the heritage of the projects, but some of it is also about hubris and a desire to tightly control. To be honest I lived the OpenStack experience so I feel I should be solidly in that camp, but the Kubernetes approach is clever and elegant. There’s a lot to like on the Kubernetes side of the fence.

It’s increasingly common that at some point you’ll encounter one of these systems, as neither seems likely to go away in the next few years. Understanding some of the basics of their operation is therefore useful, as well as being interesting at a purely hypothetical level.


The post Rejected talk proposal: Design at scale: OpenStack versus Kubernetes appeared first on Made by Mikal.

Planet Linux AustraliaMichael Still: Accepted talk proposal: Learning from the mistakes that even big projects make


This proposal was submitted for pyconau 2018. It was accepted, but hasn’t been presented yet. The oblique references to OpenStack are because pycon had an “anonymous” review system in 2018, and I was avoiding saying things which directly identified me as the author.

Since 2011, I’ve worked on a large Open Source project in python. It kind of got out of hand – 1000s of developers and millions of lines of code. Yet despite being well resourced, we made the same mistakes that those tiny scripts you whip up to solve a small problem make. Come learn from our fail.

This talk will use the privilege separation daemon that the project wrote to tell the story of decisions that were expedient at the time, and how we regretted them later. In a universe in which you can only run commands as root via sudo, dd’ing from one file on the filesystem to another seems almost reasonable. Especially if you ignore that the filenames are defined by the user. Heck, we shell out to “mv” to move files around, even when we don’t need escalated permissions to move the file in question.

While we’ll focus mainly on the security apparatus because it is the gift that keeps on giving, we’ll bump into other examples along the way as well. For example how we had pluggable drivers, but you have to turn them on by passing in python module paths. So what happens when we change the interface the driver is required to implement and you have a third party driver? The answer isn’t good. Or how we refused to use existing Open Source code from other projects through a mixture of hubris and licensing religion.

On a strictly technical front, this is a talk about how to do user space privilege separation sensibly. Although we should probably discuss why we also chose in the last six months to not do it as safely as we could.

For a softer technical take, the talk will cover how doing things right was less well documented than doing things the wrong way. Code reviewers didn’t know the anti-patterns, which were common in the code base, so made weird assumptions about what was ok or not.

On a human front, this is about herding cats: developers with external pressures from their various employers, steps skipped because it was expedient, and automation thrown in front of developers because having a conversation as adults is hard. Ultimately we ended up close to stalled before we were “saved” from an unexpected direction.

In the end I think we’re in a reasonable place now, so I certainly don’t intend to give a lecture about doom and gloom. Think of us more as a light hearted object lesson.


The post Accepted talk proposal: Learning from the mistakes that even big projects make appeared first on Made by Mikal.

Don MartiHelping people move ad budgets away from evil stuff

Hugo-award-winning author Charles Stross said that a corporation is some kind of sociopathic hive organism, but as far as I can tell a corporation is really more like a monkey troop cosplaying a sociopathic hive organism.

This is important to remember because, among other reasons, it turns out that the money that a corporation spends to support democracy and creative work comes from the same advertising budget as the money it spends on random white power trolls and actual no-shit Nazis. The challenge for customers is to help people at corporations who want to do the right thing with the advertising budget, but need to be able to justify it in terms that won't break character (since they have agreed to pretend to be part of a sociopathic hive organism that only cares about its stock price).

So here is a quick follow-up to my earlier post about denying permission for some kinds of ad targeting.

Techcrunch reports that "Facebook Custom Audiences," the system where advertisers upload contact lists to Facebook in order to target the people on those lists with ads, will soon require permission from the people on the list. Check it out: Introducing New Requirements for Custom Audience Targeting | Facebook Business. On July 2, Facebook's own rules will extend a subset of Europe-like protection to everyone with a Facebook account. Beaujolais!

So this is a great opportunity to help people who work for corporations and want to do the right thing. Denying permission to share your info with Facebook can move the advertising money that they spend to reach you away from evil stuff and towards sites that make something good. Here's a permission withdrawal letter to cut and paste. Pull requests welcome.


Rondam RamblingsSuffer the little children

Nothing illustrates the complete moral and intellectual bankruptcy of Donald Trump's supporters, apologists, and enablers better than Jeff Sessions's Biblical justification for separating children from their families: “I would cite you to the Apostle Paul and his clear and wise command in Romans 13, to obey the laws of the government because God has ordained the government for his purposes,”

Planet Linux AustraliaDonna Benjamin: The Five Whys

The Five Whys - Need to go to the hardware store?

Imagine you work in a hardware store. You notice a customer puzzling over the vast array of electric drills.

She turns to you and says, “I need a drill, but I don’t know which one to pick.”

You ask, “So, why do you want a drill?”

“To make a hole.” she replies, somewhat exasperated. “Isn’t that obvious?”

“Sure,” you might say, “but why do you want to drill a hole? It might help us decide which drill you need!”

“Oh, okay,” and she goes on to describe the need to thread cable from one room to another.

From there, we might want to know more about the walls, about the type and thickness of the cable, and perhaps about what the cable is for. But what if we keep asking why? What if the next question was something like this?

“Why do you want to pull the cable from one room to the other?”

Our customer then explains she wants to connect directly to the internet router in the other room. “Our wifi reception is terrible! This seemed the fastest, easiest way to fix that.”

At this point, there may be other solutions to the bad wifi problem that don’t require a hole at all, let alone a drill.

Someone who needs a drill rarely wants a drill, and they don’t really want a hole either.

It’s the utility of that hole that we’re trying to uncover with the Five Whys.
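The technique is mechanical enough to sketch in a few lines of code. Here is a toy Python model of the conversation above; the function name, the `answers` mapping, and the five-level cutoff are all invented for illustration, not part of any real tool.

```python
# Toy model of the Five Whys: follow a chain of reasons from a
# stated problem toward a root cause.
def five_whys(problem, answers, depth=5):
    """`answers` maps each statement to the reason behind it; we stop
    after `depth` whys, or as soon as no deeper reason is known."""
    chain = [problem]
    for _ in range(depth):
        reason = answers.get(chain[-1])
        if reason is None:
            break
        chain.append(reason)
    return chain

# The hardware-store conversation, encoded as statement -> reason:
answers = {
    "I need a drill": "to make a hole",
    "to make a hole": "to thread cable between rooms",
    "to thread cable between rooms": "to reach the internet router",
    "to reach the internet router": "the wifi reception is terrible",
}

for step, why in enumerate(five_whys("I need a drill", answers)):
    print(f"{'why? ' * step}{why}")
```

The last answer, not the first, is where the real requirement lives: fixing the wifi may not need a hole at all.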


I can't remember who first told me about this technique. I wish I could; it's been profoundly useful, and I evangelise its simple power at every opportunity. Thank you, whoever you are. I honour your generous wisdom by paying it forward today.

More about the Five whys

Image credits

Creative Commons Icons all from the Noun Project

  • Drill by Andrejs Kirma
  • Mouse Hole by Sergey Demushkin
  • Cable by Amy Schwartz
  • Internet by Vectors Market
  • Wifi by Baboon designs
  • Not allowed by Adnen Kadri


Planet Linux AustraliaLev Lafayette: Being An Acrobat: Linux and PDFs

The PDF file format can be manipulated efficiently on Linux and other free software platforms in ways that may not be easy in proprietary operating systems or applications. This talk includes a review of various PDF readers for Linux; creation of PDFs from office documents using LibreOffice; editing PDF documents; converting PDF documents to images; extracting text from non-OCR PDF documents; converting to PostScript; converting reStructuredText, Markdown, and other formats to PDF; searching PDFs with regular expressions; converting to text; extracting images; separating and combining PDF documents; creating PDF presentations from text; creating fillable PDF forms; encrypting and decrypting PDF documents; and parsing PDF documents.
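As a taste of the parsing and text-extraction items: text on a PDF page is carried by operators like `(string) Tj` inside a content stream. Real-world PDFs almost always compress those streams, which is why you would normally reach for a tool such as pdftotext from poppler-utils; the hand-rolled, uncompressed toy document below just makes the structure visible. This is an illustrative sketch, not a general extractor.

```python
import re

# A minimal, uncompressed one-page PDF, hand-rolled for illustration.
pdf = b"""%PDF-1.4
1 0 obj << /Type /Catalog /Pages 2 0 R >> endobj
2 0 obj << /Type /Pages /Kids [3 0 R] /Count 1 >> endobj
3 0 obj << /Type /Page /Parent 2 0 R /Contents 4 0 R >> endobj
4 0 obj << /Length 55 >>
stream
BT /F1 12 Tf 72 720 Td (Hello from a PDF stream) Tj ET
endstream
endobj
trailer << /Root 1 0 R >>
%%EOF
"""

def extract_text(data: bytes) -> str:
    # Text-showing operators look like "(string) Tj" in a content stream;
    # this naive regex ignores escapes, encodings and compression.
    return " ".join(m.decode("latin-1")
                    for m in re.findall(rb"\((.*?)\)\s*Tj", data))

print(extract_text(pdf))  # -> Hello from a PDF stream
```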

A presentation to Linux Users of Victoria, Saturday June 16, 2018

CryptogramFriday Squid Blogging: Cephalopod Week on Science Friday

It's Cephalopod Week! "Three hearts, eight arms, can't lose."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: Just Handle It

Clint writes, "On Facebook, I tried to report a post as spam. I think I might just have to accept it."


"Jira seems to have strange ideas about my keyboard layout... Or is there a key that I don't know about?" writes Rob H.


George wrote, "There was deep wisdom bestowed upon weary travelers by the New York subway system at the Jamaica Center station this morning."


"Every single number field on the checkout page, including phone and credit card, was an integer. Just in case, you know, you felt like clicking a lot," Jeremiah C. writes.


"I don't know which is more ridiculous: that a Linux recovery image is a Windows 10, or that there's a difference between Pro and Professional," wrote Dima R.


"I got my weekly workout summary and, well, it looks I might have been hitting the gym a little too hard," Colin writes.


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaOpenSTEM: Assessment Time

For many of us, the colder weather has started to arrive and mid-year assessment is in full swing. Teachers are under the pump to produce mid-year reports and grades. The OpenSTEM® Understanding Our World® program aims to take the pressure off teachers by providing for continuous assessment throughout the term. Not only are teachers continually […]

Planet Linux AustraliaDonna Benjamin: Makarrata

The time has come
To say fair's fair...

Dear members of the committee,

Please listen to the Uluru statement from the heart. Please hear those words. Please accept them, please act to adopt them.

Enshrine a voice for Australia’s First Nations peoples in the Australian constitution.

Create a commission for Makarrata.

Invest in uncovering and telling the truth of our history.

We will be a stronger, wiser nation when we truly acknowledge the frontier wars and not only a stolen generation but stolen land, and stolen hope.

We have nothing to lose, and everything to gain through real heartfelt recognition and reconciliation.

Makarrata. Treaty. Sovereignty.

Please. I am Australian. I want this.

I felt sick shame when the prime minister rejected the Uluru statement. He did not, does not, speak for me.

Donna Benjamin
Melbourne, VIC.

Planet Linux AustraliaDonna Benjamin: Leadership, and teamwork.

Photo by Mohamed Abd El Ghany - Women protestors in Tahrir Square, Egypt 2013.

I'm angry and defensive. I don't know why. So I'm trying hard to figure that out right now.

Here's some words.

I'm writing these words for myself to try and figure this out.
I'm hoping these words might help make it clear.
I'm fearful these words will make it worse.

But I don't want to be silent about this.

Content Warning: This post refers to genocide.

This is about a discussion at the teamwork and leadership workshop at DrupalCon. For perhaps 5 mins within a 90 minute session we talked about Hitler. It was an intensely thought provoking, and uncomfortable 5 minute conversation. It was nuanced. It wasn't really tweetable.

On Holocaust memorial day, it seems timely to explore whether or not we should talk about Hitler when exploring the nature of leadership. Not all leaders are good. Call them dictators, call them tyrants, call them fascists, call them evil. Leadership is defined differently by different cultures, at different times, and in different contexts.

Some people in the room were upset and disgusted that we had that conversation. I'm really very deeply sorry about that.

Some of them then talked about it with others afterwards, which is great. It was a confronting conversation, and one, frankly, we should all be having as genocide and fascism exist in very real ways in the very real world.

But some of those they spoke with, who weren't there, seem to have extrapolated from that conversation that it was something different to what I experienced in the room. I feel they formed opinions that I can only call, well, what words can I call those opinions? Uninformed? Misinformed? Out of context? Wrong? That's probably unfair, it's just my perspective. But from those opinions, they also made assumptions, and turned those assumptions into accusations.

One person said they were glad they weren't there, but clearly happy to criticise us from afar on twitter. I responded that I thought it was a shame they didn't come to the workshop, but did choose to publicly criticise our work. Others responded to that saying this was disgusting, offensive, unacceptable and inappropriate that we would even consider having this conversation. One accused me of trying to shut down the conversation.

So, I think perhaps the reason I'm feeling angry and defensive, is I'm being accused of something I don't think I did.

And I want to defend myself.

I've studied World War Two and the Genocide that took place under Hitler's direction.

My grandmother was arrested in the early 1930's and held in a concentration camp. She was, thankfully, released and fled Germany to Australia as a refugee before the war was declared. Her mother was murdered by Hitler. My grandfather's parents and sister were also murdered by Hitler.

So, I guess I feel like I've got a pretty strong understanding of who Hitler was, and what he did.

So when I have people telling me, that it's completely disgusting to even consider discussing Hitler in the context of examining what leadership is, and what it means? Fuck that. I will not desist. Hitler was a monster, and we must never forget what he was, or what he did.

During silent reflection on a number of images, I wrote this note.

"Hitler was a powerful leader. No question. So powerful, he destroyed the world."

When asked if they thought Hitler was a leader or not, most people in the room, including me, put up their hand. We were wrong.

The four people who put their hand up to say he was NOT a leader were right.

We had not collectively defined leadership at that point. We were in the middle of a process doing exactly that.

The definition we were eventually offered is that leaders must care for their followers, and must care for people generally.

At no point, did anyone in that room, consider the possibility that Hitler was a "Good Leader" which is the misinformed accusation I most categorically reject.

Our facilitator, Adam Goodman, told us we were all wrong, except the four who rejected Hitler as an example of a Leader, by saying, that no, he was not a leader, but yes, he was a dictator, yes he was a tyrant. But he was not a leader.

Whilst I agree, and was relieved by that reframing, I would also counter argue that it is English semantics.

Someone else also reminded us, that Hitler was elected. I too, was elected to the board of the Drupal Association, I was then appointed to one of the class Director seats. My final term ends later this year, and frankly, right now, I'm kind of wondering if I should leave right now.

Other people shown in the slide deck were Oprah Winfrey, Angela Merkel, Rosa Parks, Serena Williams, Marin Alsop, Sonia Sotomayor, a woman in military uniform, and a large group of women protesting in Tahrir Square in Egypt.

It also included Gandhi, and Mandela.

I observed that I felt sad I could think of no woman that I would list in the same breath as those two men.

So... for those of you who judged us, and this workshop, from what you saw on twitter, before having all the facts?
Let me tell you what I think this was about.

This wasn't about Hitler.

This was about leadership, and learning how we can be better leaders. I felt we were also exploring how we might better support the leaders we have, and nurture the ones to come. And I now also wonder how we might respectfully acknowledge the work and effort of those who've come and gone, and learn to better pass on what's important to those doing the work now.

We need teamwork. We need leadership. It takes collective effort, and most of all, it takes collective empathy and compassion.

Dries Buytaert was the final image in the deck.

Dries shared these 5 values and their underlying principles with us to further explore, discuss and develop together.

Prioritize impact
Impact gives us purpose. We build software that is easy, accessible and safe for everyone to use.

Better together
We foster a learning environment, prefer collaborative decision-making, encourage others to get involved and to help lead our community.

Strive for excellence
We constantly re-evaluate and assume that change is constant.

Treat each other with dignity and respect
We do not tolerate intolerance toward others. We seek first to understand, then to be understood. We give each other constructive criticism, and are relentlessly optimistic.

Enjoy what you do
Be sure to have fun.

I'm sorry to say this, but I'm really not having fun right now. But I am much clearer about why I'm feeling angry.

Photo Credit "Protesters against Egyptian President Mohamed Morsi celebrate in Tahrir Square in Cairo on July 3, 2013. Egypt's armed forces overthrew elected Islamist President Morsi on Wednesday and announced a political transition with the support of a wide range of political and religious leaders." Mohamed Abd El Ghany Reuters.

Planet Linux AustraliaDonna Benjamin: DrupalCon Nashville

I'm going to Nashville!!

That is all. Carry on. Or... better yet - you should come too!


CryptogramE-Mail Vulnerabilities and Disclosure

Last week, researchers disclosed vulnerabilities in a large number of encrypted e-mail clients: specifically, those that use OpenPGP and S/MIME, including Thunderbird and AppleMail. These are serious vulnerabilities: An attacker who can alter mail sent to a vulnerable client can trick that client into sending a copy of the plaintext to a web server controlled by that attacker. The story of these vulnerabilities and the tale of how they were disclosed illustrate some important lessons about security vulnerabilities in general and e-mail security in particular.

But first, if you use PGP or S/MIME to encrypt e-mail, you need to check the list on this page and see if you are vulnerable. If you are, check with the vendor to see if they've fixed the vulnerability. (Note that some early patches turned out not to fix the vulnerability.) If not, stop using the encrypted e-mail program entirely until it's fixed. Or, if you know how to do it, turn off your e-mail client's ability to process HTML e-mail or -- even better -- stop decrypting e-mails from within the client. There's even more complex advice for more sophisticated users, but if you're one of those, you don't need me to explain this to you.

Consider your encrypted e-mail insecure until this is fixed.

All software contains security vulnerabilities, and one of the primary ways we all improve our security is by researchers discovering those vulnerabilities and vendors patching them. It's a weird system: Corporate researchers are motivated by publicity, academic researchers by publication credentials, and just about everyone by individual fame and the small bug-bounties paid by some vendors.

Software vendors, on the other hand, are motivated to fix vulnerabilities by the threat of public disclosure. Without the threat of eventual publication, vendors are likely to ignore researchers and delay patching. This happened a lot in the 1990s, and even today, vendors often use legal tactics to try to block publication. It makes sense; they look bad when their products are pronounced insecure.

Over the past few years, researchers have started to choreograph vulnerability announcements to make a big press splash. Clever names -- the e-mail vulnerability is called "Efail" -- websites, and cute logos are now common. Key reporters are given advance information about the vulnerabilities. Sometimes advance teasers are released. Vendors are now part of this process, trying to announce their patches at the same time the vulnerabilities are announced.

This simultaneous announcement is best for security. While it's always possible that some organization -- either government or criminal -- has independently discovered and is using the vulnerability before the researchers go public, use of the vulnerability is essentially guaranteed after the announcement. The time period between announcement and patching is the most dangerous, and everyone except would-be attackers wants to minimize it.

Things get much more complicated when multiple vendors are involved. In this case, Efail isn't a vulnerability in a particular product; it's a vulnerability in a standard that is used in dozens of different products. As such, the researchers had to ensure both that everyone knew about the vulnerability in time to fix it and that no one leaked the vulnerability to the public during that time. As you can imagine, that's close to impossible.

Efail was discovered sometime last year, and the researchers alerted dozens of different companies between last October and March. Some companies took the news more seriously than others. Most patched. Amazingly, news about the vulnerability didn't leak until the day before the scheduled announcement date. Two days before the scheduled release, the researchers unveiled a teaser -- honestly, a really bad idea -- which resulted in details leaking.

After the leak, the Electronic Frontier Foundation posted a notice about the vulnerability without details. The organization has been criticized for its announcement, but I am hard-pressed to find fault with its advice. (Note: I am a board member at EFF.) Then, the researchers published -- and lots of press followed.

All of this speaks to the difficulty of coordinating vulnerability disclosure when it involves a large number of companies or -- even more problematic -- communities without clear ownership. And that's what we have with OpenPGP. It's even worse when the bug involves the interaction between different parts of a system. In this case, there's nothing wrong with PGP or S/MIME in and of themselves. Rather, the vulnerability occurs because of the way many e-mail programs handle encrypted e-mail. GnuPG, an implementation of OpenPGP, decided that the bug wasn't its fault and did nothing about it. This is arguably true, but irrelevant. They should fix it.

Expect more of these kinds of problems in the future. The Internet is shifting from a set of systems we deliberately use -- our phones and computers -- to a fully immersive Internet-of-things world that we live in 24/7. And like this e-mail vulnerability, vulnerabilities will emerge through the interactions of different systems. Sometimes it will be obvious who should fix the problem. Sometimes it won't be. Sometimes it'll be two secure systems that, when they interact in a particular way, cause an insecurity. In April, I wrote about a vulnerability that arose because Google and Netflix make different assumptions about e-mail addresses. I don't even know who to blame for that one.

It gets even worse. Our system of disclosure and patching assumes that vendors have the expertise and ability to patch their systems, but that simply isn't true for many of the embedded and low-cost Internet of things software packages. They're designed at a much lower cost, often by offshore teams that come together, create the software, and then disband; as a result, there simply isn't anyone left around to receive vulnerability alerts from researchers and write patches. Even worse, many of these devices aren't patchable at all. Right now, if you own a digital video recorder that's vulnerable to being recruited for a botnet -- remember Mirai from 2016? -- the only way to patch it is to throw it away and buy a new one.

Patching is starting to fail, which means that we're losing the best mechanism we have for improving software security at exactly the same time that software is gaining autonomy and physical agency. Many researchers and organizations, including myself, have proposed government regulations enforcing minimal security standards for Internet-of-things devices, including standards around vulnerability disclosure and patching. This would be expensive, but it's hard to see any other viable alternative.

Getting back to e-mail, the truth is that it's incredibly difficult to secure well. Not because the cryptography is hard, but because we expect e-mail to do so many things. We use it for correspondence, for conversations, for scheduling, and for record-keeping. I regularly search my 20-year e-mail archive. The PGP and S/MIME security protocols are outdated, needlessly complicated and have been difficult to use properly the whole time. If we could start again, we would design something better and more user-friendly, but the huge number of legacy applications that use the existing standards means that we can't. I tell people that if they want to communicate securely with someone, to use one of the secure messaging systems: Signal, Off-the-Record, or -- if having one of those two on your system is itself suspicious -- WhatsApp. Of course they're not perfect, as last week's announcement of a vulnerability (patched within hours) in Signal illustrates. And they're not as flexible as e-mail, but that makes them easier to secure.

This essay previously appeared on

CryptogramRouter Vulnerability and the VPNFilter Botnet

On May 25, the FBI asked us all to reboot our routers. The story behind this request is one of sophisticated malware and unsophisticated home-network security, and it's a harbinger of the sorts of pervasive threats -- from nation-states, criminals and hackers -- that we should expect in coming years.

VPNFilter is a sophisticated piece of malware that infects mostly older home and small-office routers made by Linksys, MikroTik, Netgear, QNAP and TP-Link. (For a list of specific models, click here.) It's an impressive piece of work. It can eavesdrop on traffic passing through the router -- specifically, log-in credentials and SCADA traffic, which is a networking protocol that controls power plants, chemical plants and industrial systems -- attack other targets on the Internet and destructively "kill" its infected device. It is one of a very few pieces of malware that can survive a reboot, even though that's what the FBI has requested. It has a number of other capabilities, and it can be remotely updated to provide still others. More than 500,000 routers in at least 54 countries have been infected since 2016.

Because of the malware's sophistication, VPNFilter is believed to be the work of a government. The FBI suggested the Russian government was involved for two circumstantial reasons. One, a piece of the code is identical to one found in another piece of malware, called BlackEnergy, that was used in the December 2015 attack against Ukraine's power grid. Russia is believed to be behind that attack. And two, the majority of those 500,000 infections are in Ukraine and controlled by a separate command-and-control server. There might also be classified evidence, as an FBI affidavit in this matter identifies the group behind VPNFilter as Sofacy, also known as APT28 and Fancy Bear. That's the group behind a long list of attacks, including the 2016 hack of the Democratic National Committee.

Two companies, Cisco and Symantec, seem to have been working with the FBI during the past two years to track this malware as it infected ever more routers. The infection mechanism isn't known, but we believe it targets known vulnerabilities in these older routers. Pretty much no one patches their routers, so the vulnerabilities have remained, even if they were fixed in new models from the same manufacturers.

On May 30, the FBI seized control of a critical VPNFilter command-and-control server. This is called "sinkholing," and serves to disrupt a critical part of this system. When infected routers contact that server's domain, they will no longer be reaching a server owned by the malware's creators; instead, they'll be contacting a server owned by the FBI. This doesn't entirely neutralize the malware, though. It will stay on the infected routers through reboot, and the underlying vulnerabilities remain, making the routers susceptible to reinfection with a variant controlled by a different server.

If you want to make sure your router is no longer infected, you need to do more than reboot it, the FBI's warning notwithstanding. You need to reset the router to its factory settings. That means you need to reconfigure it for your network, which can be a pain if you're not sophisticated in these matters. If you want to make sure your router cannot be reinfected, you need to update the firmware with any security patches from the manufacturer. This is harder to do and may strain your technical capabilities, though it's ridiculous that routers don't automatically download and install firmware updates on their own. Some of these models probably do not even have security patches available. Honestly, the best thing to do if you have one of the vulnerable models is to throw it away and get a new one. (Your ISP will probably send you a new one free if you claim that it's not working properly. And you should have a new one, because if your current one is on the list, it's at least 10 years old.)

So if it won't clear out the malware, why is the FBI asking us to reboot our routers? It's mostly just to get a sense of how bad the problem is. The FBI now controls the command-and-control domain. When an infected router gets rebooted, it connects to that server to get fully reinfected, and when it does, the FBI will know. Rebooting will give it a better idea of how many devices out there are infected.

Should you do it? It can't hurt.

Internet of Things malware isn't new. The 2016 Mirai botnet, for example, created by a lone hacker and not a government, targeted vulnerabilities in Internet-connected digital video recorders and webcams. Other malware has targeted Internet-connected thermostats. Lots of malware targets home routers. These devices are particularly vulnerable because they are often designed by ad hoc teams without a lot of security expertise, stay around in networks far longer than our computers and phones, and have no easy way to patch them.

It wouldn't be surprising if the Russians targeted routers to build a network of infected computers for follow-on cyber operations. I'm sure many governments are doing the same. As long as we allow these insecure devices on the Internet -- and short of security regulations, there's no way to stop them -- we're going to be vulnerable to this kind of malware.

And next time, the command-and-control server won't be so easy to disrupt.

This essay previously appeared in the Washington Post

EDITED TO ADD: The malware is more capable than we previously thought.

CryptogramThomas Dullien on Complexity and Security

For many years, I have said that complexity is the worst enemy of security. At CyCon earlier this month, Thomas Dullien gave an excellent talk on the subject with far more detail than I've ever provided. Video. Slides.

Worse Than FailureThe New Guy (Part II): Database Boogaloo

When we last left our hero Jesse, he was wading through a quagmire of undocumented bad systems while trying to solve an FTP issue. Several months later, Jesse had things figured out a little better and was starting to feel comfortable in his "System Admin" role. He helped the company join the rest of the world by dumping Windows NT 4.0 and XP. The users whose DNS settings he bungled were now happily utilizing Windows 10 workstations. His web servers were running Windows Server 2016, and the SQL boxes were up to SQL 2016. Plus his nemesis Ralph had since retired. Or died. Nobody knew for sure. But things were good.

Despite all these efforts, there were still several systems that relied on Access 97 haunting him every day. Jesse spent tens of dollars of his own money on well-worn Access 97 programming books to help plug holes in the leaky dike. The A97 Finance system in particular was a complete mess to deal with. There were no clear naming guidelines and table locations were haphazard at best. Stored procedures and functions were scattered between the A97 VBA and the SQL DB. Many views/functions were nested with some going as far as eight layers while others would form temporary tables in A97 then continue to nest.

One of Jesse's small wins involved improving performance of some financial reporting queries that took minutes to run before but now took seconds. A few of these sped-up reports happened to be ones that Shane, the owner of the company, used frequently. The sudden time-savings got his attention to the point of calling Jesse in to his office to meet.

"Jesse! Good to see you!" Shane said in an overly cheerful manner. "I'm glad to talk to the guy who has saved me a few hours a week with his programmering fixes." Jesse downplayed the praise before Shane got to the point. "I'd like to find out from you how we can make further improvements to our Finance program. You seem to have a real knack for this."

Jesse, without thinking about it, blurted, "This here system is a pile of shit." Shane stared at him blankly, so he continued, "It should be rebuilt from the ground up by experienced software development professionals. That's how we make further improvements."

"Great idea! Out with the old, in with the new! You seem pretty well-versed in this stuff, when can you start on it?" Shane said with growing excitement. Jesse soon realized his response had backfired and he was now on the hook to the owner for a complete system rewrite. He took a couple classes on C# and ASP.NET during his time at Totally Legit Technical Institute so it was time to put that valuable knowledge to use.

Shane didn't just let Jesse loose on redoing the Finance program though. He insisted Jesse work closely with Linda, their CFO who used it the most. Linda proved to be very resistant to any kind of change Jesse proposed. She had mastered the painstaking nuances of A97 and didn't seem to mind fixing large amounts of bad data by hand. "It makes me feel in control, you know," Linda told him once after Jesse tried to explain the benefits of the rewrite.

While Jesse pecked away at his prototype, Linda would relentlessly nitpick any UI ideas he came up with. If she had it her way, the new system would only be usable by someone as braindead as her. "I don't need all these fancy menus and buttons! Just make it look and work like it does in the current system," she would say at least once a week. "And don't you dare take my manual controls away! I don't trust your automated robotics to get these numbers right!" In the times it wasn't possible to make something work like Access 97, she would run to Shane, who would have to talk her down off the ledge.

Even though Linda opposed Jesse at every turn, the new system was faster and very expandable. Using C# .NET 4.7.1 with WPF, it was much less of an eyesore. The database was also clearly defined with full documentation, both on the tables and in the stored procedures. The database size managed to go from 8 GB to 0.8 GB with no loss in data.

The time came at last for go-live of Finance 2.0. The thing Jesse was most excited about was shutting down the A97 system and feeling Linda die a little bit inside. He sent out an email to the Finance department with instructions for how to use it. The system was well-received by everyone except Linda. But that still led to more headaches for Jesse.

With Finance 2.0 in their hands, the rest of the users noticed the capabilities modern technology brought. The feature requests began pouring in with no way to funnel them. Linda refused to participate in feature reviews because she still hated the new system, so they all went to Shane, who greenlighted everything. Jesse soon found himself buried in the throes of the monster he created with no end in sight. To this day, he toils at his computer cranking out features while Linda sits and reminisces about the good old days of Access 97.



Krebs on SecurityLibrarian Sues Equifax Over 2017 Data Breach, Wins $600

In the days following revelations last September that big-three consumer credit bureau Equifax had been hacked and relieved of personal data on nearly 150 million people, many Americans no doubt felt resigned and powerless to control their information. But not Jessamyn West. The 49-year-old librarian from a tiny town in Vermont took Equifax to court. And now she’s celebrating a small but symbolic victory after a small claims court awarded her $600 in damages stemming from the 2017 breach.

Vermont librarian Jessamyn West sued Equifax over its 2017 data breach and won $600 in small claims court. Others are following suit.

Just days after Equifax disclosed the breach, West filed a claim with the local Orange County, Vt. courthouse asking a judge to award her almost $5,000. She told the court that her mother had just died in July, and that it added to the work of sorting out her mom’s finances while trying to respond to having the entire family’s credit files potentially exposed to hackers and identity thieves.

The judge ultimately agreed, but awarded West just $690 ($90 to cover court fees and the rest intended to cover the cost of up to two years of payments to online identity theft protection services).

In an interview with KrebsOnSecurity, West said she’s feeling victorious even though the amount awarded is a drop in the bucket for Equifax, which reported more than $3.4 billion in revenue last year.

“The small claims case was a lot more about raising awareness,” said West, a librarian at the Randolph Technical Career Center who specializes in technology training and frequently conducts talks on privacy and security.

“I just wanted to change the conversation I was having with all my neighbors who were like, ‘Ugh, computers are hard, what can you do?’ to ‘Hey, here are some things you can do’,” she said. “A lot of people don’t feel they have agency around privacy and technology in general. This case was about having your own agency when companies don’t behave how they’re supposed to with our private information.”

West said she’s surprised more people aren’t following her example. After all, if just a tiny fraction of the 147 million Americans who had their Social Security number, date of birth, address and other personal data stolen in last year’s breach filed a claim and prevailed as West did, it could easily cost Equifax tens of millions of dollars in damages and legal fees.

“The paperwork to file the claim was a little irritating, but it only cost $90,” she said. “Then again, I could see how many people probably would see this as a lark, where there’s a pretty good chance you’re not going to see that money again, and for a lot of people that probably doesn’t really make things better.”

Equifax is currently the target of several class action lawsuits related to the 2017 breach disclosure, but there have been a few other minor victories in state small claims courts.

In January, data privacy enthusiast Christian Haigh wrote about winning an $8,000 judgment in small claims court against Equifax for its 2017 breach (the amount was reduced to $5,500 after Equifax appealed).

Haigh is co-founder of litigation finance startup Legalist. His company has started funding other people's small claims suits against Equifax, too. (Legalist pays lawyers in plaintiffs' suits on an hourly basis, and takes a contingency fee if the case is successful.)

Days after the Equifax breach news broke, a 20-year-old Stanford University student published a free online bot that helps users sue the company in small claims court.

It’s not clear if the Web site tool is still functioning, but West said it was media coverage of this very same lawsuit bot that prompted her to file.

“I thought if some stupid online bot can do this, I could probably figure it out,” she recalled.

If you’re a DIY type of person, by all means file a claim in your local small claims court. And then write and publish about your experience, just like West did.

West said she plans to donate the money from her small claims win to the Vermont chapter of the American Civil Liberties Union (ACLU), and that she hopes her case inspires others.

“Even if all this does is get people to use better passwords, or go to the library, or to tell a company, ‘No, that’s not good enough, you need to do better,’ that would be a good thing,” West said. “I wanted to show that there are constructive ways to seek redress of grievances about lots of different things, which makes me happy. I was willing to do the work and go to court. I look at this like an opportunity to educate and inform yourself, and realize there is a step you can take beyond just rending of garments and gnashing of teeth.”

Rondam RamblingsTrump makes it look easy

One has to wonder, after Donald Trump's tidy wrapping-up of the North Korea situation (he did everything short of coming right out and saying "peace for our time!"), what all the fuss was ever about.  It took only a few months (or forty minutes, depending on how you count) to go from the brink of nuclear war to BFFs.  Today the U.S. seems to be getting along better with North Korea than with

CryptogramRussian Censorship of Telegram

Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today's Internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the Internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors' demands, how long can Internet freedom last?

The recent battle between the Russian government and the Telegram messaging app illustrates one way this might play out. Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender's phone and decrypted on the receiver's phone; no part of the network can eavesdrop on the messages.

Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn't a fixed website, it doesn't need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The app never sees the change, or the entire list of IP addresses, and the censor has no clear way to block them all.
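Conceptually, this evasion tactic is client-side endpoint rotation: the app carries (or quietly fetches) a large pool of server addresses and tries the next candidate whenever the current one is blocked. A minimal sketch of the idea in Python, using entirely hypothetical documentation-range addresses rather than anything Telegram actually runs:

```python
import itertools
import random

def make_endpoint_rotator(addresses):
    """Return a no-argument function that cycles through a shuffled
    copy of the address pool, one candidate endpoint per call."""
    pool = list(addresses)
    random.shuffle(pool)  # spread clients across the pool
    cycle = itertools.cycle(pool)
    return lambda: next(cycle)

# Hypothetical pool; a real app would hold tens of thousands of addresses
# and refresh the list out of band, invisibly to the user.
next_endpoint = make_endpoint_rotator(
    ["198.51.100.7", "203.0.113.42", "192.0.2.9"]
)
```

A client would call `next_endpoint()` on each connection failure, which is why blocking any fixed set of IP addresses only buys the censor time.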

A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.

More recently, Russia has been pressuring Apple not to offer the Telegram app in its iPhone App Store. As of this writing, Apple has not complied, and the company has allowed Telegram to deliver a critical software update to iPhone users (after what the app's founder called a delay last month). Roskomnadzor could further pressure Apple, though, including by threatening to turn off its entire iPhone app business in Russia.

Telegram might seem a weird app for Russia to focus on. Those of us who work in security don't recommend the program, primarily because of the nature of its cryptographic protocols. In general, proprietary cryptography has numerous fatal security flaws. We generally recommend Signal for secure SMS messaging, or, if having that program on your computer is somehow incriminating, WhatsApp. (More than 1.5 billion people worldwide use WhatsApp.) What Telegram has going for it is that it works really well on lousy networks. That's why it is so popular in places like Iran and Afghanistan. (Iran is also trying to ban the app.)

What the Russian government doesn't like about Telegram is its anonymous broadcast feature -- channel capability and chats -- which makes it an effective platform for political debate and citizen journalism. The Russians might not like that Telegram is encrypted, but odds are good that they can simply break the encryption. Telegram's role in facilitating uncontrolled journalism is the real issue.

Iran's attempts to block Telegram have been more successful than Russia's, not because Iran's censorship technology is more sophisticated but because Telegram is not willing to go as far to defend Iranian users. The reasons are not rooted in business decisions. Simply put, Telegram is a Russian product, and its designers are more motivated to poke Russia in the eye. Pavel Durov, Telegram's founder, has pledged millions of dollars to help fight Russian censorship.

For the moment, Russia has lost. But this battle is far from over. Russia could easily come back with more targeted pressure on Google, Amazon and Apple. A year earlier, Zello had used the same trick Telegram is now using to evade Russian censors. Then, Roskomnadzor threatened to block all of Amazon Web Services and Google Cloud, and in that instance both companies forced Zello to stop its IP-hopping censorship-evasion tactic.

Russia could also further develop its censorship infrastructure. If its capabilities were as finely honed as China's, it would be able to more effectively block Telegram from operating. Right now, Russia can block only specific IP addresses, which is too coarse a tool for this issue. Telegram's voice capabilities in Russia are significantly degraded, however, probably because high-capacity IP addresses are easier to block.

Whatever its current frustrations, Russia might well win in the long term. By demonstrating its willingness to suffer the temporary collateral damage of blocking major cloud providers, it prompted cloud providers to block another, more effective anti-censorship tactic, or at least accelerated the process. In April, Google and Amazon banned -- and technically blocked -- the practice of "domain fronting," a trick anti-censorship tools use to get around Internet censors by pretending to be other kinds of traffic. Developers would use popular websites as a proxy, routing traffic to their own servers through another website to fool censors into believing the traffic was intended for that website. The anonymous web-browsing tool Tor has used domain fronting since 2014. Signal, since 2016. Eliminating the capability is a boon to censors worldwide.
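Mechanically, domain fronting relies on two name fields being set independently: the TLS SNI (which the censor can read) names a popular front domain, while the HTTP Host header (encrypted inside the tunnel) names the real, blocked service hosted on the same CDN. A hedged sketch with hypothetical hostnames; only the request builder runs here, and the commented-out socket code shows where the request would travel:

```python
import socket
import ssl

FRONT = "cdn-front.example.com"     # hypothetical popular domain the censor sees
HIDDEN = "blocked-app.example.com"  # hypothetical censored backend on the same CDN

def front_request(hidden_host, path="/"):
    """Build an HTTP/1.1 GET whose Host header names the hidden service.
    The CDN routes on this header; the censor never sees it."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {hidden_host}\r\n"
        "Connection: close\r\n\r\n"
    ).encode()

def open_fronted_socket():
    """The TCP connection and the TLS SNI name only the front domain."""
    ctx = ssl.create_default_context()
    raw = socket.create_connection((FRONT, 443))
    return ctx.wrap_socket(raw, server_hostname=FRONT)  # censor sees FRONT

# Usage (not executed here):
#     sock = open_fronted_socket()
#     sock.sendall(front_request(HIDDEN))
```

What Google and Amazon changed, in effect, was to stop routing requests whose encrypted Host header names a different customer than the TLS connection, which closes exactly this gap.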

Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that Internet freedom is increasingly in the hands of the world's largest Internet companies. And while freedom may have its advocates -- the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban -- actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.

In 1993, John Gilmore famously said that "The Internet interprets censorship as damage and routes around it." That was technically true when he said it but only because the routing structure of the Internet was so distributed. As centralization increases, the Internet loses that robustness, and censorship by governments and companies becomes easier.

This essay previously appeared on

CryptogramNew Data Privacy Regulations

When Mark Zuckerberg testified before both the House and the Senate last month, it became immediately obvious that few US lawmakers had any appetite to regulate the pervasive surveillance taking place on the Internet.

Right now, the only way we can force these companies to take our privacy more seriously is through the market. But the market is broken. First, none of us do business directly with these data brokers. Equifax might have lost my personal data in 2017, but I can't fire them because I'm not their customer or even their user. I could complain to the companies I do business with who sell my data to Equifax, but I don't know who they are. Markets require voluntary exchange to work properly. If consumers don't even know where these data brokers are getting their data from and what they're doing with it, they can't make intelligent buying choices.

This is starting to change, thanks to a new law in Vermont and another in Europe. And more legislation is coming.

Vermont first. At the moment, we don't know how many data brokers collect data on Americans. Credible estimates range from 2,500 to 4,000 different companies. Last week, Vermont passed a law that will change that.

The law does several things to improve the security of Vermonters' data, but several provisions matter to all of us. First, the law requires data brokers that trade in Vermonters' data to register annually. And while there are many small local data brokers, the larger companies collect data nationally and even internationally. This will help us get a more accurate look at who's in this business. The companies also have to disclose what opt-out options they offer, and how people can request to opt out. Again, this information is useful to all of us, regardless of the state we live in. And finally, the companies have to disclose the number of security breaches they've suffered each year, and how many individuals were affected.

Admittedly, the regulations imposed by the Vermont law are modest. Earlier drafts of the law included a provision requiring data brokers to disclose how many individuals' data they hold, what sorts of data they collect and where the data came from, but those were removed as the bill negotiated its way into law. A more comprehensive law would allow individuals to demand exactly what information these companies have about them -- and maybe allow individuals to correct and even delete data. But it's a start, and the first statewide law of its kind to be passed in the face of strong industry opposition.

Vermont isn't the first to attempt this, though. On the other side of the country, Representative Norma Smith of Washington introduced a similar bill in both 2017 and 2018. It goes further, requiring disclosure of what kinds of data the broker collects. So far, the bill has stalled in the state's legislature, but she believes it will have a much better chance of passing when she introduces it again in 2019. I am optimistic that this is a trend, and that many states will start passing bills forcing data brokers to be increasingly transparent about their activities. And while their laws will be tailored to residents of those states, all of us will benefit from the information.

A 2018 California ballot initiative could help. Among its provisions, it gives consumers the right to demand exactly what information a data broker has about them. If it passes in November, once it takes effect, lots of Californians will take the list of data brokers from Vermont's registration law and demand this information based on their own law. And again, all of us -- regardless of the state we live in -- will benefit from the information.

We will also benefit from another, much more comprehensive, data privacy and security law from the European Union. The General Data Protection Regulation (GDPR) was passed in 2016 and took effect on 25 May. The details of the law are far too complex to explain here, but among other things, it mandates that personal data can only be collected and saved for specific purposes and only with the explicit consent of the user. We'll learn who is collecting what and why, because companies that collect data are going to have to ask European users and customers for permission. And while this law only applies to EU citizens and people living in EU countries, the disclosure requirements will show all of us how these companies profit off our personal data.

It has already reaped benefits. Over the past couple of weeks, you've received many e-mails from companies that have you on their mailing lists. In the coming weeks and months, you're going to see other companies disclose what they're doing with your data. One early example is PayPal: in preparation for GDPR, it published a list of the over 600 companies it shares your personal data with. Expect a lot more like this.

Surveillance is the business model of the Internet. It's not just the big companies like Facebook and Google watching everything we do online and selling advertising based on our behaviors; there's also a large and largely unregulated industry of data brokers that collect, correlate and then sell intimate personal data about our behaviors. If we make the reasonable assumption that Congress is not going to regulate these companies, then we're left with the market and consumer choice. The first step in that process is transparency. These new laws, and the ones that will follow, are slowly shining a light on this secretive industry.

This essay originally appeared in the Guardian.

Worse Than FailureThe Manager Who Knew Everything

Have you ever worked for/with a manager that knows everything about everything? You know the sort; no matter what the issue, they stubbornly have an answer. It might be wrong, but they have an answer, and no amount of reason, intelligent thought, common sense or hand puppets will make them understand. For those occasions, you need to resort to a metaphorical clue-bat.

A few decades ago, I worked for a place that had a chief security officer who knew everything there was to know about securing their systems. Nothing could get past the policies she had put in place. Nobody could ever come up with any mechanism that could bypass her concrete walls, blockades and insurmountable defenses.

One day, she held an interdepartmental meeting to announce her brand spanking shiny new policies regarding this new-fangled email that everyone seemed to want to use. It would prevent unauthorized access, so only official emails sent by official individuals could be sent through her now-secured email servers.

I pointed out that email servers could only be secured to a point, because they had to have an open port to which email clients running on any internal computer could connect. As long as the port was open, anyone with internal access and nefarious intent could spoof a legitimate authorized email address and send a spoofed email.

She was incensed and informed me (and the group) that she knew more than all of us (together) about security, and that there was absolutely no way that could ever happen. I told her that I had some background in military security, and that I might know something that she didn't.

At this point, if she was smart, she would have asked me to explain. If she already handled the case, then I'd have to shut up. If she didn't handle the case, then she'd learn something, AND the system could be made more secure. She was not smart; she publicly called my bluff.

I announced that I accepted the challenge, and that I was going to use my work PC to send an email - from her - to the entire firm (using the restricted blast-to-all email address, which I would not normally be able to access as myself). In the email, I would explain that it was a spoof, and if they were seeing it, then the so-called impenetrable security might be somewhat less secure than she proselytized. In fact, I would do it in such a way that there would be absolutely no way to prove that I did it (other than my admission in the email).

She said that if I did that, that I'd be fired. I responded that 1) if the system was as secure as she thought, that there'd be nothing to fire me for, and 2) if they could prove that it was me, and tell me how I did it (aside from my admission that I had done it), that I would resign. But if not, then she had to stop the holier-than-thou act.

Fifteen minutes later, I went back to my desk, logged into my work PC using the guest account, wrote a 20-line Cold Fusion script to connect to the email server on port 25, and filled out the fields as though the message were coming from her email client. Since she had legitimate access to the firm-wide email blast address, the email server allowed it. Then I sent it. Then I secure-erased the local system event log and assorted other logs, as well as editor/browser/Cold Fusion/server caches, etc. that would show what I did. Finally, I did a cold boot to ensure that even the RAM was wiped out.
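The trick translates to any language with a socket or mail library, because classic SMTP on port 25 does nothing to verify that the From address belongs to the connecting client. A minimal Python sketch of the same idea (hypothetical addresses, and the actual send left commented out):

```python
from email.message import EmailMessage
import smtplib  # only needed for the commented-out send below

# Hypothetical addresses standing in for the firm's real ones.
msg = EmailMessage()
msg["From"] = "cso@example-firm.com"      # forged: the server never checks this
msg["To"] = "all-staff@example-firm.com"  # the restricted blast address
msg["Subject"] = "This email is a spoof"
msg.set_content(
    "If you are reading this, the mail server accepted a forged sender "
    "from an unauthenticated connection on port 25."
)

# The send itself is one call; the server trusts whatever it is handed.
# with smtplib.SMTP("mail.example-firm.com", 25) as server:
#     server.send_message(msg)
```

Modern deployments mitigate this with SMTP AUTH, SPF, DKIM, and DMARC, but an internal server that accepts unauthenticated submissions on port 25 is still open to exactly this stunt.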

Not long after that, her minions, the SAs, showed up at my desk joking that they couldn't believe that I had actually done it. I told them that I had wiped out all the logs where they'd look, the actual script that did it, and the disk space that all of the above had occupied. Although they knew the IP address of the PC from which the request came, they agreed that without those files, there was no way they could prove that it was me. Then they checked everything and verified what I told them.

This info made its way back up the chain until the SAs, my boss, and I got called into her office, along with a C-level manager. Everything was explained to the C-manager. She was expecting him to fire me.

He simply looked at me and raised an eyebrow. I responded that I spent all of ten minutes doing it in direct response to her assertion that it was un-doable, and that I had announced my intentions to expose the vulnerability - to her - in front of everyone - in advance.

He chose to tell her that maybe she needed to accept that she doesn't know quite as much about everything as she thinks, and that she might want to listen to people a little more. She then pointed out that I had proven that email was totally insecure and that it should be banned completely (this was at the point where the business had mostly moved to email). I pointed out that I had worked there for many years, had no destructive tendencies, that I was only exposing a potential gap in security, and would not do it again. The SAs also pointed out that the stunt, though it proved the point, was harmless. They also mentioned that nobody else at the firm had access to Cold Fusion. I didn't think it helpful to mention that not just Cold Fusion but any programming language could be used to connect to port 25 and do the same thing, so I kept that to myself. She huffed and puffed, but had no credibility at that point.

After that, my boss and I bought the SAs burgers and beer.
