Planet Russell


Planet Debian: Petter Reinholdtsen: First rough draft Norwegian and Spanish edition of the book Made with Creative Commons

I am working on publishing yet another book related to Creative Commons. This time it is a book filled with interviews and stories from people around the globe making a living using Creative Commons.

Yesterday, after many months of hard work by several volunteer translators, the first draft of a Norwegian Bokmål edition of the book Made with Creative Commons from 2017 was complete. The Spanish translation is also complete, while the Dutch, Polish, German and Ukrainian editions need a lot of work. Get in touch if you want to help make those happen, or would like to translate into your mother tongue.

The whole book project started when Gunnar Wolf announced that he was going to make a Spanish edition of the book. I noticed, and offered some input on how to make a book, based on my experience with translating the Free Culture and The Debian Administrator's Handbook books into Norwegian Bokmål. To make a long story short, we ended up working on a Bokmål edition, and now the first rough translation is complete, thanks to the hard work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first proofreading is almost done, and only the second and third rounds of proofreading remain. We will also need to translate the 14 figures and create a book cover. Once that is done we will publish the book on paper, as well as in PDF, ePub and possibly Mobi formats.

The book itself originates as a manuscript on Google Docs, is downloaded as ODT from there and converted to Markdown using pandoc. The Markdown is modified by a script before it is converted to DocBook using pandoc. The DocBook is modified again using a script before it is used to create a Gettext POT file for translators. The translated PO file is then combined with the earlier mentioned DocBook file to create a translated DocBook file, which is finally given to dblatex to create the final PDF. The end result is a set of editions of the manuscript, one English and one for each of the translations.
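As a rough illustration, the pipeline can be sketched as a handful of command invocations. The file names, the fix-up scripts and the use of po4a for the POT/PO steps are assumptions made for the sake of the example, not the project's actual build system:

#!/usr/bin/env python3
# Illustrative sketch of the ODT -> Markdown -> DocBook -> PO -> PDF pipeline.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Convert the ODT exported from Google Docs to Markdown.
run(["pandoc", "-f", "odt", "-t", "markdown", "-o", "book.md", "book.odt"])
# 2. Project-specific massaging of the Markdown (script name is hypothetical).
run(["./fixup-markdown.sh", "book.md"])
# 3. Convert the Markdown to DocBook, then massage it again.
run(["pandoc", "-f", "markdown", "-t", "docbook", "-s", "-o", "book.xml", "book.md"])
run(["./fixup-docbook.sh", "book.xml"])
# 4. Extract translatable strings into a POT file (po4a is one way to do this).
run(["po4a-gettextize", "-f", "docbook", "-m", "book.xml", "-p", "book.pot"])
# 5. Merge the translated PO file back into a translated DocBook document.
run(["po4a-translate", "-f", "docbook", "-m", "book.xml",
     "-p", "nb/book.po", "-l", "book.nb.xml", "-k", "0"])
# 6. Produce the final PDF with dblatex.
run(["dblatex", "-o", "book.nb.pdf", "book.nb.xml"])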

The translation is conducted using the Weblate web-based translation system. Please have a look there and get in touch if you would like to help out with proofreading. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Cryptogram: E-Mailing Private HTTPS Keys

I don't know what to make of this story:

The email was sent on Tuesday by the CEO of Trustico, a UK-based reseller of TLS certificates issued by the browser-trusted certificate authorities Comodo and, until recently, Symantec. It was sent to Jeremy Rowley, an executive vice president at DigiCert, a certificate authority that acquired Symantec's certificate issuance business after Symantec was caught flouting binding industry rules, prompting Google to distrust Symantec certificates in its Chrome browser. In communications earlier this month, Trustico notified DigiCert that 50,000 Symantec-issued certificates Trustico had resold should be mass revoked because of security concerns.

When Rowley asked for proof the certificates were compromised, the Trustico CEO emailed the private keys of 23,000 certificates, according to an account posted to a Mozilla security policy forum. The report produced a collective gasp among many security practitioners who said it demonstrated a shockingly cavalier treatment of the digital certificates that form one of the most basic foundations of website security.

Generally speaking, private keys for TLS certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. A CEO being able to attach the keys for 23,000 certificates to an email raises troubling concerns that those types of best practices weren't followed.

I am croggled by the multiple layers of insecurity here.

BoingBoing post.

Worse Than Failure: CodeSOD: And Now You Have Two Problems

We all know the old saying: “Some people, when confronted with a problem, think ‘I know, I’ll use regular expressions.’ Now they have two problems.” The quote has a long and storied history, but Roger A’s co-worker decided to take it quite literally.

Specifically, they wanted to be able to build validation rules which could apply a regular expression to the input. Thus, they wrote the RegExpConstraint class:

public class RegExpConstraint
{
        private readonly Regex _pattern;

        private readonly string _unmatchedErrorMessage;
        protected string UnmatchedErrorMessage => _unmatchedErrorMessage;

        public RegExpConstraint(string pattern, string unmatchedErrorMessage)
        {
                _pattern = new Regex(pattern);
                _unmatchedErrorMessage = unmatchedErrorMessage;
        }

        /// <summary>
        /// Check if the given value match the RegExp. Return the unmatched error message if it doesn't, null otherwise.
        /// </summary>
        public virtual string CheckMatch(string value)
        {
                if (!_pattern.IsMatch(value))
                        return _unmatchedErrorMessage;
                return null;
        }
}

This “neatly” solved the problem of making sure that an input string matched a regex, if by “neatly” you mean, “returns a string instead of a boolean value”, but it introduced a new problem: what if you wanted to make certain that it absolutely didn’t match a certain subset of characters? For example, if you wanted “\:*<>|@” to be illegal characters, how could you do that with the RegExpConstraint? By writing a regex like this: [^\:*<>|@]? Don’t be absurd. You need a new class.

public class RegExpExcludeConstraint : RegExpConstraint
{
        private Regex _antiRegex;
        public Regex AntiRegex => _antiRegex;

        public RegExpExcludeConstraint()
                : base(null, null)
        {
        }

        /// <summary>
        /// Constructor
        /// </summary>
        /// <param name="pattern">Regex expression to validate</param>
        /// <param name="antiPattern">Regex expression to invalidate</param>
        /// <param name="unmatchedErrorMessage">Error message in case of invalidation</param>
        public RegExpExcludeConstraint(string pattern, string antiPattern, string unmatchedErrorMessage)
                : base(pattern, unmatchedErrorMessage)
        {
                _antiRegex = new Regex(antiPattern);
        }

        /// <summary>
        /// Check if the constraint match
        /// </summary>
        public override string CheckMatch(string value)
        {
                var baseMatch = base.CheckMatch(value);
                if (baseMatch != null || _antiRegex.IsMatch(value))
                        return UnmatchedErrorMessage;
                return null;
        }
}

Not only does this programmer not fully understand regular expressions, they also haven’t fully mastered inheritance. Or maybe they know that this code is bad, as they named one of their parameters antiPattern. The RegExpExcludeConstraint accepts two regexes, requires that the first one matches, and the second one doesn’t, helpfully continuing the pattern of returning null when there’s nothing wrong with the input.

Perhaps the old saying is wrong. I don’t see two problems. I see one problem: the person who wrote this code.



Cryptogram: Greyshift Sells Phone Unlocking Services

Here's another company that claims to unlock phones for a price.

Planet Debian: Antoine Beaupré: The cost of hosting in the cloud

This is one part of my coverage of KubeCon Austin 2017. Other articles include:

Should we host in the cloud or on our own servers? This question was at the center of Dmytro Dyachuk's talk, given during KubeCon + CloudNativeCon last November. While many services simply launch in the cloud without the organizations behind them considering other options, large content-hosting services have actually moved back to their own data centers: Dropbox migrated in 2016 and Instagram in 2014. Because such transitions can be expensive and risky, understanding the economics of hosting is a critical part of launching a new service. Actual hosting costs are often misunderstood, or secret, so it is sometimes difficult to get the numbers right. In this article, we'll use Dyachuk's talk to try to answer the "million dollar question": "buy or rent?"

Computing the cost of compute

So how much does hosting cost these days? To answer that apparently trivial question, Dyachuk presented a detailed analysis made from a spreadsheet that compares the costs of "colocation" (running your own hardware in somebody else's data center) versus those of hosting in the cloud. For the latter, Dyachuk chose Amazon Web Services (AWS) as a standard, reminding the audience that "63% of Kubernetes deployments actually run off AWS". Dyachuk focused only on the cloud and colocation services, discarding the option of building your own data center as too complex and expensive. The question is whether it still makes sense to operate your own servers when, as Dyachuk explained, "CPU and memory have become a utility", a transition that Kubernetes is also helping push forward.

Another assumption of his talk is that server uptime isn't that critical anymore; there used to be a time when system administrators would proudly brandish multi-year uptime counters as a proof of server stability. As an example, Dyachuk performed a quick survey in the room and the record was an uptime of 5 years. In response, Dyachuk asked: "how many security patches were missed because of that uptime?" The answer was, of course "all of them". Kubernetes helps with security upgrades, in that it provides a self-healing mechanism to automatically re-provision failed services or rotate nodes when rebooting. This changes hardware designs; instead of building custom, application-specific machines, system administrators now deploy large, general-purpose servers that use virtualization technologies to host arbitrary applications in high-density clusters.

When presenting his calculations, Dyachuk explained that "pricing is complicated" and, indeed, his spreadsheet includes hundreds of parameters. However, after reviewing his numbers, I can say that the list is impressively exhaustive, covering server memory, disk, and bandwidth, but also backups, storage, staffing, and networking infrastructure.

For servers, he picked a Supermicro chassis with 224 cores and 512GB of memory from the first result of a Google search. Once amortized over an aggressive three-year rotation plan, the $25,000 machine ends up costing about $8,300 yearly. To compare with Amazon, he picked the m4.10xlarge instance as a commonly used standard, which currently offers 40 cores, 160GB of RAM, and 4Gbps of dedicated storage bandwidth. At the time he did his estimates, the going rate for such a server was $2 per hour or $17,000 per year. So, at first, the physical server looks like a much better deal: half the price and close to quadruple the capacity. But, of course, we also need to factor in networking, power usage, space rental, and staff costs. And this is where things get complicated.
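A quick back-of-the-envelope check of the raw compute numbers above (using only the figures quoted in the talk; reserved-instance discounts, storage, power and staff are deliberately ignored here, since they are added later in the analysis):

# Yearly cost of the bare hardware versus a single on-demand cloud instance.
server_price = 25000.0            # Supermicro chassis, USD
amortization_years = 3
colo_server_yearly = server_price / amortization_years      # ~ $8,300/year

aws_hourly = 2.0                  # m4.10xlarge on-demand rate at the time, USD/hour
aws_yearly = aws_hourly * 24 * 365   # roughly the $17,000 per year quoted above

print("colo hardware: $%.0f/year, m4.10xlarge: $%.0f/year"
      % (colo_server_yearly, aws_yearly))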

First, colocation rates will vary a lot depending on location. While bandwidth costs are often much lower in large urban centers because of proximity to fast network links, real estate and power prices are often much higher. Bandwidth costs are now the main driver in hosting costs.

For the purpose of his calculation, Dyachuk picked a real-estate figure of $500 per standard cabinet (42U). His calculations yielded a monthly power cost of $4,200 for a full rack, at $0.50/kWh. Those rates seem rather high compared to my local data center, where the rate is closer to $350 for the cabinet and $0.12/kWh for power. Dyachuk took into account that power is usually not "metered billing", where you pay for the actual power usage, but "stepped billing", where you pay for a circuit with a (say) 25-amp breaker regardless of how much power you use on that circuit. This accounts for some of the discrepancy, but the estimate still seems too high to be accurate according to my calculations.
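For a sense of scale, the quoted figures imply a sustained draw of roughly 11-12 kW for the rack; the same arithmetic also shows how much the per-kWh rate matters. This is a simple check using only the numbers above, nothing more:

# What the quoted monthly power bill implies about actual consumption.
monthly_power_bill = 4200.0      # USD for a full rack, from the talk
rate_talk = 0.50                 # USD per kWh assumed in the talk
rate_local = 0.12                # USD per kWh at my local data center

kwh_per_month = monthly_power_bill / rate_talk       # 8,400 kWh
implied_draw_kw = kwh_per_month / (30 * 24)          # ~11.7 kW sustained
bill_at_local_rate = kwh_per_month * rate_local      # ~$1,000/month

print("implied draw: %.1f kW, same consumption at local rates: $%.0f/month"
      % (implied_draw_kw, bill_at_local_rate))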

Then there's networking: all those machines need to connect to each other and to an uplink. This means finding a bandwidth provider, which Dyachuk pinned at a reasonable average cost of $1/Mbps. But the most expensive part is not the bandwidth; the cost of managing network infrastructure includes not only installing switches and connecting them, but also tracing misplaced wires, dealing with denial-of-service attacks, and so on. Cabling, a seemingly innocuous task, is actually the majority of hardware expenses in data centers, as previously reported. From networking, Dyachuk went on to detail the remaining cost estimates, including storage and backups, where the physical world is again cheaper than the cloud. All this is, of course, assuming that crafty system administrators can figure out how to glue all the hardware together into a meaningful package.

Which brings us to the sensitive question of staff costs; Dyachuk described those as "substantial". These costs are for the system and network administrators who are needed to buy, order, test, configure, and deploy everything. Evaluating those costs is subjective: for example, salaries will vary between different countries. He fixed the yearly cost per person at $250,000 (an actual $150,000 salary plus overhead) and accounted for three people on staff. Those costs may also vary with the colocation service; some will include remote hands and networking, but he assumed in his calculations that the costs would end up being roughly the same because providers will charge extra for those services.

Dyachuk also observed that staff costs are the majority of the expenses in a colocation environment: "hardware is cheaper, but requires a lot more people". In the cloud, it's the opposite; most of the costs consist of computation, storage, and bandwidth. Staff also introduce a human factor of instability in the equation: in a small team, there can be a lot of variability in ability levels. This means there is more uncertainty in colocation cost estimates.

In our discussions after the conference, Dyachuk pointed out a social aspect to consider: cloud providers are operating a virtual oligopoly. Dyachuk worries about the impact of Amazon's growing power over different markets:

A lot of businesses are in direct competition with Amazon. A fear of losing commercial secrets and being spied upon has not been confirmed by any incidents yet. But Walmart, for example, moved out of AWS and requested that its suppliers do the same.

Demand management

Once the extra costs described are factored in, colocation still would appear to be the cheaper option. But that doesn't take into account the question of capacity: a key feature of cloud providers is that they pool together large clusters of machines, which allow individual tenants to scale up their services quickly in response to demand spikes. Self-hosted servers need extra capacity to cover for future demand. That means paying for hardware that stays idle waiting for usage spikes, while cloud providers are free to re-provision those resources elsewhere.

Satisfying demand in the cloud is easy: allocate new instances automatically and pay the bill at the end of the month. In a colocation, provisioning is much slower and hardware must be systematically over-provisioned. Those extra resources might be used for preemptible batch jobs in certain cases, but workloads are often "transaction-oriented" or "realtime" which require extra resources to deal with spikes. So the "spike to average" ratio is an important metric to evaluate when making the decision between the cloud and colocation.

Cost reductions are possible by improving analytics to reduce over-provisioning. Kubernetes makes it easier to estimate demand; before containerized applications, estimates were made per application, each with its own margin of error. By pooling all applications together in a cluster, the problem is generalized and individual workloads balance out in aggregate, even if they fluctuate individually. Dyachuk therefore recommends using the cloud when future growth cannot be forecast, to avoid the risk of under-provisioning. He also recommended "The Art of Capacity Planning" as a good forecasting resource; even though the book is old, the basic math hasn't changed, so it is still useful.
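The balancing-out effect can be illustrated with a toy simulation: as more independent workloads are pooled, the spike-to-average ratio of the combined load, and thus the over-provisioning needed, shrinks. This is purely illustrative and not taken from the talk:

import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 30                      # one month of hourly load samples

def spike_to_average(n_workloads):
    # Each workload has a mean load of 1 with large, independent fluctuations.
    loads = rng.gamma(shape=2.0, scale=0.5, size=(n_workloads, hours))
    total = loads.sum(axis=0)
    return total.max() / total.mean()

for n in (1, 10, 100):
    print("%3d workloads: spike-to-average ratio %.2f" % (n, spike_to_average(n)))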

The golden ratio

Colocation prices finally overshoot cloud prices after adding extra capacity and staff costs. In closing, Dyachuk identified the crossover point where colocation becomes cheaper at around $100,000 per month, or 150 Amazon m4.2xlarge instances, which can be seen in the graph below. Note that he picked a different instance type for the actual calculations: instead of the largest instance (m4.10xlarge), he chose the more commonly used m4.2xlarge instance. Because Amazon pricing scales linearly, the math works out to about the same once reserved instances, storage, load balancing, and other costs are taken into account.

He also added that the figure will change based on the workload; Amazon is more attractive with more CPU and less I/O. Conversely, I/O-heavy deployments can be a problem on Amazon; disk and network bandwidth are much more expensive in the cloud. For example, bandwidth can sometimes be more than triple what you can easily find in a data center.

Your mileage may vary; those numbers shouldn't be taken as an absolute. They are a baseline that needs to be tweaked according to your situation, workload and requirements. For some, Amazon will be cheaper, for others, colocation is still the best option.

He also emphasized that the graph stops at 500 instances; beyond that lies another "wall" of investment due to networking constraints. At around the equivalent of 2000-3000 Amazon instances, networking becomes a significant bottleneck and demands larger investments in networking equipment to upgrade internal bandwidth, which may make Amazon affordable again. It might also be that application design should shift to a multi-cluster setup, but that implies increases in staff costs.

Finally, we should note that some organizations simply cannot host in the cloud. In our discussions, Dyachuk specifically expressed concerns about Canada's government services moving to the cloud, for example: what is the impact on state sovereignty when confidential data about its citizens ends up in the hands of private contractors? So far, Canada's approach has been to only move "public data" to the cloud, but Dyachuk pointed out this already includes sensitive departments like correctional services.

In Dyachuk's model, the cloud offers significant cost reduction over traditional hosting in small clusters, at least until a deployment reaches a certain size. However, different workloads significantly change that model and can make colocation attractive again: I/O and bandwidth intensive services with well-planned growth rates are clear colocation candidates. His model is just a start; any project manager would be wise to make their own calculations to confirm the cloud really delivers the cost savings it promises. Furthermore, while Dyachuk wisely avoided political discussions surrounding the impact of hosting in the cloud, data ownership and sovereignty remain important considerations that shouldn't be overlooked.

A YouTube video and the slides [PDF] from Dyachuk's talk are available online.

This article first appeared in the Linux Weekly News, under the title "The true costs of hosting in the cloud".

Worse Than Failure: Daylight Losing Time

The second Sunday of March has come to pass, which means if you're a North American reader, you're getting this an hour earlier than normal. What a bonus! That's right, we all got to experience the mandatory clock-changing event known as Daylight Saving Time. While the sun, farm animals, toddlers, etc. don't care about an arbitrary changing of the clock, computers definitely do.

Early in my QA career, I had the great (dis)pleasure of fully regression testing electronic punch clocks on every possible software version every time a DST change was looming. It was every bit as miserable as it sounds but was necessary because if punches were an hour off for thousands of employees, it would wreak havoc on our clients' payroll processing.

Submitter Iain would know this all too well after the financial services company he worked for experienced a DST-related disaster. As a network engineer, Iain was in charge of the monitoring systems. Since their financial transactions were very dependent on accurate time, he created a monitor that would send him an alert if any of the servers drifted three or more seconds from what the domain controllers said the time should be. It rarely ever went off since the magic of NTP was in use to keep all the server clocks correct.
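A monitor along those lines can be sketched in a few lines of Python: ask the domain controller and each server for their clock offset via NTP and alert when a server disagrees with the controller by three seconds or more. The host names are made up and the third-party ntplib package is assumed; this is only an illustration, not Iain's actual tooling:

import ntplib

DOMAIN_CONTROLLER = "dc01.example.local"            # hypothetical host names
SERVERS = ["web01.example.local", "web02.example.local"]
MAX_DRIFT_SECONDS = 3.0

def clock_offset(host):
    # Offset of the host's clock relative to this machine, in seconds.
    return ntplib.NTPClient().request(host, version=3).offset

def check_drift():
    reference = clock_offset(DOMAIN_CONTROLLER)
    for server in SERVERS:
        drift = clock_offset(server) - reference
        if abs(drift) >= MAX_DRIFT_SECONDS:
            print("ALERT: %s is %.1f s away from the domain controller"
                  % (server, drift))

if __name__ == "__main__":
    check_drift()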

[Image: "Victory! Congress passes daylight saving bill", an early 20th-century propaganda poster featuring Uncle Sam telling you to get your hoe ready]

One fateful early morning on the second Sunday in March, Iain's phone exploded with alerts from the monitor. Two load-balanced web servers were alternately complaining about being an entire hour off from the actual time. The servers in question had been added in recent months and had never caused an issue before.

He rolled out of bed and grabbed his laptop to begin troubleshooting. The servers were supposed to time-sync with their domain controller, which would in turn use NTP with an external stratum 1 time server. He figured one or more of the servers had network connectivity issues when the time change occurred and were now confused as to who had the right time.

Iain sent an NTP packet to each of the troubled servers, expecting to see the domain controller as the reference server. Instead, he saw the IP addresses of TroublesomeServer1 and TroublesomeServer2. Thinking he had done something wrong in his early-morning fog, he ran it again, only to get the same result. It seemed that the two servers were pointed at each other for NTP.

While that was a ridiculous setup, it wouldn't explain why they were off by an entire hour and kept switching their times. Iain noticed that the old-fashioned clock on his desk showed the time was a bit after 2 AM, while the time on his laptop was a bit after 3 AM. It dawned on him that the time issues had to be related to the Daylight Saving Time change. The settings for that were kept in the load balancer, which he had read-only access to.

In the load balancer console, he found that TroublesomeServer1 was correctly set to update its time for Daylight Saving, while TroublesomeServer2 was not. Since they were incorrectly set to each other for NTP, when TroublesomeServer1 jumped ahead an hour, TroublesomeServer2 would follow. But then TroublesomeServer2 would realize it wasn't supposed to adjust for DST, so it would jump back an hour, bringing TroublesomeServer1 with it. This kept repeating itself, which explained the volume of alerts Iain got.

Since he was powerless to correct the setting on the load balancer, he made a call to his manager, who escalated to another manager, and so on, until they tracked down who had access to make the change. Three hours later, the servers were on the correct time. But the mess of correcting all the overnight transactions that happened during this window was just beginning. The theoretical extra hour of daylight was negated by everyone spending hours in a windowless conference room adjusting financial data by hand.


Cryptogram: Two New Papers on the Encryption Debate

Seems like everyone is writing about encryption and backdoors this season.

I recently blogged about the new National Academies report on the same topic.

Here's a review of the National Academies report, and another of the East West Institute's report.

EDITED TO ADD (3/8): Commentary on the National Academies study by the EFF.

Planet Debian: Junichi Uekawa: I've been writing js more for chrome extensions.

I've been writing js more for chrome extensions. I write python using pandas for plotting graphs now. I wonder if there's a good graphing solution for js. I don't remember how I crafted R graphs anymore.

Planet Linux Australia: David Rowe: Measuring SDR Noise Figure in Real Time

I’m building a sensitive receiver for FreeDV 2400A signals. As a first step I tried a HackRF with an external Low Noise Amplifier (LNA), and attempted to measure the Noise Figure (NF) using the system Mark and I developed two years ago.

However I was getting results that didn’t make sense and were not repeatable. So over the course of a few early morning sessions I came up with a real time NF measurement system, and wrinkled several bugs out of it. I also purchased a few Airspy SDRs, and managed to measure NF on them as well as the HackRF.

It’s a GNU Octave script called nf_from_stdio.m that accepts a sample stream from stdio. It assumes the signal contains a sine wave test tone from a calibrated signal generator, and noise from the receiver under test. By sampling the test tone it can establish the gain of the receiver, and by sampling the noise spectrum an estimate of the noise power.
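The actual script is GNU Octave; as a rough illustration of the arithmetic it performs, here is a Python sketch. The sample rate, FFT handling and bin ranges are assumptions for the example, and the relative FFT scaling cancels out because the gain and the noise density are measured on the same scale:

import numpy as np

fs = 48000               # sample rate of the demodulated stream (assumed)
p_tone_in_dbm = -100.0   # calibrated sig-gen level at the receiver input

def nf_estimate(samples):
    # Return a noise figure estimate in dB from one block of real samples.
    n = len(samples)
    spec = np.abs(np.fft.rfft(samples)) ** 2          # relative power per bin
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    bin_width = fs / n

    # Tone power: a few bins around the peak in the 2-4 kHz window.
    tone_bins = np.where((freqs > 2000) & (freqs < 4000))[0]
    peak = tone_bins[np.argmax(spec[tone_bins])]
    p_tone_out = 10 * np.log10(spec[peak - 2:peak + 3].sum())

    # Noise density: mean power per Hz between 5 and 10 kHz.
    noise_bins = (freqs > 5000) & (freqs < 10000)
    n0_out = 10 * np.log10(spec[noise_bins].mean() / bin_width)

    gain_db = p_tone_out - p_tone_in_dbm    # receiver gain (relative scale cancels)
    n0_in = n0_out - gain_db                # output noise referred back to the input
    return n0_in - (-174.0)                 # dB above thermal noise = noise figure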

The script can be driven from command line utilities like hackrf_transfer or airspy_rx, or via software receivers like gqrx that can send SSB-demodulated samples over UDP. Instructions are at the top of the script.


I’m working from a home workbench, with rudimentary RF skills, a strong signal processing background and determination. I do have a good second-hand signal generator (Marconi 2031) that cost AUD$1000 at a Hamfest, and a Rigol 815 Spec An (generously donated by Mel K0PFX, and Jim, N0OB) to support my FreeDV work. Both very useful and highly recommended. I cross-checked the sig-gen calibrated output using an oscilloscope and external attenuator (within 0.5dB). The Rigol is less accurate in amplitude (1.5dB on its specs), but useful for relative measurements, e.g. comparing cable attenuation.

For the NF test method I have used, a calibrated signal source is required. I performed my tests at 435MHz using a -100dBm carrier generated from the Marconi 2031 sig-gen.

Usage and Results

The script accepts real samples from a SSB demod, or complex samples from an IQ source. Tune your receiver so that the sinusoidal test tone is in the 2000 to 4000 Hz range as displayed on Fig 2 of the script. In general for minimum NF turn all SDR gains up to maximum. Check Fig 1 to ensure the signal is not clipping, reduce the baseband gain if necessary.

Noise is measured between 5000 and 10000 Hz, so ensure the receiver passband is flat in that region. When using gqrx, I drag the filter bandwidth out to 12000 Hz.

The noise estimates are less stable than the tone power estimate, leading to some sample/sample variation in the NF estimate. I take the median of the last five estimates.

I tried supplying samples to nf_from_stdio using two methods:

  1. Using gqrx in UDP mode to supply samples over UDP. This allows easy tuning and the ability to adjust the SDR gains in real time, but requires a few steps to set up
  2. Using a “single” command line approach that consists of a chain of processing steps concatenated together. Once your signal is tuned you can start the NF measurements with a single step.

Instructions on how to use both methods are at the top of nf_from_stdio.m

Here are some results using both gqrx and command line methods, with and without an external (20dB gain/1dB NF) LNA. They were consistent across two laptops.

SDR            Gqrx, LNA    Cmd Line, LNA    Cmd Line, no LNA
AirSpy Mini    2.0          2.2              7.9
AirSpy R2      1.7          1.7              7.0
HackRF One     2.6          3.4              11.1

The results with LNA are what we would expect for system noise figures with a good LNA at the front end.

The “no LNA” Airspy NF results are curious – the Airspy specs state a NF of just 3.5dB. So we contacted Airspy via Twitter and email to see how they measured their stated NF. We haven’t received a response to date. I posted to the Airspy mailing list and one gentleman (Dave – WØLEV) kindly replied and has measured noise figures of 4dB using calibrated noise sources and attenuators.

Looking into the data sheets for the Airspy, it appears the R820T tuner at the front end of the Airspy has a NF of 3.5dB. However a system NF will always be worse than the first device, as other devices (e.g. the ADC) also inject noise.

Other possibilities for my figures are measurement error, ambient noise sources at my site, frequency dependent NF, or variations in individual R820T samples.

In our past work we have used Bit Error Rate (BER) results as an independent method of confirming system noise figure. We found a close match between theoretical and measured BER when testing with and without a LNA. I’ll be repeating similar low level BER tests with FreeDV 2400A soon.

Real Time Noise Figure

It’s really nice to read the system noise figure in real time. For example you can start it running, then experiment with grounding, tightening connectors, or moving the SDR away from the laptop, or connect/disconnect a LNA in real time and watch the results. Really helps catch little issues in these difficult to perform tests. After all – we are measuring thermal noise, a very weak signal.

Some of the NF problems I could find and remove with a real time measurement:

  • The Airspy mini is nearly 1dB worse on the front left USB port than the rear left USB port on my X220 Thinkpad!
  • The Airspy mini really likes USB extension cables with ferrite clamps – without the ferrite I found the LNA was ineffective in reducing the NF – being swamped by conducted laptop noise I guess.
  • Loose connectors can make the noise figure a few dB worse. Wiggle and tighten them all.
  • Position of SDR/LNA near the radio and other bench equipment.
  • My magic touch can decrease noise figure! Grounding effect I guess?

Development Bugs

I had to work through several problems before I started getting sensible numbers. This was quite discouraging for a while, as the numbers were jumping all over the place. However, it's fair to say measuring NF is a tough problem. From what I can Google, it's an uncommon measurement for people in home workshops.

These bugs are worth mentioning as traps for anyone else attempting home NF measurements:

  1. Cable loss: I found a 1.5dB loss in some cable I was using between the sig gen and the SDR under test. I measured the loss by comparing a few cables connected between my sig gen and spec an. While the 815 is not accurate in terms of absolute calibration (rated at 1.5dB), it can still be used for comparative measurements. The cable loss can be added to the calculations, or just choose a low loss cable.
  2. Filter shape: I had initially placed the test tone under 1000Hz. However I noticed that the gqrx signal had a few dB of high pass filtering in this region (Fig 2 below). Not an issue for regular USB demodulation, but a few dB really matters for NF! So I moved the test tone to the 2-4kHz region where the gqrx output was nice and flat.
  3. A noisy USB port, especially without a clamp, on the Airspy Mini (photo below). Found by trying different SDRs and USB ports, and finally a clamp. Oh Boy, never expected that one. I was connecting the LNA and the NF was stuck at 4dB – swamped by noise from the USB Port I guess.
  4. Compression: Worth checking the SDR output is not clipped or in compression. I adjusted the sig gen output up and down 3dB, and checked the power estimate from the script changed by 3dB. Also worth monitoring Fig 1 from the script, to make sure it's not hitting the limits. The HackRF needed its baseband gain reduced, but the Airspys were OK.
  5. I used latest Airspy tools built from source (rather than Ubuntu 17 package) to get stdout piping working properly and not have other status information from printfs injected into the sample stream!


Thanks Mark, for the use of your RF hardware, and I’d also like to mention the awesome CSDR tools and fantastic gqrx software – both very handy for SDR work.

Valerie Aurora: Advice for women in tech who are tired of talking about women in tech

To be a woman in tech is to be asked to talk about being a woman in tech, regardless of the desires or knowledge of the individual, unique woman in tech in question (see The Unicorn Law). This is a frustrating part of being a member of a marginalized group in any field of endeavor: being expected to speak for, represent, and advocate for your group, regardless of your own personal inclinations. Even women in tech who actively embrace talking about women in tech want to choose if, when, and how they talk about women in tech, and not do so on command by others.

As a woman in tech activist, I’m here to tell women in tech: it’s 100% fine for you to not talk about women in tech if you don’t want to! It’s literally not your job! Your job is to do tech stuff. If someone really wants you to talk about women in tech, they can darn well offer to pay you for it, and you can still say, “Nope, don’t want to.”

Here are the reasons for you not to feel guilty about not wanting to be an activist, followed by some coping strategies for when you are asked to talk about women in tech. But first, some disclaimers.

This post presumes that you don’t want to harm women in tech as a whole; if you don’t feel solidarity with other women in tech or feel fine harming other women in tech to get ahead, this post isn’t for you. Likewise, if you are a woman in tech and want to talk about women in tech more than you are now, I fully support your decision, speaking as a programmer who became a full-time activist herself. Doing this work is difficult and often unrewarding; let me at least thank you and support you for doing it. If you want to point out that the ideas in this post apply to another marginalized group, or to fields other than tech: I agree, I just know the most about being a woman in tech and so that’s what I’m writing about.

Reasons not to feel guilty

Men should do more for women in tech. Many women in tech feel guilty for not helping other women in tech more, despite the fact that equivalent men often have more time, energy, power, and influence to support women in tech. I once felt guilty as a junior engineer when an older, more experienced woman in my group left, because she had previously asked me to mentor her (!!!) and I refused because I felt unqualified. At the same time, my group was filled with dozens of more knowledgeable and powerful men who felt no personal responsibility at all for her departure. Men aren’t putting in their fair share of work to support women in tech yet. Until they do, feel free to flip the question around and ask what men are doing to support women in tech.

Women are punished for advocating for women in tech. Women who do speak about women in tech are often accused of doing it for personal gain, which is hilarious. I can’t think of a single woman in tech whose lifetime earnings were improved by saying anything about women in tech that wasn’t “work harder and make more money for corporations.” In reality, the research shows that the careers of women and other members of marginalized groups are actually harmed if they appear to be advocating for members of their own group. Feel free to decline to do work that will harm your career. (And if you do it anyway: thank you!!!)

Women in tech already have to do more work. Women in tech already have to do more work in order to get the same credit as an equivalent man. In addition to having to do more of our technical work to be perceived as contributing equally, we are also expected to do emotional labor for free: listening to people’s problems, expressing empathy, doing “office housework” like arranging parties and birthday cards, smiling and being cheerful, taking care of visitors, and welcoming new employees. We are also expected to help and assist men with their jobs without getting credit, and punished when we stick to our own work. Add on to that the job of talking about women in tech, which is not only unrewarded but often punished. While you’ll get pushback for turning down any of this free labor, feel free to wiggle out of as much of it as possible.

Activism is a whole separate job. Activism is a different job from a job in tech. It needs different skills and requires different aptitudes from most tech jobs. Some people have both the skills and aptitude (and the free time) to work a tech job and also be an activist; don’t feel strange if you’re not one of those people.

You can support women in tech in other ways. If you do want to support women in tech, but don’t feel comfortable being an activist yourself, there are plenty of other ways to support women in tech. You can give money to organizations that support women in tech. You can hire more women in tech. You can invest in women in tech. You can be a supportive spouse to a woman in tech. You can mentor women in tech. Feel free to be creative about how you support women in tech and don’t let other people guilt you into their ideas for how you should be supporting women in tech.

You are being a role model for women in tech. Women in tech can help women in tech simply by existing and not actively harming other women in tech. You can speak or write about your tech job. You can agree to interviews with the condition of not being asked about women in tech. You can get promoted and raise your salary. In other words, keep doing your job, and avoid doing things that harm women in tech in the long-term. Avoiding harm is harder than it sounds and takes some expertise and learning to get right, but some rules of thumb are: don’t push other marginalized folks down to give yourself a leg up, do recognize there are many different ways to be a woman in tech, do default to listening over speaking when it comes to subjects you’re not an expert in (which may be activism).

Coping strategies

Here are a few coping strategies for when you are inevitably asked to talk about women in tech. You can use these strategies if you never want to talk about women in tech, or if you just don’t want to talk about women in tech in this particular situation. I personally find talking about women in tech fairly boring when the other person thinks they know more than they actually do about the topic, so I often use one of these techniques in that situation.

Make a list of other people to pass requests on to. Sure, you don’t want to give the one millionth talk on What It’s Like to Be a Woman in Programming Community X. But perhaps someone else has started a Women in Programming Community X group and would love to give a talk on the subject. You can also make a list of books or websites or other resources and tell people that while you don’t know much about career advice for women in tech, you’ve heard that “What Works for Women at Work” has some good tips.

Suggest that men do the work instead. When you suggest men do the work to support women in tech, you’ll get some predictable pushback. Lack of knowledge: Remind them that the research exists and can be learned by reading it. Feeling afraid/scared/out of place: Remind them that that is how women feel in male-dominated spaces. Don’t you feel guilty: No, but if I had the power men do, I’d feel guilty for not using it. After a few of these annoying discussions, many people will stop asking you to do women in tech stuff.

Point out your lack of expertise. There’s nothing about being a woman in tech that necessarily makes you an expert on how to support women in tech in general. People will often ask women in tech to do things or make statements in areas they don’t have expertise in; get used to saying “I don’t know about that,” or “I haven’t studied that.” Lots of requests to speak for all women in tech or to reassure people that they aren’t personally sexist can be shot down this way.

Change the subject. If people ask you about women in tech, you often have an easy subject change: your job! Tell them about your project, ask them about their project, ask about a controversial research topic in your area of tech – it’s hard to object to a woman in tech wanting to talk about tech.

Practice saying no. For many people, it’s hard to say no, and it’s even harder when you’re a member of a marginalized group and people expect you to do what they say. Practicing some go-to words and phrases can help with saying no in the moment. It can also help reduce the feelings of guilt if you imagine the situation in your head and then go over all the reasons not to feel guilty.

Some examples of putting these coping strategies into practice:

“Will you write a blog post for International Women’s Day?”
“Thanks for the invitation, but I’m focusing on other projects right now. Have you thought about writing something yourself?”

“We need a woman keynote speaker for my conference. Will you speak? We pay travel.”
“I appreciate the invitation, but I’m only taking paid speaking engagements right now.”

“What do you think about Susan Fowler’s blog post?”
“You know, I haven’t had time to think about it because I’ve been so busy. Can I bring you up to date on my project?”

“We’re doing great on gender equality at our company. Right?”
“I’m afraid I don’t have enough information to say either way. If you really wanted to know, I’d suggest paying an outside expert to do a rigorous study.”

“Will you join this panel on women in computing for Ada Lovelace Day?”
“Thanks for thinking of me, but I’m taking a break from non-technical speaking appearances.”

“I got approval for you to go to Grace Hopper Celebration! I assumed you wanted to go.”
“Wow, that was really kind of you, but I think other people on my team will get more out of it than I would.”

“Boy, that Ellen Pao really screwed things up for women in venture capital, don’t you agree?”
“That’s not really something I feel confident speaking about. I’ve got to get back to work, see you at lunch!”

“How does it feel to be the only woman at this conference?”
“That’s not something I’m comfortable talking about. What talk are you going to next?”

“We really want to hire more women, but they just aren’t applying to our job postings! What do you think we’re doing wrong?”
“I’m not a recruiting expert, sorry! That sounds like something you should hire a professional to figure out.”

“I’m putting together a book of essays on women in tech! Will you write a chapter for me for free?”

“Why are you so selfish? Why won’t you do more to help other women?”
“I’m doing what’s right for me.”

For more advice on shutting down unwelcome conversations, check out Captain Awkward’s “Broken Record” technique.

Whatever your decision about if, when, and how you want to talk about women in tech, we hope these techniques are useful to you!

Planet Debian: Ben Hutchings: Debian LTS work, February 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked 13 hours. I will carry over 2 hours to March.

I made another release on the Linux 3.2 longterm stable branch (3.2.99) and started the review cycle for the next update (3.2.100). I rebased the Debian package onto 3.2.99 but didn't upload an update to Debian this month.

I also discussed the possibilities for cooperation between Debian LTS and CIP, briefly reviewed leptonlib for additional security issues, and updated the wiki page about the status of Spectre and Meltdown in Debian.


Krebs on Security: Checked Your Credit Since the Equifax Hack?

A recent consumer survey suggests that half of all Americans still haven’t checked their credit report since the Equifax breach last year exposed the Social Security numbers, dates of birth, addresses and other personal information on nearly 150 million people. If you’re in that fifty percent, please make an effort to remedy that soon.

Credit reports from the three major bureaus — Equifax, Experian and TransUnion — can be obtained online for free at annualcreditreport.com, the only Web site mandated by Congress to serve each American a free credit report every year. annualcreditreport.com is run by a Florida-based company, but its data is supplied by the major credit bureaus, which struggled mightily to meet consumer demand for free credit reports in the immediate aftermath of the Equifax breach. Personally, I was unable to order a credit report for either me or my wife even two weeks after the Equifax breach went public: the site just kept returning errors and telling us to request the reports in writing via the U.S. Mail.

Based on thousands of comments left here in the days following the Equifax breach disclosure, I suspect many readers experienced the same but forgot to come back and try again. If this describes you, please take a moment this week to order your report(s) (and perhaps your spouse’s) and see if anything looks amiss. If you spot an error or something suspicious, contact the bureau that produced the report to correct the record immediately.

Of course, keeping on top of your credit report requires discipline, and if you’re not taking advantage of all three free reports each year you need to get a plan. My strategy is to put a reminder on our calendar to order a new report every four months or so, each time from a different credit bureau.

Whenever stories about credit reports come up, so do the questions from readers about the efficacy and value of credit monitoring services. KrebsOnSecurity has not been particularly kind to the credit monitoring industry; many stories here have highlighted the reality that they are ineffective at preventing identity theft or existing account fraud, and that the most you can hope for from them is that they alert you when an ID thief tries to get new lines of credit in your name.

But there is one area where I think credit monitoring services can be useful: Helping you sort things out with the credit bureaus in the event that there are discrepancies or fraudulent entries on your credit report. I’ve personally worked with three different credit monitoring services, two of which were quite helpful in resolving fraudulent accounts opened in our names.

At $10-$15 a month, are credit monitoring services worth the cost? Probably not on an annual basis, but perhaps during periods when you actively need help. However, if you’re not already signed up for one of these monitoring services, don’t be too quick to whip out that credit card: There’s a good chance you have at least a year’s worth available to you at no cost.

If you’re willing to spend the time, check out a few of the state Web sites which publish lists of companies that have had a recent data breach. In most cases, those publications come with a sample consumer alert letter providing information about how to sign up for free credit monitoring. California publishes probably the most comprehensive such lists at this link. Washington state published their list here; and here’s Maryland’s list. There are more.

It’s important for everyone to remember that as bad as the Equifax breach was (and it was a dumpster fire all around), most of the consumer data exposed in the breach has been for sale in the cybercrime underground for many years, covering a majority of Americans. If anything, the Equifax breach may have simply refreshed some of those criminal data stores.

That’s why I’ve persisted over the years in urging my fellow Americans to consider freezing their credit files. A security freeze essentially blocks any potential creditors from being able to view or “pull” your credit file, unless you affirmatively unfreeze or thaw your file beforehand.

With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you (i.e., view your credit file).

Bear in mind that if you haven’t yet frozen your credit file and you’re interested in signing up for credit monitoring services, you’ll need to sign up first before freezing your file. That’s because credit monitoring services typically need to access your credit file to enroll you, and if you freeze it they can’t do that.

The previous two tips came from a primer I wrote a few days after the Equifax breach, which is an in-depth Q&A about some of the more confusing aspects of policing your credit, including freezes, credit monitoring, fraud alerts, credit locks and second-tier credit bureaus.

Planet Debian: Elena Gjevukaj: CoderGals Hackathon

The CoderGals Hackathon was organized for the first time in my country. This event took place in the beautiful city of Prizren. The hackathon, which ran for 24 to 48 hours, was the idea of two girls majoring in Computer Science, Qendresa and Albiona Hoti.

Thanks to them, we had the chance to work on exciting projects as well as be mentored by key tech people including: Mergim Cahani, Daniel Pocock, Taulant Mehmeti, Mergim Krasniqi, Kolos Pukaj, Bujar Dervishaj, Arta Shehu Zaimi and Edon Bajrami.

We brainstormed for about 3-4 hours to decide on the project. We discussed many ideas that ranged from the Doppler effect to GUI interfaces for phone calls. Finally we ended up making a project for linking the PC with your phone, so you don't have to use both when you need to add a contact, make a call or even send text messages. We called it the Phone Client project.

You can check our work online:

Phone Client

It was a challenge for us because we worked for the first time on Debian OS.

Projects that other girls worked on:

Planet Debian: Vasudev Kamath: Biboumi - A XMPP - IRC Gateway

IRC is a communication mode (technically a communication protocol) used by many Free Software projects for communication and collaboration. It is serving these projects well even 30 years after its inception. Though I'm pretty much okay with IRC, I had the problem of not being able to use IRC from my mobile phone. The main problem is the inconsistent network connection, as IRC needs to be connected all the time. This is where I came across Biboumi.

Biboumi by itself does not have anything to do with mobile phones; it's just a gateway which allows you to connect to an IRC channel as if it were an XMPP MUC room, from any XMPP client. The benefit is that it lets you enjoy some XMPP features in your IRC channel (not all, but those which can be mapped).

I run Biboumi with my ejabberd instance, and thereby I can now connect to some of the Debian IRC channels directly from my phone using the Conversations XMPP client for Android.

Biboumi is packaged for Debian; though I'm a co-maintainer of the package, most of the hard work of keeping the package in shape is done by Jonas Smedegaard. It is also available in stretch-backports (though slightly outdated, as it's not packaged by us for backports). Once you install the package, copy the example configuration file from /usr/share/doc/biboumi/examples/example.conf to /etc/biboumi/biboumi.cfg and modify the values as needed.
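A minimal configuration along those lines looks roughly like this; the host name is a placeholder for whatever you want biboumi to answer as, and the password is redacted:

hostname=irc.localhost
password=xxx
xmpp_server_ip=127.0.0.1
port=8888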


An explanation of all the keys and values in the configuration file is available in the man page (man biboumi).

Biboumi is configured as an external component of the XMPP server. In my case I'm using ejabberd to host my XMPP service. Below is the configuration needed to allow biboumi to connect to ejabberd.

listen:
  -
    port: 8888
    ip: "127.0.0.1"          # listen locally only
    module: ejabberd_service
    access: all
    hosts:
      "irc.localhost":       # must match the hostname key in biboumi.cfg
        password: "xxx"

The password field in the biboumi configuration should match the password value in your XMPP server configuration.

After doing the above configuration, reload ejabberd (or your XMPP server) and start biboumi. The biboumi package provides a systemd service file, so you might need to enable it first. That's it: you now have an XMPP-to-IRC gateway ready.

You might notice that I'm using a local host name for the hostname key, as well as for the ip field in the ejabberd configuration. This is because TLS support was added to the biboumi Debian package only after the 7.2 release, as botan 2.x was not available in Debian until that point. Hence, using a proper domain name and making biboumi listen publicly is not safe, at least prior to Debian package version 7.2-2. Making the biboumi service public also means you will need to handle spam bots trying to connect through your service to IRC, which might get your VPS banned from IRC.

Connection Semantics

Once biboumi is configured and running, you can use the XMPP client of your choice (Gajim, Conversations, etc.) to connect to IRC. To connect to OFTC from your XMPP client, you join a group chat whose address names the IRC server, along the lines sketched below.
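Roughly, following biboumi's addressing scheme (irc.localhost here stands for whatever you set as the hostname value; check the biboumi documentation for the exact details of your version):

irc.oftc.net@irc.localhost              the IRC server itself
#debian%irc.oftc.net@irc.localhost      the #debian channel on that server
nickserv%irc.oftc.net@irc.localhost     a private conversation with NickServ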

Replace the part after the @ with whatever you have configured in the hostname field of the biboumi configuration. To join a specific channel on an IRC server, join the group conversation using the channel form shown above: the channel name, then a %, then the IRC server name.

If your nickname is registered and you want to identify yourself to the IRC server, you can do that by joining a group conversation with NickServ, using the NickServ form shown above.

Once connected you can send NickServ commands directly in this virtual channel, like identify password nick. It is also possible to configure XMPP clients like Gajim to send Ad-Hoc commands on connection to a particular IRC server to identify yourself with the IRC server, but this part I did not get working in Gajim.

If you are running your own XMPP server then biboumi gives you the best way to connect to IRC from your mobile phone. And with applications like Conversations, running an XMPP client won't be hard on your phone battery.



Planet Debian: Jeremy Bicha: webkitgtk in Debian Stretch: Report Card

webkitgtk is the GTK+ port of WebKit. webkitgtk provides web functionality for many things including GNOME Online Accounts’ login panels; Evolution’s HTML email editor and viewer; and the engine for the Epiphany web browser (also known as GNOME Web).

Last year, I announced here that Debian 9 “Stretch” included the latest version of webkitgtk (Debian’s package is named webkit2gtk). At the time, I hoped that Debian 9 would get periodic security and bugfix updates. Nine months later, let’s see how we’ve been doing.

Release History

Debian 9.0, released June 17, 2017, included webkit2gtk 2.16.3 (up to date).

Debian 9.1 was released July 22, 2017 with no webkit2gtk update (2.16.5 was the current release at the time).

Debian 9.2, released October 8, 2017, included 2.16.6 (There was a 2.18.0 release available then but for the first stable update, we kept it simple by not taking the brand new series.)

Debian 9.3 was released December 9, 2017 with no webkit2gtk update (2.18.3 was the current release at the time).

Debian 9.4 released March 10, 2018 (today!), includes 2.18.6 (up to date).

Release Schedule

webkitgtk development follows the GNOME release schedule and produces new major updates every March and September. Only the current stable series is supported (although sometimes there can be a short overlap; 2.14.6 was released at the same time as 2.16.1). Distros need to adopt the new series every six months.

Like GNOME, webkitgtk uses even numbers for stable releases (2.16 is a stable series, 2.16.3 is a point release in that series, but 2.17.3 is a development release leading up to 2.18, the next stable series).

There are webkitgtk bugfix releases, approximately monthly. Debian stable point releases happen approximately every two or three months (the first point release was quicker).

In a few days, webkitgtk 2.20 will be released. Debian 9.5 will need to include 2.20.1 (or 2.20.2) to keep users on a supported release.

Report Card

Of the five Debian 9 releases, we have been up to date in two or three of them (depending on how you count the 9.2 release).

Using a letter grade scale, I think I’d give Debian a B or B- so far. But this is significantly better than Debian 8 which offered no webkitgtk updates at all except through backports. In my grading, Debian could get a A- if we consistently updated webkitgtk in these point releases.

To get a full A, I think Debian would need to push the new webkitgtk updates (after a brief delay for regression testing) directly as security updates without waiting for point releases. Although that proposal has been rejected for Debian 9, I think it is reasonable for Debian 10 to use this model.

If you are a Debian Developer or Maintainer and would like to help with webkitgtk updates, please get in touch with Berto or me. I, um, actually don’t even run Debian (except briefly in virtual machines for testing), so I’d really like to turn over this responsibility to someone else in Debian.


I find the Repology webkitgtk tracker to be fascinating. For one thing, I find it humorous how the same package can have so many different names in different distros.

Planet DebianAndrew Shadura: Say no to Slack, say yes to Matrix

Of all proprietary chatting systems, Slack has always seemed one of the worst to me. Not only is it a closed proprietary system with no sane clients, open source or not, but it is not just one walled garden, as Facebook or WhatsApp are, but a constellation of walled gardens, isolated from each other. To be able to participate in multiple Slack communities, the user has to create multiple accounts and keep multiple chat windows open all the time. Federation? Self-hosting? Owning your data? None of those are a thing in Slack. Until recently, it was possible to at least keep the logs of all conversations locally by connecting to the chat using IRC or XMPP if the gateway was enabled.

Now, with Slack shutting down gateways, not only can you not keep the logs on your computer, you also cannot use a client of your choice to connect to Slack. They also began changing the bots API, which was likely the reason the Matrix-to-Slack gateway didn’t work properly at times. The issue has since resolved itself, but Slack doesn’t give any guarantees the gateway will continue working, and obviously they aren’t really interested in keeping it working.

So, following Gunnar Wolf’s advice (consider also reading this article by Megan Squire), I recommend you stop using Slack. If you prefer an isolated chat system with features Slack provides, and you can self-host, consider MatterMost or Rocket.Chat. Both seem to provide more or less the same features as Slack, but don’t lock you in, and you can choose to either use their paid cloud offering, or run it on your own server. We’ve been using MatterMost at Collabora since July last year, and while it’s not perfect, it’s not a bad piece of software.

If you would prefer a system you can federate, you may be interested to have a look at Matrix. Matrix is an open decentralised protocol and ecosystem, which architecturally looks similar to XMPP, but uses different technologies and offers a richer and more modern baseline, including VoIP, end-to-end encryption, decentralised history and content storage, easy bot integration and more. The web client for Matrix, Riot, is comparable to Slack, but unlike Slack, there are more clients you can use, including Weechat, libpurple, a bunch of Qt-based clients and, importantly, Riot for Android and iOS.

You don’t have to self-host a Matrix homeserver, since the project runs a public one you can use, but it’s quite easy to run one if you decide to, and you don’t even have to migrate your existing chats — you just join them from accounts on your own homeserver, and that’s it!

To help you with the decision to move from Slack to Matrix, you should know that since Matrix has a Slack gateway, you can gradually migrate your colleagues to the new infrastructure, by joining the Slack and Matrix chats together, and dropping the gateway only when everyone moves from Slack.

Repeating Gunnar, say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

Planet DebianGunnar Wolf: On the demise of Slack's IRC / XMPP gateways

I have grudgingly joined three Slack workspaces, due to being part of projects that use it as a communications center for their participants. Why grudgingly? Because there is very little that it adds to well-established communications standards that we have had for long years, decades even.

On this topic, I must refer you to the talk and article presented by Megan Squire, one of the clear highlights of my participation last year at the 13th International Conference on Open Source Systems (OSS2017): «Considering the Use of Walled Gardens for FLOSS Project Communication». Please do have a good read of this article.

Thing is, after several years of playing open with probably the best integration gateway I have seen, Slack is joining the Embrace, Extend and Extinguish-minded companies. Of course, I strongly doubt they will manage to extinguish XMPP or IRC, but they want to strengthen the walls around their walled garden...

So, once they have established their presence among companies and developer groups alike, Slack is shutting down their gateways to XMPP and IRC, arguing it's impossible to achieve feature-parity via the gateway.

Of course, I guess all of us recognize and understand there has long not been feature parity. But that's a feature, not a bug! I expressly dislike the abuse of emojis and images inside what's supposed to be a work-enabling medium. Of course, connecting to Slack via IRC, I just don't see the content not meant for me.

The real motivation is they want to control the full user experience.

Well, they have lost me as a user. The day my IRC client fails to connect to Slack, I will delete my user account. They already had a record of all of my interactions using their system. Maybe I won't be able to move any of the groups I am part of away from Slack – but many of us can help create a flood.

Say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

Planet Linux AustraliaDonna Benjamin: I said, let me tell you now

Montage of Library Bookshelves

Ever since I heard this month’s #AusGlamBlog theme was “Happiness” I’ve had that Happy song stuck in my head.

“Clap along if you know what happiness is to you”

I’m new to the library world as a professional, but not new to libraries. A sequence of fuzzy memories swirl in my mind when I think of libraries.

First, was my local public library children’s cave filled with books that glittered with colour like jewels.

Next, I recall the mesmerising tone and timbre of the librarian’s voice at primary school. Each week she transported us into a different story as we sat, cross legged in front of her, in some form of rapture.

Coming into closer focus I recall opening drawers in the huge wooden catalogue in the library at high school. Breathing in the deeply lovely, dusty air wafting up whilst flipping through those tiny cards was a tactile delight. Some cards were handwritten, some typewritten, some plastered with laser printed stickers.

And finally, I remember relishing the peace and quiet afforded by booking one of 49 carrel study booths at La Trobe University.

I love libraries. Libraries make me happy.

The loss of libraries makes me sad. I think of Alexandria, and more recently in Timbuktu, and closer to home, I mourn the libraries lost to the dreaming by the ravages of destructive colonial force on this little continent so many of us now call home.

Preservation and digitisation, and open collections give me hope. There can only ever be one precious original of a thing, but facsimiles, and copies and 3D blueprints increasingly mean physical things can now be shared and studied too, without needing to handle, or risk damaging, the original.

Sending precious things from collection to collection is fraught with danger. The revelations of what Australian customs did to priceless plant specimens from France & New Zealand still give me goosebumps of horror.

Digital. Copies. Catalogues, Circulation, Fines, Holds, Reserves, and Serial patterns. I’m learning new things about the complexities under the surface as I start to work seriously with the Koha Community Integrated Library System. I first learned about the Koha ILS more than a decade ago, but I'm only now getting a chance to work with it. It brings my secret love of libraries and my publicly proclaimed love of open source together in a way I still can’t believe is possible.

So yeah.

OH HAI! I’m Donna, and I’m here to help.

“Clap along if you feel like that's what you wanna do”


CryptogramFriday Squid Blogging: Interesting Interview

Here's an hour-long audio interview with squid scientist Sarah McAnulty.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianAdnan Hodzic: Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

TEDTED gets a fresh new look on TV apps

TED fans with an Android TV or Amazon FireTV will see a newly reimagined app — one that offers far more than just a sleek new design — beginning today.

We’re giving you more relevant talk suggestions, provided daily on the homepage. With our new layout, the app’s playlists and talks are easier than ever to navigate.

TED fans can now use the TED TV app in 21 languages, and even take advantage of Google Assistant on compatible TVs for controls like play, pause and fast forward. Watching your favorite talks has never been easier.

The move is all part of TED’s ongoing effort to fulfill its mission of making the ideas that matter more accessible — regardless of where you are and how you like to tune in. With these changes to curation, design and internationalization, we want to make sure each fan has a more personalized and seamless experience while engaging with TED Talks.

To download the new TED Android TV app, visit the Google Play store. Apps are also available on iOS, Android, Roku, AppleTV and FireTV.

Planet DebianSven Hoexter: half-assed Oracle JRE/JDK 10 support for java-package

I spent an hour adding very basic support for the upcoming Java 10 to my fork of java-package. It still has some rough edges, and the list of binary executables managed via the alternatives system requires some major cleanup. I think once Java 8 is EOL in September, that's a good point to consolidate and strip everything except for Java 11 support. If someone requires an older release he can still go back to an earlier version, but by then we won't see any new releases of Java 8, 9 or 10, not to speak of even older stuff.

[sven@digital lib (master)]$ java -version
java version "10" 2018-03-20
Java(TM) SE Runtime Environment 18.3 (build 10+46)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10+46, mixed mode)
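
For anyone who has not used java-package before, the rough workflow is to feed the Oracle tarball to its make-jpkg tool and install the resulting .deb. The file and package names below are only illustrative and will differ depending on the exact release you download:

$ make-jpkg jdk-10_linux-x64_bin.tar.gz
$ sudo dpkg -i oracle-java10-jdk_10_amd64.deb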

Planet DebianOlivier Berger: Adding a reminder notification in XFCE systray that I should launch a backup script

I’ve started using borg and borgmatic for backups of my machines. I won’t be using a fully automated backup via a crontab for a start. Instead, I’ve added a recurrent reminder system that will appear on my XFCE desktop to tell me it may be time to do backups.

I’m using yad (a zenity on steroids) to add notifications to the desktop via anacron.

The notification icon, when clicked, will start a shell script that performs the backups, starting borgmatic.

Here are some bits of my setup:

crontab -l excerpt:

@hourly /usr/sbin/anacron -s -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool
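# -s: serialize jobs, -t: use this anacrontab, -S: use this spool directory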

~/.anacron/etc/anacrontab excerpt:

7 15      borgmatic-home  /home/olivier/bin/
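# fields: period (days), delay (minutes), job identifier, command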

The idea of this anacrontab is to remind me weekly that I should do a backup, 15 minutes after I’ve booted the machine. Another reminding mechanism may be more handy… time will tell.

Then, the script:

notify-send 'Borg backups at home!' "It's time to do a backup." --icon=document-save

# borrowed from

# create a FIFO file, used to manage the I/O redirection from shell
PIPE=$(mktemp -u --tmpdir ${0##*/}.XXXXXXXX)
mkfifo $PIPE

# attach a file descriptor to the file
exec 3<> $PIPE

# add handler to manage process shutdown
function on_exit() {
 echo "quit" >&3
 rm -f $PIPE
}
trap on_exit EXIT

# add handler for tray icon left click
function on_click() {
 # echo "pid: $YAD_PID"
 echo "icon:document-save" >/proc/$YAD_PID/fd/3
 echo "visible:blink" >/proc/$YAD_PID/fd/3
 xterm -e bash -c "/home/olivier/bin/ --verbosity 1 -c /home/olivier/borgmatic/home-config.yaml; read -p 'Press any key ...'"
 echo "quit" >/proc/$YAD_PID/fd/3
 # kill -INT $YAD_PID
}
export -f on_click

# create the notification icon
yad --notification \
 --listen \
 --image="appointment-soon" \
 --text="Click icon to start borgmatic backup at home" \
 --command="bash -c on_click $YAD_PID" <&3

The script will start yad so that it displays an icon in the systray. When the icon is clicked, it will start borgmatic, after having changed the icon. Borgmatic will be started inside an xterm so as to get passphrase input, and display messages. Once borgmatic is done backing up, yad will be terminated.

There may be a more elegant way to pass commands to yad listening on file descriptor 3 (the pipe), but I couldn’t figure it out, hence the /proc hack. This works on Linux… but I’m not sure about other Unices.

Hope this helps.

CryptogramOURSA Conference

Responding to the lack of diversity at the RSA Conference, a group of security experts have announced a competing one-day conference: OUR Security Advocates, or OURSA. It's in San Francisco, and it's during RSA, so you can attend both.

Worse Than FailureError'd: ICANN't Even...

Jeff W. writes, "You know, I don't think this one will pass."


"Wow! This Dell laptop is pretty sweet!...but I wonder what that other 999999913 GB of data I have can contain..." writes Nicolas A.


"XML is big news at our university!" Gordon S. wrote.


Mark B. wrote, "On Saturday afternoons this British institution lets its hair down and fully embraces the metric system."


"Apparently, my computer and I judge success by very different standards," Michael C. writes.


"I agree you can't disconnect something that doesn't exist, more so when it's named two random Unicode characters," wrote Jurjen.



Planet DebianChristoph Berg: Cool Unix Features: paste

paste is one of those tools nobody uses [1]. It puts two files side by side, line by line.
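
A minimal illustration (a toy example of my own, not from the original use case); paste joins the corresponding lines of its inputs with a tab:

$ paste <(seq 1 3) <(printf 'a\nb\nc\n')
1	a
2	b
3	c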

One application for this came up today, where some tool was called on several files at once and would spit out one line per file, but unfortunately without including the filename.

$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)

[1] See "J" in The ABCs of Unix

[PS: I meant to blog this in 2011, but apparently never committed the file...]

Planet DebianChristoph Berg: Stepping down as DAM

After quite some time (years actually) of inactivity as Debian Account Manager, I finally decided to give back that Debian hat. I'm stepping down as DAM. I will still be around for the occasional comment from the peanut gallery, or to provide input if anyone actually cares to ask me about the old times.

Thanks for the fish!

Planet DebianIustin Pop: Corydalis 0.3.0 release

Short notice: just released 0.3.0, with a large number of new features and improvements - see the changelog for details.

Without aiming for it, this release follows almost exactly a month after v0.2, so maybe a monthly release cycle while I still have lots of things to add (and some time to actually do it) would be an interesting goal.

One potentially interesting thing: since v0.2, I've added a demo site at using a few photos from my own collection, so if you're curious what this actually is, check that out.

Planet DebianSteve Kemp: A change of direction ..

In my previous post I talked about how our child-care works here in wintery Finland, and suggested there might be a change in the near future.

So here is the predictable update; I've resigned from my job and I'm going to be taking over childcare/daycare. Ideally this will last indefinitely, but it is definitely going to continue until November. (Which is the earliest any child could be moved into public day-care if there are problems.)

I've loved my job, twice, but even though it makes me happy (in a way that several other positions didn't) there is no comparison. Child-care makes me happier-still. Sure there are days when your child just wants to scream, refuse to eat, and nothing works. But on average everything is awesome.

It's a hard decision, a "brave" decision too apparently (which I read negatively!), but also an easy one to make.

It'll be hard. I'll have no free time from 7AM-5PM, except during nap-time (11AM-1PM, give or take). But it will be worth it.

And who knows, maybe I'll even get to rant at people who ask "Where's his mother?" I live for those moments. Truly.

Don MartiPeople's personal data: take it or ask for it?

We know that advertising on the web has reached a low point of fraud, security risks, and lack of brand safety. And it's not making much money for publishers anyway. So a lot of people are talking about how to fix it, by building a new user data sharing system, in which individuals are in control of which data they choose to reveal to which companies.

Unlike today's surveillance marketing, people wouldn't be targeted for advertising based on data that someone figures out about them and that they might not choose to share.

A big win here will be that the new system would tend to lower the ROI on creepy marketing investments that have harmful side effects such as identity theft and facilitation of state-sponsored misinformation, and increase the ROI for funding ad-supported sites that people trust and choose to share personal information with.

A user-permissioned data sharing system is an excellent goal with the potential to help clean up a lot of the Internet's problems. But I have to be realistic about it. Adam Smith once wrote,

The pride of man makes him love to domineer, and nothing mortifies him so much as to be obliged to condescend to persuade his inferiors.

So the big question is still:

Why would buyers of user data choose to deal with users (or publishers who hold data with the user's permission) when they can just take the data from users, using existing surveillance marketing firms?

Some possible answers.

  • GDPR? Unfortunately, regulatory capture is still a thing even in Europe. Sometimes I wish that American privacy nerds would quit pretending that Europe is ruled by Galadriel or something.

  • brand safety problems? Maybe a little around the edges when a particularly bad video gets super viral. But platforms and adtech can easily hide brand-unsafe "dark" material from marketers, who can even spend time on Youtube and Facebook without ever developing a clue about how brand-unsafe they are for regular people. Even as news-gatherers get better at finding the worst stuff, platforms will always make hiding brand-unsafe content a high priority.

  • fraud concerns? Now we're getting somewhere. Fraud hackers are good at making realistic user data. Even "people-based" platforms mysteriously have more users in desirable geography/demography combinations than are actually there according to the census data. So, where can user-permissioned data be a fraud solution?

  • signaling? The brand equity math must be out there somewhere, but it's nowhere near as widely known as the direct response math that backs up the creepy stuff. Maybe some researcher at one of the big brand advertisers developed the math internally in the 1980s but it got shredded when the person retired. Big possible future win for the right behavioral economist at the right agency, but not in the short term.

  • improvements in client-side privacy? Another good one. Email spam filtering went from obscure nerdery to mainstream checklist feature quickly—because email services competed on it. Right now the web browser is a generic product, and browser makers need to differentiate. One promising angle is for the browser to help build a feeling of safety in the user by reducing user-perceived creepiness, and the browser's need to compete on this is aligned with the interests of trustworthy sites and with user-permissioned data sharing.

(And what's all this "we" stuff, anyway? Post-creepy advertising is an opportunity for individual publishers and brands to get out ahead, not a collective action problem.)

Planet Linux AustraliaOpenSTEM: Amelia Earhart in the news

Recently Amelia Earhart has been in the news once more, with the publication of a paper by an American forensic anthropologist, Richard Jantz. Jantz has done an analysis of the measurements made of bones found in 1940 on the island of Nikumaroro in Kiribati. Unfortunately, the bones no longer survive, but they were analysed in […]

Planet DebianJoey Hess: prove you are not an Evil corporate person

In which Google be Google and I drop a hot AGPL tip.


Google Is Quietly Providing AI Technology for Drone Strike Targeting Project
Google Is Helping the Pentagon Build AI for Drones

to automate the identification and classification of images taken by drones — cars, buildings, people — providing analysts with increased ability to make informed decisions on the battlefield

These news reports don't mention reCaptcha explicitly, but it's been asking about a lot of cars lately. Whatever the source of the data that Google is using for this, it's disgusting that they're mining it from us without our knowledge or consent.

Google claims that "The technology flags images for human review, and is for non-offensive uses only". So, if a drone operator has a neural network that we all were tricked & coerced into training to identify cars and people helping to highlight them on their screen and center the crosshairs just right, and the neural network is not pressing the kill switch, is it being used for "non-offensive purposes only"?

Google is known to be deathly allergic to the AGPL license. Not only on servers; they don't even allow employees to use AGPL software on workstations. If you write free software, and you'd prefer that Google not use it, a good way to ensure that is to license it under the AGPL.

I normally try to respect the privacy of users of my software, and of personal conversations. But at this point, I feel that Google's behavior has mostly obviated those moral obligations. So...

Now seems like a good time to mention that I have been contacted by multiple people at Google about several of my AGPL licensed projects (git-annex and either keysafe or debug-me I can't remember which) trying to get me to switch them to the GPL, and had long conversations with them about it.

Google has some legal advice that the AGPL source provision triggers much more often than it's commonly understood to. I encouraged them to make that legal reasoning public, so the community could address/debunk it, but I don't think they have. I won't go into details about it here, other than it seemed pretty bonkers.

Mixing in some AGPL code with an otherwise GPL codebase also seems sufficient to trigger Google's allergy. In the case of git-annex, it's possible to build all releases (until next month's) with a flag that prevents linking with any AGPL code, which should mean the resulting binary is GPL licensed, but Google still didn't feel able to use it, since the git-annex source tree includes AGPL files.

I don't know if Google's allergy to the AGPL extends to software used for drone murder applications, but in any case I look forward to preventing Google from using more of my software in the future.

(Illustration by scatter//gather)

Planet DebianRuss Allbery: My friend Stirge

Eric Sturgeon, one of my oldest and dearest friends, died this week of complications from what I'm fairly certain was non-alcoholic fatty liver disease.

It was not entirely unexpected. He'd been getting progressively worse over the past six months. But at the same time there's no way to expect this sort of hole in my life.

I've known Stirge for twenty-five years, more than half of my life. We were both in college when we first met on Usenet in 1993 in the rec.arts.comics.* hierarchy, where Stirge was the one with the insane pull list and the canonical knowledge of the Marvel Universe. We have been friends ever since: part of on-line fiction groups, IRC channels, and free-form role-playing groups. He's been my friend through school and graduation, through every step of my career, through four generations of console systems, through two moves for me and maybe a dozen for him, through a difficult job change... through my entire adult life.

For more than fifteen years, he's been spending a day or a week or two, several times a year, sitting on my couch and playing video games. Usually he played and I navigated, researching FAQs and walkthroughs. Twitch was immediately obvious to me the moment I discovered it existed; it's the experience I'd had with Stirge for years before that. I don't know what video games are without his thoughts on them.

Stirge rarely was able to put his ideas into stories he could share with other people. He admired other people's art deeply, but wasn't an artist himself. But he loved fictional worlds, loved their depth and complexity and lore, and was deeply and passionately creative. He knew the stories he played and read and watched, and he knew the characters he played, particularly in World of Warcraft and Star Wars: The Old Republic. His characters had depth and emotions, histories, independent viewpoints, and stories that I got to hear. Stirge wrote stories the way that I do: in our heads, shared with a small number of people if anyone, not crafted for external consumption, not polished, not always coherent, but deeply important to our thoughts and our emotions and our lives. He's one of the very few people in the world I was able to share that with, who understood what that was like.

He was the friend who I could not see for six months, a year, and then pick up a conversation with as if we'd seen each other yesterday.

After my dad had a heart attack and emergency surgery to embed a pacemaker while we were on vacation in Oregon, I was worrying about how we would manage to get him back home. Stirge immediately volunteered to drive down from Seattle to drive us. He had a crappy job with no vacation, and if he'd done that he almost certainly would have gotten fired, and I knew with absolute certainty that he would have done it anyway.

I didn't take him up on the offer (probably to his vast relief). When I told him years later how much it meant to me, he didn't feel like it should have counted, since he didn't do anything. But he did. In one of the worst moments of my life, he said exactly the right thing to make me feel like I wasn't alone, that I wasn't bearing the burden of figuring everything out by myself, that I could call on help if I needed it. To this day I start crying every time I think about it. It's one of the best things that anyone has ever done for me.

Stirge confided in me, the last time he visited me, that he didn't think he was the sort of person anyone thought about when he wasn't around. That people might enjoy him well enough when he was there, but that he'd quickly fade from memory, with perhaps a vague wonder about what happened to that guy. But it wasn't true, not for me, not ever. I tried to convince him of that while he was alive, and I'm so very glad that I did.

The last time I talked to him, he explained the Marvel Cinematic Universe to me in detail, and gave me a rundown of the relative strength of every movie, the ones to watch and the ones that weren't as good, and then did the same thing for the DC movies. He got to see Star Wars before he died. He would have loved Black Panther.

There were so many games we never finished, and so many games we never started.

I will miss you, my friend. More than I think you would ever have believed.

Planet DebianDaniel Pocock: Bug Squashing and Diversity

Over the weekend, I was fortunate enough to visit Tirana again for their first Debian Bug Squashing Party.

Every time I go there, female developers (this is a hotspot of diversity) ask me if they can host the next Mini DebConf for Women. There have already been two of these very successful events, in Barcelona and Bucharest. It is not my decision to make though: anybody can host a MiniDebConf of any kind, anywhere, at any time. I've encouraged the women in Tirana to reach out to some of the previous speakers personally to scope potential dates and contact the DPL directly about funding for necessary expenses like travel.

The confession

If you have read Elena's blog post today, you might have seen my name and picture and assumed that I did a lot of the work. As it is International Women's Day, it seems like an opportune time to admit that isn't true and that as in many of the events in the Balkans, the bulk of the work was done by women. In fact, I only bought my ticket to go there at the last minute.

When I arrived, Izabela Bakollari and Anisa Kuci were already at the venue getting everything ready. They looked busy, so I asked them if they would like a bonus responsibility: presenting some slides about bug squashing that they had never seen before, while translating them into Albanian in real-time. They delivered the presentation superbly; it was more entertaining than any TED talk I've ever seen.

The bugs that won't let you sleep

The event was boosted by a large contingent of Kosovans, including 15 more women. They had all pried themselves out of bed at 03:00 am to take the first bus to Tirana. It's rare to see such enthusiasm for bugs amongst developers anywhere but it was no surprise to me: most of them had been at the hackathon for girls in Prizren last year, where many of them encountered free software development processes for the first time, working long hours throughout the weekend in the summer heat.

and a celebrity guest

A major highlight of the event was the presence of Jona Azizaj, a Fedora contributor who is very proactive in supporting all the communities who engage with people in the Balkans, including all the recent Debian events there. Jona is one of the finalists for Red Hat's Women in Open Source Award. Jona was a virtual speaker at DebConf17 last year, helping me demonstrate a call from the Fedora community WebRTC service to the Debian equivalent. At Mini DebConf Prishtina, where fifty percent of talks were delivered by women, I invited Jona on stage and challenged her to contemplate being a speaker at Red Hat Summit. Giving a talk there seemed like little more than a pipe dream just a few months ago in Prishtina: as a finalist for this prestigious award, her odds have shortened dramatically. It is so inspiring that a collaboration between free software communities helps build such fantastic leaders.

With results like this in the Balkans, you may think the diversity problem has been solved there. In reality, while the ratio of female participants may be more natural, they still face problems that are familiar to women anywhere.

One of the greatest highlights of my own visits to the region has been listening to some of the challenges these women have faced, things that I never encountered or even imagined as the stereotypical privileged white male. Yet despite enormous social, cultural and economic differences, while I was sitting in the heat of the summer in Prizren last year, it was not unlike my own time as a student in Australia and the enthusiasm and motivation of these young women discovering new technologies was just as familiar to me as the climate.

Hopefully more people will be able to listen to what they have to say if Jona wins the Red Hat award or if a Mini DebConf for Women goes ahead in the Balkans (subscribe before posting).


Planet DebianMarkus Koschany: My Free Software Activities in February 2018

Welcome to Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Last month I wrote about „The state of Debian Games“ and I was pleasantly surprised that someone apparently read my post and offered some help with saving endangered games. Well, I don’t know how it will turn out but at least it is encouraging to see that there are people who still care about some old fashioned games. As a matter of fact the GNOME maintainers would like to remove some obsolete GNOME 2 libraries which makes a few of our games RC-buggy. Ideally they should be ported to GNOME 3 but if they could be replaced with a similar game written in a different and awesome programming language (such as Java or Clojure?), for a different desktop environment, that would do as well. 😉 If you’re bored to death or just want a challenge contact us at
  • I packaged a new release of mupen64plus-qt to fix a FTBFS bug (#887576)
  • I uploaded a new version of freeciv to stretch-backports.
  • Pygame-sdl2 and renpy got some love too. (new upstream releases)
  • I sponsored a new revision of redeclipse for Martin-Erik Werner to fix #887744.
  • Yangfl introduced ddnet to Debian which is a popular modification/standalone game similar to teeworlds. I reviewed and eventually sponsored a new upstream release for him. If you are into multiplayer games then ddnet is certainly something you should look forward to.
  • I gladly applied another patch by Peter Green to fix #889059 in warzone2100 and Aurelien Jarno’s fix for btanks (#890632).

Debian Java

  • The Eclipse problem: The Eclipse IDE is seriously threatened with removal from Debian. Once upon a time we even had a dedicated team that cared about the package, but nowadays there is nobody. We regularly get requests to update the IDE to the latest version, but there is no one who wants to do the necessary work. The situation is best described in #681726. This alone is worrying enough, but due to an interesting dependency chain (batik -> maven -> guice -> libspring-java -> aspectj -> eclipse-platform) Eclipse cannot be removed without breaking dozens of other Java packages. So, long story short, I started to work on it and packaged a standalone libequinox-osgi-java package, so that we can save at least all reverse-dependencies for this package. Next was tycho, which is required to build newer Eclipse versions. Annoyingly it requires said newer version of Eclipse to build… which means we must bootstrap it. I’m still in the process of upgrading tycho to version 1.0 and hope to make some progress in March.
  • I prepared security updates for jackson-databind, lucene-solr and tomcat-native.
  • New upstream releases: jboss-xnio, commons-parent, jboss-logging, jboss-module, mongo-java-driver and libspring-java (#890001).
  • Bug fixes and triaging: wagon2 (#881815, #889427), byte-buddy (#884207), commons-io, maven-archiver (#886875), jdeb (#889642), commons-math, jflex (#890345), commons-httpclient (#871142)
  • I introduced jboss-bridger which is a new build-dependency of jboss-modules.
  • I sponsored a freeplane update for Felix Natter.

Debian LTS

This was my twenty-fourth month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 05.02.2018 until 11.02.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in binutils, graphicsmagick, wayland, unzip, kde-runtime, libjboss-remoting-java, libvirt, exim4, libspring-java, puppet, audacity, leptonlib, librsvg, suricata, exiv2, polarssl and imagemagick.
  • I tested a security update for exim4 and uploaded a package for Abhijith.
  • DLA-1275-1. Issued a security update for uwsgi fixing 1 CVE.
  • DLA-1276-1. Issued a security update for tomcat-native fixing 1 CVE.
  • DLA-1280-1. Issued a security update for pound fixing 1 CVE.
  • DLA-1281-1. Issued a security update for advancecomp fixing 1 CVE.
  • DLA-1295-1. Issued a security update for drupal7 fixing 4 CVE.
  • DLA-1296-1. Issued a security update for xmltooling fixing 1 CVE.
  • DLA-1301-1. Issued a security update for tomcat7 fixing 2 CVE.


  • I NMUed vdk2 (#885760) to prevent the removal of langdrill.

Thanks for reading and see you next time.

Planet DebianSteinar H. Gunderson: Nageru 1.7.0 released

I've just released version 1.7.0 of Nageru, my free software video mixer. The poster child feature for this release is the direct integration of CEF, yielding high-performance HTML5 graphics directly into Nageru. This obsoletes the earlier CasparCG integration through playing a video from a socket (although video support is of course still very much present!), which was significantly slower and more flimsy. (Also, when CEF gets around to integrating with clients on the GPU level, you'll see even higher performance, and also stuff like WebGL, which I've turned off for the time being.)

Unfortunately, Debian doesn't carry CEF, and I haven't received any answers to my probes of whether it would be possible to do so—it would certainly involve some coordination with the Chromium maintainers. Thus, it is an optional dependency, and the packages that are coming into unstable are built without CEF support.

As always, the changelog is below, and the documentation has been updated to reflect new features and developments. Happy mixing!

Nageru 1.7.0, March 8th, 2018

  - Support for HTML5 graphics directly in Nageru, through CEF
    (Chromium Embedded Framework). This performs better and is more
    flexible than integrating with CasparCG over a socket. Note that
    CEF is an optional component; see the documentation for more information.

  - Add an HTTP endpoint for enumerating channels and one for getting
    only their colors. Intended for remote tally applications;
    see the documentation.

  - Add a video grid display that removes the audio controls and shows
    the video channels only, potentially in multiple rows if that makes
    for a larger viewing area.

  - Themes can now present simple menus in the Nageru UI. See the
    documentation for more information.

  - Various bugfixes.

Cory DoctorowClassroom materials for Little Brother from Mary Kraus

Mary Kraus — who created a key to page-numbers in the Little Brother audiobook for students with reading disabilities — continues to create great classroom materials for Little Brother: Who’s Who in “Little Brother” is a Quizlet that teaches about the famous people mentioned in the book, from Alan Turing to Rosa Luxembourg; while the Acronym Challenge asks students to unpack acronyms like DHS, NPR, IM, DNS, and ACLU.

TEDMeet the 2018 class of TED Fellows and Senior Fellows

The TED Fellows program is excited to announce the new group of TED2018 Fellows and Senior Fellows.

Representing a wide range of disciplines and countries — including, for the first time in the program, Syria, Thailand and Ukraine — this year’s TED Fellows are rising stars in their fields, each with a bold, original approach to addressing today’s most complex challenges and capturing the truth of our humanity. Members of the new Fellows class include a journalist fighting fake news in her native Ukraine; a Thai landscape architect designing public spaces to protect vulnerable communities from climate change; an American attorney using legal assistance and policy advocacy to bring justice to survivors of campus sexual violence; a regenerative tissue engineer harnessing the body’s immune system to more quickly heal wounds; a multidisciplinary artist probing the legacy of slavery in the US; and many more.

The TED Fellows program supports extraordinary, iconoclastic individuals at work on world-changing projects, providing them with access to the global TED platform and community, as well as new tools and resources to amplify their remarkable vision. The TED Fellows program now includes 453 Fellows who work across 96 countries, forming a powerful, far-reaching network of artists, scientists, doctors, activists, entrepreneurs, inventors, journalists and beyond, each dedicated to making our world better and more equitable. Read more about their visionary work on the TED Fellows blog.

Below, meet the group of Fellows and Senior Fellows who will join us at TED2018, April 10–14, in Vancouver, BC, Canada.

Antionette Carroll
Antionette Carroll (USA)
Social entrepreneur + designer
Designer and founder of Creative Reaction Lab, a nonprofit using design to foster racially equitable communities through education and training programs, community engagement consulting and open-source tools and resources.

Psychiatrist Essam Daod comforts a Syrian refugee as she arrives ashore at the Greek island of Lesvos. His organization Humanity Crew provides psychological aid to refugees and recently displaced populations. (Photo: Laurence Geai)

Essam Daod
Essam Daod (Palestine | Israel)
Mental health specialist
Psychiatrist and co-founder of Humanity Crew, an NGO providing psychological aid and first-response mental health interventions to refugees and displaced populations.

Laura L. Dunn
Laura L. Dunn (USA)
Victims’ rights attorney
Attorney and Founder of SurvJustice, a national nonprofit increasing the prospect of justice for survivors of campus sexual violence through legal assistance, policy advocacy and institutional training.

Rola Hallam
Rola Hallam (Syria | UK)
Humanitarian aid entrepreneur 
Medical doctor and founder of CanDo, a social enterprise and crowdfunding platform that enables local humanitarians to provide healthcare to their own war-devastated communities.

Olga Iurkova
Olga Yurkova (Ukraine)
Journalist + editor
Journalist and co-founder of, an independent Ukrainian organization that trains an international cohort of fact-checkers in an effort to curb propaganda and misinformation in the media.

Glaciologist M Jackson studies glaciers like this one — the glacier Svínafellsjökull in southeastern Iceland. The high-water mark visible on the mountainside indicates how thick the glacier once was, before climate change caused its rapid recession. (Photo: M Jackson)

M Jackson
M Jackson (USA)
Geographer + glaciologist
Glaciologist researching the cultural and social impacts of climate change on communities across all eight circumpolar nations, and an advocate for more inclusive practices in the field of glaciology.

Romain Lacombe
Romain Lacombe (France)
Environmental entrepreneur
Founder of Plume Labs, a company dedicated to raising awareness about global air pollution by creating a personal electronic pollution tracker that forecasts air quality levels in real time.

Saran Kaba Jones
Saran Kaba Jones (Liberia | USA)
Clean water advocate
Founder and CEO of FACE Africa, an NGO that strengthens clean water and sanitation infrastructure in Sub-Saharan Africa through innovative community support services.

Yasin Kakande
Yasin Kakande (Uganda)
Investigative journalist + author
Journalist working undercover in the Middle East to expose the human rights abuses of migrant workers there.

In one of her long-term projects, “The Three: Senior Love Triangle,” documentary photographer Isadora Kosofsky shadowed a three-way relationship between aged individuals in Los Angeles, CA – Jeanie (81), Will (84), and Adina (90). Here, Jeanie and Will kiss one day after a fight.

Isadora Kosofsky
Isadora Kosofsky (USA)
Photojournalist + filmmaker
Photojournalist exploring underrepresented communities in America with an immersive approach, documenting senior citizen communities, developmentally disabled populations, incarcerated youth, and beyond.

Adam Kucharski
Adam Kucharski (UK)
Infectious disease scientist
Infectious disease scientist creating new mathematical and computational approaches to understand how epidemics like Zika and Ebola spread, and how they can be controlled.

Lucy Marcil
Lucy Marcil (USA)
Pediatrician + social entrepreneur
Pediatrician and co-founder of StreetCred, a nonprofit addressing the health impact of financial stress by providing fiscal services to low-income families in the doctor’s waiting room.

Burçin Mutlu-Pakdil
Burçin Mutlu-Pakdil (Turkey | USA)
Astrophysicist studying the structure and dynamics of galaxies — including a rare double-ringed elliptical galaxy she discovered — to help us understand how they form and evolve.

Faith Osier
Faith Osier (Kenya | Germany)
Infectious disease doctor
Scientist studying how humans acquire immunity to malaria, translating her research into new, highly effective malaria vaccines.

In “Birth of a Nation” (2015), artist Paul Rucker recast Ku Klux Klan robes in vibrant, contemporary fabrics like spandex, Kente cloth, camouflage and white satin – a reminder that the horrors of slavery and the Jim Crow South still define the contours of American life today. (Photo: Ryan Stevenson)

Paul Rucker
Paul Rucker (USA)
Visual artist + cellist
Multidisciplinary artist exploring issues related to mass incarceration, racially motivated violence, police brutality and the continuing impact of slavery in the US.

Kaitlyn Sadtler
Kaitlyn Sadtler (USA)
Regenerative tissue engineer
Tissue engineer harnessing the body’s natural immune system to create new regenerative medicines that mend muscle and more quickly heal wounds.

DeAndrea Salvador (USA)
Environmental justice advocate
Sustainability expert and founder of RETI, a nonprofit that advocates for inclusive clean-energy policies that help low-income families access cutting-edge technology to reduce their energy costs.

Harbor seal patient Bogey gets a checkup at the Marine Mammal Center in California. Veterinarian Claire Simeone studies marine mammals like harbor seals to understand how the health of animals, humans and our oceans are interrelated. (Photo: Ingrid Overgard / The Marine Mammal Center)

Claire Simeone
Claire Simeone (USA)
Marine mammal veterinarian
Veterinarian and conservationist studying how the health of marine mammals, such as sea lions and dolphins, informs and influences both human and ocean health.

Kotchakorn Voraakhom
Kotchakorn Voraakhom (Thailand)
Urban landscape architect
Landscape architect and founder of Landprocess, a Bangkok-based design firm building public green spaces and green infrastructure to increase urban resilience and protect vulnerable communities from climate change.

Mikhail Zygar
Mikhail Zygar (Russia)
Journalist + historian
Journalist covering contemporary and historical Russia and founder of Project1917, a digital documentary project that narrates the 1917 Russian Revolution in an effort to contextualize modern-day Russian issues.

TED2018 Senior Fellows

Senior Fellows embody the spirit of the TED Fellows program. They attend four additional TED events, mentor new Fellows and continue to share their remarkable work with the TED community.

Prosanta Chakrabarty
Prosanta Chakrabarty (USA)
Evolutionary biologist and natural historian researching and discovering fish around the world in an effort to understand fundamental aspects of biological diversity.

Aziza Chaouni
Aziza Chaouni (Morocco)
Civil engineer and architect creating sustainable built environments in the developing world, particularly in the deserts of the Middle East.

Shohini Ghose
Shohini Ghose (Canada)
Quantum physicist + educator
Theoretical physicist developing quantum computers and novel protocols like teleportation, and an advocate for equity, diversity and inclusion in science.

A pair of shrimpfish collected in Tanzanian mangroves by ichthyologist Prosanta Chakrabarty and his colleagues this past year. They may represent an unknown population or even a new species of these unusual fishes, which swim head down among aquatic plants.

Zena el Khalil
Zena el Khalil (Lebanon)
Artist + cultural activist
Artist and cultural activist using visual art, site-specific installation, performance and ritual to explore and heal the war-torn history of Lebanon and other global sites of trauma.

Bektour Iskender
Bektour Iskender (Kyrgyzstan)
Independent news publisher
Co-founder of Kloop, an NGO and leading news publication in Kyrgyzstan, committed to freedom of speech and training young journalists to cover politics and investigate corruption.

Mitchell Jackson
Mitchell Jackson (USA)
Writer + filmmaker
Writer exploring race, masculinity, the criminal justice system, and family relationships through fiction, essays and documentary film.

Jessica Ladd
Jessica Ladd (USA)
Sexual health technologist
Founder and CEO of Callisto, a nonprofit organization developing technology to combat sexual assault and harassment on campus and beyond.

Jorge Mañes Rubio
Jorge Mañes Rubio (Spain)
Artist investigating overlooked places on our planet and beyond, creating artworks that reimagine and revive these sites through photography, site-specific installation and sculpture.

An asteroid impact is the only natural disaster we have the technology to prevent, but since prevention takes time, we must search for near-Earth asteroids now. Astronomer Carrie Nugent does just that, discovering and studying asteroids like this one. (Illustration: Tim Pyle and Robert Hurt / NASA/JPL-Caltech)

Carrie Nugent (USA)
Asteroid hunter
Astronomer using machine learning to discover and study near-Earth asteroids, our smallest and most numerous cosmic neighbors.

David Sengeh
David Sengeh (Sierra Leone + South Africa)
Biomechatronics engineer
Research scientist designing and deploying new healthcare technologies, including artificial intelligence, to cure and fight disease in Africa.

CryptogramExtracting Secrets from Machine Learning Systems

This is fascinating research about how the underlying training data for a machine-learning system can be inadvertently exposed. Basically, if a machine-learning system trains on a dataset that contains secret information, in some cases an attacker can query the system to extract that secret information. My guess is that there is a lot more research to be done here.

EDITED TO ADD (3/9): Some interesting links on the subject.

CryptogramNew DDoS Reflection-Attack Variant

This is worrisome:

DDoS vandals have long intensified their attacks by sending a small number of specially designed data packets to publicly available services. The services then unwittingly respond by sending a much larger number of unwanted packets to a target. The best known vectors for these DDoS amplification attacks are poorly secured domain name system resolution servers, which magnify volumes by as much as 50 fold, and network time protocol, which increases volumes by about 58 times.

On Tuesday, researchers reported attackers are abusing a previously obscure method that delivers attacks 51,000 times their original size, making it by far the biggest amplification method ever used in the wild. The vector this time is memcached, a database caching system for speeding up websites and networks. Over the past week, attackers have started abusing it to deliver DDoSes with volumes of 500 gigabits per second and bigger, DDoS mitigation service Arbor Networks reported in a blog post.

Cloudflare blog post. BoingBoing post.

EDITED TO ADD (3/9): Brian Krebs covered this.

Krebs on SecurityLook-Alike Domains and Visual Confusion

How good are you at telling the difference between domain names you know and trust and impostor or look-alike domains? The answer may depend on how familiar you are with the nuances of internationalized domain names (IDNs), as well as which browser or Web application you’re using.

For example, how does your browser interpret the following domain? I’ll give you a hint: Despite appearances, it is most certainly not the actual domain for software firm CA Technologies (formerly Computer Associates Intl Inc.), which owns the original domain name:

https://www.са.com/
Go ahead and click on the link above or cut-and-paste it into a browser address bar. If you’re using Google Chrome, Apple’s Safari, or some recent version of Microsoft’s Internet Explorer or Edge browsers, you should notice that the address converts to “”. This is called “punycode,” and it allows browsers to render domains with non-Latin alphabets like Cyrillic and Ukrainian.

Below is what it looks like in Edge on Windows 10; Google Chrome renders it much the same way. Notice what’s in the address bar (ignore the “fake site” and “Welcome to…” text, which was added as a courtesy by the person who registered this domain):

The domain https://www.са.com/ as rendered by Microsoft Edge on Windows 10. The rest of the text in the image (beginning with “Welcome to a site…”) was added by the person who registered this test domain, not the browser.

IE, Edge, Chrome and Safari all will convert https://www.са.com/ into its punycode output (, in part to warn visitors about any confusion over look-alike domains registered in other languages. But if you load that domain in Mozilla Firefox and look at the address bar, you’ll notice there’s no warning of possible danger ahead. It just looks like it’s loading the real

What the fake domain looks like when loaded in Mozilla Firefox. A browser certificate ordered from Comodo allows it to include the green lock (https://) in the address bar, adding legitimacy to the look-alike domain. The rest of the text in the image (beginning with “Welcome to a site…”) was added by the person who registered this test domain, not the browser. Click to enlarge.

The domain “” pictured in the first screenshot above is punycode for the Ukrainian letters for “s” (which is represented by the character “c” in Russian and Ukrainian), as well as an identical Ukrainian “a”.

It was registered by Alex Holden, founder of Milwaukee, Wis.-based Hold Security Inc. Holden’s been experimenting with how the different browsers handle punycodes in the browser and via email. Holden grew up in what was then the Soviet Union and speaks both Russian and Ukrainian, and he’s been playing with Cyrillic letters to spell English words in domain names.

Letters like A and O look exactly the same and the only difference is their Unicode value. There are more than 136,000 Unicode characters used to represent letters and symbols in 139 modern and historic scripts, so there’s a ton of room for look-alike or malicious/fake domains.

For example, “a” in Latin is the Unicode value “0061” and in Cyrillic is “0430.”  To a human, the graphical representation for both looks the same, but for a computer there is a huge difference. Internationalized domain names (IDNs) allow domain names to be registered in non-Latin letters (RFC 3492), provided the domain is all in the same language; trying to mix two different IDNs in the same name causes the domain registries to reject the registration attempt.
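
If you want to verify this yourself, one quick check (a small illustration of my own, assuming a Python 3 interpreter is available; its built-in idna codec implements the older IDNA2003 mapping, which is enough for this example) is to compare the code points and run the conversion from the shell. The са below is the Cyrillic look-alike pasted from the domain above:

$ python3 -c 'print(hex(ord("a")), hex(ord("а")))'
0x61 0x430
$ python3 -c 'print("www.са.com".encode("idna").decode())'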

So, in the Cyrillic alphabet (Russian/Ukrainian), we can spell АТТ, УАНОО, ХВОХ, and so on. As you can imagine, the potential opportunity for impersonation and abuse are great with IDNs. Here’s a snippet from a larger chart Holden put together showing some of the more common ways that IDNs can be made to look like established, recognizable domains:

Image: Hold Security.

Holden also was able to register a valid SSL encryption certificate for https://www.са.com from Comodo, which would only add legitimacy to the domain were it to be used in phishing attacks against CA customers by bad guys, for example.


To be clear, the potential threat highlighted by Holden’s experiment is not new. Security researchers have long warned about the use of look-alike domains that abuse special IDN/Unicode characters. Most of the major browser makers have responded in some way by making their browsers warn users about potential punycode look-alikes.

With the exception of Mozilla, which by most accounts is the third most-popular Web browser. And I wanted to know why. I’d read the Mozilla Wiki’s “IDN Display Algorithm FAQ,” so I had an idea of what Mozilla was driving at in their decision not to warn Firefox users about punycode domains: Nobody wanted it to look like Mozilla was somehow treating the non-Western world as second-class citizens.

I wondered why Mozilla doesn’t just have Firefox alert users about punycode domains unless the user has already specified that he or she wants a non-English language keyboard installed. So I asked that in some questions I sent to their media team. They sent the following short statement in reply:

“Visual confusion attacks are not new and are difficult to address while still ensuring that we render everyone’s domain name correctly. We have solved almost all IDN spoofing problems by implementing script mixing restrictions, and we also make use of Safe Browsing technology to protect against phishing attacks. While we continue to investigate better ways to protect our users, we ultimately believe domain name registries are in the best position to address this problem because they have all the necessary information to identify these potential spoofing attacks.”

If you’re a Firefox user and would like Firefox to always render IDNs as their punycode equivalent when displayed in the browser address bar, type “about:config” without the quotes into a Firefox address bar. Then in the “search:” box type “punycode,” and you should see one or two options there. The one you want is called “network.IDN_show_punycode.” By default, it is set to “false”; double-clicking that entry should change that setting to “true.”
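
If you’d rather preconfigure this than click through about:config, the same preference can be appended to the user.js file in your Firefox profile (a minimal sketch; the profile directory name below is a placeholder):

$ echo 'user_pref("network.IDN_show_punycode", true);' >> ~/.mozilla/firefox/<your-profile>/user.js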

Incidentally, anyone using the Tor Browser to anonymize their surfing online is exposed to IDN spoofing because Tor by default uses Mozilla as well. I could definitely see spoofed IDNs being used in targeting phishing attacks aimed at Tor users, many of whom have significant assets tied up in virtual currencies. Fortunately, the same “about:config” instructions work just as well on Tor to display punycode in lieu of IDNs.

Holden said he’s still in the process of testing how various email clients and Web services handle look-alike IDNs. For example, it’s clear that Twitter sees nothing wrong with sending the look-alike domain in messages to other users without any context or notice. Skype, on the other hand, seems to truncate the IDN link, sending clickers to a non-existent page.

“I’d say that most email services and clients are either vulnerable or not fully protected,” Holden said.

For a look at how phishers or other scammers might use IDNs to abuse your domain name, check out this domain checker that Hold Security developed. Here’s the first page of results for, which indicate that someone at one point registered krebsoṇsecurity[dot]com (that domain includes a lowercase “n” with a tiny dot below it, a character used by several dozen scripts). The results in yellow are just possible (unregistered) domains based on common look-alike IDN characters.

The first page of warnings for from Hold Security’s IDN scanner tool.

I wrote this post mainly because I wanted to learn more about the potential phishing and malware threat from look-alike domains, and I hope the information here has been interesting if not also useful. I don’t think this kind of phishing is a terribly pressing threat (especially given how far less complex phishing attacks seem to succeed just fine for now). But it sure can’t hurt Firefox users to change the default “visual confusion” behavior of the browser so that it always displays punycode in the address bar (see the solution mentioned above).

[Author’s note: I am listed as an adviser to Hold Security on the company’s Web site. However this is not a role for which I have been compensated in any way now or in the past.]

Planet DebianAlexandre Viau: testeduploads - looking for GSOC mentor

I have been waiting for the right opportunity to participate in GSOC for a while. I have worked on a project idea that is just right for my skill set; it would be a great learning opportunity for me, and I hope that it can be useful to the wider Debian community.

Please take a look at the project description and let me know if you would be interested in mentoring me over the summer.

testeduploads: test your packages before they hit the archive


testeduploads is a service that provides a way to test Debian source packages. The main goal of the project is to empower Debian Developers by giving them easy access to more rigorous testing before they upload a package to the archive. It runs tests that Debian Developers don’t necessarily run because of lack of time and resources.

testeduploads can also be used to test a large number of packages in contexts such as:

  • detecting whether or not packages can be updated to a newer upstream version
  • detecting whether or not packages can be backported
  • testing new versions of compilers
  • testing new versions of debhelper add-ons


Packages can be submitted to testeduploads with dput. Depending on the upload queue that was used, it can also automatically forward the uploads to

dput testeduploads [changesfile] will upload a package to the configured testeduploads queue and trigger the following tests:

  • rebuild the source package from the .dsc and verify that the signature matches
  • build binary packages
  • run autopkgtests on the package
  • rebuild all reverse dependencies using the new package
  • run autopkgtests on all reverse dependencies

On success:

  • the uploader is notified
  • logs are made available
  • if the package was received through the test-and-upload queue, it is automatically forwarded to

On failure:

  • the uploader is notified
  • logs are made available

Results and statistics are accessible through a web interface and a REST API. All uploads are assigned an id. HTTP uploads immediately return an upload id that can be used to query test status and to perform other actions. This allows for other tools to build on top of testeduploads.
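
Since every upload gets an id, other tools could poll the REST API for results. Here is a rough sketch of what such a client might look like (the base URL, endpoint path and JSON fields below are entirely hypothetical, since the service is only a proposal at this point):

import time
import requests

BASE = "https://testeduploads.example.org/api"   # hypothetical endpoint

def wait_for_result(upload_id, poll_interval=60):
    """Poll the (hypothetical) testeduploads REST API until tests finish."""
    while True:
        reply = requests.get("%s/uploads/%s" % (BASE, upload_id), timeout=30)
        reply.raise_for_status()
        status = reply.json().get("status")
        if status in ("success", "failure"):
            return status
        time.sleep(poll_interval)

# e.g. print(wait_for_result(42))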

The service accepts uploads to several queues that define specific behaviours:

  • test-and-upload: test the package on all release architectures and forward it to on success
  • test-only: test the package on all release architectures but not forward it to on success
  • amd64/test-and-upload: limit the tests to amd64 and apply test-and-upload behaviour
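
For illustration, a dput target for one of these queues could be declared roughly like this in ~/.dput.cf; the host name and incoming path are invented here, only the general dput.cf syntax is real:

[testeduploads]
# Hypothetical host and queue path -- the service does not exist yet.
fqdn = testeduploads.example.org
incoming = /queue/test-and-upload
method = https
allow_unsigned_uploads = 0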

Why me

I have been contributing to Debian for a couple of years now and I have been a Debian Developer since 2015. So far, I have mostly been contributing to packaging new software and fixing packaging-related bugs.

Participating in Google Summer of Code would be a great opportunity for me to contribute to Debian in other areas. Starting a new project like testeduploads is a good learning opportunity, but it requires a lot of time. The summer would be more than enough for me to kick-start development of the service. Then, I can see myself maintaining and improving it for a long time.

For me, this summer is just the right time. There are very few classes that I could take over the summer, so it is a good opportunity to take the summer off and work on GSOC.

For general GSOC questions, please refer to the debian-outreach mailing list or to #debian-outreach on

If you are interested in the project and want to mentor it over the summer, please get in touch with me at

Debian GSOC coordination guide

debian-outreach mailing list

testeduploads prototype

CryptogramHistory of the US Army Security Agency

Interesting history of the US Army Security Agency in the early years of Cold War Germany.

Planet DebianLars Wirzenius: New chapter of Hacker Noir on Patreon

For the 2016 NaNoWriMo I started writing a novel about software development, "Hacker Noir". I didn't finish it during that November, and I still haven't finished it. I had a year long hiatus, due to work and life being stressful, when I didn't write on the novel at all. However, inspired by both the Doctorow method and the Seinfeld method, I have recently started writing again.

I've just published a new chapter. However, unlike last year, I'm publishing it on my Patreon only, for the first month, and only for patrons. Then, next month, I'll be putting that chapter on the book's public site (, and another new chapter on Patreon.

I don't expect to make a lot of money, but I am hoping having active supporters will motivate me to keep writing.

I'm writing the first draft of the book. It's likely to be as horrific as every first-time author's first draft is. If you'd like to read it as raw as it gets, please do. Once the first draft is finished, I expect to read it myself, and be horrified, and throw it all away, and start over.

Also, I should go get some training on marketing.

Worse Than FailureCodeSOD: Let's Set a Date

Let’s imagine, for a moment, that you came across a method called setDate. Would you think, perhaps, that it stores a date somewhere? Of course it does. But what else does it do?

Matthias was fixing some bugs in a legacy project, and found himself asking exactly that question.

function setDate(objElement, strDate, objCalendar) {

    if (objElement.getAttribute("onmyfocus")) {
        eval(objElement.getAttribute("onmyfocus").replace(/this/g, "$('" + objElement.id + "')"));
    } else if (objElement.onfocus && objElement.onfocus.toString()) {
        eval(GetInnerFunction(objElement.onfocus.toString()).replace(/this/g, "$('" + objElement.id + "')"));
    }

    objElement.value = parseDate(strDate);

    if (objElement.getAttribute("onmyblur")) {
        eval(objElement.getAttribute("onmyblur").replace(/this/g, "$('" + objElement.id + "')"));
    } else if (objElement.onblur && objElement.onblur.toString()) {
        eval(GetInnerFunction(objElement.onblur.toString()).replace(/this/g, "$('" + objElement.id + "')"));
    }

    if (objCalendar) {
    } else {

In this code, objElement and objCalendar are both expected to be DOM elements. strDate, as the name implies, is a string holding a date. You can see a few elements in the code which obviously have something to do with the actual function of setting a date: objElement.value = parseDate(strDate) and the conditional about trying to toggle the calendar object seem like they might have something to do with managing the date.

It’s the rest of the code that gets… weird. The purpose, at a guess, is that this setDate method is emulating a user interacting with a DOM element- perhaps this is part of some bolted-on calendar widget- so they want to fire the on-focus and on-blur methods of the underlying element. That, alone, would be an ugly but serviceable hack.

But that’s not what they do.

First, they’ve apparently created attributes onmyfocus and onmyblur. Should the element have those attributes, they extract the value there, and replace any references to this with a call to $(), passing in the objElementId… and then they eval it.

If there isn’t a special onmyfocus/onmyblur attribute, they instead check for the more normal onfocus/onblur event handlers. Which are functions. But this code doesn’t want functions, so it converts them to a string and replaces this again, before passing it back to eval.

Replacing this means that they were trying to reinvent function.apply, a JavaScript method that allows you to pass in whatever object you want to be this within the function you’re calling. But, at least in the case of the onfocus/onblur, this isn’t necessary, since every browser has had a method to dispatchEvent or createEvent since time immemorial. You don’t need to mangle a function to emulate an event.

The jQuery experts might notice the $ and say, “Well, heck, if they’re using jQuery, that has a .trigger() method which fires events.” That’s a good thought, but this code is actually worse than it looks. I’ll allow Matthias to explain:

$ is NOT jQuery, but a global function that does a getElementById-lookup

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianElena Gjevukaj: Bug Squashing Party in Tirana

A Bug Squashing Party was organized by Debian and OpenLabs in Tirana last weekend (3-4 March 2018). A BSP is a get-together of Debian Developers and Debian enthusiasts over a specified timeframe where they try to fix as many bugs as possible.

Unusually for tech events, in this one around 90% of the participants were women, and I think if anyone saw us working together they would doubt that it was a tech event. As in other fields, the tech world in general is not an exception when it comes to discrimination and sexism, but luckily for us, in this event organized by our friend Daniel Pocock (from Debian) and OpenLabs Tirana, that wasn't the case.

We were a large group of computer science students and graduates coming from Kosovo.

For me it was the first time at OpenLabs, and I must say it was an amazing time meeting the organizers and members and working with them.

After the presentation about OpenLabs and its events, we had some interesting topics and projects that we could choose to work on. Mainly, I worked with other girls on translating some parts of Debian text into Albanian, and we also did some research on bugs in the systems.

In the evening we had a nice dinner in an Italian restaurant in Tirana.

Discovering Tirana.


Planet DebianVincent Bernat: Packaging an out-of-tree module for Debian with DKMS

DKMS is a framework designed to allow individual kernel modules to be upgraded without changing the whole kernel. It is also very easy to rebuild modules as you upgrade kernels.

On Debian-like systems,1 DKMS enables the installation of various drivers, from ZFS on Linux to VirtualBox kernel modules or NVIDIA drivers. These out-of-tree modules are not distributed as binaries: once installed, they need to be compiled for your current kernel. Everything is done automatically:

# apt install zfs-dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  binutils cpp cpp-6 dkms fakeroot gcc gcc-6 gcc-6-base libasan3 libatomic1 libc-dev-bin libc6-dev
  libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgcc1 libgomp1 libisl15 libitm1 liblsan0 libmpc3
  libmpfr4 libmpx2 libnvpair1linux libquadmath0 libstdc++6 libtsan0 libubsan0 libuutil1linux libzfs2linux
  libzpool2linux linux-compiler-gcc-6-x86 linux-headers-4.9.0-6-amd64 linux-headers-4.9.0-6-common
  linux-headers-amd64 linux-kbuild-4.9 linux-libc-dev make manpages manpages-dev patch spl spl-dkms
  zfs-zed zfsutils-linux
3 upgraded, 44 newly installed, 0 to remove and 3 not upgraded.
Need to get 42.1 MB of archives.
After this operation, 187 MB of additional disk space will be used.
Do you want to continue? [Y/n]
# dkms status
spl,, 4.9.0-6-amd64, x86_64: installed
zfs,, 4.9.0-6-amd64, x86_64: installed
# modinfo zfs | head
filename:       /lib/modules/4.9.0-6-amd64/updates/dkms/zfs.ko
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
srcversion:     42C4AB70887EA26A9970936
depends:        spl,znvpair,zcommon,zunicode,zavl
retpoline:      Y
vermagic:       4.9.0-6-amd64 SMP mod_unload modversions
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)

If you install a new kernel, a compilation of the module is automatically triggered.

Building your own DKMS-enabled package🔗

Suppose you’ve gotten your hands on an Intel XXV710-DA2 NIC. This card is handled by the i40e driver. Unfortunately, it only got support from Linux 4.10 and you are using a stock 4.9 Debian Stretch kernel. DKMS provides here an easy solution!

Download the driver from Intel, unpack it in some directory and add a debian/ subdirectory with the following files:

  • debian/changelog:

    i40e-dkms (2.4.6-0) stretch; urgency=medium
      * Initial package.
     -- Vincent Bernat <>  Tue, 27 Feb 2018 17:20:58 +0100
  • debian/control:

    Source: i40e-dkms
    Maintainer: Vincent Bernat <>
    Build-Depends: debhelper (>= 9), dkms

    Package: i40e-dkms
    Architecture: all
    Depends: ${misc:Depends}
    Description: DKMS source for the Intel i40e network driver
  • debian/rules:

    #!/usr/bin/make -f
    include /usr/share/dpkg/pkg-info.mk

    %:
            dh $@ --with dkms

    override_dh_install:
            dh_install src/* usr/src/i40e-$(DEB_VERSION_UPSTREAM)/

    override_dh_dkms:
            dh_dkms -V $(DEB_VERSION_UPSTREAM)
  • debian/i40e-dkms.dkms:

  • debian/compat:


In debian/changelog, pay attention to the version. The version of the driver is 2.4.6. Therefore, we use 2.4.6-0 for the package. In debian/rules, we install the source of the driver in /usr/src/i40e-2.4.6—the version is extracted from debian/changelog.

The content of debian/i40e-dkms.dkms is described in detail in the dkms(8) manual page. The i40e driver is fairly standard and dkms is able to figure out how to compile it. However, if your kernel module does not follow the usual conventions, it is the right place to override the build command.
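
As a rough sketch only (the exact directives used for this particular package may differ), such a file names the module, points dkms at the sources, and lets dh_dkms substitute the version; debian/compat simply carries the debhelper compatibility level matching the build-dependency:

# debian/i40e-dkms.dkms -- illustrative sketch, actual values may differ
PACKAGE_NAME="i40e"
PACKAGE_VERSION="#MODULE_VERSION#"
BUILT_MODULE_NAME[0]="i40e"
BUILT_MODULE_LOCATION[0]="src/"
DEST_MODULE_LOCATION[0]="/updates"
AUTOINSTALL="yes"

# debian/compat -- a single line matching the debhelper (>= 9) build-dependency:
9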

Once all the files are in place, you can turn the directory into a Debian package with, for example, the dpkg-buildpackage command.2 At the end of this operation, you get your DKMS-enabled package, i40e-dkms_2.4.6-0_all.deb. Put it in your internal repository and install it on the target.

Avoiding compilation on target🔗

If you feel uncomfortable installing compilation tools on the target servers, there is a simple solution. Since version,3 thanks to Thijs Kinkhorst, dkms can build lean binary packages with only the built modules. For each kernel version, you build such a package in your CI system:

KERNEL_VERSION=4.9.0-6-amd64 # could be a Jenkins parameter
apt -qyy install \
      i40e-dkms \
      linux-image-${KERNEL_VERSION} \
      linux-headers-${KERNEL_VERSION}

DRIVER_VERSION=$(dkms status i40e | awk -F', ' '{print $2}')
dkms mkbmdeb i40e/${DRIVER_VERSION} -k ${KERNEL_VERSION}

cd /var/lib/dkms/i40e/${DRIVER_VERSION}/bmdeb/
dpkg -c i40e-modules-${KERNEL_VERSION}_*
dpkg -I i40e-modules-${KERNEL_VERSION}_*

Here is the shortened output of the two last commands:

# dpkg -c i40e-modules-${KERNEL_VERSION}_*
-rw-r--r-- root/root    551664 2018-03-01 19:16 ./lib/modules/4.9.0-6-amd64/updates/dkms/i40e.ko
# dpkg -I i40e-modules-${KERNEL_VERSION}_*
 new debian package, version 2.0.
 Package: i40e-modules-4.9.0-6-amd64
 Source: i40e-dkms-bin
 Version: 2.4.6
 Architecture: amd64
 Maintainer: Dynamic Kernel Modules Support Team <>
 Installed-Size: 555
 Depends: linux-image-4.9.0-6-amd64
 Provides: i40e-modules
 Section: misc
 Priority: optional
 Description: i40e binary drivers for linux-image-4.9.0-6-amd64
  This package contains i40e drivers for the 4.9.0-6-amd64 Linux kernel,
  built from i40e-dkms for the amd64 architecture.

The generated Debian package contains the pre-compiled driver and only depends on the associated kernel. You can safely install it without pulling dozens of packages.

  1. DKMS is also compatible with RPM-based distributions but the content of this article is not suitable for these. ↩︎

  2. You may need to install some additional packages: build-essential, fakeroot and debhelper↩︎

  3. Available in Debian Stretch and in the backports for Debian Jessie. However, for Ubuntu Xenial, you need to backport a more recent version of dkms↩︎

Worse Than FailureCodeSOD: Just One More Point

Fermat Points Proof

Tim B. had been tasked with updating an older internal application implemented in Java. Its primary purpose was to read in and display files containing a series of XY points—around 100,000 points per file on average—which would then be rendered as a line chart. It was notoriously slow, taking 1-2 minutes to process each file, but otherwise remained fairly stable.

Except that lately, some newer files were failing during the loading process. Tim quickly identified the problem—date formats had changed—and fixed the necessary code. Since the code that read in the XY points happened to reside in the same class, Tim asked his boss whether he could take a crack at killing two birds with one stone. With her approval, he dug in to figure out why the loading process was so slow.

//Initial code, pulled from memory so forgive any errors.
try {
            //The 3rd party library we are passing the values to requires
            //an array of doubles
            double[][] points = null;
            BufferedReader input =  new BufferedReader(new FileReader(aFile));
            try {
                String line = null;
                while (( line = input.readLine()) != null) {
                    //First, get the XY points from line using a convenience class
                    //to parse out the values.
                    XYPoint p = new XYPoint(line);
                    //Now, to store the points in the array.
                    if ( points == null ) {
                        //Okay, we've got one point so far.
                        points = new double[1][2];
                        points[0][0] = p.X;
                        points[0][1] = p.Y;
                    } else {
                        //Uh oh, we need more room. Let's create an array that's one larger
                        //and copy all of our points so far into it.
                        double[][] newPointArray = new double[points.length + 1][2];
                        for ( int i = 0; i < points.length; i++ ) {
                            newPointArray[i][0] = points[i][0];
                            newPointArray[i][1] = points[i][1];
                        }
                        //Now we can add the new point!
                        newPointArray[points.length][0] = p.X;
                        newPointArray[points.length][1] = p.Y;
                        points = newPointArray;
                    }
                }
                //Now, we can pass this to our next function
                drawChart( points );
        } catch (IOException ex)
//End original code

After scouring the code twice, Tim called over a few coworkers to have a look for themselves. Unfortunately, no, he wasn't reading it wrong. Apparently the original developer, who no longer worked there, had run into the problem of not knowing ahead of time how many points would be in each file. However, he'd needed an array of doubles to pass to the next library, so he couldn't use a list, which only accepted objects. Thus had he engineered a truly brilliant workaround.

Tim determined that for the average file of 100,000 points, each import required a jaw-dropping 2 billion copy operations (1 billion for the Xs, 1 billion for the Ys). After a quick refactoring to use an ArrayList, followed by a copy to a double array, the file load time went from minutes to nearly instantaneous.

//Re-factored code below.
try {
            //The 3rd party library we are passing the values to requires
            //an array of doubles
            double[][] points = null;
            ArrayList<XYPoint> xyPoints = new ArrayList<XYPoint>();
            BufferedReader input =  new BufferedReader(new FileReader(aFile));
            try {
                String line = null;
                while (( line = input.readLine()) != null) {
                    xyPoints.add( new XYPoint(line) );
                }
                //Now, convert the list to an array
                points = new double[xyPoints.size()][2];
                for ( int i = 0; i < xyPoints.size(); i++ ) {
                    points[i][0] = xyPoints.get(i).X;
                    points[i][1] = xyPoints.get(i).Y;
                }
                //Now, we can pass this to our next function
                drawChart( points );
        } catch (IOException ex)
//End re-factored code.
[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV March 2018 Workshop: Comparing window managers

Mar 17 2018 12:30
Mar 17 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Comparing window managers

We'll be looking at several of the many window managers available on Linux.

We're still looking for more people who can talk about the window manager they are using, what they like and dislike about it, and maybe demonstrate a little.

Please email me at <> with the name of your window manager if you think you could help!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet DebianCraig Sanders: brawndo-installer

Tired of being oppressed by the slack-arse distro package maintainers who waste time testing that new versions don’t break anything and then waste even more time integrating software into the system?

Well, so am I. So I’ve fixed it, and it was easy to do. Here’s the ultimate installation tool for any program:

brawndo() {
   curl $1 | sudo /usr/bin/env bash
}

I’ve never written a shell script before in my entire life, I spend all my time writing javascript or ruby or python – but shell’s not a real language so it can’t be that hard to get right, can it? Of course not, and I just proved it with the amazing brawndo installer (It’s got what users crave – it’s got electrolytes!)

So next time some lame sysadmin recommends that you install the packaged version of something, just ask them if apt-get or yum or whatever loser packaging tool they’re suggesting has electrolytes. That’ll shut ’em up.

brawndo-installer is a post from: Errata

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #149

Here's what happened in the Reproducible Builds effort between Sunday February 25 and Saturday March 3 2018:

diffoscope development

Version 91 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks as well as new ones from:

In addition, Juliana — our Outreachy intern — continued her work on parallel processing; the above work is part of it.

reproducible-website development

Packages reviewed and fixed, and bugs filed

An issue with the pydoctor documentation generator was merged upstream.

Reviews of unreproducible packages

73 package reviews have been added, 37 have been updated and 26 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (46)
  • Jeremy Bicha (4)


This week's edition was written by Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Linux AustraliaSimon Lyall: Audiobooks – Background and February 2018 list


I started listening to audiobooks around the start of January 2017 when I started walking to work (I previously caught the bus and read a book or read on my phone).

I currently get them for free from the Auckland Public Library using the Overdrive app on Android. However, while I download them to my phone using the Overdrive app, I listen to them using Listen Audiobook Player. I switched to the alternative player mainly because it supports playback speeds greater than 2x normal.

I’ve been posting a list of the books I listened to at the end of each month to twitter ( See list from Jan 2018, Dec 2017, Nov 2017 ) but I thought I’d start posting them here too.

I mostly listen to history with some science fiction and other topics.

Books listened to in February 2018

The Three-Body Problem by Cixin Liu – Pretty good Sci-Fi and towards the hard-core end I like. Looking forward to the sequels 7/10

Destiny and Power: The American Odyssey of George Herbert Walker Bush by Jon Meacham – A very nicely done biography, comprehensive and giving a good positive picture of Bush. 7/10

Starship Troopers by Robert A. Heinlein – A pretty good version of the classic. The story works well although the politics are “different”. Enjoyable though 8/10

Uncommon People: The Rise and Fall of the Rock Stars 1955-1994 by David Hepworth – Read by the Author (who sounds like a classic Brit journalist). A Story or two plus a playlist from every year. Fascinating and delightful 9/10

The Long Haul: A Trucker’s Tales of Life on the Road by Finn Murphy – Very interesting and well written about the author’s life as a long distance mover. 8/10

Mornings on Horseback – David McCullough – The Early life of Teddy Roosevelt, my McCullough book for the month. Interesting but not as engaging as I’d have hoped. 7/10

The Battle of the Atlantic: How the Allies Won the War – Jonathan Dimbleby – Overview of the Atlantic Campaign of World War 2. The author works to stress it was one of the most important fronts and does pretty well 7/10






Cory DoctorowHey, Wellington! I’m headed your way!

I’ve just finished a wonderful time at the Adelaide Festival and now I’m headed to the last stop on the Australia/New Zealand tour for Walkaway: Wellington!

I’m doing a pair of events at Writers & Readers Week at the New Zealand Festival; followed by a special one-day NetHui on copyright and then a luncheon seminar for the Privacy Commissioner on “machine learning, big data and being less wrong.”

It starts on the 9th of March and finishes on the 13th, and I really hope I see you there! Thanks to everyone who’s come out in Perth, Sydney, Melbourne and Adelaide; you’ve truly made this a tour to remember.

Harald WelteReport from the Geniatech vs. McHardy GPL violation court hearing

Today, I took some time off to attend the court hearing in the appeal related to a GPL infringement dispute between former netfilter colleague Patrick McHardy and Geniatech Europe.

I am not in any way legally involved in the lawsuit on either the plaintiff or the defendant side. However, as a fellow (former) Linux kernel developer myself, and a long-term Free Software community member who strongly believes in the copyleft model, I of course am very interested in this case.

History of the Case

This case is about GPL infringements in consumer electronics devices based on a GNU/Linux operating system, including the Linux kernel and, at least in some devices, netfilter/iptables. The specific devices in question are a series of satellite TV receivers built by a Shenzhen (China) based company Geniatech, which is represented in Europe by Germany-based Geniatech Europe GmbH.

The Geniatech Europe CEO has openly admitted (out of court) that they had some GPL incompliance in the past, and that there was failure on their part that needed to be fixed. However, he was not willing to accept an overly wide claim in the preliminary injunction against his company.

The history of the case is that at some point in July 2017, Patrick McHardy made a test purchase of a Geniatech Europe product, and found it infringing the GNU General Public License v2. Apparently no source code (and/or written offer) had been provided alongside the binary - a straight-forward violation of the license terms and hence a violation of copyright. The plaintiff then asked the regional court of Cologne to issue a preliminary injunction against the defendant, which was granted on September 8th, 2017.

In terms of legal procedure, in Germany, when a plaintiff applies for a preliminary injunction, it is immediately granted by the court after brief review of the filing, without previously hearing the defendant in an oral hearing. If the defendant (like in this case) wishes to appeal the preliminary injunction, it files an appeal which then results in an oral hearing. This is what happened, after which the district court of Cologne (Landgericht Koeln) on October 20, 2017 issued ruling 14 O 188/17 partially upholding the injunction.

All in all, nothing particularly unusual about this. There is no dispute about a copyright infringement having existed, and this generally grants any of the copyright holders the right to have the infringing party to cease and desist from any further infringement.

However, this injunction has a very wide scope, stating that the defendant was to cease and desist not only from ever publishing, selling, offering for download any version of Linux (unless being compliant to the license). It furthermore asked the defendant to cease and desist

  • from putting hyperlinks on their website to any version of Linux
  • from asking users to download any version of Linux

unless the conditions of the GPL are met, particularly the clauses related to providing the complete and corresponding source code.

The appeals case at OLG Cologne

The defendant now escalated this to the next higher court, the higher regional court of Cologne (OLG Koeln), asking to withdraw the earlier ruling of the lower court, i.e. removing the injunction with its current scope.

The first very positive surprise at the hearing was the depth to which the OLG court had studied the subject matter of the dispute prior to the hearing. In the many GPL-related court cases that I have witnessed so far, it was by far the most precise analysis of how Linux kernel development works, and this despite the more than 1000 pages of filings that parties had made to the court to this point.

Just to give you some examples:

  • the court understood that Linux was created by Linus Torvalds in 1991 and released under GPL to facilitate the open and collaborative development
  • the court recognized that there is no co-authorship / joint authorship (German: Miturheber) in the Linux kernel as a whole, as it was not a group of people planning+developing a given program together, but it is a program that has been released by Linus Torvalds and has since been edited by more than 15.000 developers without any "grand joint plan" but rather in successive iterations. This situation constitutes "editing authorship" (German: Bearbeiterurheber)
  • the court further recognized that being listed as "head of the netfilter core team" or a "subsystem maintainer" doesn't necessarily mean that one is contributing copyrightable works. Reviewing thousands of patches doesn't mean you own copyright on them, drawing an analogy to an editorial office at a publisher.
  • the court understood there are plenty of Linux versions that may not even contain any of Patrick McHardy's code (such as older versions)

After about 35 minutes of the presiding judge explaining the court's understanding of the case (and how kernel development works), he went on to summarize the court's internal deliberation prior to the hearing.

In this summary, the presiding judge stated very clearly that they believe there is some merit to the arguments of the defendant, and that they would be inclined in a ruling favorable to the defendant based on their current understanding of the case.

He cited the following main reasons:

  • The Linux kernel development model does not support the claim of Patrick McHardy having co-authored Linux. In so far, he is only an editing author (Bearbeiterurheber), and not a co-author. Nevertheless, even an editing author has the right to ask for cease and desist, but only on those portions that he authored/edited, and not on the entire Linux kernel.
  • The plaintiff did not sufficiently show what exactly his contributions were and how they were forming themselves copyrightable works
  • The plaintiff did not substantiate what copyrightable contributions he has made outside of netfilter/iptables. His mere listing as general networking subsystem maintainer does not clarify what his copyrightable contributions were
  • The plaintiff being a member of the netfilter core team or even the head of the core team still doesn't support the claim of being a co-author, as netfilter substantially existed since 1999, three years before Patrick's first contribution to netfilter, and five years before joining the core team in 2004.

So all in all, it was clear that the court also thought that a ruling covering all of Linux went too far.

The court suggested that it might be better to have regular main proceedings, in which expert witnesses can be called and real evidence has to be provided, as opposed to the constraints of the preliminary procedure that was applied currently.

Some other details that were mentioned somewhere during the hearing:

  • Patrick McHardy apparently unilaterally terminated the license to his works in an e-mail dated 26th of July 2017 towards the defendant. According to the defendant (and general legal opinion, including my own position), this is in turn a violation of the GPLv2, as it only allowed plaintiff to create and publish modified versions of Linux under the obligation that he licenses his works under GPLv2 to any third party, including the defendant. The defendant believes this is abuse of his rights (German: Rechtsmissbraeuchlich).
  • sworn affidavits of senior kernel developer Greg Kroah-Hartman and current netfilter maintainer Pablo Neira were presented in support of some of the defendant's claims. The contents of those are unfortunately not public, nor are the contents of the sworn affidavits presented by the plaintiff.
  • The defendant has made substantiated claims in his filings that Patrick McHardy would perform his enforcement activities not with the primary motivation of achieving license compliance, but as a method to generate monetary gain. Such claims include that McHardy has acted in more than 38 cases, in at least one of which he has requested a contractual penalty of 1.8 million EUR. The total amount of monies received as contractual penalties was quoted as over 2 million EUR to this point. Please note that those are claims made by the defendant, which were just reproduced by the court. The court has not assessed their validity. However, the presiding judge explicitly stated that he received a phone call about this case from a lawyer known to him personally, who supported that large contractual penalties are being paid in other related cases.
  • One argument by the plaintiff seems to center around being listed as a general kernel networking maintainer until 2017 (despite his latest patches being from 2015, and those were netfilter only)

Withdrawal by Patrick McHardy

At some point, the court hearing was temporarily suspended to provide the legal representation of the plaintiff with the opportunity to have a Phone call with the plaintiff to decide if they would want to continue with their request to uphold the preliminary injunction. After a few minutes, the hearing was resumed, with the plaintiff withdrawing their request to uphold the injunction.

As a result, the injunction is now withdrawn, and the plaintiff has to bear all legal costs (court fees, lawyers costs on both sides).

Personal Opinion

For me, this is all of course a difficult topic. With my history of being the first to enforce the GNU GPLv2 in (equally German) court, it is unsurprising that I am in favor of license enforcement being performed by copyright holders.

I believe individual developers who have contributed to the Linux kernel should have the right to enforce the license, if needed. It is important to have distributed copyright, and to avoid a situation where only one (possibly industry friendly) entity would be able to take [legal] action.

I'm not arguing for a "too soft" approach. It's almost 15 years since the first court cases on license violations on (embedded) Linux, and the fact that the problem still exists today clearly shows the industry is very far from having solved a seemingly rather simple problem.

On the other hand, such activities must always be oriented to compliance, and compliance only. Collecting huge amounts of contractual penalties is questionable. And if it was necessary to collect such huge amounts to motivate large corporations to be compliant, then this must be done in the open, with the community knowing about it, and the proceeds of such contractual penalties must be donated to free software related entities to prove that personal financial gain is not a motivation.

The rumors of Patrick performing GPL enforcement for personal financial gain have been around for years. It was initially very hard for me to believe. But as more and more about this became known, and as Patrick kept refusing any contact requests from his former netfilter team-mates as well as the wider kernel community, it became hard to avoid drawing the related conclusions.

We do need enforcement, both out of court and in court. But we need it to happen out of the closet, with the community in the picture, and without financial gain to individuals. The "principles of community oriented enforcement" of the Software Freedom Conservancy as well as the more recent (but much less substantial) kernel enforcement statement represent the most sane and fair approach for how we as a community should deal with license violations.

So am I happy with the outcome? Not entirely. It's good that an over-reaching injunction was removed. But then, a lot of money and effort was wasted on this, without any verdict/ruling. It would have been IMHO better to have a court ruling published, in which the injunction is substantially reduced in scope (e.g. only about netfilter, or specific versions of the kernel, or specific products, not about placing hyperlinks, etc.). It would also have been useful to have some of the other arguments end up in a written ruling of a court, rather than more or less "evaporating" in the spoken word of the hearing today, without advancing legal precedent.

Lessons learned for the developer community

  • In the absence of detailed knowledge on computer programming, legal folks tend to look at "metadata" more, as this is what they can understand.
  • It matters who has which title and when. Should somebody not be an active maintainer, make sure he's not listed as such.
  • If somebody ceases to be a maintainer or developer of a project, remove him or her from the respective lists immediately, not just several years later.
  • Copyright statements do matter. Make sure you don't merge any patches adding copyright statements without being sure they are actually valid.

Lessons learned for the IT industry

  • There may be people doing GPL enforcement for not-so-noble motives
  • Defending yourself against claims in court can very well be worth it, as opposed to simply settling out of court (presumably for some money). The Telefonica case in 2016 has shown this, as has this current Geniatech case. The legal system can work, if you give it a chance.
  • Nevertheless, if you have violated the license, and one of the copyright holders makes a properly substantiated claim, you still will get injunctions granted against you (and rightfully so). This was just not done in this case (not properly substantiated, scope of injunction too wide/coarse).

Dear Patrick

For years, your former netfilter colleagues and friends wanted to have a conversation with you. You have not returned our invitation so far. Please do reach out to us. We won't bite, but we want to share our views with you, and show you what implications your actions have not only on Linux, but also particularly on the personal and professional lives of the very developers that you worked hand-in-hand with for a decade. It's your decision what you do with that information afterwards, but please do give us a chance to talk. We would greatly appreciate if you'd take up that invitation for such a conversation. Thanks.

Planet DebianSteinar H. Gunderson: Skellam distribution likelihood

I wondered if it was possible to make a ranking system based on the Skellam distribution, taking point spread as the only input; the first step is figuring out what the likelihood looks like, so here's an example for k=4 (i.e., one team beat the other by four goals):

Skellam distribution likelihood surface plot

It's pretty, but unfortunately, it shows that the most likely combination is µ1 = 0 and µ2 = 4, which isn't really that realistic. I don't know what I expected, though :-)

Perhaps it's different when we start summing many of them (more games, more teams), but you get into too high dimensionality to plot. If nothing else, it shows that it's hard to solve symbolically by looking for derivatives, as the extreme point is on an edge, not on a hill.
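
For anyone who wants to poke at the same surface: the Skellam pmf for a difference k is P(k; µ1, µ2) = e^{-(µ1+µ2)} (µ1/µ2)^{k/2} I_k(2√(µ1µ2)), and a small sketch using SciPy (grid bounds and resolution are arbitrary choices here) reproduces the same behaviour, with the likelihood of a single observed margin growing towards the boundary of the (µ1, µ2) square instead of peaking at an interior point:

import numpy as np
from scipy.stats import skellam

k = 4  # observed goal difference

# Evaluate the likelihood on a grid; stay away from exactly zero rates,
# where the distribution degenerates into a plain Poisson.
mus = np.linspace(0.05, 8.0, 160)
mu1, mu2 = np.meshgrid(mus, mus)
lik = skellam.pmf(k, mu1, mu2)

i, j = np.unravel_index(np.argmax(lik), lik.shape)
print("max likelihood %.4f at mu1=%.2f, mu2=%.2f" % (lik[i, j], mus[j], mus[i]))
# The maximum sits on an edge of the grid, matching the plot above.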

Krebs on SecurityWhat Is Your Bank’s Security Banking On?

A large number of banks, credit unions and other financial institutions just pushed customers onto new e-banking platforms that asked them to reset their account passwords by entering a username plus some other static identifier — such as the first six digits of their Social Security number (SSN), or a mix of partial SSN, date of birth and surname. Here’s a closer look at what may be going on (spoiler: small, regional banks and credit unions have grown far too reliant on the whims of just a few major online banking platform providers).

You might think it odd that any self-respecting financial institution would seek to authenticate customers via static data like partial SSN for passwords, and you’d be completely justified for thinking that, too. Nobody has any business using these static identifiers for authentication because they are for sale on most Americans quite cheaply in the cybercrime underground. The Equifax breach might have “refreshed” some of those data stores for identity thieves, but most U.S. adults have had their static details (DOB/SSN/MMN, address, previous address, etc) on sale for years now.

On Feb. 16, KrebsOnSecurity reader Brent Hoeft shared a copy of an email he’d just received from his financial institution Associated Bank, which at $30+ billion in assets happens to be Wisconsin’s largest by asset size.

The notice advised:

“Please read and save this information (including the password below) to prepare for your online and mobile banking upgrade.

Our refreshed online and mobile banking experience is officially launching on Monday, February 26, 2018.

We’re excited to share it with you, and want you to be aware of some important details about the transition.


Use this temporary password the first time you sign in after the upgrade. Your temporary password is the first four letters of your last name plus the last four digits of your Social Security Number.

XXXX#### [redacted by me but included in the email]

Note: your password is all lowercase without spaces.

Once the upgrade is complete, you will need your temporary password to begin the re-enrollment process.
• Beginning Monday, February 26, you will need to sign in using your existing user ID and the temporary password included above in this email. Please note that you are only required to reenroll in online or mobile banking but can access both using the same user ID and password.
• Once you sign in, you will be prompted to create a new password and establish other security features. Your user ID will remain the same.”

Hoeft said Associated Bank seems to treat the customer username as a secret, something to be protected along with the password.

“I contacted Associated’s customer service via email and received a far less satisfying explanation that the user name is required for re-activation and, that since [the username] was not provided in the email, the process they are using is in fact secure,” Hoeft said.

After speaking with Hoeft, I tweeted about whether to name and shame the bank before it was too late, or perhaps to try and talk some sense into them privately. Most readers advised that calling attention to the problem before the transition could cause more harm than good, and that at least until after Feb. 26 contacting some of the banks privately was the best idea (which is what I did).

Associated Bank wouldn’t say who their new consumer online banking platform provider was, but they did say it was one of the big ones. I took that to mean either FIS, Fiserv or Jack Henry, which collectively control approximately 70 percent of the market for bank core processors (according to, Fiserv is by far the largest).


The bank’s chief information security officer Joe Smits said Associated’s new consumer online banking platform provider required that new and existing customers log in with a username and a temporary password — which was described as a choice among secondary, static data elements about customers — such as the first six digits of the customer’s SSN or date of birth.

Smits added that the bank originally started emailing customers the instructions for figuring out their temporary passwords, but then decided US mail would be a safer option and sent the rest out that way. He said only about 15 percent of Associated Bank customers (~50,000) received instructions about their temporary passwords through email.

I followed up with Hoeft to find out how his online banking upgrade went at Associated Bank. He told me that upon visiting the site, it asked for his username and the temporary password (the first four letters of his last name and the last four digits of his SSN).

“After entering that I was told to re-enter my temporary password and then create a new password,” Hoeft said. “I then was asked to select 5 security questions and provide answers. Next I was asked for a verification phone number. Upon entering that I received a text message with a 4 digit verification code. After entering the code it asked me to finish my profile information including name, email and daytime phone. After that it took me right into my online banking account.”

Hoeft said it seems like the “verification” step that was supposed to create an extra security check didn’t really add any security at all.

“If someone were able to get in with the temporary password, they would be able to create a new password, fill out all the security code information, and then provide their phone number to receive the verification code,” Hoeft said. “Armed with the verification code they then would be able to get right into my online banking account.”


A simple search online revealed Associated Bank wasn’t alone: Multiple institutions were moving to a new online banking platform all on the same day: Feb. 26, 2018.

My Credit Union also moved to a new online banking service in February, posting a notice stating that all customers will need to log in with their current username and the last four of their SSN as a temporary password.

Customers Bank, a $10 billion bank with nearly two dozen branches between Boston and Philadelphia, also told customers that starting Feb. 26 they would need to use a temporary password — the last six digits of their Social Security number — to re-enroll in online banking. Here’s part of their advice, which was published in a PDF on the bank’s site:

• You may notice a new co-branded logo for Customers Bank and BankMobile (Division Customers Bank).
• Your existing user name for Online Banking will remain the same within the new system; however, it must be entered as all lowercase letters.
• The first time you log into the new Online Banking system, your temporary password is the last 6-digits of your social security number. Your temporary
password will expire on Friday, April 20, 2018. Please be sure to log in prior to that date.
• Online Banking includes multi-factor authentication which will need to be reestablished as part of the initial sign in to the system.
• Your username and password credentials for Online Banking will be the same for Mobile Banking. Note: Before accessing the new Mobile Banking services,
you must first login to our enhanced Online Banking system to change your password.
• You will also need to enroll your mobile device, either through Online Banking by visiting the Mobile Banking Center option, or directly on the device through the
app. Both options will require additional authentication.

Columbia Bank, which has 140 branches in Washington, Oregon and Idaho, also switched gears on Feb. 26, but used a more sensible approach: Sending customers a new user ID, organization ID and temporary password in two separate mailings.


My tweet about whether to name Associated Bank attracted the attention of at least two banking industry security regulators, each of whom spoke with KrebsOnSecurity on condition of not being identified by name or regulatory agency.

Both said their agencies would be using the above examples in briefings with member institutions as instructional on how not to do online banking securely. Both also said small to mid-sized banks are massively beholden to their platform providers, and many banks simply accept the defaults instead of pushing for stronger alternatives.

“I have a lot of communications directly with the chief information security officers, chief security officers, and chief information officers in many institutions,” one regulator said. “Many of them have massively dumbed down their password requirements. A lot of smaller institutions often don’t understand the risk involved in online banking, which is why they try to outsource the whole thing to someone else. But they can’t outsource accountability.”

One of the regulators I spoke with suggested that all of the banks they’d seen transitioning to a new online banking platform on Feb. 26 were customers of Fiserv — the nation’s largest online banking platform provider.

Fiserv did not respond to specific questions for this story, saying only in a written statement that: “Fiserv regularly partners with financial institutions to provide capabilities that help mitigate and manage risk, enhance the customer experience, and allow banks to remain competitive. A variety of methodologies are used by institutions to enroll and authenticate new users onto online banking platforms, and password authentication is one of multiple layers of security used to protect customers.”

Both banking industry regulators I spoke with said a basic problem is that many smaller institutions unfortunately still treat usernames as secret codes. I have railed against this practice for years, but far too many banks treat customer usernames as part of their security, even though most customers pick something very close to the first part of their email address (before the “@” sign). I’ve even skewered some of the airline industry giants for doing the same (United does this with its super-secret frequent flyer account number).

“I think this will be an opportunity for us to coach them on that,” one banking regulator said. “This process has to involve random password generation and that needs to be standard operating procedure. If you can shortcut security just by supplying static data like SSN, it’s all screwed. Some of these organizations have had such poor control structure for so long they don’t even understand how bad it is.”

The other regulator said another challenge is how long banks should wait before disabling accounts if consumers don’t log in to the new online banking system.

“What they’re going to do is set up all these users on this brand new system and give them default passwords,” the regulator said. “Some individuals will log into their bank account every day, others once a month and sometimes quite randomly. So, how are they going to control that window of opportunity? At some point, maybe after a couple of weeks, they need to just disable those other accounts and have people start from scratch.”

The first regulator said it appears many banks (and their platform providers) are singularly focused on making these transitions as seamless and painless as possible for the financial institution and its customers.

“I think they’re looking at making it easier for their customers and lessening the fallout as they get fewer angry and frustrated calls,” the regulator said. “That’s their incentive more than anything else.”


While it may appear that banks are more afraid of calls from their customers than of fallout from identity thieves and hackers, remember that you the consumer can shop with your wallet, and should move your funds to another bank if you’re unhappy with the security practices of your current institution.

Also, don’t re-use passwords. In fact, wherever possible don’t use passwords at all. Instead, choose passphrases over passwords (remember, length is key). Unfortunately, passphrases may not be possible because some banks have chosen to truncate passwords after a certain number of characters, and to disallow special symbols.
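
For banks that do accept long passphrases, generating one at random is trivial; here is a minimal sketch using Python's standard secrets module (the tiny word list is only for illustration — a real one, such as a Diceware list, has thousands of entries):

import secrets

# Illustrative word list only; use a large one (e.g. Diceware) in practice.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "pebble", "walrus", "copper", "meadow", "quartz", "violet"]

def make_passphrase(n_words=5):
    """Pick n_words uniformly at random with a cryptographically strong RNG."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())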

If you’re the kind of person who likes to use the same password across multiple sites, then a password manager is definitely for you. That’s because password managers pick strong, long and secure passwords for you and the only thing you have to remember is a single master password.

Please consider any two-step or two-factor authentication options your financial institution may offer, and be sure to take full advantage of that when it’s available. Also, ask your bank to require a unique verbal password before discussing any of your account details over the phone; this prevents someone from calling in to your bank and convincing a customer service rep that he’s you just because he can regurgitate your static personal details.

Finally, take steps to prevent your security from being backdoored by your mobile provider: Check out last week’s tips on blocking mobile number port-out scams, which thieves sometimes use in cashing out hacked bank accounts.

Planet DebianArianit Dobroshi: Debian Bug Squashing Party in Tirana

On 3 March I attended a Debian Bug Squashing Party in Tirana, organized by colleagues at Open Labs Albania, Anisa and friends, and Daniel. Debian is the second oldest GNU/Linux distribution still active and a launchpad for so many others.

A large number of Kosovo participants took place, mostly female students. I chose to focus on adding Kosovo to country-lists in Debian by verifying that Kosovo was missing and then filing bug reports or, even better, doing pull requests.

apt-cache rdepends iso-codes will return a list of packages that include ISO codes. However, this proved hard to examine by simply looking at these applications on Debian; one would have to search through their code to find out how the ISO 3166 codes are used. So I left that for another time.

I moved next to what I thought I would be able to complete within the event. Coding is becoming quite popular with children in Kosovo. I looked into MIT’s Scratch and Google’s Blockly, the second one being freer software and targeting younger children. They both work by snapping together logical building blocks into a program.

Translation of Blockly into Albanian is now complete and hopefully will get much use. You can improve on my work at Translatewiki.

Thank you for all the fish, and see you at the next Debian BSP.


Planet DebianRaphaël Hertzog: My Free Software Activities in February 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

Since we switched to salsa, and with the arrival of prospective GSoC students interested in working on distro-tracker this summer, I have been rather active on this project, as can be seen in the project’s activity summary. Among the most important changes we can note:

  • The documentation and code coverage analysis is updated on each push.
  • Unit tests, functional tests and style checks (flake8) are run on each push but also on merge requests, allowing contributors to have quick feedback on their code. Implemented with this Gitlab CI configuration.
  • Multiple bug fixes (more of it). Update code to use python3-gpg instead of deprecated python3-gpgme (I had to coordinate with DSA to get the new package installed).
  • More unit tests for team related code. Still a work in progress but I made multiple reviews already.

Debian Live

I created the live-team on salsa to prepare for the move of the various Debian live repositories. The move itself has been done by Steve McIntyre. In the discussion, we also concluded that the live-images source package can go away. I thus filed its removal request.

Then I spent a whole day reviewing all the pending patches. I merged most of them and left comments on the remaining ones:

  • Merged #885453 cleaning up double slashes in some paths.
  • Merged #885466 allowing to set upperdir tmpfs mount point size.
  • Merged #885455 switching back the live-boot initrd to use busybox’s wget as it supports https now.
  • Merged #886328 simplifying the mount points handling by using /run/live instead of /lib/live/mount.
  • Merged #886337 adding options to build smaller initrd by disabling some features.
  • Merged #866009 fixing a race condition between live-config and systemd-tmpfiles-setup.
  • Reviewed #884355 implementing new hooks in live-boot’s initrd. Not ready for merge yet.
  • Reviewed #884553 implementing cross-architecture linux flavour selection. Not ready for merge yet.
  • Merged #891206 fixing a regression with local mirrors.
  • Merged #867539 lowering the process priority of mksquashfs to avoid rendering the machine completely unresponsive during this step.
  • Merged #885692 adding UEFI support for ARM64.
  • Merged #847919 simplifying the bootstrap of foreign architectures.
  • Merged #868559 fixing fuse mounts by switching back to klibc’s mount.
  • Wrote a patch to fix verify-checksums option in live-boot (see #856482).
  • I released a new version of live-config but wanted some external testing before releasing the new live-boot. This did not happen yet unfortunately.

Debian LTS

I started a discussion on debian-devel about how we could handle the extension of the LTS program that some LTS sponsors are asking us to do.

The responses have been rather mixed so far. It is unlikely that wheezy will be kept on the official mirror after its official EOL date, but it’s not clear whether it would be possible to host the wheezy updates on some other server for longer.

Debian Handbook

I moved the git repository of the book to salsa and released a new version in unstable to fix two recent bugs: #888575 asking us to implement some parallel building to speed the build and #888578 informing us that a recent debhelper update broke the build process due to the presence of a build directory in the source package.

Debian Packaging

I moved all my remaining packages to salsa and used the opportunity to clean them up:

  • dh-linktree, ftplib, gnome-shell-timer (fixed #891305 later), logidee-tools, publican, publican-debian, vboot-utils, rozofs
  • Some also got a new upstream release for the same price: tcpdf, lpctools, elastalert, notmuch-addrlookup.
  • I orphaned tcpdf in #889731 and I asked for the removal of feed2omb in #742601.
  • I updated django-modeltranslation to 0.12.2 to fix FTBFS bug #834667 (I submitted an upstream pull request at the same time).

Dolibarr. As a sponsor of dolibarr I filed its removal request and then I started a debian-devel discussion because we should be able to provide such applications to our users even though its development practice does not conform to some of our policies.

Bash. I uploaded a bash NMU (4.4.18-1.1) to fix a regression introduced by the PIE-enabled build (see #889869). I filed an upstream bug against bash but it turns out it’s actually a bug in qemu-user that really ought to be fixed. I reported the bug to qemu upstream but it hasn’t gotten much traction.

pkg-security team. I sponsored many updates over the month: rhash 1.3.5-1, medusa 2.2-5, hashcat, dnsrecon, btscanner, wfuzz 2.2.9, pixiewps 1.4.2-1, inetsim (new from kali). I also made a new upload of sslsniff with the OpenSSL 1.1 patch contributed by Hilko Bengen.

Debian bug reports

I filed a few bug reports:

  • #889814: lintian: Improve long description of epoch-change-without-comment
  • #889816: lintian: Complain when epoch has been bumped but upstream version did not go backwards
  • #890594: devscripts: Implement a salsa-configure script to configure project repositories
  • #890700 and #890701 about missing Vcs-Git fields to siridb-server and libcleri
  • #891301: lintian: privacy-breach-generic should not complain about <link rel=”generator”> and others

Misc contributions

Saltstack formulas. I pushed misc fixes to the munin-formula, the samba-formula and the openssh-formula. I submitted two other pull requests: on samba-formula and on users-formula.

QA’s carnivore database. I fixed a bug in a carnivore script that was spewing error messages about duplicate uids. This database links together multiple identifiers (emails, GPG key ids, LDAP entry, etc.) for the same Debian contributor.


See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

Planet DebianJonathan Dowland: Software for a service like

Can anyone recommend software for running a web service similar to

We are looking for something similar to manage digital assets within the Computing History Special Interest Group.

One suggestion I've had is CKAN, which looks very interesting but possibly more geared towards opening up an API to existing live data (such as a relational DB of stuff, distributed or otherwise). We are mostly concerned with relatively static data sets: source code archives, collections of various types of publications, collections of images, etc.

(Having said that, there are some interesting possibilities for projects that consume the data sets in some fashion, perhaps via a web service, for e.g. reviewing OCR results for old raster scans of papers.)

I envisage something similar to the software powering We want both something that lets people explore collections of stuff via the web, including potentially via machine-friendly APIs in some cases; but also ideally manage uploading and categorising items via the web as well.

I've also had suggestions to look at media-manager software, but what I've seen so far is designed for personal media collections like movies, photos, etc., and focussed more on streaming them to LAN clients.

Can anyone recommend something worth looking at?

Planet DebianNorbert Preining: TeX Live 2018 pretest started

Preparations for the release of TeX Live 2018 have started a few days ago with the freeze of updates in TeX Live 2017 and the announcement of the official start of the pretest period. That means that we invite people to test the new release and help fixing bugs.

This year hasn’t seen any notable changes apart from the usual updates to pdftex, xetex and luatex, the addition of a luatex53 binary based on lua53 (which will probably become the default in TeX Live 2019), and the addition of a few architectures (musl based linux, aarch64-linux); nothing earth-shaking.

Please test and report bugs to our mailing list.


CryptogramSecurity Vulnerabilities in Smart Contracts

Interesting research: "Finding The Greedy, Prodigal, and Suicidal Contracts at Scale":

Abstract: Smart contracts -- stateful executable objects hosted on blockchains like Ethereum -- carry billions of dollars worth of coins and cannot be updated once deployed. We present a new systematic characterization of a class of trace vulnerabilities, which result from analyzing multiple invocations of a contract over its lifetime. We focus attention on three example properties of such trace vulnerabilities: finding contracts that either lock funds indefinitely, leak them carelessly to arbitrary users, or can be killed by anyone. We implemented MAIAN, the first tool for precisely specifying and reasoning about trace properties, which employs inter-procedural symbolic analysis and concrete validator for exhibiting real exploits. Our analysis of nearly one million contracts flags 34,200 (2,365 distinct) contracts vulnerable, in 10 seconds per contract. On a subset of 3,759 contracts which we sampled for concrete validation and manual analysis, we reproduce real exploits at a true positive rate of 89%, yielding exploits for 3,686 contracts. Our tool finds exploits for the infamous Parity bug that indirectly locked 200 million dollars worth in Ether, which previous analyses failed to capture.

Worse Than FailureThe Unbidden Password


So here's a thing that keeps me up at night: we get a lot of submissions about programmers who cannot seem to think like users. There's a type of programmer who has never not known how computers worked, whose theory of computers in their mind has been so accurate for so long that they can't look at things in a different way. Many times, they close themselves off from users, insisting that if the user had a problem with using the software, they just don't know how computers work and need to educate themselves. Rather than focus on what would make the software more usable, they program what is easiest for the computer to do, and call it a day.

The same is sometimes true of security concerns. Rather than focus on what would be secure, on what the best practices are in the industry, these programmers hammer out something easy and straightforward and consider it good enough. Today's submitter, Rick, recently ran across just such a "security system."

Rick was shopping at a small online retailer, and found some items he liked. He got through the "fill in all your personal information and hope they have good security" stage of the online check-out process and placed his order. At no time was he asked if he wanted an account—which is good, because he never signs up for accounts at small independent retailers, preferring for his card information not to be stored at all. He was asked to fill in his email, which is common enough; a receipt and shipping updates are usually sent to the email associated with the order.

Sure enough, Rick received an email from the retailer moments later. Only this wasn't a receipt. It was, in fact, confirmation of a new account creation ... complete with a password in plain text.

Rick was understandably alarmed. He headed back to the site immediately to change the password to a longer, more secure one-off he could store in a password manager and never, ever have emailed to him in plaintext. But once on the site, he could find no sign of a login button or secure area. So at this point, he had an insecure password he couldn't appear to use, for an account he didn't even want in the first place.

Rick sent an email, worried about this state of affairs. The reply came fairly rapidly, from someone who was likely the sole tech department for the company: this was by design. All Rick had to do next time he purchased any goods was to enter the password on the checkout screen, and it would remember his delivery address for him.

As Rick put it:


So you send a random password insecurely and don't allow the user to change it, only because you think users would rather leave your web page to login to their email, search for the email that includes the password and copy that password in your web page, instead of just filling in their address that they know by heart.

Of course in this case, it doesn't matter one bit: Rick isn't going back to buy anything else. He didn't name-and-shame, but I encourage you to do so in the comments if you know of a retailer with similarly bad security. After all, there's only one thing that can beat programmer arrogance in this kind of situation: losing customers.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

TEDStatement on incident at TEDxBrussels

March 5, 2018 — Today at TEDxBrussels, an independently organized TEDx event, speaker and performance artist Deborah De Robertis was forcibly removed from the stage by one of the event’s organizers, who objected to the talk’s content.

We have reviewed the situation and spoken with the organizer. While we know there are moments when it is difficult to decide how to respond to a situation, this response was deeply inappropriate. We are immediately revoking the TEDxBrussels license granted to this individual.

TEDFollow your dreams without fear: 4 questions with Zubaida Bai

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with women’s health advocate and TED Fellow Zubaida Bai about what inspires her work to improve the health and livelihoods of women worldwide.

TED: Tell us who you are.
Zubaida Bai: I am a women’s health advocate, a mother, a designer and innovator of health and livelihood solutions for underserved women and girls. I’ve traveled to the poorest communities in the world, listened compassionately to women and observed their challenges and indignities. As an entrepreneur and thought leader, I’m putting my passion into a movement that will address market failures, break taboos, and elevate the health of women and girls as a core topic in the world.

TED: What’s a bold move you’ve made in your career?
ZB: The decision I made with my husband and co-founder to make our company a for-profit venture. We wanted to prove that the poor are not poor in mind, and if you offer them a quality product that they need, and can afford, they will buy it. We also wanted to show that our business model — serving the bottom of the pyramid — was scalable. Being a socially sustainable enterprise is tough, especially if you serve women and children. But relying on non-profit donations especially for women’s health comes with a price. And that price is often an endless cycle of fundraising that makes it hard to create jobs and economically lift up the very communities being served. We are proud that every woman in our facilities in Chennai receives healthcare in addition to her salary.

TED: Tell us about a woman who inspires you.
ZB: My mother. She worked very hard under social constraints in India that were not favorable towards women. She was always working side jobs and creating small enterprises to help keep our family going, and I learned a lot from her. She also pushed me and believed in me and always created opportunities for me that she was denied and didn’t have access to.

TED: If you could go back in time, what would you tell your 18-year-old self?
ZB: To believe in your true potential. To follow your dreams without fear, as success is believing in your dreams and having the courage to pursue them — not the end result.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

Sociological ImagesAre We Really Looking at Body Cameras?

The Washington Post has been collecting data on documented fatal police shootings of civilians since 2015, and they recently released an update to the data set with incidents through the beginning of 2018. Over at Sociology Toolbox, Todd Beer has a great summary of the data set and a number of charts on how these shootings break down by race.

One of the main policy reforms suggested to address this problem is body cameras—the idea being that video evidence will reduce the number of killings by monitoring police behavior. Of course, not all police departments implement these cameras and their impact may be quite small. One small way to address these problems is public visibility and pressure.

So, how often are body cameras incorporated into incident reporting? Not that often, it turns out. I looked at all the shootings of unarmed civilians in The Washington Post’s dataset, flagging the ones where news reports indicated a body camera was in use. The measure isn’t perfect, but it lends some important context.


Body cameras were only logged in 37 of 219 cases—about 17% of the time—and a log doesn’t necessarily mean the camera present was even recording. Sociologists know that organizations are often slow to implement new policies, and they don’t often just bend to public pressure. But there also hasn’t been a change in the reporting of body cameras, and this highlights another potential stumbling block as we track efforts for police reform.
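If you want to check the numbers yourself, here is a rough sketch of that count using pandas. The file name and the armed and body_camera column names are assumptions based on the public release of the data set, so adjust them to match the version you download.

import pandas as pd

# Load the public fatal police shootings data (file name is an assumption).
shootings = pd.read_csv("fatal-police-shootings-data.csv")

# Keep only shootings of unarmed civilians.
unarmed = shootings[shootings["armed"] == "unarmed"]

# body_camera is a true/false flag in the public data set.
with_camera = unarmed["body_camera"].sum()
print("Body camera logged in %d of %d cases (%.0f%%)"
      % (with_camera, len(unarmed), 100.0 * with_camera / len(unarmed)))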

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Planet DebianJacob Adams: PGP Clean Room: GSoC Mentors Wanted

I am a prospective GSoC student and I would be very interested in working on the PGP Clean Room project for Debian this summer. Unfortunately the current confirmed mentor, Daniel Pocock, is involved in the admin team and possibly in multiple other GSoC projects as well. So I am looking for another mentor who would be willing to help me on this project.

The Problems of PGP

PGP is essential to Debian and many other free software projects. It secures almost everything these projects distribute on the Internet. But for new users it can be difficult to set up. It typically requires complex command line interactions that the user doesn’t really understand, leading to much confusion and silly mistakes. Best practice is to generate the keys offline and store them on a set of separate storage devices, but there isn’t currently a tool to handle this well at all. I eventually got TAILS to serve this purpose but it was more difficult than it should have been.

What the PGP Clean Room will do

The PGP Clean Room will walk new users through setting up a set of USB flash drives or SD cards as a RAID disk, generating new PGP keys, storing them there, and then exporting subkeys either on a separate USB stick or a security key like a YubiKey. I’d also like to add the ability to do things like revoke keys or extend expiration dates for them through the application. Additionally, I would like to add an import feature for new keys and support for X.509 key management. My current plan is to write a python-newt application for this and use GPGME’s python bindings to generate the keys.
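To give a flavour of what the key generation step might look like, here is a rough sketch using the GPGME Python bindings (the python3-gpg package). The user ID, algorithm and home directory are just placeholders; the real application would collect those through the python-newt interface and point the home directory at the RAID volume.

import os
import gpg

# Use a throwaway GnuPG home directory so this sketch does not touch real keyrings.
home = "/tmp/pgp-clean-room-demo"  # placeholder path
os.makedirs(home, mode=0o700, exist_ok=True)

with gpg.Context(home_dir=home, armor=True) as ctx:
    # Generate a certifying/signing primary key; user ID and algorithm are placeholders.
    result = ctx.create_key("Test User <test@example.org>",
                            algorithm="rsa3072",
                            expires_in=31536000,  # one year, in seconds
                            sign=True,
                            certify=True)
    print("Generated key with fingerprint:", result.fpr)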

My Qualifications

I am currently a package maintainer for a couple of packages in Debian. I’m a freshman intending to major in Computer Science at the College of William and Mary in Virginia, USA. I’ve taken a few college-level CS classes, but as can be seen from my Github profile, I’m mostly self-taught.

I’ve started working on this a little bit and published it on Debian’s Gitlab.

PGP Clean Room GSoC Project

PGP Clean Room Wiki Page

GSoC Mentor’s Guide


Cory DoctorowHow to be better at being pissed off at Big Tech

My latest Locus column, “Let’s Get Better at Demanding Better from Tech,” looks at how science fiction can make us better critics of technology by imagining how tech could be used in different social and economic contexts than the one we live in today.

The “pro-tech” side’s argument is some variation on, “You can’t get the social benefits of Facebook without letting us spy on you and manipulate you — if you want to stay in touch with your friends, that’s the price of admission.” All too often, the “anti-tech” side takes this premise at face value: “Since we can’t hang out with our friends online without being spied on and manipulated, you need to stop wanting to hang out with your friends online.”

But the science fiction version of this goes, “What kinds of systems could we build if we wanted to hang out with our friends without being spied on and manipulated — and what kinds of political, regulatory and technological interventions would make those systems easier to build?”

A critique of technology that focuses on its market conditions, rather than its code, yields up some interesting alternate narratives. It has become fashionable, for example, to say that advertising was the original sin of online publication. Once the norm emerged that creative work would be free and paid for through attention – that is, by showing ads – the wheels were set in motion, leading to clickbait, political polarization, and invasive, surveillant networks: “If you’re not paying for the product, you’re the product.”

But if we understand the contours of the advertising marketplace as being driven by market conditions, not “attention economics,” a different story emerges. Market conditions have driven incredible consolidation in every sector of the economy, meaning that fewer and fewer advertisers call the shots, and meaning that more and more of the money flows through fewer and fewer payment processors. Compound that with lax anti-trust enforcement, and you have companies that are poised to put pressure on publishers and control who sees which information.

In 2018, companies from John Deere to GM to Johnson & Johnson use digital locks and abusive license agreements to force you to submit to surveillance and control how you use their products. It’s true that if you don’t pay for the product, you’re the product – but if you’re a farmer who’s just shelled out $500,000 for a new tractor, you’re still the product.

The “original sin of advertising” story says that if only microtransactions had been technologically viable and commercially attractive, we could have had an attention-respecting, artist-compensating online world, but in a world of mass inequality, financializing culture and discourse means excluding huge swaths of the population from the modern public sphere. If the Supreme Court’s Citizens United decision has you convinced that money has had a corrupting influence on who gets to speak, imagine how corrupting the situation would be if you also had to pay to listen.

Let’s Get Better at Demanding Better from Tech [Cory Doctorow/Locus]

Rondam RamblingsIs it time to take the Hyperloop seriously? No.

Over four years since it was first introduced, Ars Technica asks if it is time to take the Hyperloop seriously.  And four years after I first gave it, the answer is still a resounding no.  Not only has the thermal expansion problem not been solved, there has been (AFAICT) absolutely no attention paid to simple operational concerns that could be show-stoppers.  Like terrorism.  If you think

Planet DebianGunnar Wolf: # apt install yum

# apt install yum

No, I'm not switching to Fedora or anything like that.

CryptogramIntimate Partner Threat

Princeton's Karen Levy has a good article on computer security and the intimate partner threat:

When you learn that your privacy has been compromised, the common advice is to prevent additional access -- delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it's almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages -- but if you don't preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).

Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials­ -- like passwords and security questions -- are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you're in the same physical space. In many cases, the abuser might even own your phone -- or might have access to your communications data because you share a family plan. Things like security questions are unlikely to be effective tools for protecting your security, because the abuser knows or can guess at intimate details about your life -- where you were born, what your first job was, the name of your pet.

Planet DebianJulien Danjou: Scaling a polling Python application with tooz

This article is the final one of the series I wrote about scaling a large number of connections in a Python application. If you don't remember what problem we're trying to solve, here it is, coming from one of my followers:

It so happened that I'm currently working on scaling some Python app. Specifically, now I'm trying to figure out the best way to scale SSH connections - when one server has to connect to thousands (or even tens of thousands) of remote machines in a short period of time (say, several minutes).

How would you write an application that does that in a scalable way?

The first blog post was exploring a solution based on threads, while the second blog post was exploring an architecture around asyncio.

In the first two articles, we wrote programs that could handle this problem by using multiple threads or asyncio – or both. While this worked pretty well, it had some limitations, such as only using one computer. So this time, we're going to take a different approach and use multiple computers!

The job

As we've already seen, writing a Python application that connects to a host by ssh can be done using Paramiko or asyncssh as we've seen previously. Here again, that will not be the focus of this blog post since it is pretty straightforward to do.
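Just for completeness, a minimal Paramiko-based sketch of that step might look like the following; the hostname, username and the blind host-key policy are placeholders, not a recommendation.

import paramiko

def ssh_uptime(hostname):
    # Connect and run a single command; credentials and host key handling are placeholders.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname, username="admin")
    try:
        stdin, stdout, stderr = client.exec_command("uptime")
        return stdout.read().decode()
    finally:
        client.close()

print(ssh_uptime("192.168.2.1"))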

To keep this exercise simple, we'll reuse our ping function from the first article. It looked like this:

import subprocess

def ping(hostname):
    # Discard ping's output; we only care about the exit status.
    p = subprocess.Popen(["ping", "-c", "3", "-w", "1", hostname],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    return p.wait() == 0

As a reminder, running this program alone and pinging serially 255 IP addresses takes more than 10 minutes. Let's try to make it faster by running it in parallel.

The architecture

Remember: if pinging 255 hosts takes 10 minutes, pinging the whole Internet is going to take forever – around five years at this rate.

With our ping experiment, we already divided our mission (e.g. "who's alive on the Internet") into very small tasks ("ping"). If we want to ping 4 billion hosts, we need to run those tasks in parallel. But one computer is not going to be enough: we need to distribute those tasks to different hosts, so we can use some massive parallelism to go even faster!

There are two ways to distribute such a set of tasks:

  • Use a queue. That works well for jobs that are not determined in advance, such as user-submitted tasks or that are going to be executed only once.

  • Use a distribution algorithm. That works only for tasks that are determined in advance and that are scheduled regularly, such as polling.

We are going to pick the second option here, as those ping tasks (or polling in the original problem) should be run regularly. That approach will allow us to spread the jobs onto several processes, which can even be spread onto several nodes over a network. We also won't have to "maintain" the queue (e.g. make it work and monitor it), so that's also a bonus point.

That's infinite horizontal scalability!

The distribution algorithm

The algorithm we're going to use to distribute this task is based on a consistent hashring.

Here's how it works in short. Picture a circular ring. We map objects onto this ring. The ring is then split into partitions. Those partitions are distributed among all the workers. The workers take care of jobs that are in the partitions they are responsible for.

In the case where a new node joins the ring, it is inserted between 2 nodes and takes a bit of their workload. In the case where a node leaves the ring, the partitions it was taking care of are reassigned to its adjacent nodes.

If you want more details, there are plenty of explanations of how this algorithm works. Feel free to look online!
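If you prefer code to prose, here is a toy sketch of the idea. It is not what Tooz does internally, just an illustration: each node is hashed to several points on the ring, each object is hashed to one point, and the object belongs to the next node found going clockwise.

import bisect
import hashlib

class ToyHashRing(object):
    def __init__(self, nodes, vnodes=64):
        # Map each node to several points ("virtual nodes") on the ring.
        self.ring = {}
        for node in nodes:
            for i in range(vnodes):
                point = self._hash("%s-%d" % (node, i))
                self.ring[point] = node
        self.sorted_points = sorted(self.ring)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, obj):
        # The object belongs to the first node found clockwise from its hash.
        point = self._hash(obj)
        index = bisect.bisect(self.sorted_points, point) % len(self.sorted_points)
        return self.ring[self.sorted_points[index]]

ring = ToyHashRing(["client1", "client2", "client3"])
for i in range(10):
    print("%d handled by %s" % (i, ring.node_for(str(i))))

Remove a node and rebuild the ring: only the objects that were assigned to that node move, which is exactly the rebalancing property we want.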

However, to make this work, we need to know which nodes are alive or dead. This is another problem to solve, and the best way to tackle it is to use a coordination mechanism. There are plenty of those, from Apache ZooKeeper to etcd.

Without going too much into details, those pieces of software provide a network service that every node can connect to in order to manage its state. If a client gets disconnected or crashes, it's then easy to consider it as removed. That enables the application to get the full list of nodes and to split the ring accordingly. There's no need to have any shared state between the nodes other than who's alive and running.

Using group membership

To get a list of nodes that are available to help us ping the Internet, we need a service that provides this and a library to interact with it. Since the use case is pretty simple and I don't know which backends you like the most, we're going to use the Tooz library.

Tooz provides a coordination mechanism on top of a large variety of backends: ZooKeeper or etcd, as suggested earlier, but also Redis or memcached for those who want to live more dangerously. Indeed, while ZooKeeper or etcd can be set up in a synchronized cluster, memcached, on the other hand, is a SPOF.

For the sake of the exercise, we're going to use a single instance of etcd here. Thanks to Tooz, switching to another backend would be a one-line change anyway.

Tooz provides a tooz.coordination.Coordinator object that represents the connection to the coordination subsystem. It then exposes an API based on groups and members. A member is a node connected through a Coordinator instance. A group is a place that members can join or leave.

Here's a first implementation of a member joining a group and printing the member list:

import sys
import time
from tooz import coordination

# Check that a client and group ids are passed as arguments
if len(sys.argv) != 3:
    print("Usage: %s <client id> <group id>" % sys.argv[0])
    sys.exit(1)

# Get the Coordinator object (the etcd URL is an assumption; any Tooz backend would work)
c = coordination.get_coordinator("etcd://localhost:2379", sys.argv[1].encode())
# Start it (initiate connection).
c.start(start_heart=True)
group = sys.argv[2].encode()
# Create the group, ignoring the error if it already exists
try:
    c.create_group(group).get()
except coordination.GroupAlreadyExist:
    pass
# Join the group
c.join_group(group).get()
try:
    while True:
        # Print the members list
        members = c.get_members(group)
        print(members.get())
        time.sleep(1)
except KeyboardInterrupt:
    pass
# Leave the group
c.leave_group(group).get()
# Stop when we're done
c.stop()
Don't forget to run etcd on your machine before running this program. Running a first instance of this program will print set(['client1']) every second. As soon as you run a second instance of this program, they both start to print set(['client1', 'client2']). If you shut down one of the clients, they will print the member list with only one member of it.

This works with any number of clients. If a client crashes rather than disconnecting properly, its membership will automatically expire after a few seconds; you can configure this expiration period by passing a timeout value in the Tooz URL.
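For example, a coordinator URL along these lines should do it; note that the exact option name is driver-specific, so treat the timeout=30 part as an assumption to verify against the backend you use:

c = coordination.get_coordinator("etcd://localhost:2379/?timeout=30", b"client1")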

Using consistent hashing

Now that we have a group, which will turn out to be our ring, we can implement consistent hashring on top of it. Fortunately, Tooz also provides an implementation of this that is ready to be used. Rather than using the join_group method, we're gonna use the join_partitioned_group method.

import sys
import time
from tooz import coordination

# Check that a client and group ids are passed as arguments
if len(sys.argv) != 3:
    print("Usage: %s <client id> <group id>" % sys.argv[0])
    sys.exit(1)

# Get the Coordinator object (the etcd URL is an assumption; any Tooz backend would work)
c = coordination.get_coordinator("etcd://localhost:2379", sys.argv[1].encode())
# Start it (initiate connection).
c.start(start_heart=True)
group = sys.argv[2].encode()
# Join the partitioned group
p = c.join_partitioned_group(group)
try:
    while True:
        # Let Tooz process membership changes so the ring stays up to date
        c.run_watchers()
        # Print which member each of ten sample objects is assigned to
        for i in range(10):
            print("%d handled by %s" % (i, p.members_for_object(i)))
        time.sleep(1)
except KeyboardInterrupt:
    pass
# Leave the group
c.leave_group(group).get()
# Stop when we're done
c.stop()

Running this program on one node (or just one terminal) will output the following every second:

$ python client1 foobar
0 handled by set(['client1'])
1 handled by set(['client1'])
2 handled by set(['client1'])
3 handled by set(['client1'])
4 handled by set(['client1'])
5 handled by set(['client1'])
6 handled by set(['client1'])
7 handled by set(['client1'])
8 handled by set(['client1'])
9 handled by set(['client1'])

As soon as a second members join (just run another copy of the script in another terminal), the output changes and both the running programs output the same thing:

0 handled by set(['client2'])
1 handled by set(['client1'])
2 handled by set(['client1'])
3 handled by set(['client1'])
4 handled by set(['client1'])
5 handled by set(['client2'])
6 handled by set(['client2'])
7 handled by set(['client1'])
8 handled by set(['client1'])
9 handled by set(['client2'])

They just shared the ten objects between them. They did not communicate with each other. They just know about each other's presence, and since they are using the same algorithm to compute where an object should belong, they share the same results. You can do the test with a third copy of the program:

0 handled by set(['client2'])
1 handled by set(['client1'])
2 handled by set(['client1'])
3 handled by set(['client1'])
4 handled by set(['client1'])
5 handled by set(['client2'])
6 handled by set(['client2'])
7 handled by set(['client3'])
8 handled by set(['client1'])
9 handled by set(['client3'])

Here we got a third client in the mix, excellent! If we stop one of the clients, the rebalancing is done automatically.

While the consistent hashing approach is great, it has a few characteristics you might want to know about:

  • The distribution algorithm is not made to be perfectly even. If you have a vast number of objects, it might seem pretty even statistically, but if you are trying to distribute two objects on two nodes, it's probable one node will handle the two objects and the other one none.

  • The distribution is not done in real time, meaning there's a small chance that an object might be owned by two nodes at the same time. This is not a problem in a scenario such as this one, since pinging a host twice is not going to be a big deal, but if your job needed to be unique and executed once and only once, this might not be an adequate method of distribution. Rather use a queue which has the proper characteristics.

Distributed ping

Now that we have our hashring ready to distribute our job, we can implement our final program!

import sys
import subprocess
import time
from tooz import coordination

# Check that a client and group ids are passed as arguments
if len(sys.argv) != 3:
    print("Usage: %s <client id> <group id>" % sys.argv[0])
    sys.exit(1)

# Get the Coordinator object (the etcd URL is an assumption; any Tooz backend would work)
c = coordination.get_coordinator("etcd://localhost:2379", sys.argv[1].encode())
# Start it (initiate connection).
c.start(start_heart=True)
group = sys.argv[2].encode()
# Join the partitioned group
p = c.join_partitioned_group(group)


class Host(object):
    def __init__(self, hostname):
        self.hostname = hostname

    def __tooz_hash__(self):
        """Returns a unique byte identifier so Tooz can distribute this object."""
        return self.hostname.encode()

    def __str__(self):
        return "<%s: %s>" % (self.__class__.__name__, self.hostname)

    def ping(self):
        # Discard ping's output; only the exit status matters.
        proc = subprocess.Popen(["ping", "-q", "-c", "3", "-W", "1",
                                 self.hostname],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return proc.wait() == 0


hosts_to_ping = [Host("192.168.2.%d" % i) for i in range(255)]

try:
    while True:
        # Let Tooz process membership changes so the ring stays up to date
        c.run_watchers()
        for host in hosts_to_ping:
            # Only ping the hosts this member is responsible for
            if p.belongs_to_self(host):
                print("Pinging %s" % host)
                if host.ping():
                    print(" %s is alive" % host)
        time.sleep(1)
except KeyboardInterrupt:
    pass

# Leave the group
c.leave_group(group).get()
# Stop when we're done
c.stop()

When the first client starts, it starts iterating over the hosts, and since it is alone, all hosts belong to it. So it starts pinging all nodes:

$ python3 client1 ping
Pinging <Host:>
<Host:> is alive
Pinging <Host:>
<Host:> is alive
Pinging <Host:>

Then, a second client starts pinging too, and the jobs are automatically split. The client1 instance starts skipping some nodes that now belong to client2:

# client1 output
Pinging <Host:>
<Host:> is alive
Pinging <Host:>
Pinging <Host:>
Pinging <Host:>
# client2 output
Pinging <Host:>
Pinging <Host:>
Pinging <Host:>
<Host:> is alive

On the other hand, client2 is skipping the nodes that belong to client1. If we want to scale our application further, we can start new clients on other nodes on the network and expand our pinging system!

Just a first step

This ping job does not use a lot of CPU time or I/O bandwidth, and neither would the original ssh case by Alon. However, if it did, this method would be even more valuable, as the ability to scale the resources would be key.

These are just the first steps of the distribution and scalability mechanisms that you can implement using Python. There are a few other options available on top of this mechanism, such as defining different weights for different nodes or using replicas to achieve a high-availability scenario. I've covered those in my book Scaling Python, if you're interested in learning more!

Planet DebianRenata D'Avila: Women in MiniDebConf Curitiba campaign

This is the text of the crowdfunding campaign I am organizing with five other extraordinary women: Alice, Ana Paula, Anna, Luciana and Miriam.

Women in MiniDebConf. Let's show that, yes, there are many women with potential that are interested in the world of free technologies and who use Debian! Help with the campaign for diversity at MiniDebConf:

Here is what Anna has to say about Debian: "Debian is one of the strongest GNU/Linux distributions with the free software philosophy, and that's why it's so impressive. Anyone who comes in contact with Debian necessarily learns more about free software and the FLOSS culture."

The Debian project provides travel grants for participation in conferences - for people who are considered project members.

And how many of these are women? Very few.

There are many women interested in attending MiniDebConf Curitiba, but most of them do not have the means to travel there, especially in a big country like Brazil.

It is a fact that women do not have the same opportunities in the IT world as men, but we can change that history. For this, we need your help.

Let's show that, yes, there are many women with potential interested in the world of free technologies and who use Debian. At MiniDebConf, women who could already be contributing to the community will have an opportunity to interact with it, taking part in tutorials, workshops and talks. It is in everyone's best interest that the community get itself ready to include them.

So, our way of helping to increase diversity in MiniDebConf - and perhaps among the people who contribute to Debian as well - is by giving these women the conditions they need to participate in the conference.

The Debian Women community is well developed in other countries, but in Brazil there were still no registered groups.

At last year's MiniDebConf, there was not one single woman speaker.

But it does not have to be this way.

To be able to increase diversity and to change the current situation of exclusion that we currently have in the Brazilian community, we must act on many fronts. We are already working to foster the local community and to engage other women in the use and development of Debian.

That is why we want to also bring in women who are already Debian users, so they can share their experiences, so they can act as mentors to the newbies and so we can integrate all of them into the Debian development community.

There have already been successful campaigns in Brazil to include women in conferences and technology communities, both as participants and as speakers: PyLadies in FISL, PyLadies in Python Brazil 12, PyLadies in Python Brazil 13 and the Gophercon BR Diversity Scholarship.

With your collaboration, this will be another goal achieved - and the Debian and free software communities will become a bit more representative of our own society.

Bitcoin - 15YFYKHr6CfYmBCyf4JM2g8WFkCmNGDGi5

Women in MiniDebConf. Let's show that, yes, there are many women with potential that are interested in the world of free technologies and who use Debian! Help with the campaign for diversity at MiniDebConf:

Link to the campaign:

Planet DebianNorbert Preining: Debian/TeX Live 2017.20180305-1

TeX Live 2017 received its final update just the other day and we are moving forward to start the period of testing leading up to the release of TeX Live 2018. Thus, this release for Debian is also the last one based on TeX Live 2017, and updates based on the TeX Live 2018 pretest will hit experimental in the next days. This release does not bring many new items, mostly a bug fix for upgrades from Jessie, plus the usual bunch of updated packages, see below.

Barring any serious bugs, this will be the last upload to Debian/unstable for quite some time, more specifically until TeX Live 2018 is released in about 2 months.

Enjoy the break!

Updated packages

animate, beebe, bib2gls, biblatex-publist, csplain, cyber, fei, fontawesome5, glossaries-extra, lshort-english, luaxml, mathpunctspace, media9, mpostinl, newtx, nicematrix, pixelart, polexpr, reledmac, rubik, thaispec, uantwerpendocs, univie-ling, xecjk, xint.

Worse Than FailureCodeSOD: A Very Private Memory

May the gods spare us from “clever” programmers.

Esben found this little block of C# code:

System.Diagnostics.Process proc = System.Diagnostics.Process.GetCurrentProcess();
long check = proc.PrivateMemorySize64;
if (check > 1150000000)
{   // body reconstructed from the prose below: pop up a message box and bail out of the method
    MessageBox.Show("Not enough memory available."); return;
}

Even before you check on the objects and methods in use, it’s hard to figure out what the heck this method is supposed to do. If some memory stat is over a certain size, pop up a message box and break out of the method? Why? Isn’t this more the case for an exception? Since the base value is hard coded, what happens if I run this code on a machine with way more RAM? Or configure the CLR to give my process more memory? Or…

If the goal was to prevent an operation from starting if there wasn’t enough free memory, this code is dumb. It’s “clever”, in the sense that the original developer said, “Hey, I’m about to do something memory intensive, let me make sure there’s enough memory” and then patted themselves on the head for practicing defensive programming techniques.

But that isn’t what this code exactly does. PrivateMemorySize simply reports how much memory is allocated to the process. Not how much is free, not how much is used, just… how much there is. That number may grow, as the process allocates objects, so if it’s too large relative to available memory… you’ll be paging a bunch, which isn’t great I suppose, but it still doesn’t explain this check.

This almost certainly means this was a case of “works on my/one machine”, where the developer tuned the number 1150000000 based on one specific machine. Either that, or there was a memory leak in the code- and yes, even garbage collected languages can still have memory leaks if you’re an aggressively “clever” programmer- and this was the developer’s way of telling people “Hey, restart the program.”

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianLars Wirzenius: dpkg maintainer script containerisation

Random crazy Debian idea of today: add support to dpkg so that it uses containers (or namespaces, or whatever works for this) for running package maintainer scripts (pre- and postinst, pre- and postrm), to prevent them from accidentally or maliciously writing to unwanted parts of the filesystem, or from doing unwanted network I/O.

I think this would be useful for third-party packages, but also for packages from Debian itself. You heard it here first! Debian package maintainers have been known to make mistakes.

Obviously there needs to be ways in which these restrictions can be overridden, but that override should be clear and obvious to the user (sysadmin), not something they notice because they happen to be running strace or tcpdump during the install.

Corollary: dpkg could restrict where a .deb can place files based on the origin of the package.

Example: Installing chrome.deb from Google installs a file in /etc/apt/sources.list.d, which is a surprise to some. If dpkg were to not allow that (as a file in the .deb, or a file created in postinst), unless the user was told and explicitly agreed to it, it would be less of a nasty surprise.

Example: Some stupid Debian package maintainer is very busy at work and does Debian hacking when they should really be sleeping, and types the following into their postrm script, while being asleep:


LIB="/var/lib/ $PKG"

rm -rf "$LIB"

See the mistake? Ideally, this would be found during automated testing before the package gets uploaded, but that assumes said package maintainer uses tools like piuparts.

I think it'd be better if we didn't rely only on infallible, indefatigable people with perfect workflows and processes for safety.

Having dpkg make the whole filesystem read-only, except for the parts that clearly belong to the package, based on some sensible set of rules, or based a suitable override, would protect against mistakes like this.

Planet DebianRussell Coker: WordPress Multisite on Debian

WordPress (a common CMS for blogs) is designed to be copied to a directory that Apache can serve and run by a user with no particular privileges, while managing installation of its own updates and plugins. Debian is designed around the idea of the package management system controlling everything on behalf of a sysadmin.

When I first started using WordPress there was a version called “WordPress MU” (Multi User) which supported multiple blogs. It was a separate archive to the main WordPress and didn’t support all the plugins and themes. As a main selling point of WordPress is the ability to select from the significant library of plugins and themes this was a serious problem.

Debian WordPress

The people who maintain the Debian package of WordPress have always supported multiple blogs on one system and made it very easy to run in that manner. There’s a /etc/wordpress directory for configuration files for each blog, with names such as config-blog.example.com.php. This allows having multiple separate blogs running from the same tree of PHP source, which means only one thing to update when there’s a new version of WordPress (often fixing security issues).

One thing that appears to be lacking with the Debian system is separate directories for “media”. WordPress supports uploading images (which are scaled to several different sizes) as well as sound and apparently video. By default under Debian they are stored in /var/lib/wordpress/wp-content/uploads/YYYY/MM/filename. If you have several blogs on one system they all get to share the same directory tree, that may be OK for one person running multiple blogs but is obviously bad when several bloggers have independent blogs on the same server.


If you enable the “multisite” support in WordPress then you have WordPress support for multiple blogs. The administrator of the multisite configuration has the ability to specify media paths etc for all the child blogs.

The first problem with this is that one person has to be the multisite administrator. As I’m the sysadmin of the WordPress servers in question that’s an obvious task for me. But the problem is that the multisite administrator doesn’t just do sysadmin tasks such as specifying storage directories. They also do fairly routine tasks like enabling plugins. Preventing bloggers from installing new plugins is reasonable and is the default Debian configuration. Preventing them from selecting which of the installed plugins are activated is unreasonable in most situations.

The next issue is that some core parts of WordPress functionality on the sub-blogs refer to the administrator blog, recovering a forgotten password is one example. I don’t want users of other blogs on the system to be referred to my blog when they forget their password.

A final problem with multisite is that it makes things more difficult if you want to move a blog to another system. Instead of just sending a dump of the MySQL database and a copy of the Apache configuration for the site, you have to configure which blog will be its master. If going between multisite and non-multisite you have to change some of the data about accounts, which will be annoying both when adding new sites to a server and when moving sites from the server to a non-multisite server somewhere else.

I now believe that WordPress multisite has little value for people who use Debian. The Debian way is the better way.

So I had to back out the multisite changes. Fortunately I had a cron job to make snapshots of the BTRFS subvolume that has the database so it was easy to revert to an older version of the MySQL configuration.

Upload Location

update etbe_options set option_value='/var/lib/wordpress/wp-content/uploads/' where option_name='upload_path';

It turns out that if you don’t have a multisite blog then there’s no way of changing the upload directory without using SQL. The above SQL code is an example of how to do this. Note that it seems that there is special case handling of a value of ‘wp-content/uploads‘ and any other path needs to be fully qualified.

For my own blog however I choose to avoid the WordPress media management and use the following shell script to create suitable HTML code for an image that links to a high resolution version. I use GIMP to create the smaller version of the image which gives me a lot of control over how to crop and compress the image to ensure that enough detail is visible while still being small enough for fast download.

set -e

# BASE is the URL prefix for the uploaded images; the default below is a placeholder.
if [ "$BASE" = "" ]; then
  BASE="https://www.example.com/wp-content/uploads"
fi

while [ "$1" != "" ]; do
  BIG=$1
  SMALL=$(echo $1 | sed -s s/-big//)
  RES=$(identify $SMALL|cut -f3 -d\ )
  WIDTH=$(($(echo $RES|cut -f1 -dx)/2))px
  HEIGHT=$(($(echo $RES|cut -f2 -dx)/2))px
  echo "<a href=\"$BASE/$BIG\"><img src=\"$BASE/$SMALL\" width=\"$WIDTH\" height=\"$HEIGHT\" alt=\"\" /></a>"
  shift
done

Planet DebianLior Kaplan: Running for OSI board

After serving on the board of a few Israeli technological associations, I decided to run as an individual candidate in the OSI board elections, which start today. I hope to add representation outside of North America and Europe. While my main interest is the licensing work, another goal I wish to achieve is to make OSI more relevant for Open Source people on a daily basis, making it more central for communities.

This year there are 12 candidates for 2 individual seats and 5 candidates for 2 affiliate seats (full list at the OSI elections wiki page). Wish me luck (:

Planet DebianJan Wagner: Comparing (OVH) I/O performance

For some time now I have been using cloud resources provided by OVH for some projects I'm involved in.

Recently we decided to give Zammad, an open source support/ticketing solution, a try. We chose the Docker Compose way of deployment, which also includes an Elasticsearch instance. The important point here is that storage performance has a huge impact on Elasticsearch indexing.

The documentation suggests at least 4 GB RAM for running the Zammad compose stack, so off the top of my head I chose a VPS Cloud 2, which has 4 GB RAM and 50 GB of Ceph storage.

After I deployed my simple Docker setup, with the Zammad compose setup on top, everything was mostly running smoothly. Unfortunately, when starting the whole Zammad compose stack, Elasticsearch regenerates the whole index, which might take a long(er) time depending on the size of the index and the performance of the system. This has to be done before the UI becomes available and is ready for use.

To make a long story short, I had the same setup on a testing ground where it was several times faster than on the production setup. So I decided it was time to have a look at the performance of my OVH resources. Over time I have gotten access to a couple of them, even some bare metal systems.

For my test I just grabbed the following sample:

  • VPS 2016 Cloud 2
  • VPS-SSD-3
  • VPS 2016 Cloud RAM 1
  • VPS 2014 Cloud 3
  • HG-7
  • SP-32 (that's a bare metal with software raid)

Looking into what would be the best way to benchmark I/O, it came to my attention that comparing I/O for cloud resources is not so uncommon. I also learned that dd might not be the first choice, but that fio is a good fit for doing lazy I/O benchmarks and ioping for testing I/O latency.

As the systems are all running Debian, at least 8.x, I used the following command(s) for doing my tests:

aptitude -y install -o quiet=2 ioping fio > /dev/null && \
 time fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --output=/tmp/tempfile --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75; \
 rm -f test.*; cat /tmp/tempfile; \
 ioping -c 10 /root | tail -4

The output on my VPS 2016 Cloud 2 system:

Jobs: 1 (f=1): [m(1)] [100.0% done] [1529KB/580KB/0KB /s] [382/145/0 iops] [eta 00m:00s]
real	14m20.420s
user	0m14.620s
sys	1m4.424s
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)

test: (groupid=0, jobs=1): err= 0: pid=19377: Fri Mar  2 18:16:12 2018
  read : io=3070.4MB, bw=3888.9KB/s, iops=972, runt=808475msec
  write: io=1025.8MB, bw=1299.2KB/s, iops=324, runt=808475msec
  cpu          : usr=1.43%, sys=6.34%, ctx=835077, majf=0, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=785996/w=262580/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3070.4MB, aggrb=3888KB/s, minb=3888KB/s, maxb=3888KB/s, mint=808475msec, maxt=808475msec
  WRITE: io=1025.8MB, aggrb=1299KB/s, minb=1299KB/s, maxb=1299KB/s, mint=808475msec, maxt=808475msec

Disk stats (read/write):
  sda: ios=787390/263575, merge=612/721, ticks=49277288/2701580, in_queue=51980604, util=100.00%
--- /root (ext4 /dev/sda1) ioping statistics ---
9 requests completed in 4.56 ms, 36 KiB read, 1.97 k iops, 7.71 MiB/s
generated 10 requests in 9.00 s, 40 KiB, 1 iops, 4.44 KiB/s
min/avg/max/mdev = 423.4 us / 506.8 us / 577.3 us / 43.7 us

The interesting parts:

  read : io=3070.4MB, bw=3888.9KB/s, iops=972, runt=808475msec
  write: io=1025.8MB, bw=1299.2KB/s, iops=324, runt=808475msec
min/avg/max/mdev = 423.4 us / 506.8 us / 577.3 us / 43.7 us

After comparing the results with those from the rest of the systems, my samples of the VPS 2016 Cloud instances do not convince me that I would choose such a system for use cases where I/O might be a critical part.

Planet DebianRussell Coker: Compromised Guest Account

Some of the workstations I run are sometimes used by multiple people. Having multiple people share an account is bad for security so having a guest account for guest access is convenient.

If a system doesn’t allow logins over the Internet then a strong password is not needed for the guest account.

If such a system later allows logins over the Internet then hostile parties can try to guess the password. This happens even if you don’t use the default port for ssh.

This recently happened to a system I run. The attacker logged in as guest, changed the password, and installed a cron job to run every minute and restart their blockchain mining program if it had been stopped.

In 2007 a bug was filed against the Debian package openssh-server requesting that an AllowUsers directive be added to the default /etc/ssh/sshd_config file [1]. If that bug hadn’t been marked as “wishlist” and left alone for 11 years, then I would probably have set it to only allow ssh connections to the one account that I desired, which always had a strong password.

I’ve been a sysadmin for about 25 years (since before ssh was invented). I have been a Debian Developer for almost 20 years, including working on security related code. The fact that I stuffed up in regard to this issue suggests that there are probably many other people making similar mistakes, and probably most of them aren’t monitoring things like system load average and temperature which can lead to the discovery of such attacks.

Planet DebianRuss Allbery: Free software log (February 2018)

Last month, I did a single software release: a new version of pgpcontrol, the collection of tools to check signed Usenet control messages. This is a pure maintenance release to keep it alive using GnuPG 1.0. The package is kind of a mess and needs a clean rewrite that I haven't had time to do yet (which is why I don't even have a software page for it).

Other than that, I didn't finish anything sufficiently to generate a new release, but I'm close on a bunch of fronts. Most of the user-visible (eventually) work went into podlators, the conversion tools from POD (Perl's documentation format) to text and man pages. Based on an excellent series of bug reports from eponymous alias, I fixed a bunch of long-standing bugs in Pod::Text, Pod::Text::Color, and Pod::Text::Termcap, and continued the slow process of reworking the package test suite to be cleaner and easier to maintain.

In C TAP Harness, I took an idea from the Rust assert macros and changed the arguments for all the TAP functions from wanted and seen to left and right. This way, one doesn't have to care about the order in which to pass arguments (which I can never remember). It will make it easier to update the INN test suite to the current TAP library interface, since I had used the opposite order for all of the original INN tests I wrote.

I spent a bunch of time adding SPDX identifiers to my utility functions that are intended for copying into other packages, and laid the groundwork for using SPDX identifiers in all of my projects. I picked up the habit of being careful about license notices from Debian work, and SPDX (if a bit weird in places, such as its utterly opaque file specification) is the first comprehensive and unambiguous labeling system. I have a horrible Perl script that does a lot of guesswork to generate a license file for my packages now, and am hoping to replace that with something (largely) based on SPDX.
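For readers who have not seen SPDX identifiers before, the labeling itself is just a short machine-readable comment near the top of each source file; a typical (hypothetical) example looks like this:

    # SPDX-License-Identifier: MIT
    # Copyright 2018 Example Author

Tooling can then determine the license of every file unambiguously instead of parsing free-form boilerplate text.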

Finally, I updated my Debian packaging with Git notes, and wrote new notes on using sbuild.


Planet DebianThorsten Alteholz: My Debian Activities in February 2018

FTP master

This month everything came back to normal and I accepted 272 packages and rejected 30 uploads. The overall number of packages that got accepted this month was 423.

Debian LTS

This was my forty-fourth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 23.75h. During that time I did LTS uploads of:

  • [DLA 1279-1] clamav security update for two CVEs
  • [DLA 1286-1] quagga security update for three CVEs
  • [DLA 1290-1] libvpx security update for one CVE
  • [DSA 4125-1] wavpack security update for three Jessie CVEs and three Stretch CVEs

The issues for wavpack did not affect Wheezy, so there has been no DLA. Instead the security team accepted my debdiff for Jessie and Stretch and published a DSA. Thanks to Sebastien for doing this.
I also started to work on a fix for ICU. Unfortunately Moritz did not agree with me on the correct patch for this. As upstream did not respond to my query yet, I did not do an upload.
I also did not finish my work on opencv, I am still searching for the correct C++ template. On the other hand I finished work on 12 of 22 CVEs for wireshark. The rest will be done in March.

Other stuff

During February I uploaded new upstream versions of …

I also moved all alljoyn packages as well as a56 to salsa.

Planet DebianNiels Thykier: Prototyping a new packaging papercut fix – DRY for the debhelper compat level

Have you ever looked at packaging and felt it is a long exercise in repeating yourself?  If you have, you are certainly not alone.  You can find examples of this on the Debian mailing lists (among other places).  One example is Russ Allbery pointing out that the debhelper compat level and the version in the debhelper build-dependency are very often, but not always, the same.

Russ suggests two ways of solving the problem:

  1. The first proposal is to generate the build-dependency from the compat file. However, generating the control file is (as Russ puts it) “fraught with peril”, probably because we do not have good or standardized tooling for it – creating such tooling and deploying it will take years.  Not to mention that most contributors appear to be uncomfortable with handling debian/control as a generated file.
  2. The alternative proposal from Russ is to assume that the major version of the build-dependency should mark the compat level (assuming no compat file exists).  However, Russ again points out an issue here: that solution might be “too magical”.  Indeed, this solution has the problem that you implicitly change compat level as soon as you bump the versioned dependency beyond a major version of debhelper.  But only if you do not have a compat file.

Looking at these two options, the concept behind the second one is the most likely to be deployable in the near future.  However, the solution would need some tweaking, and I have spent my time coming up with an alternative.

The third alternative:

My current alternative to Russ’s second proposal is to make debhelper provide multiple versions of “debhelper-compat” and have packages use a Build-Depends on “debhelper-compat (= X)”, where X denotes the desired compat level.  The build-dependency will then replace the “debhelper (>= X~)” relation when the package does not require a specific version of debhelper (beyond what is required for the compat level).
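As a rough illustration (the package name and versions here are placeholders), the relevant stanza of debian/control would then look something like this:

    Source: example
    Build-Depends: debhelper-compat (= 11), libfoo-dev

with no separate versioned debhelper build-dependency unless a feature from a newer debhelper is actually needed.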

On top of this, debhelper implements some safe-guards to ensure that it can reliably determine the compat level from the build-dependencies.  Notably, there must be exactly one debhelper compat relation, it must be in the “Build-Depends” field and it must have a “strictly equal version” as version constraint.  Furthermore, it must not have any Build-Profile or architecture restrictions and so on.


With all of this in place:

  1. We have no repetition when it is not required.
  2. debhelper can reliably auto-detect which debhelper-compat level you wanted.  Otherwise, debhelper will ensure the build fails (or apt/aptitude, if you end up misspelling the package name or using an invalid version).
  3. Bumping the debhelper compat level is still explicit and separate from bumping the debhelper dependency when you need a  feature or bug fix from a later version.

Testing the prototype:

If you want to test the prototype, you can do so in unstable and testing at the moment (caveat: it is an experimental feature and may change or disappear without notice).  However, please note that lintian is completely unaware of this and will spew out several false-positives – including one nonfatal auto-reject, so you will have to apply at least one lintian override.  Also note, I have only added “debhelper-compat” versions for non-deprecated compat levels.  In other words, you will have to use compat 9 or later to test the feature.

You can use “mscgen/0.20-11” as an example of the minimum changes required.  Admittedly, the example cheats and relies on the fact that “debhelper-compat (= 10)” implies a “debhelper (>= 11.1.5~alpha1)” relation, as that is the first version with the provides for debhelper-compat.  Going forward, if you need a feature from debhelper that appears in a later version than that, then you will need an explicit “debhelper (>= Y)” relation for that feature on top of the debhelper-compat relation.

Will you remove the support for debian/compat if this prototype works?

I have no immediate plans to remove the debian/compat file even if this prototype is successful.

Will you upload this to stretch-backports?

Yes, although I am waiting for a fix for #889567 to be uploaded to stretch-backports first.

Will this work for backports?

It worked fine on the buildds when I deployed it in experimental and I believe the build-dependency resolution process for experimental is similar (enough) to backports.

Will the third-party debhelper tools need changes to support this?

Probably no; most third-party debhelper tools do not seem to check the compat level directly. Even then, most tools use the “compat” sub from “Debian::Debhelper::Dh_Lib”, which handles all of this automatically.

That said, if you have a third-party tool that wishes or needs to check the debhelper compat level, please file a bug requesting a cross-language API for this and I will happily look at it.

Future work

I am considering applying this concept to the dh sequence add-ons as well (i.e. the “dh $@ --with foo”).  From my PoV, this is another case needing a DRY fix.  Plus this would also present an opportune method for solving #836699 – though, the hard part for #836699 is actually taming the dh add-on API plus dh’s sequence handling to consistently only affect the “indep” part of the sequences.
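For context, a minimal debian/rules file using such an add-on currently looks roughly like this (“foo” stands in for a real add-on name):

    #!/usr/bin/make -f
    %:
    	dh $@ --with foo

The repetition comes from the fact that the package providing the add-on usually also has to be listed in Build-Depends, so the same information is stated twice.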

Planet DebianBits from Debian: New Debian Developers and Maintainers (January and February 2018)

The following contributors got their Debian Developer accounts in the last two months:

  • Alexandre Mestiashvili (mestia)
  • Tomasz Rybak (serpent)
  • Louis-Philippe Véronneau (pollo)

The following contributors were added as Debian Maintainers in the last two months:

  • Teus Benschop
  • Kyle John Robbertze
  • Maarten van Gompel
  • Dennis van Dok
  • Innocent De Marchi
  • David Rabel


Planet DebianJacob Adams: PGP Clean Room: GSoC Mentors Wanted

I am a prospective GSoC student and I would be very interested in working on the PGP Clean Room project for Debian this summer. Unfortunately the current confirmed mentor, Daniel Pocock, is involved in the admin team and possibly in multiple other GSoC projects as well. So I am looking for another mentor who would be willing to help me on this project.

The Problems of PGP

PGP is essential to Debian and many other free software projects. It secures almost everything these projects distribute on the Internet. But for new users it can be difficult to set up. It typically requires complex command line interactions that the user doesn’t really understand, leading to much confusion and silly mistakes. Best practice is to generate the keys offline and store them on a set of separate storage devices, but there isn’t currently a tool to handle this well at all. I eventually got TAILS to serve this purpose but it was more difficult than it should have been.

What the PGP Clean Room will do

The PGP Clean Room will walk new users through setting up a set of USB flash drives or SD cards as a RAID disk, generating new PGP keys, storing them there, and then exporting subkeys either to a separate USB stick or to a security key like a YubiKey. I’d also like to add the ability to do things like revoke keys or extend their expiration dates through the application. Additionally, I would like to add an import feature for new keys and support for X.509 key management. My current plan is to write a python-newt application for this and use GPGME’s Python bindings to generate the keys.
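To give a feel for what the application would automate, doing the storage part by hand on Debian looks roughly like this (device names are examples only and will differ on a real system; the mdadm step destroys existing data on the sticks):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt/keys
    GNUPGHOME=/mnt/keys gpg --full-generate-key

The Clean Room would wrap steps like these, plus the subkey export, in a guided text interface instead of leaving them to the user.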

My Qualifications

I am currently a package maintainer for a couple of packages in Debian. I’m a freshman intending to major in Computer Science at the College of William and Mary in Virginia, USA. I’ve taken a few college-level CS classes, but as can be seen from my Github profile, I’m mostly self-taught.

I’ve started working on this a little bit and published it on Debian’s Gitlab.

PGP Clean Room GSoC Project

PGP Clean Room Wiki Page

GSoC Mentor’s Guide


Planet DebianSean Whitton: Why have combat encounters in 5e D&D?

A friend and I each run a D&D game, and we also play in each other’s games. We disagree on a number of different things about how the game is best played, and I learn a lot from seeing how both sets of choices play out in each of the two games.

One significant point of disagreement is how important it is to ensure that combat is balanced. In my game I disallow all homebrew and third party content. Only the core rulebooks, and official printed supplements, are permitted. By contrast, my friend has allowed several characters in his game to use homebrew races from the Internet, which are clearly more powerful than the PHB races. And he is quite happy to make modifications to spells and abilities without investigating the consequences for balance. Changes which seem innocuous can have balance consequences that you don’t realise for some time or do not observe without playtesting; I always assume the worst, and don’t change anything. (I constantly reflavour abilities and stats. In this post I’m interested in crunch alone.)

In this post I want to explain why I put such a premium on balance. Before getting on to that explanation, I first need to say something about the blogger Mike Shea’s claim that “D&D 5e is imbalanced by design and that’s ok. Imbalance leads to interesting stories.” (one, two). Shea is drawing a contrast between 4e and 5e. It was possible to more precisely quantify all character and monster abilities in 4e, which meant that if the calculations showed that an encounter would be easy, medium or hard, it was more likely to turn out to be easy, medium or hard. By contrast, 5e involves a number of abilities that can turn the tide of combat suddenly and against the odds. So while the XP thresholds might indicate that a planned encounter will be an easy one, a monster’s ability to petrify a character with just two failed saves could mean that the whole party goes down. Similarly for character abilities that can turn a powerful boss into a sitting duck for the entire combat. Shea points out that such abilities add an awful lot of fun and suspense to the game that might have been lacking from 4e.

I am not in a position to know whether 4e really lacked the kind of surprise and suspense described here. Regardless, Shea has identified something important about 5e. A great deal is added to combat by abilities on both sides that can quickly turn the tide. However, I find it misleading to say that this makes 5e unbalanced. Shea also uses the term ‘unpredictable’, and I think that’s a better way to refer to this phenomenon. For balance is more than determining an accurate challenge rating, and using this to pit the right number of monsters against the right number of players. In the absence of tide-turning abilities, that’s all balance is; however, balance is also a concept that applies to individual abilities, including tide-turning abilities.

I suggest that a very powerful ability, that has the potential to change the tide of a battle, is balanced by some combination of being (i) highly situational; (ii) very resource-depleting; and (iii) requires a saving throw, or similar, with the parameters set so that the full effect of the ability is unlikely to occur. Let me give a few examples. It’s been pointed out that the Fireball spell deals more damage than a multi-target 3rd level spell is meant to deal (DMG, p. 384). However, the spell is highly situational because it is highly likely to also hit your allies. (Evokers have a mitigation for this, but that is at the cost of a full class feature.) Power Word Kill might down a powerful enemy much sooner than expected. But there’s another enemy in the next room, and then that spell slot is gone.

We should conclude that 5e is not imbalanced by design, but unpredictable by design. In fact, I suggest that 5e spells and abilities are a mixture of the predictable and the unpredictable, and the concept of balance applies differently to these two kinds of abilities. A creature’s standard attack is predictable; balancing it is simply a matter of adjusting the to-hit bonus and the damage dice. Balancing its tide-turning ability is a matter of adjusting the factors I discussed in the previous paragraph, and possibly others. Playtesting will be necessary for both of these balancing efforts to succeed. Predictable abilities are unbalanced when they don’t do enough damage often enough, or too much damage too often, as compared with their CR. Unpredictable abilities are unbalanced when they offer systematic ways to change the tide of battle. Indeed, this makes them predictable.

Now that I’ve responded to Shea, I’ll say what I think the point of combat encounters is, and why this leads me to disallow content that has not been rigorously playtested. (My thinking here is very much informed by how Rodrigo Lopez runs D&D on the Critical Hit podcast, and what he says about running D&D in Q&A. Thank you Rodrigo!) Let me first set aside combat encounters that are meant to be a walkover, and combat encounters that are meant to end in multiple deaths or retreat. The purpose of walkover encounters is to set a particular tone in the story. It allows characters to know that certain things are not challenging to them, and this can be built into roleplaying (”we are among the most powerful denizens of the realm. That gives us a certain responsibility.”). The purpose of unwinnable combat encounters is to work through turning points in a campaign’s plot. The fact that an enemy cannot be defeated by the party is likely to drive a whole story arc; actually running that combat, rather than just saying that their attempt to fight the enemy fails, helps drive that home, and gives the characters points of reference (”you saw what happened when he turned his evil gaze upon you, Mazril. We’ve got to find another way!”).

Consider, then, other combat encounters. This is what I think they are all about. The GM creates an encounter that the rules say is winnable, or unwinnable but otherwise surviveable. Then the GM and the players fight out that encounter within the rules, each side trying to fully exploit the resources available to them, though without doing anything that would not make sense from the points of view of the characters and the monsters. Rolls are not made in secret or fudged, and HP totals are not arbitrarily adjusted. The GM does not pull punches. There are no restrictions on tactical thinking; for example, it’s fine for players to deduce enemy ACs, openly discuss them and act accordingly. However, actions taken must have an in-character justification. The outcome of the battle depends on a combination of tactics and luck: unpredictable abilities can turn the tide suddenly, and that might be enough to win or lose, but most of the time good tactical decision-making on the part of the players is rewarded. (The nature of monster abilities means that less interesting tactics are possible; further, the players have multiple brains between them, so ought to be able to out-think the GM in most cases.)

The result is that combat is a kind of minigame within D&D. The GM takes on a different role. In particular, GM fiat is suspended. The rules of the game are in charge (except, of course, where the GM has to make a call about a situation not covered by the rules). But isn’t this to throw out the advantages tabletop roleplaying games have over video games? Isn’t the GM’s freedom to bend the rules what makes D&D more fun and flexible? My basic response is that the rules for combat are only interesting when they do not depend on GM fiat, or other forms of arbitrariness, and for the parts of the game where GM fiat works well, it is better to use ability checks, or skills challenges, or straight roleplaying.

The thought is that the complexity of the combat rules is justified only when those rules are not arbitrary. If the players must think tactically within a system that can change arbitrarily, there’s no longer much point in investing energy in that tactical thinking. It is not intellectually interesting, it is much less fun, and it does not significantly contribute to the story. Tabletop games have an important role for a combination of GM fiat and dice rolls—the chance of those rolls succeeding remaining under the GM’s control—but that can be leveraged with simpler rules than those for combat. Now, I think that the combat rules are fun, so it is good to include them alongside the parts of the game that are more straightforwardly a collaboration between the GM and the players. But they should be deployed differently in order to bring out their strengths.

It should be clear, based on this, why I put such a premium on balance in combat: imbalance introduces arbitrariness to the combat system. If my tactical thinking is nullified by the systematic advantage that another party member has over my character, there’s no point in my engaging in that tactical thinking. Unpredictable abilities nullify tactical thinking in ways that are fun, but only when they are balanced in the ways I described above.

All of this is a matter of degree. I don’t think that combat is fun only when the characters and monsters are restricted to the core rulebooks; I enjoy combat when I play in my friend’s game. My view is just that combat is more fun the less arbitrary it is. I have certainly experienced the sense that my attempt to intellectually engage with the combat is being undermined by certain house rules and the overpowered abilities of homebrew races. Fortunately, thus far this problem has only affected a few turns of combat at a time, rather than whole combats.

Another friend is of the view that the GM should try to convince the players that they really are treating combat as I’ve described, but still fudge dice rolls in order to prevent, e.g., uninteresting character deaths. In response, I’ll note that I don’t feel capable of making those judgements, in the heat of the moment, about whether a death would be interesting. Further, having to worry about this would make combat less fun for me as the GM, and GM fun is important too.

Cory DoctorowA key to page-numbers in the Little Brother audiobook

Mary Kraus teaches my novel Little Brother to health science interns learning about cybersecurity; to help a student who has a print disability, Mary created a key that maps the MP3 files in the audiobook to the Tor paperback edition. She was kind enough to make her doc public to help other people move easily from the audiobook to the print edition — thanks, Mary!


Krebs on SecurityPowerful New DDoS Method Adds Extortion

Attackers have seized on a relatively new method for executing distributed denial-of-service (DDoS) attacks of unprecedented disruptive power, using it to launch record-breaking DDoS assaults over the past week. Now evidence suggests this novel attack method is fueling digital shakedowns in which victims are asked to pay a ransom to call off crippling cyberattacks.

On March 1, DDoS mitigation firm Akamai revealed that one of its clients was hit with a DDoS attack that clocked in at 1.3 Tbps, which would make it the largest publicly recorded DDoS attack ever.

The type of DDoS method used in this record-breaking attack abuses a legitimate and relatively common service called “memcached” (pronounced “mem-cash-dee”) to massively amp up the power of their DDoS attacks.

Installed by default on many Linux operating system versions, memcached is designed to cache data and ease the strain on heavier data stores, like disk or databases. It is typically found in cloud server environments and it is meant to be used on systems that are not directly exposed to the Internet.

Memcached communicates using the User Datagram Protocol or UDP, which allows communications without any authentication — pretty much anyone or anything can talk to it and request data from it.

Because memcached doesn’t support authentication, an attacker can “spoof” or fake the Internet address of the machine making that request so that the memcached servers responding to the request all respond to the spoofed address — the intended target of the DDoS attack.

Worse yet, memcached has a unique ability to take a small amount of attack traffic and amplify it into a much bigger threat. Most popular DDoS tactics that abuse UDP connections can amplify the attack traffic 10 or 20 times — allowing, for example a 1 mb file request to generate a response that includes between 10mb and 20mb of traffic.

But with memcached, an attacker can force the response to be thousands of times the size of the request. All of the responses get sent to the target specified in the spoofed request, and it requires only a small number of open memcached servers to create huge attacks using very few resources.
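For a rough sense of scale: if a spoofed request of about 50 bytes elicits a cached response of 1 megabyte, the amplification factor is roughly 20,000x, compared with the 10x or 20x typical of other UDP-based reflection attacks.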

Akamai believes there are currently more than 50,000 known memcached systems exposed to the Internet that can be leveraged at a moment’s notice to aid in massive DDoS attacks.

Both Akamai and Qrator — a Russian DDoS mitigation company — published blog posts on Feb. 28 warning of the increased threat from memcached attacks.

“This attack was the largest attack seen to date by Akamai, more than twice the size of the September, 2016 attacks that announced the Mirai botnet and possibly the largest DDoS attack publicly disclosed,” Akamai said [link added]. “Because of memcached reflection capabilities, it is highly likely that this record attack will not be the biggest for long.”

According to Qrator, this specific possibility of enabling high-value DDoS attacks was disclosed in 2017 by a Chinese group of researchers from the cybersecurity 0Kee Team. The larger concept was first introduced in a 2014 Black Hat U.S. security conference talk titled “Memcached injections.”


On Thursday, KrebsOnSecurity heard from several experts from Cybereason, a Boston-based security company that’s been closely tracking these memcached attacks. Cybereason said its analysis reveals the attackers are embedding a short ransom note and payment address into the junk traffic they’re sending to memcached services.

Cybereason said it has seen memcached attack payloads that consist of little more than a simple ransom note requesting payment of 50 XMR (Monero virtual currency) to be sent to a specific Monero account. In these attacks, Cybereason found, the payment request gets repeated until the file reaches approximately one megabyte in size.

The ransom demand (50 Monero) found in the memcached attacks by Cybereason on Thursday.

Memcached can accept files and host files in temporary memory for download by others. So the attackers will place the 1 mb file full of ransom requests onto a server with memcached, and request that file thousands of times — all the while telling the service that the replies should all go to the same Internet address — the address of the attack’s target.

“The payload is the ransom demand itself, over and over again for about a megabyte of data,” said Matt Ploessel, principal security intelligence researcher at Cybereason. “We then request the memcached ransom payload over and over, and from multiple memcached servers to produce an extremely high volume DDoS with a simple script and any normal home office Internet connection. We’re observing people putting up those ransom payloads and DDoSsing people with them.”

Because it only takes a handful of memcached servers to launch a large DDoS, security researchers working to lessen these DDoS attacks have been focusing their efforts on getting Internet service providers (ISPs) and Web hosting providers to block traffic destined for the UDP port used by memcached (port 11211).

Ofer Gayer, senior product manager at security firm Imperva, said many hosting providers have decided to filter port 11211 traffic to help blunt these memcached attacks.

“The big packets here are very easy to mitigate because this is junk traffic and anything coming from that port (11211) can be easily mitigated,” Gayer said.
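For administrators who run memcached themselves, the usual hardening advice is to bind it to localhost and disable UDP entirely, and to drop unsolicited UDP traffic to the memcached port at the network edge. A minimal sketch (exact options depend on the memcached version and firewall in use):

    # /etc/memcached.conf: listen only on localhost and disable the UDP listener
    -l 127.0.0.1
    -U 0

    # firewall rule dropping inbound UDP to the default memcached port
    iptables -A INPUT -p udp --dport 11211 -j DROP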

Several different organizations are mapping the geographical distribution of memcached servers that can be abused in these attacks. Here’s the world at-a-glance, from our friends at

The geographic distribution of memcached servers exposed to the Internet. Image:

Here are the Top 20 networks that are hosting the most number of publicly accessible memcached servers at this moment, according to data collected by Cybereason:

The global ISPs with the most number of publicly available memcached servers.

DDoS monitoring site publishes a live, running list of the latest targets getting pelted with traffic in these memcached attacks.

What do the stats at tell us? According to netlab@360, memcached attacks were not super popular as an attack method until very recently.

“But things have greatly changed since February 24th, 2018,” netlab wrote in a Mar. 1 blog post, noting that in just a few days memcached-based DDoS went from less than 50 events per day, up to 300-400 per day. “Today’s number has already reached 1484, with an hour to go.”

Hopefully, the global ISP and hosting community can come together to block these memcached DDoS attacks. I am encouraged by what I have heard and seen so far, and hope that can continue in earnest before these attacks start becoming more widespread and destructive.

Here’s the Cybereason video from which that image above with the XMR ransom demand was taken:

Cory DoctorowI’m coming to the Adelaide Festival this weekend (and then to Wellington, NZ!)

I’m on the last two cities in my Australia/NZ tour for my novel Walkaway: today, I’m flying to Adelaide for the Adelaide Festival, where I’m appearing in several program items: Breakfast with Papers on Sunday at 8AM; a book signing on Monday at 10AM in Dymocks at Rundle Mall; “Dust Devils,” a panel followed by a signing on Monday at 5PM on the West Stage at Pioneer Women’s Memorial Garden; and “Craphound,” a panel/signing on Tuesday at 5PM on the East Stage at Pioneer Women’s Memorial Garden.

After Adelaide, I’m off to Wellington for Writers and Readers Week and then the NetHui one-day copyright event.

I’ve had a fantastic time in Perth, Melbourne and Sydney and it’s been such a treat to meet so many of you — I’m looking so forward to these last two stops!

CryptogramFriday Squid Blogging: Searching for Humboldt Squid with Electronic Bait

Video and short commentary.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianJohn Goerzen: Emacs #3: More on org-mode

This is third in a series on Emacs and org-mode.

Todo tracking and keywords

When using org-mode to track your TODOs, each item can have multiple states. You can press C-c C-t for a quick shift between states. I have set this:

(setq org-todo-keywords '(
  (sequence "TODO(t!)" "NEXT(n!)" "STARTED(a!)" "WAIT(w@/!)" "OTHERS(o!)" "|" "DONE(d)" "CANCELLED(c)")

Here, I set up 5 states that are for a task that is not yet done: TODO, NEXT, STARTED, WAIT, and OTHERS. Each has a single-character shortcut (t, n, a, etc). The states after the pipe symbol are ones that are considered “done”. I have two: DONE (for things that I have done) and CANCELED (for things that I haven’t done, but for whatever reason, won’t).

The exclamation mark means to log the time when an item was changed to a state. I don’t add this to the done states because those are already logged anyhow. The @ sign means to prompt for a reason; so when switching to WAIT, org-mode will ask me why and add this to the note.

Here’s an example of an entry that has had some state changes:

** DONE This is a test
   CLOSED: [2018-03-02 Fri 03:05]
   - State "DONE"       from "WAIT"       [2018-03-02 Fri 03:05]
   - State "WAIT"       from "TODO"       [2018-03-02 Fri 03:05] \\
     waiting for pigs to fly
   - State "TODO"       from "NEXT"       [2018-03-02 Fri 03:05]
   - State "NEXT"       from "TODO"       [2018-03-02 Fri 03:05]

Here, the most recent items are on top.

Agenda mode, schedules, and deadlines

When you’re in a todo item, C-c C-s or C-c C-d can set a schedule or a deadline for it, respectively. These show up in agenda mode. The difference is in intent and presentation. A schedule is something that you expect to work on at around a time, while a deadline is something that is due at a specific time. By default, the agenda view will start warning you about deadline items in advance.
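For instance, a task with both a schedule and a deadline might look like this (the dates are arbitrary):

** TODO Submit conference talk proposal
   SCHEDULED: <2018-03-05 Mon> DEADLINE: <2018-03-09 Fri>

The agenda will show the item on March 5 and begin warning about the deadline as March 9 approaches.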

And while we’re at it, the agenda view will show you the items that you have coming up, offers a nice way to search for items based on plain text or tags, and handles bulk manipulation of items even across multiple files. I covered setting the files for agenda mode in part 2 of this series.


Of course org-mode has tags. You can quickly set them with C-c C-q.

You can set shortcuts for tags you might like to use often. Perhaps something like this:

  (setq org-tag-persistent-alist 
        '(("@phone" . ?p) 
          ("@computer" . ?c) 
          ("@websurfing" . ?w)
          ("@errands" . ?e)
          ("@outdoors" . ?o)
          ("MIT" . ?m)
          ("BIGROCK" . ?b)
          ("CONTACTS" . ?C)
          ("INBOX" . ?i)

You can also add tags to this list on a per-file basis, and also set tags for something on a per-file basis. I use that for my and files to set an INBOX tag. I can then review all items tagged INBOX from the agenda view each day, and the simple act of refiling them into other files will cause them to lose the INBOX tag.
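Setting a tag for a whole file takes just one line near the top of that file, for example:

#+FILETAGS: :INBOX:

Every headline in the file then inherits the INBOX tag without tagging each entry individually.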


“Refiling” is moving things around, either within a file or elsewhere. It has completion using your headlines. C-c C-w does this. I like these settings:

(setq org-outline-path-complete-in-steps nil)         ; Refile in a single go
(setq org-refile-use-outline-path 'file)


After a while, you’ll get your files all cluttered with things that are done. org-mode has an archive feature to move things out of your main .org files and into some other files for future reference. If you have your org files in git or something, you may wish to delete these other files since you’d have things in history anyhow, but I find them handy for grepping and searching.

I periodically want to go through and archive everything in my files. Based on a stackoverflow discussion, I have this code:

(defun org-archive-done-tasks ()
  (interactive)
  (org-map-entries
   (lambda ()
     (org-archive-subtree)
     (setq org-map-continue-from (outline-previous-heading)))
   "/DONE" 'file)
  (org-map-entries
   (lambda ()
     (org-archive-subtree)
     (setq org-map-continue-from (outline-previous-heading)))
   "/CANCELLED" 'file))

This is based on a particular answer — see the comments there for some additional hints. Now you can run M-x org-archive-done-tasks and everything in the current file marked DONE or CANCELED will be pulled out into a different file.

Up next

I’ll wrap up org-mode with a discussion of automatically receiving emails into org, and syncing org between machines.

Resources to accompany this article

Planet DebianUrvika Gola: BOB Konferenz’18 in Berlin

Recently Pranav Jain and I attended BOB Conference in Berlin, Germany. The conference started with a keynote on a very interesting topic, A language for making movies. Using a non-linear video editor for making movies is time consuming, of course. The speaker talked about the struggle of merging presentation, video and high-quality sound for conferences. Clearly, automation was needed here, which could be achieved by 1. making a plugin for a non-linear video editor, 2. writing a UI automation tool like an operating system macro, or 3. using shell scripting. However, dealing with shell scripts for this purpose could be time consuming, no matter how great shell scripts are. The goal here was to edit videos using a language alone, without letting the language get in the way of solving the problem. In other words, a DSL (Domain-Specific Language) was required, along with Syntax Parse. Video, a language for making movies, integrates with the Racket ecosystem. It combines the power of a traditional video editor with the capabilities of a full programming language.

The next session was about Reactive Streaming with Akka Streams. Streaming Big Data applications is a challenge in itself, because it requires near real-time processing, i.e. there is no time to batch data and process it later. Streaming also has to be done in a fault-tolerant way; we have no time to deal with faults. Talking about streams, there are two types of streams: bounded and unbounded! Bounded streams basically mean that the incoming stream is batched and processed to give some output, whereas an unbounded stream just keeps on flowing… just like that. Akka Streams makes it easy to model type-safe message processing pipelines. Type-safe means that at compile time, it is checked that data definitions are compatible. Akka Streams has explicit semantics, which is quite important.
The basic building blocks of Akka Streams are Sources (produce elements of a type A), Sinks (take elements of type A and consume them) and Flows (consume elements of type A and produce elements of type B). The source sends data via the flow to the sink. There are situations where data is not consumed or produced. Materialized values are useful when we, for example, want to know whether the stream was successful or not, the result of which could be true/false. Another concept involved is backpressure. When we read things from a file, it’s fast. If we split that file based on \n, it’s faster. If we fetch data via HTTP from somewhere, it can be slow due to network connectivity. What backpressure does is let any component say ‘wooh! slow down, I need more time’. Everything is just as fast as the slowest component in the flow, which means that the slowest component in the chain determines the throughput. However, there are situations when we really don’t want to or can’t control the speed of the source. To have explicit control over backpressure we can use buffering. If many requests are coming in and a limit is reached, we can set a buffer after which requests are discarded, or we can push the backpressure upstream when the buffer is full.

Next we saw a fun demo on GRiSP, Bare Metal Functional Programming. GRiSP allows you to run Erlang on bare metal hardware, without a kernel. The GRiSP board could be an alternative to a Raspberry Pi or Arduino. The robot was stubborn; however, it was interesting to watch! Since Pranav and I have worked on Real Time Communications projects, we were inclined to attend a talk on Understanding real time ecosystems, which was very informative. We learned about HTTP, AJAX polling, AJAX long polling, HTTP/2, Pub/Sub and other concepts which were relatable. We learned more about protocols and layers in the last talk of the conference, Engineering TCP/IP with logic.

This is just a summary of our experiences and what we were able to grasp at the conference, where we also shared our individual experiences with Debian through GSoC and Outreachy.

Thank you Dr. Michael Sperber for the opportunity and the organizers for putting up the conference.

CryptogramMalware from Space

Since you don't have enough to worry about, here's a paper postulating that space aliens could send us malware capable of destroying humanity.

Abstract: A complex message from space may require the use of computers to display, analyze and understand. Such a message cannot be decontaminated with certainty, and technical risks remain which can pose an existential threat. Complex messages would need to be destroyed in the risk averse case.

I think we're more likely to be enslaved by malicious AIs.

Worse Than FailureError'd: I Don't Always Test my Code, but When I do...

"Does this mean my package is here or is it also in development?" writes Nariim.


Stuart L. wrote, "Who needs a development environment when you can just test in production on the 'Just In' feed?"


"It was so nice of Three to unexpectedly include me - a real user - in their User Acceptance Testing. Yeah, it's still not fixed," wrote Paul P.


"I found this great nearby hotel option that transcended into the complex plane," Rosenfield writes.


Stuart L. also wrote in, "I can't think of a better place for BoM to test out cyclone warnings than in production."


"The Ball Don't Lie blog at Yahoo! Sports seems to have run out of content during the NBA Finals so they started testing instead," writes Carlos S.



CryptogramCellebrite Unlocks iPhones for the US Government

Forbes reports that the Israeli company Cellebrite can probably unlock all iPhone models:

Cellebrite, a Petah Tikva, Israel-based vendor that's become the U.S. government's company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department for Homeland Security back in November 2017, most likely with Cellebrite technology.


It also appears the feds have already tried out Cellebrite tech on the most recent Apple handset, the iPhone X. That's according to a warrant unearthed by Forbes in Michigan, marking the first known government inspection of the bleeding edge smartphone in a criminal investigation. The warrant detailed a probe into Abdulmajid Saidi, a suspect in an arms trafficking case, whose iPhone X was taken from him as he was about to leave America for Beirut, Lebanon, on November 20. The device was sent to a Cellebrite specialist at the DHS Homeland Security Investigations Grand Rapids labs and the data extracted on December 5.

This story is based on some excellent reporting, but leaves a lot of questions unanswered. We don't know exactly what was extracted from any of the phones. Was it metadata or data, and what kind of metadata or data was it?

The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents. There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.

EDITED TO ADD (3/1): Another article, with more information. It looks like there's an arms race going on between Apple and Cellebrite. At least, if Cellebrite is telling the truth -- which they may or may not be.


Krebs on SecurityFinancial Cyber Threat Sharing Group Phished

The Financial Services Information Sharing and Analysis Center (FS-ISAC), an industry forum for sharing data about critical cybersecurity threats facing the banking and finance industries, said today that a successful phishing attack on one of its employees was used to launch additional phishing attacks against FS-ISAC members.

The fallout from the back-to-back phishing attacks appears to have been limited and contained, as many FS-ISAC members who received the phishing attack quickly detected and reported it as suspicious. But the incident is a good reminder to be on your guard, remember that anyone can get phished, and that most phishing attacks succeed by abusing the sense of trust already established between the sender and recipient.

The confidential alert FS-ISAC sent to members about a successful phishing attack that spawned phishing emails coming from the FS-ISAC.

Notice of the phishing incident came in an alert FS-ISAC shared with its members today and obtained by KrebsOnSecurity. It describes an incident on Feb. 28 in which an FS-ISAC employee “clicked on a phishing email, compromising that employee’s login credentials. Using the credentials, a threat actor created an email with a PDF that had a link to a credential harvesting site and was then sent from the employee’s email account to select members, affiliates and employees.”

The alert said while FS-ISAC was already planning and implementing a multi-factor authentication (MFA) solution across all of its email platforms, “unfortunately, this incident happened to an employee that was not yet set up for MFA. We are accelerating our MFA solution across all FS-ISAC assets.”

The FS-ISAC also said it upgraded its Office 365 email version to provide “additional visibility and security.”

In an interview with KrebsOnSecurity, FS-ISAC President and CEO Bill Nelson said his organization has grown significantly in new staff over the past few years to more than 75 people now, including Greg Temm, the FS-ISAC’s chief information risk officer.

“To say I’m disappointed this got through is an understatement,” Nelson said. “We need to accelerate MFA extremely quickly for all of our assets.”

Nelson observed that “The positive messaging out of this I guess is anyone can become victimized by this.” But according to both Nelson and Temm, the phishing attack that tricked the FS-ISAC employee into giving away email credentials does not appear to have been targeted — nor was it particularly sophisticated.

“I would classify this as a typical, routine, non-targeted account harvesting and phishing,” Temm said. “It did not affect our member portal, or where our data is. That’s 100 percent multifactor. In this case it happened to be an asset that did not have multifactor.”

In this incident, it didn’t take a sophisticated actor to gain privileged access to an FS-ISAC employee’s inbox. But attacks like these raise the question: How successful might such a phishing attack be if it were only slightly more professional and/or organized?

Nelson said his staff members all participate in regular security awareness training and testing, but that there is always room to fill security gaps and move the needle on how many people click when they shouldn’t with email.

“The data our members share with us is fully protected,” he said. “We have a plan working with our board of directors to make sure we have added security going forward,” Nelson said. “But clearly, recognizing where some of these softer targets are is something every company needs to take a look at.”

CryptogramRussians Hacked the Olympics

Two weeks ago, I blogged about the myriad of hacking threats against the Olympics. Last week, the Washington Post reported that Russia hacked the Olympics network and tried to cast the blame on North Korea.

Of course, the evidence is classified, so there's no way to verify this claim. And while the article speculates that the hacks were a retaliation for Russia being banned due to doping, that doesn't ring true to me. If they tried to blame North Korea, it's more likely that they're trying to disrupt something between North Korea, South Korea, and the US. But I don't know.

Worse Than FailureCodeSOD: What a Stream

In Java 8, they added the Streams API. Coupled with lambdas, this means that developers can write the concise and expressive code traditionally associated with functional programming. It’s the best bits of Java blended with the best bits of Clojure! The good news is that it allows you to write less code! The better news is that you can abuse it to write more code, if you’re so inclined.

Antonio inherited some code written by “Frenk”, who was thus inclined. Frenk wasn’t particularly happy with their job, but was one of the “rockstar programmers” in the eyes of management, so Frenk was given the impossible-to-complete tasks and given complete freedom in the solution.

Frenk had a problem, though. Nothing Frenk was doing was actually all that impossible. If they solved everything with code that anyone else could understand, they wouldn’t look like an amazing genius. So Frenk purposefully obfuscated every line of code, ignoring indentation, favoring one-character variable names, and generally trying to solve each problem in the most obtuse way possible.

Which yielded this.

    Resource[] r; //@Autowired ndr
    Map<File, InputStream> m = null;
    if (file != null)
    {m.put(file, new FileInputStream(file));}else

    m = -> { try { return x.getFile(); }
catch (Exception e) { throw new IllegalStateException(e);}},
    x -> {try{return x.getInputStream();}catch (Exception e){throw new IllegalStateException(e);}}));

As purposefully unreadable code, I’d say that Frenk fails. That’s not to say that it isn’t bad, but Frenk’s attempts to make it unreadable… just make it annoying. I understand what the code does, but I’m just left wondering at why.

I can definitely say that this has never been tested in a case where the file variable is non-null, because that wouldn’t work. Antonio confirms that their IDE was throwing up plenty of warnings about calling a method on a variable that was probably null, with the m.put(…) line. It’s nice that they half-way protect against nulls- one variable is checked, but the other isn’t.

Frenk’s real artistry is in employing streams to convert an array to a map. On its surface, it’s not an objectively wrong approach- this is the kind of things streams are theoretically good at. Examine each element in the array, and apply a lambda that extracts the key and another lambda that extracts the value and put it into a map.

There are many real-world cases where I might use this exact technique. But in this case, Antonio refactored it to something a bit cleaner:

        Resource[] resources; //@Autowired again
        Map<File, InputStream> resourceMap = new HashMap<>();
        if (file != null)
            resourceMap.put(file, new FileInputStream(file));
        else
            for (Resource res : resources)
                resourceMap.put(res.getFile(), res.getInputStream());

Here, the iterative approach is much simpler, and the intent of the code is more clear. Just because you have a tool doesn’t make it the right tool for the job. And before you wonder about the lack of exception handling- both the original block and the refactored version were already wrapped up in an exception handling block that can handle the IOException that failed access to the files would throw.



Cory DoctorowHey, Sydney! I’m coming to see you tonight (then Adelaide and Wellington!)

I’m just about to go to the airport to fly to Sydney for tonight’s event, What should we do about Democracy?

It’s part of the Australia/New Zealand tour for Walkaway, and from Sydney, I’m moving on to the Adelaide Festival and then to Wellington for Writers and Readers Week and the NetHui one-day event on copyright.

It feels like democracy is under siege, even in rich, peaceful countries like Australia that have escaped financial shocks and civil strife. Populist impulses have been unleashed in the UK and USA. There is a record lack of trust in the institutions of politics and government, exacerbated by the ways in which social media and digital technology can spread ‘fake news’ and are being harnessed by foreign powers to meddle in politics. Important issues that citizens care about, like climate change, are sidelined by professional politicians, enhancing the appeal of outsider figures. Do these problems add up to the failure of democracy? Are Brexit and Trump outliers, or the new normal? Join a lively panel of experts and commentators explore some big questions about the future of democracy, and think more clearly about what we ought to do.

Speakers Cory Doctorow, A.C. Grayling, Rebecca Huntley and Lenore Taylor

Chair Jeremy Moss

Cory DoctorowMy short story about better cities, where networks give us the freedom to schedule our lives to avoid heat-waves and traffic jams

I was lucky enough to be invited to submit a piece to Ian Bogost’s Atlantic series on the future of cities (previously: James Bridle, Bruce Sterling, Molly Sauter, Adam Greenfield); I told Ian I wanted to build on my 2017 Locus column about using networks to allow us to coordinate our work and play in a way that maximized our freedom, so that we could work outdoors on nice days, or commute when the traffic was light, or just throw an impromptu block party when the neighborhood needed a break.

The story is out today, with a gorgeous illustration by Molly Crabapple; the Atlantic called it “The City of Coordinated Leisure,” but in my heart it will always be “Coase’s Day Off: a microeconomics of coordinated leisure.”

There had been some block parties on Lima Street when Arturo had been too small to remember them, but then there had been a long stretch of unreasonably seasonable weather and no one had tried it, not until the year before, on April 18, a Thursday after a succession of days that vied to top each other for inhumane conditions, the weather app on the hallway wall showing 112 degrees before breakfast.

Mr. Papazian was the block captain for that party, and the first they’d known of it was when Arturo’s dad called out to his mom that Papazian had messaged them about a block party, and there was something funny in Dad’s tone, a weird mix of it’s so crazy and let’s do it.

That had been a day to remember, and Arturo had remembered, and watched the temperature.

The City of Coordinated Leisure [Cory Doctorow/The Atlantic]

Krebs on SecurityHow to Fight Mobile Number Port-out Scams

T-Mobile, AT&T and other mobile carriers are reminding customers to take advantage of free services that can block identity thieves from easily “porting” your mobile number out to another provider, which allows crooks to intercept your calls and messages while your phone goes dark. Tips for minimizing the risk of number porting fraud are available below for customers of all four major mobile providers, including Sprint and Verizon.

Unauthorized mobile phone number porting is not a new problem, but T-Mobile said it began alerting customers about it earlier this month because the company has seen a recent uptick in fraudulent requests to have customer phone numbers ported over to another mobile provider’s network.

“We have been alerting customers via SMS that our industry is experiencing a phone number port out scam that could impact them,” T-Mobile said in a written statement. “We have been encouraging them to add a port validation feature, if they’ve not already done so.”

Crooks typically use phony number porting requests when they have already stolen the password for a customer account (either for the mobile provider’s network or for another site), and wish to intercept the one-time password that many companies send to the mobile device to perform two-factor authentication.

Porting a number to a new provider shuts off the phone of the original user, and forwards all calls to the new device. Once in control of the mobile number, thieves can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.

In these cases, the fraudsters can call a customer service specialist at a mobile provider and pose as the target, providing the mark’s static identifiers like name, date of birth, social security number and other information. Often this is enough to have a target’s calls temporarily forwarded to another number, or ported to a different provider’s network.

“Port out fraud has been an industry problem for a long time, but recently we’ve seen an uptick in this illegal activity,” T-Mobile said.  “We’re not providing specific metrics, but it’s been enough that we felt it was important to encourage customers to add extra security features to their accounts.”

In a blog post published Tuesday, AT&T said bad guys sometimes use illegal porting to steal your phone number, transfer the number to a device they control and intercept text authentication messages from your bank, credit card issuer or other companies.

“You may not know this has happened until you notice your mobile device has lost service,” reads a post by Brian Rexroad, VP of security relations at AT&T. “Then, you may notice loss of access to important accounts as the attacker changes passwords, steals your money, and gains access to other pieces of your personal information.”

Rexroad says in some cases the thieves just walk into an AT&T store and present a fake ID and your personal information, requesting to switch carriers. Porting allows customers to take their phone number with them when they change phone carriers.

The law requires carriers to provide this number porting feature, but there are ways to reduce the risk of this happening to you.

T-Mobile suggests adding its port validation feature to all accounts. To do this, call 611 from your T-Mobile phone or dial 1-800-937-8997 from any phone. The T-Mobile customer care representative will ask you to create a 6-to-15-digit passcode that will be added to your account.

“We’ve included alerts in the T-Mobile customer app and on, but we don’t want customers to wait to get an alert to take action,” the company said in its statement. “Any customer can call 611 at any time from their mobile phone and have port validation added to their accounts.”

Verizon requires a match on a password or a PIN associated with the account for a port to go through. Subscribers can set their PIN via their Verizon Wireless website account or by visiting a local shop.

Sprint told me that in order for a customer to port their number to a different carrier, they must provide the correct Sprint account number and PIN for the port to be approved. Sprint requires all customers to create a PIN during their initial account setup.

AT&T calls its two-factor authentication “extra security,” which involves creating a unique passcode on your AT&T account that requires you to provide that code before any changes can be made — including ports initiated through another carrier. Follow this link for more information. And don’t use something easily guessable like your SSN (the last four digits of your SSN are the default PIN, so make sure you change it quickly to something you can remember but that’s non-obvious).

Bigger picture, these porting attacks are a good reminder to use something other than a text message or a one-time code read to you in an automated phone call as your second authentication factor. Whenever you have the option, choose the app-based alternative: Many companies now support third-party authentication apps like Google Authenticator and Authy, which can act as powerful two-factor authentication alternatives that are not nearly as easy for thieves to intercept.
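Apps like Google Authenticator and Authy generally implement the TOTP standard (RFC 6238): your phone and the service share a secret at enrollment, and each side independently derives a short-lived six-digit code from it, so there is no code in transit for a number-porting thief to intercept. Here is a minimal sketch of that derivation using only Python's standard library; the base32 secret shown is a made-up example, not anything tied to a real account:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        # Derive the current TOTP code from a shared base32 secret (RFC 6238).
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period                    # 30-second time step
        msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                              # dynamic truncation offset
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # example secret only; prints the code valid right now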

Several of the mobile companies referred me to the work of a Mobile Authentication task force created by the carriers last fall. They say the issue of unauthorized ports to commit fraud is being addressed by this initiative.

For more on tightening your mobile security stance, see last year’s story, “Is Your Mobile Carrier Your Weakest Link?”

CryptogramApple to Store Encryption Keys in China

Apple is bowing to pressure from the Chinese government and storing encryption keys in China. While I would prefer it if it would take a stand against China, I really can't blame it for putting its business model ahead of its desires for customer privacy.

Two more articles.

Worse Than FailureCodeSOD: The Part Version

Once upon a time, there was a project. Like most projects, it was understaffed, under-budgeted, under-estimated, and under the gun. Death marches ensued, and 80 hour weeks became the norm. The attrition rate was so high that no one who was there at the start of the project was there at the end of the project. Like the Ship of Theseus, each person was replaced at least once, but it was still the same team.

Eric wasn’t on that team. He was, however, a consultant. When the project ended and nothing worked, Eric got called in to fix it. And then called back to fix it some more. And then called back to implement new features. And called back…

While diagnosing one problem, Eric stumbled across the method getPartVersions. A part number was always something like “123456-1”, where the first group of digits was the part number itself, and the portion after the “-” was the version of that part.

So, getPartVersions, then, should be something like:

String getPartVersions(String part) {
    //sanity checks omitted
    return part.split("-")[1];
}

The first hint that things weren’t implemented in a sane way was the method’s signature:

    private List<Integer> getPartVersions(final String searchString)

Why was it returning a list? The calling code always used the first element in the list, and the list was always one element long.

    private List<Integer> getPartVersions(final String searchString) {
        final List<Integer> partVersions = new ArrayList<>();
        if (StringUtils.indexOfAny(searchString, DELIMITER) != -1) {
            final String[] splitString = StringUtils.split(searchString, DELIMITER);
            if (splitString != null && splitString.length > 1) {
                //this is the partIdentifier, we make it empty it so it will not be parsed as a version
                splitString[0] = "";
                for (String s : splitString) {
                    s = s.trim();
                    try {
                        if (s.length() <= 2) {
                            partVersions.add(Integer.parseInt(s));
                        }
                    } catch (final NumberFormatException ignored) {
                        //Do nothing probably not an partVersion
                    }
                }
            }
        }
        return partVersions;
    }

A part number is always in the form “{PART}-{VERSION}”. That is what the variable searchString should contain. So, they do their basic sanity checks: is there a dash there, does it split into two pieces, etc. Even these sanity checks hint at a WTF, as StringUtils is obviously just a wrapper around built-in string functions.

Things get really odd, though, with this:

                splitString[0] = "";
                for (String s : splitString) //…

Throw away the part number, then iterate across the entire series of strings we made by splitting. Check the length: if it’s less than or equal to two, it must be the part version. Parse it into an integer and put it in the list. The real “genius” element of this code is that since the first entry in the splitString array is set to an empty string, Integer.parseInt will throw an exception, thus ensuring we don’t accidentally put the part number in our list.

I’ve personally written methods that have this sort of tortured logic, and given what Eric tells us about the history of the project, I suspect I know what happened here. This method was written before the requirement it fulfilled was finalized. No one, including the business users, actually knew the exact format or structure of a part number. The developer got five different explanations, which turned out to be wrong in 15 different ways, and implemented a compromise that just kept getting tweaked until someone looked at the results and said, “Yes, that’s right.” The dev then closed out the requirement and moved onto the next one.

Eric left the method alone: he wasn’t being paid to refactor things, and too much downstream code depended on the method signature returning a List<Integer>.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Krebs on SecurityBot Roundup: Avalanche, Kronos, NanoCore

It’s been a busy few weeks in cybercrime news, justifying updates to a couple of cases we’ve been following closely at KrebsOnSecurity. In Ukraine, the alleged ringleader of the Avalanche malware spam botnet was arrested after eluding authorities in the wake of a global cybercrime crackdown there in 2016. Separately, a case that was hailed as a test of whether programmers can be held accountable for how customers use their product turned out poorly for 27-year-old programmer Taylor Huddleston, who was sentenced to almost three years in prison for making and marketing a complex spyware program.

First, the Ukrainian case. On Nov. 30, 2016, authorities across Europe coordinated the arrest of five individuals thought to be tied to the Avalanche crime gang, in an operation that the FBI and its partners abroad described as an unprecedented global law enforcement response to cybercrime. Hundreds of malicious web servers and hundreds of thousands of domains were blocked in the coordinated action.

The global distribution of servers used in the Avalanche crime machine. Source:

The alleged leader of the Avalanche gang — 33-year-old Russian Gennady Kapkanov — did not go quietly at the time. Kapkanov allegedly shot at officers with a Kalashnikov assault rifle through the front door as they prepared to raid his home, and then attempted to escape from his 4th-floor apartment balcony. He was later released, after police allegedly failed to file proper arrest records for him.

But on Monday Agence France-Presse (AFP) reported that Ukrainian authorities had once again collared Kapkanov, who was allegedly living under a phony passport in Poltava, a city in central Ukraine. No word yet on whether Kapkanov has been charged, which was supposed to happen Monday.

Kapkanov’s driver’s license. Source:


Lawyers for Taylor Huddleston, a 27-year-old programmer from Hot Springs, Ark., originally asked a federal court to believe that the software he sold on the sprawling hacker marketplace Hackforums — a “remote administration tool” or “RAT” designed to let someone remotely administer one or many computers — was just a benign tool.

The bad things done with Mr. Huddleston’s tools, the defendant argued, were not Mr. Huddleston’s doing. Furthermore, no one had accused Mr. Huddleston of even using his own software.

The Daily Beast first wrote about Huddleston’s case in 2017, and at the time suggested his prosecution raised questions of whether a programmer could be held criminally responsible for the actions of his users. My response to that piece was “Dual-Use Software Criminal Case Not So Novel.”

Photo illustration by Lyne Lucien/The Daily Beast

The court was swayed by evidence that yes, Mr. Huddleston could be held criminally responsible for those actions. It sentenced him to 33 months in prison after the defendant acknowledged that he knew his RAT — a Remote Access Trojan dubbed “NanoCore RAT” — was being used to spy on webcams and steal passwords from systems running the software.

Of course Huddleston knew: He didn’t market his wares on some Craigslist software marketplace ad, or via video promos on his local cable channel: He marketed the NanoCore RAT and another software licensing program called Net Seal exclusively on Hackforums[dot]net.

This sprawling, English-language forum has a deep bench of technical discussions about using RATs and other tools to surreptitiously record passwords and videos of “slaves,” the derisive term for systems secretly infected with these RATs.

Huddleston knew what many of his customers were doing because many NanoCore users also used Huddleston’s Net Seal program to keep their own RATs and other custom hacking tools from being disassembled or “cracked” and posted online for free. In short: He knew what programs his customers were using Net Seal on, and he knew what those customers had done or intended to do with tools like NanoCore.

The sentencing suggests that where you choose to sell something online says a lot about what you think of your own product and who’s likely buying it.

Daily Beast author Kevin Poulsen noted in a July 2017 story that Huddleston changed his tune and pleaded guilty. The story pointed to an accompanying plea in which Huddleston stipulated that he “knowingly and intentionally aided and abetted thousands of unlawful computer intrusions” in selling the program to hackers and that he “acted with the purpose of furthering these unauthorized computer intrusions and causing them to occur.”


Bleeping Computer’s Catalin Cimpanu observes that Huddleston’s case is similar to another being pursued by U.S. prosecutors against Marcus “MalwareTech” Hutchins, the security researcher who helped stop the spread of the global WannaCry ransomware outbreak in May 2017. Prosecutors allege Hutchins was the author and proprietor of “Kronos,” a strain of malware designed to steal online banking credentials.

Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm. Image:

On Sept. 5, 2017, KrebsOnSecurity published “Who is Marcus Hutchins?“, a breadcrumbs research piece on the public user profiles known to have been wielded by Hutchins. The data did not implicate him in the Kronos trojan, but it chronicles the evolution of a young man who appears to have sold and published online quite a few unique and powerful malware samples — including several RATs and custom exploit packs (as well as access to hacked PCs).

MalwareTech declined to be interviewed by this publication in light of his ongoing prosecution. But Hutchins has claimed he never had any customers because he didn’t write the Kronos trojan.

Hutchins has pleaded not guilty to all four counts against him, including conspiracy to distribute malicious software with the intent to cause damage to 10 or more affected computers without authorization, and conspiracy to distribute malware designed to intercept protected electronic communications.

Hutchins said through his @MalwareTechBlog account on Twitter Feb. 26 that he wanted to publicly dispute my Sept. 2017 story. But he didn’t specify why other than saying he was “not allowed to.”

MWT wrote: “mrw [my reaction when] I’m not allowed to debunk the Krebs article so still have to listen to morons telling me why I’m guilty based on information that isn’t even remotely correct.”

Hutchins’ tweet on Feb. 26, 2018.

According to a story at BankInfoSecurity, the evidence submitted by prosecutors for the government includes:

  • Statements made by Hutchins after he was arrested.
  • A CD containing two audio recordings from a county jail in Nevada where he was detained by the FBI.
  • 150 pages of Jabber chats between the defendant and an individual.
  • Business records from Apple, Google and Yahoo.
  • Statements (350 pages) by the defendant from another internet forum, which were seized by the government in another district.
  • Three to four samples of malware.
  • A search warrant executed on a third party, which may contain some privileged information.

The case against Hutchins continues apace in Wisconsin. A scheduling order for pretrial motions filed Feb. 22 suggests the court wishes to have a speedy trial that concludes before the end of April 2018.

TEDYou are here for a reason: 4 questions with Halla Tómasdóttir

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with financier, entrepreneur and onetime candidate for president of Iceland, Halla Tómasdóttir, about what influences, inspires and drives her to be bold.

TED: Tell us who you are.
Halla Tómasdóttir: I think of myself first and foremost as a change catalyst who is passionate about good leadership and a gender-balanced world. My leadership career started in corporate America with Mars and Pepsi Cola, but since then I have served as an entrepreneur, educator, investor, board director, business leader and presidential candidate. I am married, a proud mother of two teenagers and a dog and am perhaps best described by the title given to me by the New Yorker: “A Living Emoji of Sincerity.”

TED: What’s a bold move you’ve made in your career?
HT: I left a high-profile position as the first female CEO of the Iceland Chamber of Commerce to become an entrepreneur with the vision to incorporate feminine values into finance. I felt the urge to show a different way in a sector that felt unsustainable to me, and I longed to work in line with my own values.

TED: Tell us about a woman who inspires you.
HT: The women of Iceland inspired me at an early age, when they showed incredible courage, solidarity and sisterhood and “took the day off” (went on strike) and literally brought the country to its knees — as nothing worked when women didn’t do any work. Five years later, Iceland was the first country in the world to democratically elect a woman as president. I was 11 years old at the time, and her leadership has inspired me ever since. Her clarity on what she cares about and her humble way of serving those causes is truly remarkable.

TED: If you could go back in time, what would you tell your 18-year-old self?
HT: I would say: Halla, just be you and know that you are enough. People will frequently tell you things like: “This is the way we do things around here.” Don’t ever take that as a valid answer if it doesn’t feel right to you. We are not here to continue to do more of the same if it doesn’t work or feel right anymore. We are here to grow, ourselves and our society. You are here for a reason: make your life and leadership matter.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

Worse Than Failure-0//

In software development, there are three kinds of problems: small, big and subtle. The small ones are usually fairly simple to track down; a misspelled label, a math error, etc. The large ones usually take longer to find; a race condition that you just can't reproduce, an external system randomly feeding you garbage, and so forth.

Internet word cloud

The subtle problems are an entirely different beast. It can be as simple as somebody entering 4321 instead of 432l (432L), or similar mix-ups with 'i', 'l', '1', '0' and 'O'. It can be an interchanged comma and period. It can be something more complex, such as an unsupported third party library that throws back errors for undefined conditions, but provides so little information that it is useful to neither user nor developer.

Brujo B encountered such a beast back in 2003 in a sub-equatorial bank that had been especially fond of VB6. This bank had tried to implement standards. In particular, they wanted all of their error messages to appear consistently for their users. To this end, they put a great deal of time and effort into building a library that rendered every error in a consistent format: the error description, the error number, any available details and the source of the error, strung together with separators.

An example error message might be:

  File Not Found - 127 / File 'your.file' could not be found / FileImporter

Unfortunately, the designers of this routine could not compensate for all of the third party tools and libraries that did NOT set some/most/all of those variables. This led to interesting presentations of errors to both users and developers:

  - 34 / Network Connection Lost /
  Unauthorized - 401 //

Crystal Reports was particularly unhelpful, in that it refused to populate any field from which error details could be obtained, leading to the completely unhelpful:

  -0//

...which could only be interpreted as Something really bad happened, but we don't know what that is and you have no way to figure it out. It didn’t matter what Brujo and his peers did. Everything they tried in order to cajole Crystal Reports into giving up context information failed to varying degrees; they could only patch specific instances of errors, and the Ever-Useless™ -0// error kept popping up to bite them in the arse.
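To see why a component that sets none of those fields degenerates into that string, here is a rough, hypothetical reconstruction of the formatter in Python; the bank's actual VB6 code isn't shown here, so the template and field names are guesses. With everything blank and the error number defaulting to zero, all that survives is the separator characters:

    def format_error(description="", number=0, details="", source=""):
        # Guessed template: description - number / details / source
        return "{0} - {1} / {2} / {3}".format(description, number, details, source)

    print(format_error("File Not Found", 127, "File 'your.file' could not be found", "FileImporter"))
    print(format_error(number=34, details="Network Connection Lost"))  # ' - 34 / Network Connection Lost / '
    print(format_error())  # ' - 0 /  / ' -- trim the spaces and you get the dreaded -0//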

After way too much time trying to slay the beast, they gave up, accepted it as one of their own and tried their best to find alternate ways of figuring out what the problems were.

Several years after moving on to saner pastures, Brujo returned to visit old friends. On the wall they had added a cool painting with many words that "describe the company culture". Layered in were management approved words, like "Trust" and "Loyalty". Some were more specific in-jokes, names of former employees, or references to big achievements the organization had made.

One of them was -0//

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Don MartiWhat I don't get about Marketing

I want to try to figure out something I still don't understand about Marketing.

First, read this story by Sarah Vizard at Marketing Week: Why Google and Facebook should heed Unilever’s warnings.

All good points, right?

With the rise of fake news and revelations about how the Russians used social platforms to influence both the US election and EU referendum, the need for change is pressing, both for the platforms and for the advertisers that support them.

We know there's a brand equity crisis going on. Brand-unsafe placements are making mainstream brands increasingly indistinguishable from scams. So the story makes sense so far. But here's what I don't get.

For the call to action to work, Unilever really needs other brands to rally round but these have so far been few and far between.

Other brands? Why?

If brands are worth anything, they can at least help people tell one product apart from another.

Think Small VW ad

Saying that other brands need to participate in saving Unilever's brands from the three-ring shitshow of brand-unsafe advertising is like saying that Volkswagen really needs other brands to get into simple layouts and natural-sounding copy just because Volkswagen's agency did.

Not everybody has to make the same stuff and sell it the same way. Brands being different from each other is a good thing. (Right?)

generic food

Sometimes a problem on the Internet isn't a "let's all work together" kind of problem. Sometimes it's an opportunity for one brand to get out ahead of another.

What if every brand in a category kept on playing in the trash fire except one?

Planet Linux AustraliaLev Lafayette: Drupal "Access denied" Message

It happens rarely enough, but on occasion (such as after an upgrade to a database system (e.g., MySQL, MariaDB) or to the system version of a web-scripting language (e.g., PHP)), you can end up with your Drupal site failing to load, displaying only an error message similar to:

PDOException: SQLSTATE[HY000] [1044] Access denied for user 'username'@'localhost' to database 'database' in lock_may_be_available() (line 167 of /website/includes/

read more


CryptogramE-Mail Leaves an Evidence Trail

If you're going to commit an illegal act, it's best not to discuss it in e-mail. It's also best to Google tech instructions rather than asking someone else to do it:

One new detail from the indictment, however, points to just how unsophisticated Manafort seems to have been. Here's the relevant passage from the indictment. I've bolded the most important bits:

Manafort and Gates made numerous false and fraudulent representations to secure the loans. For example, Manafort provided the bank with doctored [profit and loss statements] for [Davis Manafort Inc.] for both 2015 and 2016, overstating its income by millions of dollars. The doctored 2015 DMI P&L submitted to Lender D was the same false statement previously submitted to Lender C, which overstated DMI's income by more than $4 million. The doctored 2016 DMI P&L was inflated by Manafort by more than $3.5 million. To create the false 2016 P&L, on or about October 21, 2016, Manafort emailed Gates a .pdf version of the real 2016 DMI P&L, which showed a loss of more than $600,000. Gates converted that .pdf into a "Word" document so that it could be edited, which Gates sent back to Manafort. Manafort altered that "Word" document by adding more than $3.5 million in income. He then sent this falsified P&L to Gates and asked that the "Word" document be converted back to a .pdf, which Gates did and returned to Manafort. Manafort then sent the falsified 2016 DMI P&L .pdf to Lender D.

So here's the essence of what went wrong for Manafort and Gates, according to Mueller's investigation: Manafort allegedly wanted to falsify his company's income, but he couldn't figure out how to edit the PDF. He therefore had Gates turn it into a Microsoft Word document for him, which led the two to bounce the documents back-and-forth over email. As attorney and blogger Susan Simpson notes on Twitter, Manafort's inability to complete a basic task on his own seems to have effectively "created an incriminating paper trail."

If there's a lesson here, it's that the Internet constantly generates data about what people are doing on it, and that data is all potential evidence. The FBI is 100% wrong that they're going dark; it's really the golden age of surveillance, and the FBI's panic is really just its own lack of technical sophistication.

Krebs on SecurityUSPS Finally Starts Notifying You by Mail If Someone is Scanning Your Snail Mail Online

In October 2017, KrebsOnSecurity warned that ne’er-do-wells could take advantage of a relatively new service offered by the U.S. Postal Service that provides scanned images of all incoming mail before it is slated to arrive at its destination address. We advised that stalkers or scammers could abuse this service by signing up as anyone in the household, because the USPS wasn’t at that point set up to use its own unique communication system — the U.S. mail — to alert residents when someone had signed up to receive these scanned images.

Image: USPS

The USPS recently told this publication that beginning Feb. 16 it started alerting all households by mail whenever anyone signs up to receive these scanned notifications of mail delivered to that address. The notification program, dubbed “Informed Delivery,” includes a scan of the front of each envelope destined for a specific address each day.

The Postal Service says consumer feedback on its Informed Delivery service has been overwhelmingly positive, particularly among residents who travel regularly and wish to keep close tabs on any bills or other mail being delivered while they’re on the road. It has been available to select addresses in several states since 2014 under a targeted USPS pilot program, but it has since expanded to include many ZIP codes nationwide. U.S. residents can find out if their address is eligible by visiting

According to the USPS, some 8.1 million accounts have been created via the service so far (Oct. 7, 2017, the last time I wrote about Informed Delivery, there were 6.3 million subscribers, so the program has grown more than 28 percent in five months).

Roy Betts, a spokesperson for the USPS’s communications team, says post offices handled 50,000 Informed Delivery notifications the week of Feb. 16, and are delivering an additional 100,000 letters to existing Informed Delivery addresses this coming week.

Currently, the USPS allows address changes via the USPS Web site or in-person at any one of more than 35,000 USPS retail locations nationwide. When a request is processed, the USPS sends a confirmation letter to both the old address and the new address.

If someone already signed up for Informed Delivery later posts a change of address request, the USPS does not automatically transfer the Informed Delivery service to the new address: Rather, it sends a mailer with a special code tied to the new address and to the username that requested the change. To resume Informed Delivery at the new address, that code needs to be entered online using the account that requested the address change.

A review of the methods used by the USPS to validate new account signups last fall suggested the service was wide open to abuse by a range of parties, mainly because of weak authentication and because it is not easy to opt out of the service.

Signing up requires an eligible resident to create a free user account at, which asks for the resident’s name, address and an email address. The final step in validating residents involves answering four so-called “knowledge-based authentication” or KBA questions.

The USPS told me it uses two ID proofing vendors: Lexis Nexis and, naturally, recently breached big three credit bureau Equifax — to ask the magic KBA questions, rotating between them randomly.

KrebsOnSecurity has assailed KBA as an unreliable authentication method because so many answers to the multiple-guess questions are available on sites like Spokeo and Zillow, or via social networking profiles.

It’s also nice when Equifax gives away a metric truckload of information about where you’ve worked, how much you made at each job, and what addresses you frequented when. See: How to Opt Out of Equifax Revealing Your Salary History for how much leaks from this lucrative division of Equifax.

All of the data points in an employee history profile from Equifax will come in handy for answering the KBA questions, or at least whittling away those that don’t match salary ranges or dates and locations of the target identity’s previous addresses.

Once signed up, a resident can view scanned images of the front of each piece of incoming mail in advance of its arrival. Unfortunately, anyone able to defeat those automated KBA questions from Equifax and Lexis Nexis — be they stalkers, jilted ex-partners or private investigators — can see who you’re communicating with via the Postal mail.

Maybe this is much ado about nothing: Maybe it’s just a reminder that people in the United States shouldn’t expect more than a post card’s privacy guarantee (which in itself can leak the “who” and “when” of any correspondence, and sometimes the “what” and “why” of the communication). We’d certainly all be better off if more people kept that guarantee in mind for email in addition to snail mail. At least now the USPS will deliver a piece of paper to your address letting you know when someone signs up to look at those W’s in your snail mail online.

Cory DoctorowPodcast: The Man Who Sold the Moon, Part 05

Here’s part five of my reading (MP3) (part four, part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.


Worse Than FailureCodeSOD: Waiting for the Future

One of the more interesting things about human psychology is how bad we are at thinking about the negative consequences of our actions if those consequences are in the future. This is why the death penalty doesn’t deter crime, why we dump massive quantities of greenhouse gases into the atmosphere, why the Y2K bug happened in the first place, and why we’re going to do it again when every 32-bit Unix system explodes in 2038. If the negative consequence happens well after the action which caused it, humans ignore the obvious cause and effect and go on about making problems that have to be fixed later.
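(For the record, the 2038 deadline is a precise moment: a signed 32-bit time_t simply runs out of seconds. A couple of lines of Python make the arithmetic concrete:)

    from datetime import datetime, timezone

    max_time_t = 2**31 - 1                                      # 2147483647 seconds since the Unix epoch
    print(datetime.fromtimestamp(max_time_t, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
    # One second later a signed 32-bit counter wraps negative, and timestamps jump back to 1901.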

Fran inherited a bit of technical debt. Specifically, there’s an auto-numbered field in the database. Due to their business requirements, when the field hits 999,999, it needs to wrap back around to 000,001. Many many years ago, the original developer “solved” that problem thus:

function getstan($callingMethod = null)
{
    $sequence = 1;

    // get insert id back
    $rs = db()->insert("sequence", array(
        'processor' => 'widgetsinc',
        'RID'       => $this->data->RID,
        'method'    => $callingMethod,
        'card'      => $this->data->cardNumber
    ), false, false);
    if ($rs) { // if query succeeded...
        $sequence = $rs;
        if ($sequence > 999999) {
            db()->q("delete from sequence where processor='widgetsinc'");
            db()->insert("sequence",
                array('processor' => 'widgetsinc', 'RID' => $this->data->RID, 'card' => $this->data->cardNumber), false,
                false);
            $sequence = 1;
        }
    }

    return (substr(str_pad($sequence, 6, "0", STR_PAD_LEFT), -6));
}

The sequence table uses an auto-numbered column. They insert a row into the table, which returns the generated ID used. If that ID is greater than 999,999, they… delete the old rows. They then insert a new row. Then they return “000001”.

Unfortunately, sequences don’t work this way in MySQL, or honestly any other database. They keep counting up unless you alter or otherwise reset the sequence. So, the counter keeps ticking up, and this method keeps deleting the old rows and returning “000001”. The original developer almost certainly never tested what this code does when the counter breaks 999,999, because that day was so far out into the future that they could put off the problem.
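If the requirement really is a six-digit counter that wraps from 999,999 back to 000,001, the cleaner approach is to leave the auto-increment value alone and do the wrap-around arithmetic on it. A sketch of that idea, in Python rather than the original PHP, and assuming (like the code above) that the insert hands back the new row ID:

    def stan_from_insert_id(insert_id):
        # Map an ever-growing auto-increment ID onto the range 000001..999999.
        sequence = ((insert_id - 1) % 999999) + 1
        return str(sequence).zfill(6)

    assert stan_from_insert_id(1) == "000001"
    assert stan_from_insert_id(999999) == "999999"
    assert stan_from_insert_id(1000000) == "000001"   # wraps instead of deleting rows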

Speaking of putting off solving problems, Fran also adds:

For the past 2 years this function has been returning 000001 and is starting to screw up reports.

Broken for at least two years, but only now is it screwing up reports badly enough that anyone wants to do anything to fix it.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaOpenSTEM: At Mercy of the Weather

It is the time of year when Australia often experiences extreme weather events. February is renowned as the hottest month and, in some parts of the country, also the wettest month. It often brings cyclones to our coasts and storms which, conversely enough, may trigger fires as lightning strikes the hot, dry bush. Aboriginal people […]


Planet Linux AustraliaChris Samuel: Vale Dad

[I’ve been very quiet here for over a year for reasons that will become apparent in the next few days when I finish and publish a long post I’ve been working on for a while – difficult to write, hence the delay]

It’s 10 years ago today that my Dad died, and Alan and I lost the father who had meant so much to both of us. It’s odd realising that it’s over 1/5th of my life since he died, it doesn’t seem that long.

Vale dad, love you…

This item originally posted here:

Vale Dad


Rondam RamblingsDevin Nunes doesn't realize that he's part of the government

I was reading about the long anticipated release of the Democratic rebuttal to the famous Republican dossier memo.  I've been avoiding writing about this, or any aspect of the Russia investigation, because there is just so much insanity going on there and I didn't want to get sucked into that tar pit.  But I could not let this slide: [O]n Saturday, committee chairman Devin Nunes (R-Calif.)

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main March 2018 Meeting: Unions - Hacking society's operating system

Tuesday, March 6, 2018, 6:30 PM to 8:30 PM
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.


read more


Sam VargheseJoyce affair: incestuous relationship between pollies and journos needs some exposure

Barnaby Joyce has come (no pun intended) and Barnaby Joyce has gone, but one issue that is intimately connected with the circus that surrounded him for the last three weeks has yet to be subjected to any scrutiny.

And that is the highly incestuous relationship that exists between Australian journalists and politicians and often results in news being concealed from the public.

The Australian media examined the scandal around Deputy Prime Minister Joyce from many angles, ever since a picture of his pregnant mistress, Vikki Campion, appeared on the front page of the The Daily Telegraph.

Various high-profile journalists tried to offer mea culpas to justify their non-reporting of the affair.

This is not the first time that journalists in Canberra have known about newsworthy stories connected to politicians and kept quiet.

In 2005, journalists Michael Brissenden, Tony Wright and Paul Daley were at a dinner with former treasurer Peter Costello at which he told them he had set next April (2006) as the absolute deadline “that is, mid-term,” for John Howard to stand aside; if not, he would challenge him.

Costello was said by Brissenden to have declared that a challenge “will happen then” if “Howard is still there”. “I’ll do it,” he said. He said he was “prepared to go the backbench”. He said he’d “carp” at Howard’s leadership “from the backbench” and “destroy it” until he “won” the leadership.

But the three journalists kept mum about what would have been a big scoop, because Costello’s press secretary asked them not to write the yarn.

There was a great deal of speculation in the run-up to the 2007 election as to whether Howard would step down; one story in July 2006 said there had been an unspoken 1994 agreement between him and Costello to vacate the PM’s seat and make way for Costello to get the top job.

Had the three journalists at that 2005 dinner gone ahead and reported the story — as journalists are supposed to do — it is unlikely that Howard would have been able to carry on as he did. It would have forced Costello to challenge for the leadership or quit. In short, it would have changed the course of politics.

But Brissenden, Daley and Wright kept mum.

In the case of Joyce, it has been openly known since at least April 2017 that he was schtupping Campion. Indeed, the picture of Campion on the front page of the Telegraph indicates she was at least seven months pregnant — later it became known that the baby is due in April — which means Joyce must have been sleeping with her at least from June onwards.

The story was in the public interest, because Joyce and Campion are both paid from the public purse. When their affair became an issue, Joyce had her moved around to the offices of his National Party mates, Matt Canavan and Damian Drum, at salaries that went as high as $190,000. Joyce is also no ordinary politician – he is the deputy prime minister and thus acts as the head of the country whenever the prime minister is out of the country. Thus anything that affects his functioning is of interest to the public as he can make decisions that affect them.

But journalists like Katharine Murphy of the Guardian and Jacqueline Maley of the Sydney Morning Herald kept mum. A female journalist who is not part of this clique, Sharri Markson, broke the story. She was roundly criticised by many who belong to the Murphy-Maley school of thinking.

Chris Uhlmann kept mum. So did Malcolm Farr and a host of others like Fran Bailey.

Both Murphy and Maley cited what they called “ethics” to justify keeping mum. But after the story broke, they leapt on it with claws extended. Another journalist, Julia Baird, tried to spin the story as one that showed how a woman in Joyce’s position would have been treated – much worse, was her opinion. She chose former prime minister Julia Gillard as her case study but did not offer the fact that Gillard was also a highly incompetent prime minister and that the flak she earned was also due to this aspect of her character.

Baird once was a columnist for Fairfax’s Weekend magazine and her profile pic in the publication at the time showed her in Sass & Bide jeans – the very business in which her husband was involved. Given that, when she moralises, one needs to take it with a kilo of salt.

But the central point is that, though she has a number of platforms to break a story, Baird never wrote a word about Joyce’s philandering. He promoted himself as a man who espoused family values by being photographed with his wife and four daughters repeatedly. He moralised more times than any other about the sanctity of marriage. Thus, he was fair game. Or so commonsense would dictate.

Why do these journalists and many others keep quiet and try to stay in the good books of politicians? The answer is simple: though the jobs of journalists and public relations people are diametric opposites, journalists have no qualms about crossing the divide because the money in PR is much more.

Salaries are much higher if a journalist gets onto the PR team of a senior politician. And with jobs in journalism disappearing at a rate of knots year after year, journalists like Murphy, Maley and Baird hedge their bets in order to stay in politicians’ good books. Remember Mark Simkin, a competent news reporter at the ABC? He joined the staff of — hold your breath — Tony Abbott when the man was prime minister. Simkin is rarely seen in public these days.

Nobody calls journalists on this deception and fraud. It emboldens them to continue to pose as people who act in the public interest when in reality they are no different from the average worker. Yet they climb on pulpits week after week and pontificate to the masses.

It has been said that journalists are like prostitutes: first, they do it for the fun of it, then they do it for a few friends, and finally they end up doing it for money. You won’t find too many arguments from me about that characterisation.

CryptogramFriday Squid Blogging: The Symbiotic Relationship Between the Bobtail Squid and a Particular Microbe

This is the story of the Hawaiian bobtail squid and Vibrio fischeri.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesDigital Drag?

Screenshot used with permission

As I was scrolling through Facebook a few weeks ago, I noticed a new trend: Several friends posted pictures (via an app) of what they would look like as “the opposite sex.” Some of them were quite funny—my female-identified friends sported mustaches, while my male-identified friends revealed long flowing locks. But my sociologist-brain was curious: What makes this app so appealing? How does it decide what the “opposite sex” looks like? Assuming it grabs the users’ gender from their profiles, what would it do with users who listed their genders as non-binary, trans, or genderqueer? Would it assign them male or female? Would it crash? And, on a basic level, why are my friends partaking in this “game?”

Gender is deeply meaningful for our social world and for our identities—knowing someone’s gender gives us “cues” about how to categorize and connect with that person. Further, gender is an important way our social world is organized, for better or worse. Those who use the app engage with a part of their own identities and the world around them that is extremely significant and meaningful.

Gender is also performative. We “do” gender through the way we dress, talk, and take up space. In the same way, we read gender on people’s bodies and in how they interact with us. The app “changes people’s gender” by changing their gender performance; it alters their hair, face shape, eyes, and eyebrows. The app is thus an outlet to “play” with gender performance. In other words, it’s a way of doing digital drag. Drag is a term often used to refer to male-bodied people dressing in a feminine way (“drag queens”) or female-bodied people dressing in a masculine way (“drag kings”), but not all people who do drag fit this definition. Drag is ultimately about assuming and performing a gender. Drag is increasingly coming into the mainstream, as the popular reality TV series RuPaul’s Drag Race has been running for almost a decade now. As more people are exposed to the idea of playing with gender, we might see more of them trying it out in semi-public spaces like Facebook.

While playing with gender may be more common, it’s not all fun and games. The Facebook app in particular assumes a gender binary with clear distinctions between men and women, and this leaves many people out. While data on individuals outside of the gender binary is limited, a 2016 report from The Williams Institute estimated that 0.6% of the U.S. adult population — 1.4 million people — identify as transgender. Further, a Minnesota study of high schoolers found about 3% of the student population identify as transgender or gender nonconforming, and researchers in California estimate that 6% of adolescents are highly gender nonconforming and 20% are androgynous (equally masculine and feminine) in their gender performances.

The problem is that the stakes for challenging the gender binary are still quite high. Research shows people who do not fit neatly into the gender binary can face serious negative consequences, like discrimination and violence (including at least 28 killings of transgender individuals in 2017 and 4 already in 2018).  And transgender individuals who are perceived as gender nonconforming by others tend to face more discrimination and negative health outcomes.

So, let’s all play with gender. Gender is messy and weird and mucking it up can be super fun. Let’s make a digital drag app that lets us play with gender in whatever way we please. But if we stick within the binary of male/female or man/woman, there are real consequences for those who live outside of the gender binary.


Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.


Planet Linux AustraliaTim Serong: Strange Bedfellows

The Tasmanian state election is coming up in a week’s time, and I’ve managed to do a reasonable job of ignoring the whole horrible thing, modulo the promoted tweets, the signs on the highway, the junk the major (and semi-major) political parties pay to dump in my letterbox, and occasional discussions with friends and neighbours.

Promoted tweets can be blocked. The signs on the highway can (possibly) be re-purposed for a subsequent election, or can be pulled down and used for minor windbreak/shelter works for animal enclosures. Discussions with friends and neighbours are always interesting, even if one doesn’t necessarily agree. I think the most irritating thing is the letterbox junk; at best it’ll eventually be recycled, at worst it becomes landfill or firestarters (and some of those things do make very satisfying firestarters).

Anyway, as I live somewhere in the wilds division of Franklin, I thought I’d better check to see who’s up for election here. There’s no independents running this time, so I’ve essentially got the choice of four parties; Shooters, Fishers and Farmers Tasmania, Tasmanian Greens, Tasmanian Labor and Tasmanian Liberals (the order here is the same as on the TEC web site; please don’t infer any preference based on the order in which I list parties in this blog post).

I feel like I should be setting party affiliations aside and voting for individuals, but of the sixteen candidates listed, to the best of my knowledge I’ve only actually met and spoken with two of them. Another I noticed at random in a cafe, and I was ignored by a fourth who was milling around with some cronies at a promotional stand out the front of Woolworths in Huonville a few weeks ago. So, party affiliations it is, which leads to an interesting thought experiment.

When you read those four party names above, what things came most immediately to mind? For me, it was something like this:

  • Shooters, Fishers & Farmers: Don’t take our guns. Fuck those bastard Greenies.
  • Tasmanian Greens: Protect the natural environment. Renewable energy. Try not to kill anything. Might collaborate with Labor. Liberals are big money and bad news.
  • Tasmanian Labor: Mellifluous babble concerning health, education, housing, jobs, pokies and something about workers rights. Might collaborate with the Greens. Vehemently opposed to the Liberals.
  • Tasmanian Liberals: Mellifluous babble concerning jobs, health, infrastructure, safety and the Tasmanian way of life, peppered with something about small business and family values. Vehemently opposed to Labor and the Greens.

And because everyone usually automatically thinks in terms of binaries (e.g. good vs. evil, wrong vs. right, one vs. zero), we tend to end up imagining something like this:

  • Shooters, Fishers & Farmers vs. Greens
  • Labor vs. Liberal
  • …um. Maybe Labor and the Greens might work together…
  • …but really, it’s going to be Labor or Liberal in power (possibly with some sort of crossbench or coalition support from minor parties, despite claims from both that it’ll be majority government all the way).

It turns out that thinking in binaries is remarkably unhelpful, unless you’re programming a computer (it’s zeroes and ones all the way down), or are lost in the wilderness (is this plant food or poison? is this animal predator or prey?) The rest of the time, things tend to be rather more colourful (or grey, depending on your perspective), which leads back to my thought experiment: what do these “naturally opposed” parties have in common?

According to their respective web sites, the Shooters, Fishers & Farmers and the Greens have many interests in common, including agriculture, biosecurity, environmental protection, tourism, sustainable land management, health, education, telecommunications and addressing homelessness. There are differences in the policy details of course (some really are diametrically opposed), but in broad strokes these two groups seem to care strongly about – and even agree on – many of the same things.

Similarly, Labor and Liberal are both keen to tell a story about putting the people of Tasmania first, about health, education, housing, jobs and infrastructure. Honestly, for me, they just kind of blend into one another; sure there’s differences in various policy details, but really if someone renamed them Labal and Liberor I wouldn’t notice. These two are the status quo, and despite fighting it out with each other repeatedly, are, essentially, resting on their laurels.

Here’s what I’d like to see: a minority Tasmanian state government formed from a coalition of the Tasmanian Greens plus the Shooters, Fishers & Farmers party, with the Labor and Liberal parties together in opposition. It’ll still be stuck in that irritating Westminster binary mode, but at least the damn thing will have been mixed up sufficiently that people might actually talk to each other rather than just fighting.

CryptogramElection Security

I joined a letter supporting the Secure Elections Act (S. 2261):

The Secure Elections Act strikes a careful balance between state and federal action to secure American voting systems. The measure authorizes appropriation of grants to the states to take important and time-sensitive actions, including:

  • Replacing insecure paperless voting systems with new equipment that will process a paper ballot;

  • Implementing post-election audits of paper ballots or records to verify electronic tallies;

  • Conducting "cyber hygiene" scans and "risk and vulnerability" assessments and supporting state efforts to remediate identified vulnerabilities.

    The legislation would also create needed transparency and accountability in elections systems by establishing clear protocols for state and federal officials to communicate regarding security breaches and emerging threats.

Worse Than FailureError'd: Everybody's Invited!

"According to Outlook, it seems that I accidentally invited all of the EU and US citizens combined," writes Wouter.


"Just an array a month sounds like a pretty good deal to me! And I do happen to have some arrays to spare..." writes Rutger W.


Lucas wrote, "VMWare is on the cutting edge! They can support TWICE as much Windows 10 as their competitors!"


"I just wish it was CurrentMonthName so that I could take advantage of the savings!" Ken wrote.


Mark B. wrote, "I had no idea that Redboxes were so cultured."


"I'm a little uncomfortable about being connected to an undefined undefined," writes Joel B.


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Krebs on SecurityChase ‘Glitch’ Exposed Customer Accounts

Multiple customers have reported logging in to their bank accounts, only to be presented with another customer’s bank account details. Chase has acknowledged the incident, saying it was caused by an internal “glitch” Wednesday evening that did not involve any kind of hacking attempt or cyber attack.

Trish Wexler, director of communications for the retail side of JP Morgan Chase, said the incident happened Wednesday evening, for “a pretty limited number of customers” between 6:30 pm  and 9 pm ET who “sporadically during that time while logged in to could see someone else’s account details.”

“We know for sure the glitch was on our end, not from a malicious actor,” Wexler said, noting that Chase is still trying to determine how many customers may have been affected. “We’re going through Tweets from customers and making sure that if anyone is calling us with issues we’re working one on one with customers. If you see suspicious activity you should give us a call.”

Wexler urged customers to “practice good security hygiene” by regularly reviewing their account statements, and promptly reporting any discrepancies. She said Chase is still working to determine the precise cause of the mix-up, and that there have been no reports of JPMC commercial customers seeing the account information of other customers.

“This was all on our side,” Wexler said. “I don’t know what did happen yet but I know what didn’t happen. What happened last night was 100 percent not the result of anything malicious.”

The account mix-up was documented on Wednesday by Fly & Dine, an online publication that chronicles the airline food industry. Fly & Dine included screenshots of one of their writer’s spouses logged into the account of a fellow Chase customer with an Amazon and Chase card and a balance of more than $16,000.

Kenneth White, a security researcher and director of the Open Crypto Audit Project, said the reports he’s seen on Twitter and elsewhere suggested the screwup was somehow related to the bank’s mobile apps. He also said the Chase retail banking app offered an update first thing Thursday morning.

Chase says the oddity occurred both for customers using the Chase website and for users of the Chase mobile app.

“We don’t have any evidence it was related to any update,” Wexler said.

“There’s only so many kind of logic errors where Kenn logs in and sees Brian’s account,” White said.  “It can be a devil to track down because every single time someone logs in it’s a roll of the dice — maybe they get something in the warmed up cache or they get a new hit. It’s tricky to debug, but this is like as bad as it gets in terms of screwup of the app.”

White said the incident is reminiscent of a similar glitch at online game giant Steam, which caused many customers to see account information for other Steam users for a few hours. He said he suspects the problem was a configuration error someplace within “caching servers,” which are designed to ease the load on a Web application by periodically storing some common graphical elements on the page — such as images, videos and GIFs.

“The images, the site banner, all that’s fine to be cached, but you never want to cache active content or raw data coming back,” White said. “If you’re CNN, you’re probably caching all the content on the homepage. But for a banking app that has access to live data, you never want that to be cached.”

“It’s fairly easy to fix once you identify the problem,” he added. “I can imagine just getting the basics of the core issue [for Chase] would be kind of tricky and might mean a lot of non techies calling your Tier 1 support people.”
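The distinction White is drawing usually comes down to response headers: static assets are marked as cacheable for long periods, while anything derived from a logged-in session is marked so that no CDN, proxy or shared cache will ever store it. A generic sketch of the idea using Flask, purely illustrative and assuming nothing about how Chase's applications are actually built:

    from flask import Flask, jsonify, make_response

    app = Flask(__name__)

    @app.route("/banner.png")
    def banner():
        # Same bytes for every visitor, so caching is safe and desirable.
        resp = make_response(b"...image bytes...")
        resp.headers["Cache-Control"] = "public, max-age=86400"
        return resp

    @app.route("/api/account")
    def account():
        # Specific to the logged-in customer: must never be stored by a shared cache.
        resp = make_response(jsonify(owner="example customer", balance="16000.00"))
        resp.headers["Cache-Control"] = "no-store, private"
        return resp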

Update, 8:10 p.m. ET: Added comment from Chase about the incident affecting both mobile device and Web browser users.


Planet Linux AustraliaRussell Coker: Dell PowerEdge T30

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment, it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM, that’s unusually small but it wouldn’t be the first time a vendor crippled a low end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM. It apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only has 4*SATA connectors on the motherboard. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature, it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002 so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low end tower servers from Dell didn’t have built in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225 which is slower than some systems people are throwing out nowadays) the system would be a decent gaming system.

It has lots of USB ports, which is handy for a file server: I can attach lots of backup devices. Also most of the ports support “super speed”; I haven’t yet tested USB devices that support such speeds, but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, which is really great for a system on or under your desk: any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room, which increases the effective cost of the system by 20%. It has a PC serial port on the motherboard, which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration has an option displayed for enabling charging devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for non-laptop hardware or have the BIOS detect at run-time that it’s not running on a laptop and hide that option.


The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors, this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system not run in any sort of server room. However it is cheaper than any rack mounted systems from Dell so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms, all it takes is a small business moving to a serviced office that has a proper server room and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door and you just can’t have the noise of a rack mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then instead of starting with a tower server and ending up with tower systems in racks a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

Google AdsenseIntroducing AdSense Auto ads

Finding the time to create great content for your users is an essential part of growing your publishing business. Today we are introducing AdSense Auto ads, a powerful new way to place ads on your site. Auto ads use machine learning to make smart placement and monetization decisions on your behalf, saving you time. Place one piece of code just once on all of your pages, and let Google take care of the rest.
Some of the benefits of Auto ads include:
  • Optimization: Using machine learning, Auto ads show ads only when they are likely to perform well and provide a good user experience.
  • Revenue opportunities: Auto ads will identify any available ad space and place new ads there, potentially increasing your revenue.
  • Easy to use: With Auto ads you only need to place the ad code on your pages once. When you’re ready to use new features and ad formats, simply turn them on and off with the flick of a switch -- there’s no need to change the code again.

How do Auto ads work?

  1. Select the ad formats you want to show on your pages by switching them on with a simple toggle.
  2. Place the Auto ads code on your pages.

Auto ads will now start working for you by analyzing your pages, finding potential ad placements, and showing new ads when they’re likely to perform well and provide a good user experience.
And if you want to have different formats on different pages, you can use the new Advanced URL settings feature (e.g. you can choose to place In-feed ads on some sections of your site but not on others).
Getting started with AdSense Auto ads
Auto ads can work equally well on new sites and on those already showing ads.
Have you manually placed ads on your page?
There’s no need to remove them if you don’t want to. Auto ads will take into account all existing Google ads on your pages.

Already using Anchor or Vignette ads?
Auto ads include Anchor and Vignette ads and many more additional formats such as Text and display, In-feed, and Matched content. Note that all users that used Page-level ads are automatically migrated over to Auto ads without any need to add code to their pages again.

To get started with AdSense Auto ads:
  1. Sign in to your AdSense account.
  2. In the left navigation panel, visit My ads and select Get Started.
  3. On the "Choose your global settings" page, select the ad formats that you'd like to show and click Save.
  4. On the next page, click Copy code.
  5. Paste the ad code between the <head> and </head> tags of each page where you want to show Auto ads (a rough sketch of that snippet appears after this list).
  6. Auto ads will start to appear on your pages in about 10-20 minutes.
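For reference, the code pasted in step 5 is just a script include plus a one-line activation call. The sketch below follows the Page-level ads snippet that this post says carries over to Auto ads; the publisher ID is a placeholder you would replace with your own.

    <script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
    <script>
      (adsbygoogle = window.adsbygoogle || []).push({
        google_ad_client: "ca-pub-0000000000000000", // placeholder publisher ID
        enable_page_level_ads: true
      });
    </script>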

We'd love to hear what you think about Auto ads in the comments section below this post.

Posted by:
Tom Long, AdSense Engineering Manager
Violetta Kalathaki, AdSense Product Manager

CryptogramHarassment By Package Delivery

People harassing women by delivering anonymous packages purchased from Amazon.

On the one hand, there is nothing new here. This could have happened decades ago, pre-Internet. But the Internet makes this easier, and the article points out that using prepaid gift cards makes this anonymous. I am curious how much these differences make a difference in kind, and what can be done about it.

Worse Than FailureCodeSOD: Functional IsFunction

Julio S recently had to attempt to graft a third-party document viewer onto an internal web app. The document viewer was from a company which specialized in enterprise “document solutions”, which can be purchased for enterprise-sized licensing fees.

Gluing the document viewer onto their internal app didn’t go terribly well. While debugging, and browsing through the vendor’s javascript, he saw a lot of calls to a function called IsFunction. It was loaded from a “utilities.js”-type do-everything library file. Curious, Julio pulled up the implementation.

function IsFunction ( func ) {
    var bChk=false;
    if (func != "undefined") bChk=true;
    else bChk=false;
    return bChk;
}

I cannot emphasize enough how beautiful this block of code is, by the standards of bad code. There’s so much there. One variable, bChk, uses Hungarian notation. Nothing else seems to. It’s a totally superfluous variable, as we could just do return func != "undefined".

Then again why would we even do that? The real beauty, though, is how the name of the function and its implementation have no relationship to each other, and the implementation is utterly useless. For example:

IsFunction("Hello World"); //true
IsFunction({spam: "eggs"}); //true
IsFunction(function() {}); //true, but it was probably an accident
IsFunction(undefined); //true
IsFunction("undefined"); //false

Yes, the only time this function returns false is the specific case where you pass it the string “undefined”. Everything else IsFunction, apparently. The useless function sounds important. Someone wrote it, probably as a quick attempt at vaguely defensive programming: “I should make sure my inputs are valid.” They didn’t test it. They certainly didn’t think about it. But they wrote it. And then someone else saw the function in use, and said, “Oh… I should probably use that, too.” Somewhere, there’s probably a “Style Guide”, which mandates that, before attempting to invoke a variable that should contain a function, you use IsFunction to confirm it does. It comes up in code reviews, and code has been held from going into production because someone didn't use IsFunction.
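For contrast, a check that does what the name promises is a one-liner. This is just the generic idiom, not whatever fix the vendor eventually shipped:

    // What a real IsFunction would look like: report whether the argument is callable.
    function IsFunction(func) {
        return typeof func === "function";
    }

    IsFunction("Hello World");   // false
    IsFunction({spam: "eggs"});  // false
    IsFunction(function() {});   // true
    IsFunction(undefined);       // false
    IsFunction("undefined");     // false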

And Julio probably is the first person to actually check the implementation since it was first written.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


TEDRemembering pastor Billy Graham, and more news in brief

Behold, your recap of TED-related news:

Remembering Billy Graham. For more than 60 years, pastor Billy Graham inspired countless people around the world with his sermons. On Wednesday, February 21, he passed away at his home in North Carolina after struggling with numerous illnesses over the past few years. He was 99 years old. Raised on a dairy farm in N.C., Graham used the power of new technologies, like radio and television, to spread his message of personal salvation to an estimated 215 million people globally, while simultaneously reflecting on technology’s limitations. Reciting the story of King David to audiences at TED1998, “David found that there were many problems that technology could not solve. There were many problems still left. And they’re still with us, and you haven’t solved them, and I haven’t heard anybody here speak to that,” he said, referring to human evil, suffering, and death. To Graham, the answer to these problems was to be found in God. Even after his death, through the work of the Billy Graham Evangelistic Association, led by his son Franklin, his message of personal salvation will live on. (Watch Graham’s TED Talk)

Fashion inspired by Black Panther. TED Fellow and fashion designer Walé Oyéjidé draws on aesthetics from around the globe to create one-of-a-kind pieces that dismantle bias and celebrate often-marginalized groups. For New York Fashion Week, Oyéjidé designed a suit with a coat and scarf for a Black Panther-inspired showcase, sponsored by Marvel Studios. One of Oyéjidé’s scarves is also worn in the movie by its protagonist, King T’Challa. “The film is very much about the joy of seeing cultures represented in roles that they are generally not seen in. There’s villainy and heros, tech genius and romance,” Oyéjidé told the New York Times, “People of color are generally presented as a monolithic image. I’m hoping it smashes the door open to show that people can occupy all these spaces.” (Watch Oyéjidé’s TED Talk)

Nuclear energy advocate runs for governor. Environmentalist and nuclear energy advocate Michael Shellenberger has launched his campaign for governor of California as an independent candidate. “I think both parties are corrupt and broken. We need to start fresh with a fresh agenda,” he says. Shellenberger intends to run on an energy and environmental platform, and he hopes to involve student environmental activists in his campaign. California’s gubernatorial election will be held in November 2018. (Watch Shellenberger’s TED Talk)

Can UV light help us fight the flu? Radiation scientist David Brenner and his research team at Columbia University’s Irving Medical Center are exploring whether a type of ultraviolet light known as far-UVC could be used to kill the flu virus. To test their theory, they released a strain of the flu virus called H1N1 in an enclosed chamber and exposed it to low doses of UVC. In a paper published in Nature’s Scientific Reports, they report that far-UVC successfully deactivated the virus. Previous research has shown that far-UVC doesn’t penetrate the outer layer of human skin or eyes, unlike conventional UV rays, which means that it appears to be safe to use on humans. Brenner suggests that far-UVC could be used in public spaces to fight the flu. “Think about doctors’ waiting rooms, schools, airports and airplanes—any place where there’s a likelihood for airborne viruses,” Brenner told Time. (Watch Brenner’s TED Talk.)

A beautiful sculpture for Madrid. For the 400th anniversary of Madrid’s Plaza Mayor, artist Janet Echelman created a colorful, fibrous sculpture, which she suspended above the historic space. The sculpture, titled “1.78 Madrid,” aims to provoke contemplation of the interconnectedness of time and our spatial reality. The title refers to the number of microseconds by which a day on Earth was shortened as a result of the 2011 earthquake in Japan, which was so strong it caused the planet’s rotation to accelerate. At night, colorful lights are projected onto the sculpture, which makes it an even more dynamic, mesmerizing sight for the city’s residents. (Watch Echelman’s TED Talk)

A graduate program that doesn’t require a high school degree. Economist Esther Duflo’s new master’s program at MIT is upending how we think about graduate school admissions. Rather than requiring the usual test scores and recommendation letters, the program allows anyone to take five rigorous, online courses for free. Students only pay to take the final exam, the cost of which ranges from $100 to $1,000 depending on income. If they do well on the final exam, they can apply to MIT’s master’s program in data, economics and development policy. “Anybody could do that. At this point, you don’t need to have gone to college. For that matter, you don’t need to have gone to high school,” Duflo told WBUR. Already, more than 8,000 students have enrolled online. The program intends to raise significant aid to cover the cost of the master’s program and living in Cambridge, with the first class arriving in 2020. (Watch Duflo’s TED Talk)

Have a news item to share? Write us, and you may see it included in this weekly round-up.

CryptogramNew Spectre/Meltdown Variants

Researchers have discovered new variants of Spectre and Meltdown. The software mitigations for Spectre and Meltdown seem to block these variants, although the eventual CPU fixes will have to be expanded to account for these new attacks.

Worse Than FailureShiny Side Up


It feels as though disc-based media have always been with us, but the 1990s were when researchers first began harvesting these iridescent creatures from the wild in earnest, pressing data upon them to create the beast known as CD-ROM. Click-and-point adventure games, encyclopedias, choppy full-motion video ... in some cases, ambition far outweighed capability. Advances in technology made the media cheaper and more accessible, often for the worst. There are some US households that still burn America Online 7.0 CDs for fuel.

But we’re not here to delve into the late-90s CD marketing glut. We’re nestling comfortably into the mid-90s, when the Internet was too slow and unreliable for anyone to upload installers onto a customer portal and call it a day. Software had to go out on physical media, and it had to be as bug-free as possible before shipping.

Chris, a developer fresh out of college, worked on product catalog database applications that were mailed to customers on CDs. It was a small shop with no Tech Support department, so he and the other developers had to take turns fielding calls from customers having issues with the admittedly awful VB4 installer. It was supposed to launch automatically, but if the auto-play feature was disabled in Windows 95, or the customer canceled the installer pop-up without bothering to read it, Chris or one of his colleagues was likely to hear about it.

And then came the caller who had no clue what Chris meant when he suggested, "Why don't we open up the CD through the file system and launch the installer manually?"

These were the days before remote desktop tools, and the caller wasn't the savviest computer user. Talking him through minimizing his open programs, double-clicking on My Computer, and browsing into the CD drive took Chris over half an hour.

"There's nothing here," the caller said.

So close to the finish line, and yet so far. Chris stifled his exasperation. "What do you mean?"

"I opened the CD like you said, and it's completely empty."

This was new. Chris frowned. "You're definitely looking at the right drive? The one with the shiny little disc icon?"

"Yes, that's the one. It's empty."

Chris' frown deepened. "Then I guess you got a bad copy of the CD. I'm sorry about that! Let me copy down your name and address, and I'll get a new one sent out to you."

The customer provided his mailing address accordingly. Chris finished scribbling it onto a Post-it square. "OK, lemme read that back to—"

"The shiny side is supposed to be turned upwards, right?" the customer blurted. "Like a gramophone record?"

Chris froze, then slapped the mute button before his laughter spilled out over the line. After composing himself, he returned to the call as the model of professionalism. "Actually, it should be shiny-side down."

"Really? Huh. The little icon's lying, then."

"Yeah, I guess it is," Chris replied. "Unfortunately, that's on Microsoft to fix. Let's turn the disc over and try again."

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaColin Charles: MariaDB Developer’s unconference & M|18

Been a while since I wrote anything MySQL/MariaDB related here, but there’s the column on the Percona blog, that has weekly updates.

Anyway, I’ll be at the developer’s unconference this weekend in NYC. Even managed to snag a session on the schedule, MySQL features missing in MariaDB Server (Sunday, 12.15–13.00). Signup on meetup?

Due to the prevalence of “VIP tickets”, I too signed up for M|18. If you need a discount code, I’ll happily offer them up to you to see if they still work (though I’m sure a quick Google will solve this problem for you). I’ll publish notes, probably in my weekly column.

If you’re in New York and want to say hi, talk shop, etc. don’t hesitate to drop me a line.


CryptogramFacebook Will Verify the Physical Location of Ad Buyers with Paper Postcards

It's not a great solution, but it's something:

The process of using postcards containing a specific code will be required for advertising that mentions a specific candidate running for a federal office, Katie Harbath, Facebook's global director of policy programs, said. The requirement will not apply to issue-based political ads, she said.

"If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States," Harbath said at a weekend conference of the National Association of Secretaries of State, where executives from Twitter Inc and Alphabet Inc's Google also spoke.

"It won't solve everything," Harbath said in a brief interview with Reuters following her remarks.

But sending codes through old-fashioned mail was the most effective method the tech company could come up with to prevent Russians and other bad actors from purchasing ads while posing as someone else, Harbath said.

It does mean a several-days delay between purchasing an ad and seeing it run.

Krebs on SecurityMoney Laundering Via Author Impersonation on Amazon?

Patrick Reames had no idea why Amazon.com sent him a 1099 form saying he’d made almost $24,000 selling books via Createspace, the company’s on-demand publishing arm. That is, until he searched the site for his name and discovered someone has been using it to peddle a $555 book that’s full of nothing but gibberish.

The phony $555 book sold more than 60 times on Amazon using Patrick Reames’ name and Social Security number.

Reames is a credited author on Amazon by way of several commodity industry books, although none of them made anywhere near the amount Amazon is reporting to the Internal Revenue Service. Nor does he have a personal account with Createspace.

But that didn’t stop someone from publishing a “novel” under his name. That word is in quotations because the publication appears to be little more than computer-generated text, almost like the gibberish one might find in a spam email.

“Based on what I could see from the ‘sneak peak’ function, the book was nothing more than a computer generated ‘story’ with no structure, chapters or paragraphs — only lines of text with a carriage return after each sentence,” Reames said in an interview with KrebsOnSecurity.

The impersonator priced the book at $555 and it was posted to multiple Amazon sites in different countries. The book — which has been removed from most Amazon country pages as of a few days ago — is titled “Lower Days Ahead,” and was published on Oct 7, 2017.

Reames said he suspects someone has been buying the book using stolen credit and/or debit cards, and pocketing the 60 percent that Amazon gives to authors. At $555 a pop, it would only take approximately 70 sales over three months to rack up the earnings that Amazon said he made.

“This book is very unlikely to ever sell on its own, much less sell enough copies in 12 weeks to generate that level of revenue,” Reames said. “As such, I assume it was used for money laundering, in addition to tax fraud/evasion by using my Social Security number. Amazon refuses to issue a corrected 1099 or provide me with any information I can use to determine where or how they were remitting the royalties.”

Reames said the books he has sold on Amazon under his name were done through his publisher, not directly via a personal account (the royalties for those books accrue to his former employer) so he’d never given Amazon his Social Security number. But the fraudster evidently had, and that was apparently enough to convince Amazon that the imposter was him.

Reames said after learning of the impersonation, he got curious enough to start looking for other examples of author oddities on Amazon’s Createspace platform.

“I have reviewed numerous Createspace titles and its clear to me that there may be hundreds if not thousands of similar fraudulent books on their site,” Reames said. “These books contain no real content, only dozens of pages of gibberish or computer generated text.”

For example, searching Amazon for the name Vyacheslav Grzhibovskiy turns up dozens of Kindle “books” that appear to be similar gibberish works — most of which have the words “quadrillion,” “trillion” or a similar word in their titles. Some retail for just one or two dollars, while others are inexplicably priced between $220 and $320.

Some of the “books” for sale on Amazon attributed to a Vyacheslav Grzhibovskiy.

“Its not hard to imagine how these books could be used to launder money using stolen credit cards or facilitating transactions for illicit materials or funding of illegal activities,” Reames said. “I can not believe Amazon is unaware of this and is unwilling to intercede to stop it. I also believe they are not properly vetting their new accounts to limit tax fraud via stolen identities.”

Reames said Amazon refuses to send him a corrected 1099, or to discuss anything about the identity thief.

“They say all they can do at this point is send me a letter acknowledging that I’m disputing ever having received the funds, because they said they couldn’t prove I didn’t receive the funds. So I told them, ‘If you’re saying you can’t say whether I did receive the funds, tell me where they went?’ And they said, ‘Oh, no, we can’t do that.’ So I can’t clear myself and they won’t clear me.”

Amazon said in a statement that the security of customer accounts is one of its highest priorities.

“We have policies and security measures in place to help protect them. Whenever we become aware of actions like the ones you describe, we take steps to stop them. If you’re concerned about your account, please contact Amazon customer service immediately using the help section on our website.”

Beware, however, if you plan to contact Amazon customer support via phone. Performing a simple online search for Amazon customer support phone numbers can turn up some dubious and outright fraudulent results.

Earlier this month, KrebsOnSecurity heard from a fraud investigator for a mid-sized bank who’d recently had several customers who got suckered into scams after searching for the customer support line for Amazon. She said most of these customers were seeking to cancel an Amazon Prime membership after the trial period ended and they were charged a $99 fee.

The fraud investigator said her customers ended up calling fake Amazon support numbers, which were answered by people with a foreign accent who proceeded to request all manner of personal data, including bank account and credit card information. In short order, the customers’ accounts were used to set up new Amazon accounts as well as accounts at a service that facilitates the purchase of virtual currencies like Bitcoin.

This Web site does a good job documenting the dozens of phony Amazon customer support numbers that are hoodwinking unsuspecting customers. Amazingly, many of these numbers seem to be heavily promoted using Amazon’s own online customer support discussion forums, in addition to third-party sites.

Interestingly, clicking on the Customer Help Forum link from the Amazon Support Options and Contact Us page currently sends visitors to the page pictured below, which displays a “Sorry, We Couldn’t Find That Page” error. Perhaps the company is simply cleaning things up after being notified last week by KrebsOnSecurity about the bogus phone numbers being promoted on the forum.

In any case, it appears some of these fake Amazon support numbers are being pimped by a number of dubious-looking e-books for sale on Amazon that are all about — you guessed it — how to contact Amazon customer support.

If you wish to contact Amazon by phone, the only numbers you should use are:

U.S. and Canada: 1-866-216-1072

International: 1-206-266-2992

Amazon’s main customer help page is here.

Update, 11:44 a.m. ET: Not sure when it happened exactly, but this notice says Amazon has closed its discussion boards.

Update, 4:02 p.m. ET: Amazon just shared the following statement, in addition to their statement released earlier urging people to visit a help page that didn’t exist (see above):

“Anyone who believes they’ve received an incorrect 1099 form or a 1099 form in error can contact and we will investigate.”

“This is the general Amazon help page:”

Update, 4:01 p.m. ET: Reader zboot has some good stuff. What makes Amazon a great cashout method for cybercrooks, as opposed to, say, bitcoin cashouts, is that funds can be deposited directly into a bank account. He writes:

“It’s not that the darkweb is too slow, it’s that you still need to cash out at the end. Amazon lets you go from stolen funds directly to a bank account. If you’ve set it up with stolen credentials, that process may be faster than getting money out of a bitcoin exchange which tend to limit fiat withdraws to accounts created with the amount of information they managed to steal.”

Worse Than FailureCodeSOD: The Telltale Snippet

True! nervous, very, very dreadfully nervous I had been and am; but why will you say that I am mad? The disease had sharpened my senses, not destroyed, not dulled them. Above all was the sense of hearing acute. I heard all things in the heaven and in the earth. I heard many things in hell. How then am I mad? Hearken! and observe how healthily, how calmly I can tell you the whole story. - “The Telltale Heart” by Edgar Allan Poe

Today’s submitter credits themselves as Too Afraid To Say (TATS) who they are. Why? Because like a steady “thump thump” from beneath the floorboards, they are haunted by their crimes. The haunting continues to this very day.

It is impossible to say how the idea entered TATS’s brain, but as a fresh-faced junior developer, they set out to write a flexible web-control in JavaScript. What they wanted was to dynamically add items to the control. Each item was a set of fields- an ID, a tool tip, a description, etc.

Think about how you might pass a list of objects to a method.

    ObjectLookupField.prototype._AddItems = function _AddItems(objItems) {
        if (objItems && objItems.length > 0) {
            var objItemIDs = [];
            var objTooltips = [];
            var objImages = [];
            var objTypes = [];
            var objDeleted = [];
            var objDescriptions = [];
            var objParentTreeCodes = [];
            var objHasChilderen = [];
            var objPath = [];
            var objMarked = [];
            var objLocked = [];

            var blnSkip;

            for (var intI = 0; intI < objItems.length; intI++) {
                objImages.push((objItems[intI].TypeIconURL ? objItems[intI].TypeIconURL : objItems[intI].IconURL));
                objTooltips.push(objItems[intI].Tooltip ? objItems[intI].Tooltip : '');
                objMarked.push(objItems[intI].Marked ? 'Marked' : '');

                // SNIP, not really related
            }

            // TATS also implemented `addItems`, which requires all these arrays
            window[this._strControlID].addItems([objItemIDs, objImages, objPath, objTooltips, objLocked, objMarked, objParentTreeCodes, objHasChilderen]);
        }
    };

TATS used the infamous “Arrject” pattern. Instead of having a list of objects, where each object has all of the fields it needs, the Arrject pattern has one array per field, and then we’ll hope that each index holds all the related data for a given item. For example:

    arrNames = {"Joebob", "Sallybob", "Suebob"};
    arrAddresses = {"123 Street St", "234 Road Rd", "345 Lane Ln"};
    arrPhones = {"555-1234", "555-2345", "555-3456"};

The 0th index of every array contains everything you want to know about Joebob.
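For comparison, the idiomatic answer to “pass a list of objects to a method” keeps each item’s fields together, so nothing has to be exploded into parallel arrays. A minimal sketch (names hypothetical) with the same made-up data:

    // Hypothetical sketch: pass the objects themselves and read fields per item.
    function addItems(items) {
        items.forEach(function (item) {
            console.log(item.name, item.address, item.phone);
        });
    }

    addItems([
        { name: "Joebob",   address: "123 Street St", phone: "555-1234" },
        { name: "Sallybob", address: "234 Road Rd",   phone: "555-2345" },
        { name: "Suebob",   address: "345 Lane Ln",   phone: "555-3456" }
    ]);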

Most uses of the Arrject pattern end up in code that doesn’t use objects at all, but TATS adds their own little twist. They explode an object into a set of arrays, and then pass those arrays to their own method which creates the necessary DOM elements.

TATS smiled, for what did they have to fear? They bade the senior developers welcome: use my code. And they did.

Before long, this little bit of code propagated throughout their entire codebase; copied, pasted, dropped in, loaded as a JS dependency, hosted on a private CDN. It was everywhere. Time passed, and careers changed. TATS got promoted up to senior. Other seniors left and handed their code off to TATS. And that’s when the thumping beneath the floorboards became intolerable. That is why they are “Too Afraid to Say”. This little ghost, this reminder of their mistakes as a junior dev is always there, waiting beneath their feet, and it keeps. getting. louder.

“Villains!” I shrieked, “dissemble no more! I admit the deed!—tear up the planks!—here, here!—it is the beating of his hideous heart!”


CryptogramOn the Security of Walls

Interesting history of the security of walls:

Dún Aonghasa presents early evidence of the same principles of redundant security measures at work in 13th century castles, 17th century star-shaped artillery fortifications, and even "defense in depth" security architecture promoted today by the National Institute of Standards and Technology, the Nuclear Regulatory Commission, and countless other security organizations world-wide.

Security advances throughout the centuries have been mostly technical adjustments in response to evolving weaponry. Fortification -- the art and science of protecting a place by imposing a barrier between you and an enemy -- is as ancient as humanity. From the standpoint of theory, however, there is very little about modern network or airport security that could not be learned from a 17th century artillery manual. That should trouble us more than it does.

Fortification depends on walls as a demarcation between attacker and defender. The very first priority action listed in the 2017 National Security Strategy states: "We will secure our borders through the construction of a border wall, the use of multilayered defenses and advanced technology, the employment of additional personnel, and other measures." The National Security Strategy, as well as the executive order just preceding it, are just formal language to describe the recurrent and popular idea of a grand border wall as a central tool of strategic security. There's been a lot said about the costs of the wall. But, as the American finger hovers over the Hadrian's Wall 2.0 button, whether or not a wall will actually improve national security depends a lot on how walls work, but moreso, how they fail.

Lots more at the link.

Krebs on SecurityIRS Scam Leverages Hacked Tax Preparers, Client Bank Accounts

Identity thieves who specialize in tax refund fraud have been busy of late hacking online accounts at multiple tax preparation firms, using them to file phony refund requests. Once the Internal Revenue Service processes the return and deposits money into bank accounts of the hacked firms’ clients, the crooks contact those clients posing as a collection agency and demand that the money be “returned.”

In one version of the scam, criminals are pretending to be debt collection agency officials acting on behalf of the IRS. They’ll call taxpayers who’ve had fraudulent tax refunds deposited into their bank accounts, claim the refund was deposited in error, and threaten recipients with criminal charges if they fail to forward the money to the collection agency.

This is exactly what happened to a number of customers at a half dozen banks in Oklahoma earlier this month. Elaine Dodd, executive vice president of the fraud division at the Oklahoma Bankers Association, said many financial institutions in the Oklahoma City area had “a good number of customers” who had large sums deposited into their bank accounts at the same time.

Dodd said the bank customers received hefty deposits into their accounts from the U.S. Treasury, and shortly thereafter were contacted by phone by someone claiming to be a collections agent for a firm calling itself DebtCredit and using the Web site name debtcredit[dot]us.

“We’re having customers getting refunds they have not applied for,” Dodd said, noting that the transfers were traced back to a local tax preparer who’d apparently gotten phished or hacked. Those banks are now working with affected customers to close the accounts and open new ones, Dodd said. “If the crooks have breached a tax preparer and can send money to the client, they can sure enough pull money out of those accounts, too.”

Several of the Oklahoma bank’s clients received customized notices from a phony company claiming to be a collections agency hired by the IRS.

The domain debtcredit[dot]us hasn’t been active for some time, but an exact copy of the site to which the bank’s clients were referred by the phony collection agency can be found at jcdebt[dot]com — a domain that was registered less than a month ago. The site purports to be associated with a company in New Jersey called Debt & Credit Consulting Services, but according to a record (PDF) retrieved from the New Jersey Secretary of State’s office, that company’s business license was revoked in 2010.

“You may be puzzled by an erroneous payment from the Internal Revenue Service but in fact it is quite an ordinary situation,” reads the HTML page shared with people who received the fraudulent IRS refunds. It includes a video explaining the matter, and references a case number, the amount and date of the transaction, and provides a list of personal “data reported by the IRS,” including the recipient’s name, Social Security Number (SSN), address, bank name, bank routing number and account number.

All of these details no doubt are included to make the scheme look official; most recipients will never suspect that they received the bank transfer because their accounting firm got hacked.

The scammers even supposedly assign the recipients an individual “appointed debt collector,” complete with a picture of the employee, her name, telephone number and email address. However, the emails to the domain used in the email address from the screenshot above (debtcredit[dot]com) bounced, and no one answers at the provided telephone number.

Along with the Web page listing the recipient’s personal and bank account information, each recipient is given a “transaction error correction letter” with IRS letterhead (see image below) that includes many of the same personal and financial details on the HTML page. It also gives the recipient instructions on the account number, ACH routing and wire number to which the wayward funds are to be wired.

A phony letter from the IRS instructing recipients on how and where to wire the money that was deposited into their bank account as a result of a fraudulent tax refund request filed in their name.

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

On Feb. 2, 2018, the IRS issued a warning to tax preparers, urging them to step up their security in light of increased attacks. On Feb. 13, the IRS warned that phony refunds through hacked tax preparation accounts are a “quickly growing scam.”

“Thieves know it is more difficult to identify and halt fraudulent tax returns when they are using real client data such as income, dependents, credits and deductions,” the agency noted in the Feb. 2 alert. “Generally, criminals find alternative ways to get the fraudulent refunds delivered to themselves rather than the real taxpayers.”

The IRS says taxpayers who receive fraudulent transfers from the IRS should contact their financial institution, as the account may need to be closed (because the account details are clearly in the hands of cybercriminals). Taxpayers receiving erroneous refunds also should consider contacting their tax preparers immediately.

If you go to file your taxes electronically this year and the return is rejected, it may mean fraudsters have beat you to it. The IRS advises taxpayers in this situation to follow the steps outlined in the Taxpayer Guide to Identity Theft. Those unable to file electronically should mail a paper tax return along with Form 14039 (PDF) — the Identity Theft Affidavit — stating they were victims of a tax preparer data breach.

Worse Than FailureCousin of ITAPPMONROBOT

Logitech Quickcam Pro 4000

Every year, Initrode Global was faced with further and further budget shortages in their IT department. This wasn't because the company was doing poorly—on the contrary, the company overall was doing quite well, hitting record sales every quarter. The only way to spin that into a smaller budget was to dream bigger. Thus, every quarter, the budget demanded greater and greater increases in sales, and the exceptional growth was measured against the desired phenomenal growth and found wanting.

IT, being a cost center, was always hit by budget cuts the hardest. What did they need money for? The lights were still on, the mainframes still churning; any additional funds would only encourage them to take wild risks and break things.

One of the things people were worried about breaking were the thin clients. These had been purchased some years ago from Smyrt, who had been acquired the previous year by Hell Computers. There would be no tech support or patching, not from Hell. The IT department was on their own to ensure the clients kept running.

Unfortunately, the things seemed to have a will of their own—and that will did not include remaining up for weeks on end. Every once in a while, when booting Linux on the thin clients, the Thin Film Transistor screen would turn dark as soon as the X server started. They would remain dark after that; however, when the helpdesk SSH'd into the system, the screen would of course render perfectly on their end. So there was nothing to do to troubleshoot except lug a thin client to their work area and test workarounds from there.

The worst part of this kind of troubleshooting is when the problem is an intermittent one. The only way they could think to reproduce the problem was to spend hours in front of the client, turning it off and back on again. In the face of budget cuts, the already understaffed desk had no manpower to do something so trivial and dull.

Tedium is the mother of invention. Many of the most ingenious pieces of automation were put in place when an enterprising programmer was faced with performing a mind-numbing task over and over for the foreseeable future. Such is the case in this instance. Lacking the support staff to power cycle the machine over and over, the staff instead built a robot.

A webcam was found in the back room, dusty and abandoned, the last vestige of a proposed work-from-home solution that never quite came to fruition years before. A sticker of transparent rubber someone found in their desk was placed over the metal rim of the camera so it wouldn't leave any scratches on the glass of the TFT screen. The webcam was placed up close against one strategically chosen corner of the screen, and attached to a Raspberry Pi someone brought from home.

The Pi was programmed to run a bash script, which in turn called a CLI image-grabbing tool and then applied some ImageMagick filters to determine the brightness value of the patch of screen it could see. This brightness value was compared against a known list of brightnesses to determine which state the machine was in: the boot menu, the Linux kernel messages scrolling past, the colorful login screen, or the solid black screen representing the problem. When the Pi detected a login screen, it would run a scripted reboot on the thin client using SSH and a keypair. If, instead, the screen remained dark for a long period of time, it would send an IM through the company messaging solution to alert the staff that they could begin their testing, then exit.
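The article describes the Pi's loop as a bash script gluing together a webcam capture tool, ImageMagick and ssh. Purely as an illustration, and in Node rather than bash, with every path, threshold and hostname invented for the example, the flow looks roughly like this:

    // Illustration only -- the real thing was a bash script. Thresholds, paths
    // and the hostname are all made up.
    const { execSync } = require("child_process");

    function screenBrightness() {
      // Grab a frame from the webcam, then let ImageMagick report the mean
      // pixel value (0 = black, 1 = white) of the captured corner of the screen.
      execSync("fswebcam -r 320x240 --no-banner /tmp/corner.jpg");
      const mean = execSync(
        'convert /tmp/corner.jpg -colorspace Gray -format "%[fx:mean]" info:'
      );
      return parseFloat(mean.toString());
    }

    let darkSince = null;

    setInterval(() => {
      const b = screenBrightness();
      if (b > 0.5) {
        // Bright enough to be the login screen: reboot the client and keep cycling.
        darkSince = null;
        execSync("ssh -i /home/pi/.ssh/id_rsa root@thinclient01 reboot");
      } else if (b < 0.05) {
        // Solid black. If it stays that way long enough, the bug is reproduced:
        // notify the helpdesk (IM call omitted here) and stop.
        if (darkSince === null) darkSince = Date.now();
        if (Date.now() - darkSince > 5 * 60 * 1000) {
          console.log("Screen has been dark for 5 minutes -- bug reproduced.");
          process.exit(0);
        }
      } else {
        darkSince = null; // boot menu or kernel messages scrolling past
      }
    }, 30 * 1000);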

We've seen machines with the ability to manipulate physical servers. Now, we have machines seeing and evaluating the world in front of them. How long before we reach peak Skynet potential here at TDWTF? And what would the robot revolution look like, with founding members such as these?

[Advertisement] Incrementally adopt DevOps best practices with BuildMaster, ProGet and Otter, creating a robust, secure, scalable, and reliable DevOps toolchain.


Don MartiThe tracker will always get through?

(I work for Mozilla. None of this is secret. None of this is Mozilla policy. Not speaking for Mozilla here.)

A big objection to tracking protection is the idea that the tracker will always get through. Some people suggest that as browsers give users more ability to control how their personal information gets leaked across sites, things won't get better for users, because third-party tracking will just keep up. On this view, today's easy-to-block third-party cookies will be replaced by techniques such as passive fingerprinting where it's hard to tell if the browser is succeeding at protecting the user or not, and users will be stuck in the same place they are now, or worse.

I doubt this is the case because we're playing a more complex game than just trackers vs. users. The game has at least five sides, and some of the fastest-moving players with the best understanding of the game are the adfraud hackers. Right now adfraud is losing in some areas where they had been winning, and the resulting shift in adfraud is likely to shift the risks and rewards of tracking techniques.

Data center adfraud

Fraudbots, running in data centers, visit legit sites (with third-party ads and trackers) to pick up a realistic set of third-party cookies to make them look like high-value users. Then the bots visit dedicated fraudulent "cash out" sites (whose operators have the same third-party ads and trackers) to generate valuable ad impressions for those sites. If you wonder why so many sites made a big deal out of "pivot to video" but can't remember watching a video ad, this is why. Fraudbots are patient enough to get profiled as, say, a car buyer, and watch those big-money ads. And the money is good enough to motivate fraud hackers to make good bots, usually based on real browser code. When a fraudbot network gets caught and blocked from high-value ads, it gets recycled for lower and lower value forms of advertising. By the time you see traffic for sale on fraud boards, those bots are probably only getting past just enough third-party anti-fraud services to be worth running.

This version of adfraud has minimal impact on real users. Real users don't go to fraud sites, and fraudbots do their thing in data centers (doesn't everyone do their Christmas shopping while chilling out in the cold aisle at an Amazon AWS data center? Seems legit to me) and don't touch users' systems. The companies that pay for it are legit publishers, who not only have to serve pages to fraudbots—remember, a bot needs to visit enough legit sites to look like a real user—but also end up competing with adfraud for ad revenue. Adfraud has only really been a problem for legit publishers. The adtech business is fine with it, since they make more money from fraud than the fraud hackers do, and the advertisers are fine with it because fraud is priced in, so they pay the fraud-adjusted price even for real impressions.

What's new for adfraud

So what's changing? More fraudbots in data centers are getting caught, just because the adtech firms have mostly been shamed into filtering out the embarrassingly obvious traffic from IP addresses that everyone can tell probably don't have a human user on them. So where is fraud going now? More fraud is likely to move to a place where a bot can look more realistic but probably not stay up as long—your computer or mobile device. Expect adfraud concealed within web pages, as a payload for malware, and of course in lots and lots of cheesy native mobile apps. (The Google Play Store has an ongoing problem with adfraud, which is content marketing gold for Check Point Software, if you like "shitty app did WHAT?" stories.) Adfraud makes way more money than cryptocurrency mining, using less CPU and battery.

So the bad news is that you're going to have to reformat your uncle's computer a lot this year, because more client-side fraud is coming. Data center IPs don't get by the ad networks as well as they once did, so adfraud is getting personal. The good news is, hey, you know all that big, scary passive fingerprinting that's supposed to become the harder-to-beat replacement for the third-party cookie? Client-side fraud has to beat it in order to get paid, so they'll beat it. As a bonus, client-side bots are way better at attribution fraud (where a fraudulent ad gets credit for a real sale) than data center bots.

Users don't have to get protected from every possible tracking technique in order to shift the web advertising game from a hacking contest to a reputation contest. It often helps simply to push the ROI of negative-externality advertising below the ROI of positive-externality advertising.

Advertisers have two possible responses to adfraud: either try to out-hack it, or join the "flight to quality" and cut back on trying to follow big-money users to low-reputation sites in the first place. Hard-to-detect client-side bots, by making creepy fingerprinting techniques less trustworthy, tend to increase the uncertainty of the hacking option and make flight to quality relatively more attractive.


Planet Linux AustraliaPia Waugh: An optimistic future

This is my personal vision for an event called “Optimistic Futures” to explore what we could be aiming for and figure out the possible roles for government in future.

Technology is both an enabler and a disruptor in our lives. It has ushered in an age of surplus, with decentralised systems enabled by highly empowered global citizens, all creating increasing complexity. It is imperative that we transition into a more open, collaborative, resilient and digitally enabled society that can respond exponentially to exponential change whilst empowering all our people to thrive. We have the means now by which to overcome our greatest challenges including poverty, hunger, inequity and shifting job markets but we must be bold in collectively designing a better future, otherwise we may unintentionally reinvent past paradigms and inequities with shiny new things.

Technology is only as useful as it affects actual people, so my vision starts, perhaps surprisingly for some, with people. After all, if people suffer, the system suffers, so the well being of people is the first and foremost priority for any sustainable vision. But we also need to look at what all sectors and communities across society need and what part they can play:

  • People: I dream of a future where the uniqueness of local communities, cultures and individuals is amplified, where diversity is embraced as a strength, and where all people are empowered with the skills, capacity and confidence to thrive locally and internationally. A future where everyone shares in the benefits and opportunities of a modern, digital and surplus society/economy with resilience, and where everyone can meaningfully contribute to the future of work, local communities and the national/global good.
  • Public sectors: I dream of strong, independent, bold and highly accountable public sectors that lead, inform, collaborate, engage meaningfully and are effective enablers for society and the economy. A future where we invest as much time and effort on transformational digital public infrastructure and skills as we do on other public infrastructure like roads, health and traditional education, so that we can all build on top of government as a platform. Where everyone can have confidence in government as a stabilising force of integrity that provides a minimum quality of life upon which everyone can thrive.
  • The media: I dream of a highly effective fourth estate which is motivated systemically with resilient business models that incentivise behaviours to both serve the public and hold power to account, especially as “news” is also arguably becoming exponential. Actionable accountability that doesn’t rely on the linearity and personal incentives of individuals to respond will be critical with the changing pace of news and with more decisions being made by machines.
  • Private, academic and non-profit sectors: I dream of a future where all sectors can more freely innovate, share, adapt and succeed whilst contributing meaningfully to the public good and being accountable to the communities affected by decisions and actions. I also see a role for academic institutions in particular, given their systemic motivation for high veracity outcomes without being attached to one side, as playing a role in how national/government actions are measured, planned, tested and monitored over time.
  • Finally, I dream of a world where countries are not celebrated for being just “digital nations” but rather are engaged in a race to the top in using technology to improve the lives of all people and to establish truly collaborative democracies where people can meaningfully participate in shaping optimistic and inclusive futures.

Technology is a means, not an ends, so we need to use technology to both proactively invent the future we need (thank you Alan Kay) and to be resilient to change including emerging tech and trends.

Let me share a few specific optimistic predictions for 2070:

  • Automation will help us redesign our work expectations. We will have a 10-20 hour work week supported by machines, freeing up time for family, education, civic duties and innovation. People will have less pressure to simply survive and will have more capacity to thrive (this is a common theme, but something I see as critical).
  • 3D printing of synthetic foods and nanotechnology to deconstruct and reconstruct molecular materials will address hunger, access to medicine, clothes and goods, and community hubs (like libraries) will become even more important as distribution, education and social hubs, with drones and other aerial travel employed for those who can’t travel. Exoskeletons will replace scooters :)
  • With rocket travel normalised, and only an hour to get anywhere on the planet, nations will see competitive citizenships where countries focus on the best quality of life to attract and retain people, rather than largely just trying to attract and retain companies as we do today. We will also likely see the emergence of more powerful transnational communities that have nationhood status to represent the aspects of people’s lives that are not geopolitically bound.
  • The public service has highly professional, empathetic and accountable multi-disciplinary experts on responsive collaborative policy, digital legislation, societal modeling, identifying necessary public digital infrastructure for investment, and well controlled but openly available data, rules and transactional functions of government to enable dynamic and third party services across myriad channels, provided to people based on their needs but under their control. We will also have a large number of citizens working 1 or 2 days a week in paid civic duties on areas where they have passion, skills or experience to contribute.
  • The paralympics will become the main game, as it were, with no limits on human augmentation. We will do the 100m sprint with rockets, judo with cyborgs, rock climbing with tentacles. We have access to medical capabilities to address any form of disease or discomfort but we don’t use the technologies to just comply to a normative view of a human. People are free to choose their form and we culturally value diversity and experimentation as critical attributes of a modern adaptable community.

I’ve only been living in New Zealand a short time but I’ve been delighted and inspired by what I’ve learned from kiwi and Māori cultures, so I’d like to share a locally inspired analogy.

Technology is on one hand, just a waka (canoe), a vehicle for change. We all have a part to play in the journey and in deciding where we want to go. On the other hand, technology is also the winds, the storms, the thunder, and we have to continually work to understand and respond to emerging technologies and trends so we stay safely on course. It will take collaboration and working towards common goals if we are to chart a better future for all.

Don MartiThis is why we can't have nice brands.

What if I told you that there was an Internet ad technology that...

  • can reach the same user on mobile and desktop

  • uses open-standard persistent identifiers for users

  • can connect users to their purchase history

  • reaches the users that the advertiser chooses, at the time the advertiser chooses

  • and doesn't depend on the Google/Facebook duopoly?

Don't go looking for it on the Lumascape.

I'm describing email spam.

Every feature that adtech is bragging on, or working toward? Email spam had it in the 1990s.

So why didn't brand advertisers jump all over spam? Why did they mostly leave it to low-reputation brands and scammers?

To be honest, it probably wasn't a decision decision in most cases, just corporate sloth. But staying away from spam was the right answer. In the email inbox, spam from a high-reputation brand doesn't look any different from spam that any fly-by-night operation can send. All spammers can do the same stuff:

They can sell to people...for a fraction of what marketing used to cost. And they can collect data on these consumers, track what they buy, what they love and hate about the experience, and market to them directly much more effectively.

Oh, wait. That one isn't about spam in the 1990s. That's about targeted advertising on social media sites today. The CEO of digital advertising's biggest trade group says most big marketers are screwed unless they completely change their business models.

It's the direct consumer relationships, and the use of consumer data, that is completely game-changing for the marketing world. And most big marketers, such as Procter & Gamble and Unilever, are not ready for this new reality, the IAB says.

But of course they're ready. The difference is that those established brand advertisers aren't any more ready than some guy who watched a YouTube video series on "growth hacking" and is ready to start buying targeted ads and drop-shipping.

The "new reality," the targeted advertising business that the IAB wants brands to join them in, is a place where you win based not on how much the audience trusts you, but on how well you can out-hack the competition. And like any information space organized by hacking skill, it's a hellscape of deceptive crap. Read The Strange Brands in Your Instagram Feed by Alexis C. Madrigal.

Some Instagram retailers are legit brands with employees and products. Others are simply middlemen for Chinese goods, built in bedrooms, and launched with no capital or inventory. All of them have been pulled into existence by the power of Instagram and Facebook ads combined with a suite of e-commerce tools based around Shopify.

Of course, not every brand that buys a social media ad or other targeted ad is crap.

But a social media ad is useless for telling crap brands from non-crap ones. It doesn't carry economic signal. There's no such thing as a free watch. (PDF)

Rory Sutherland writes, in Reducing activities to their core misses the point,

Many billions of pounds of advertising expenditure have been shifted from conventional media, most notably newspapers, and moved into digital media in a quest for targeted efficiency. If advertising simply works by the conveyance of messages, this would be a sensible thing to do. However, it is beginning to become apparent that not all, perhaps not even most, advertising works this way. It seems that a large part of advertising creates trust and conviction in its audience precisely because it is perceived to be costly.

If everyone knows that any seller can watch a few YouTube videos and do a certain activity, does that activity really help the audience distinguish a high-reputation seller from a low-reputation one?
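
One way to make the costly-signaling point in the Sutherland quote concrete is a toy back-of-the-envelope calculation. The numbers, the function name and the scenario below are entirely my own illustration, not anything from the original post:

    # Toy illustration of costly advertising as a signal (made-up numbers).
    # An honest brand expects repeat purchases from a satisfied customer;
    # a fly-by-night seller gets at most one sale before burning its identity.
    def lifetime_profit(margin_per_sale, expected_repeat_sales, ad_cost):
        return margin_per_sale * expected_repeat_sales - ad_cost

    # An expensive, conspicuous ad only pays off for the seller who expects
    # repeat business, so buying it credibly signals that expectation.
    print(lifetime_profit(10, 20, 150))  #  50: honest brand can afford it
    print(lifetime_profit(10, 1, 150))   # -140: one-shot scammer cannot

    # A cheap targeted ad is affordable to both, so it separates nothing.
    print(lifetime_profit(10, 20, 5))    # 195
    print(lifetime_profit(10, 1, 5))     #   5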

And how does it affect a legit brand when its ads show up on the same medium with all the crappy ones? Twitter has a solution that keeps its ads saleable: just don't show any ads to important people. I'm surprised they can get away with this, but given the mix of rip-off and real brand ads I keep seeing there, it seems to be working.

Extremists and state-sponsored misinformation campaigns aren't "abusing" targeted advertising. They're just taking advantage of a system optimized for deception and using it normally.

Now, I don't want to blame targeted advertising for all of the problems of brand equity. When you put high-fructose corn syrup in your product, brand equity suffers. When you outsource or de-skill the customer support function, brand equity suffers. All the half-ass "looks good this quarter" stuff that established brands are doing is bad for brand equity. It just turns out that the kinds of advertising that you can do on the Internet today are all half-ass "looks good this quarter" stuff. If you want to send a credible economic signal, buy TV time or put a flagship store on some expensive real estate. The Internet's got nothing for you.

Failure to create signal-carrying ad units should be more of a concern for people who want to earn ad money on the Internet than it is. See Bob Hoffman's "refrigerator test." All that work that went into building the most complicated ad medium ever? It went into building an ad medium optimized for low-reputation advertisers. And that kind of ad medium tends to see rates go down over time. It doesn't hold value.

And the medium can't gain value until the users trust it, which means they have to trust the browser. In-browser tracking protection is going to have to enable the legit web advertising industry the same way that spam filters enable the legit email newsletter industry.

Here’s why the epidemic of malicious ads grew so much worse last year

Facebook and Google could lose $2B in ad revenue over ‘toxic content’

How I Cracked Facebook’s New Algorithm And Tortured My Friends

Wanted: Console Text Editor for Windows

Where Did All the Advertising Jobs Go?

Facebook patents tech to determine social class

The Mozilla Blog: A Perspective: Firefox Quantum’s Tracking Protection Gives Users The Right To Be Curious

Breaking up with Facebook: users confess they're spending less time

Survey: Facebook is the big tech company that people trust least

The Perils of Paid Content


Unilever pledges to cut ties with ‘platforms that create division’

Content recommendation services Outbrain and Taboola are no longer a guaranteed source of revenue for digital publishers

The House That Spied on Me

Why Facebook's Disclosure to the City of Seattle Doesn't Add Up

Debunking common blockchain-saving-advertising myths

SF tourist industry struggles to explain street misery to horrified visitors

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

How Facebook Helped Ruin Cambodia's Democracy

Planet Linux AustraliaDonna Benjamin: Site building with Drupal

What even is "Site Building"?

At DrupalDownunder some years back, the wonderful Erica Bramham named her talk "All node, no code". Nodes were the fundamental building blocks in Drupal; they were like single drops of content. These days, though, it's all about entities.

But hang on a minute: I'm using lots of buzzwords, and worse, I'm using words that mean different things in different contexts. Jargon is one of the first hurdles you need to jump to understand the diverse worlds of the web. People who grow up multilingual learn that the meanings of words are somewhat arbitrary; the same thing has different names. This is true for the web too. So the first thing to know about site building is that it means different things to different people.

To me, it means being able to build a website without knowing how to code. I also believe it means I can build a website without having to set up my own development environment. I know people who vehemently disagree with me about this. But that's OK. This is my blog, and these are my rules.

So - this is a post about site building, using SimplyTest.Me and Drupal 8 out of the box.

1. Go to SimplyTest.me

2. Type Drupal Core in the search field, and select "Drupal core" from the list

3. Choose the latest development branch, right at the bottom of the list.


For me, right now, that's 8.6.x, and here's a screenshot of what that looks like.

SimplyTest Me Screenshot, showing drop down fields described in the text.


4. Click "Launch sandbox".

Now wait.

In a few moments, you should see a fresh shiny Drupal 8 site, ready for you to explore.

For me today, it looks like this.  

Drupal 8.6.x front page screenshot


In the top right of the window, you should see a "Log in" link.

Click that, and enter admin/admin to log in.

You're now ready to practice some site building!

First, you'll need to create some content to play with. Here's a short screencast that shows you how to log in, add an article, and change the title using Quick Edit.

A guide to what's next

Follow the Drupal User guide to start building your site!

If you want to start at the beginning, you'll get a great overview of Drupal, and some important info on how to plan your site. But if you want to roll up your sleeves and get building, you can skip the chapter on site installation and jump straight to chapter 4, and dive into basic site configuration.



You have 24 hours to experiment with the sandbox - after that it disappears.


Get in touch

If you want something more permanent, you might want to "try drupal" or contact us to discuss our Drupal services.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV February 2018 Workshop: Installing an Open Source OS on your tablet or phone

Feb 24 2018 12:30
Feb 24 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Installing an Open Source OS on your tablet or phone

Andrew Pam will demonstrate how to install LineageOS, previously known as CyanogenMod and based on the Android Open Source Project, on tablets and phones.  Feel free to bring your own tablets and phones and have a go, but please ensure you back them up if there is anything you still need stored on them!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.



CryptogramFriday Squid Blogging: Squid Pin

There's a squid pin on Kickstarter.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Rondam RamblingsYes, code is data, but that's not what makes Lisp cool

There has been some debate on Hacker News lately about what makes Lisp cool, in particular about whether the secret sauce is homoiconicity, or the idea that "code is data", or something else. I've read through a fair amount of the discussion, and there is a lot of misinformation and bad pedagogy floating around. Because this is a topic that is near and dear to my heart, I thought I'd take a

CryptogramNew National Academies Report on Crypto Policy

The National Academies has just published "Decrypting the Encryption Debate: A Framework for Decision Makers." It looks really good, although I have not read it yet.

Not much news or analysis yet. Please post any links you find in the comments, and I will summarize them here.

Planet Linux AustraliaOpenSTEM: Australia at the Olympics

The modern Olympic Games were started by Frenchman Pierre de Coubertin to promote international understanding. The first games of the modern era were held in 1896 in Athens, Greece. Australia has competed in all the Olympic Games of the modern era, although our participation in the first one was almost by chance. Of course, the […]

Worse Than FailureError'd: Preparing for the Future

George B. wrote, "Wait, so is it done...or not done?"


George B. (a different George, but in good company) is seeing nearly the same thing with CrashPlan Pro, where the backup is done... maybe.


"I swear, that's the last time that I'm flying with Icarus Airlines" Allison V. writes.


"The best I can figure, someone wanted to see what the simulation app would do if executed in some far flung future where months don't matter and nothing makes any sense," writes M.C.


Joel C. wrote, "I can't help it - next time my train is late, I'm going to immediately think that it's because someone didn't click to dismiss a popup."


"I'm not sure what this means, but I guess it's to point out that there are website buttons, and then there are buttons on the website," Brian R. wrote.




Cory DoctorowDo We Need a New Internet?

I was one of the interview subjects on an episode of BBC’s Tomorrow’s World called Do We Need a New Internet? (MP3); it’s a fascinating documentary, including some very thoughtful commentary from Edward Snowden.

Cory DoctorowThe 2018 Locus Poll is open: choose your favorite science fiction of 2017!

Following the publication of its editorial board’s long-list of the best science fiction of 2017, science fiction publishing trade-journal Locus now invites its readers to vote for their favorites in the annual Locus Award. I’m honored to have won this award in the past, and doubly honored to see my novel Walkaway on the short list, and in very excellent company indeed.

While you’re thinking about your Locus List picks, you might also use the list as an aide-memoire in picking your nominees for the Hugo Awards.

Krebs on SecurityNew EU Privacy Law May Weaken Security

Companies around the globe are scrambling to comply with new European privacy regulations that take effect a little more than three months from now. But many security experts are worried that the changes being ushered in by the rush to adhere to the law may make it more difficult to track down cybercriminals and less likely that organizations will be willing to share data about new online threats.

On May 25, 2018, the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires technology companies to get affirmative consent for any information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — is poised to propose changes to the rules governing how much personal information Web site name registrars can collect and who should have access to the data.

Specifically, ICANN has been seeking feedback on a range of proposals to redact information provided in WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. (Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free).
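
For readers who have never poked at WHOIS directly: under the hood it is a very simple protocol (RFC 3912). A client sends a domain name to a WHOIS server on TCP port 43 and gets back a block of plain text containing the fields described above. Here is a minimal Python sketch of such a lookup; the server shown is the Verisign registry server for .com (other TLDs and registrars run their own), and the function name is simply my own illustration, not something from the article:

    # Minimal WHOIS lookup sketch (RFC 3912): send the domain name to a WHOIS
    # server on TCP port 43 and read back the plain-text record. The registrant
    # name, email and phone fields discussed above appear in that text unless
    # the registrar redacts them or a privacy-protection service is in use.
    import socket

    def whois_lookup(domain, server="whois.verisign-grs.com", port=43):
        with socket.create_connection((server, port), timeout=10) as sock:
            sock.sendall((domain + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    if __name__ == "__main__":
        print(whois_lookup("example.com"))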

In a bid to help domain registrars comply with the GDPR, ICANN has floated several proposals, all of which would redact some of the registrant data from WHOIS records. Its mildest proposal would remove the registrant’s name, email, and phone number, while allowing self-certified third parties to request access to that data with the approval of a higher authority — such as the registrar used to register the domain name.

The most restrictive proposal would remove all registrant data from public WHOIS records, and would require legal due process (such as a subpoena or court order) to reveal any information supplied by the domain registrant.

ICANN’s various proposed models for redacting information in WHOIS domain name records.

The full text of ICANN’s latest proposed models (from which the screenshot above was taken) can be found here (PDF). A diverse ICANN working group made up of privacy activists, technologists, lawyers, trademark holders and security experts has been arguing about these details since 2016. For the curious and/or intrepid, the entire archive of those debates up to the current day is available at this link.


To drastically simplify the discussions into two sides, those in the privacy camp say WHOIS records are being routinely plundered and abused by all manner of ne’er-do-wells, including spammers, scammers, phishers and stalkers. In short, their view seems to be that the availability of registrant data in the WHOIS records causes more problems than it is designed to solve.

Meanwhile, security experts are arguing that the data in WHOIS records has been indispensable in tracking down and bringing to justice those who seek to perpetrate said scams, spams, phishes and….er….stalks.

Many privacy advocates seem to take a dim view of any ICANN system by which third parties (and not just law enforcement officials) might be vetted or accredited to look at a domain registrant’s name, address, phone number, email address, etc. This sentiment is captured in public comments made by the Electronic Frontier Foundation‘s Jeremy Malcolm, who argued that — even if such information were only limited to anti-abuse professionals — this also wouldn’t work.

“There would be nothing to stop malicious actors from identifying as anti-abuse professionals – neither would we want to have a system to ‘vet’ anti-abuse professionals, because that would be even more problematic,” Malcolm wrote in October 2017. “There is no added value in collecting personal information – after all, criminals are not going to provide correct information anyway, and if a domain has been compromised then the personal information of the original registrant isn’t going to help much, and its availability in the wild could cause significant harm to the registrant.”

Anti-abuse and security experts counter that there are endless examples of people involved in spam, phishing, malware attacks and other forms of cybercrime who include details in WHOIS records that are extremely useful for tracking down the perpetrators, disrupting their operations, or building reputation-based systems (such as anti-spam and anti-malware services) that seek to filter or block such activity.

Moreover, they point out that the overwhelming majority of phishing is performed with the help of compromised domains, and that the primary method for cleaning up those compromises is using WHOIS data to contact the victim and/or their hosting provider.

Many commentators observed that, in the end, ICANN is likely to proceed in a way that covers its own backside, and that of its primary constituency — domain registrars. Registrars pay a fee to ICANN for each domain a customer registers, although revenue from those fees has been falling of late, forcing ICANN to make significant budget cuts.

Some critics of the WHOIS privacy effort have voiced the opinion that registrars generally view public WHOIS data as a nuisance issue for their domain registrant customers and an unwelcome cost center, since they are often short-staffed yet must field a constant stream of abuse complaints from security experts, researchers and others in the anti-abuse community.

“Much of the registrar market is a race to the bottom, and the ability of ICANN to police the contractual relationships in that market effectively has not been well-demonstrated over time,” commenter Andrew Sullivan observed.

In any case, sources close to the debate tell KrebsOnSecurity that ICANN is poised to recommend a WHOIS model loosely based on Model 1 in the chart above.

Specifically, the system that ICANN is planning to recommend, according to sources, would ask registrars and registries to display just the domain name, city, state/province and country of the registrant in each record; the public email addresses would be replaced by a form or message relay link that allows users to contact the registrant. The source also said ICANN plans to leave it up to the registries/registrars to apply these changes globally or only to natural persons living in the European Economic Area (EEA).
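
To make the shape of that rumored model concrete, here is a small, purely hypothetical Python sketch of the redaction step. The field names, the relay-address scheme and the relay domain are my own illustration, not anything ICANN or the registrars have specified:

    # Hypothetical sketch of a "Model 1"-style WHOIS redaction: keep only the
    # domain, city, state/province and country; drop the registrant's name,
    # email and phone; expose a relay contact instead of the real address.
    PUBLIC_FIELDS = {"domain", "city", "state_province", "country"}

    def redact_whois(record, relay_domain="whois-relay.example"):
        public = {k: v for k, v in record.items() if k in PUBLIC_FIELDS}
        # A registrar-operated relay lets people reach the registrant without
        # publishing the registrant's personal email address.
        public["contact"] = record["domain"].replace(".", "-") + "@" + relay_domain
        return public

    full_record = {
        "domain": "example.org",
        "registrant_name": "Jane Doe",            # redacted in public output
        "registrant_email": "jane@example.org",   # redacted in public output
        "registrant_phone": "+47 555 0100",       # redacted in public output
        "city": "Oslo", "state_province": "Oslo", "country": "NO",
    }
    print(redact_whois(full_record))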

In addition, sources say non-public WHOIS data would be accessible via a credentialing system to identify law enforcement agencies and intellectual property rights holders. However, it’s unlikely that such a system would be built and approved before the GDPR’s May 25, 2018 effective date, so the rumor is that ICANN intends to propose a self-certification model in the meantime.

ICANN spokesman Brad White declined to confirm or deny any of the above, referring me instead to a blog post published Tuesday evening by ICANN CEO Göran Marby. That post does not, however, clarify which way ICANN may be leaning on the matter.

“Our conversations and work are on-going and not yet final,” White wrote in a statement shared with KrebsOnSecurity. “We are converging on a final interim model as we continue to engage, review and assess the input we receive from our stakeholders and Data Protection Authorities (DPAs).”

But with the GDPR compliance deadline looming, some registrars are moving forward with their own plans on WHOIS privacy. GoDaddy, one of the world’s largest domain registrars, recently began redacting most registrant data from WHOIS records for domains that are queried via third-party tools. And it seems likely that other registrars will follow GoDaddy’s lead.


For my part, I can say without hesitation that few resources are as critical to what I do here at KrebsOnSecurity as the data available in the public WHOIS records. WHOIS records are incredibly useful signposts for tracking cybercrime, and they frequently allow KrebsOnSecurity to break important stories about the connections between and identities behind various cybercriminal operations and the individuals/networks actively supporting or enabling those activities. I also very often rely on WHOIS records to locate contact information for potential sources or cybercrime victims who may not yet be aware of their victimization.

In a great many cases, I have found that clues about the identities of those who perpetrate cybercrime can be found by following a trail of information in WHOIS records that predates their cybercriminal careers. Also, even in cases where online abusers provide intentionally misleading or false information in WHOIS records, that information is still extremely useful in mapping the extent of their malware, phishing and scamming operations.

Anyone looking for copious examples of both need only to search this Web site for the term “WHOIS,” which yields dozens of stories and investigations that simply would not have been possible without the data currently available in the global WHOIS records.

Many privacy activists involved in the WHOIS debate have argued that other data related to domain and Internet address registrations — such as name servers, Internet (IP) addresses and registration dates — should also be considered private information. My chief concern, if this belief becomes more widely held, is that security companies might stop sharing such information for fear of violating the GDPR, thus hampering the important work of anti-abuse and security professionals.

This is hardly a theoretical concern. Last month I heard from a security firm based in the European Union regarding a new Internet of Things (IoT) botnet they’d discovered that was unusually complex and advanced. Their outreach piqued my curiosity because I had already been working with a researcher here in the United States who was investigating a similar-sounding IoT botnet, and I wanted to know if my source and the security company were looking at the same thing.

But when I asked the security firm to share a list of Internet addresses related to their discovery, they told me they could not do so because IP addresses could be considered private data — even after I assured them I did not intend to publish the data.

“According to many forums, IPs should be considered personal data as it enters the scope of ‘online identifiers’,” the researcher wrote in an email to KrebsOnSecurity, declining to answer questions about whether their concern was related to provisions in the GDPR specifically.  “Either way, it’s IP addresses belonging to people with vulnerable/infected devices and sharing them may be perceived as bad practice on our end. We consider the list of IPs with infected victims to be private information at this point.”

Certainly as the Internet matures and big companies develop ever more intrusive ways to hoover up data on consumers, we also need to rein in the most egregious practices while giving Internet users more robust tools to protect and preserve their privacy. In the context of Internet security and the privacy principles envisioned in the GDPR, however, I’m worried that cybercriminals may end up being the biggest beneficiaries of this new law.

CryptogramElection Security

Good Washington Post op-ed on the need to use voter-verifiable paper ballots to secure elections, as well as risk-limiting audits.