Planet Russell


Planet Debian: Niels Thykier: Prototyping a new packaging papercut fix – DRY for the debhelper compat level

Have you ever looked at packaging and felt it is a long exercise in repeating yourself?  If you have, you are certainly not alone.  You can find examples of this on the Debian mailing lists, such as when Russ Allbery pointed out that the debhelper compat level and the version in the debhelper build-dependency are very often, but not always, the same.

Russ suggests two ways of solving the problem:

  1. The first proposal is to generate the build-dependency from the compat file. However, generating the control file is (as Russ puts it) “fraught with peril”, probably because we do not have good or standardized tooling for it – creating such tooling and deploying it will take years.  Not to mention that most contributors appear to be uncomfortable with handling debian/control as a generated file.
  2. The alternative proposal from Russ is to assume that the major version of the build-dependency marks the compat level (assuming no compat file exists).  However, Russ again points out an issue here: the solution might be “too magical”.  Indeed, this solution has the problem that you implicitly change the compat level as soon as you bump the versioned dependency beyond a major version of debhelper – but only if you do not have a compat file.

Looking at these two options, the concept behind the second one is the most likely to be deployable in the near future.  However, the solution would need some tweaking, and I have spent my time coming up with an alternative.

The third alternative:

My current alternative to Russ’s second proposal is to make debhelper provide multiple versions of “debhelper-compat” and have packages use a Build-Depends on “debhelper-compat (= X)”, where X denotes the desired compat level.  The build-dependency will then replace the “debhelper (>= X~)” relation when the package does not require a specific version of debhelper (beyond what is required for the compat level).
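
For illustration, here is roughly what this looks like in debian/control (a hypothetical source package; the relation mirrors the mscgen example mentioned below):

Source: hello
Build-Depends: debhelper-compat (= 10)

That single relation replaces both the debian/compat file containing “10” and the usual “debhelper (>= 10~)” build-dependency.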

On top of this, debhelper implements some safe-guards to ensure that it can reliably determine the compat level from the build-dependencies.  Notably, there must be exactly one debhelper compat relation, it must be in the “Build-Depends” field and it must have a “strictly equal version” as version constraint.  Furthermore, it must not have any Build-Profile or architecture restrictions and so on.


With all of this in place:

  1. We have no repetition when it is not required.
  2. debhelper can reliably auto-detect which debhelper compat level you wanted.  Otherwise, debhelper will ensure the build fails (or apt/aptitude will, if you end up misspelling the package name or using an invalid version).
  3. Bumping the debhelper compat level is still explicit and separate from bumping the debhelper dependency when you need a feature or bug fix from a later version.

Testing the prototype:

If you want to test the prototype, you can do so in unstable and testing at the moment (caveat: it is an experimental feature and may change or disappear without notice).  However, please note that lintian is completely unaware of this and will spew out several false-positives – including one nonfatal auto-reject, so you will have to apply at least one lintian override.  Also note, I have only added “debhelper-compat” versions for non-deprecated compat levels.  In other words, you will have to use compat 9 or later to test the feature.

You can use “mscgen/0.20-11” as an example of the minimum changes required.  Admittedly, the example cheats and relies on “debhelper-compat (= 10)” implying a “debhelper (>= 11.1.5~alpha1)” relation, as that is the first version with the provides for debhelper-compat.  Going forward, if you need a feature from debhelper that appears in a later version than that, then you will need an explicit “debhelper (>= Y)” relation for that feature on top of the debhelper-compat relation.

Will you remove the support for debian/compat if this prototype works?

I have no immediate plans to remove the debian/compat file even if this prototype is successful.

Will you upload this to stretch-backports?

Yes, although I am waiting for a fix for #889567 to be uploaded to stretch-backports first.

Will this work for backports?

It worked fine on the buildds when I deployed it in experimental and I believe the build-dependency resolution process for experimental is similar (enough) to backports.

Will the third-party debhelper tools need changes to support this?

Probably not; most third-party debhelper tools do not seem to check the compat level directly. Even then, most tools use the “compat” sub from “Debian::Debhelper::Dh_Lib”, which handles all of this automatically.

That said, if you have a third-party tool that wishes or needs to check the debhelper compat level, please file a bug requesting a cross-language API for this and I will happily look at it.

Future work

I am considering applying this concept to the dh sequence add-ons as well (i.e. the “dh $@ --with foo”).  From my PoV, this is another case needing a DRY fix.  Plus, this would also present an opportune method for solving #836699 – though the hard part for #836699 is actually taming the dh add-on API plus dh’s sequence handling to consistently only affect the “indep” part of the sequences.


Planet Debian: Sean Whitton: Why have combat encounters in 5e D&D?

A friend and I each run a D&D game, and we also play in each other’s games. We disagree on a number of different things about how the game is best played, and I learn a lot from seeing how both sets of choices play out in each of the two games.

One significant point of disagreement is how important it is to ensure that combat is balanced. In my game I disallow all homebrew and third party content. Only the core rulebooks, and official printed supplements, are permitted. By contrast, my friend has allowed several characters in his game to use homebrew races from the Internet, which are clearly more powerful than the PHB races. And he is quite happy to make modifications to spells and abilities without investigating the consequences for balance. Changes which seem innocuous can have balance consequences that you don’t realise for some time or do not observe without playtesting; I always assume the worst, and don’t change anything. (I constantly reflavour abilities and stats. In this post I’m interested in crunch alone.)

In this post I want to explain why I put such a premium on balance. Before getting on to that explanation, I first need to say something about the blogger Mike Shea’s claim that “D&D 5e is imbalanced by design and that’s ok. Imbalance leads to interesting stories.” (one, two). Shea is drawing a contrast between 4e and 5e. It was possible to more precisely quantify all character and monster abilities in 4e, which meant that if the calculations showed that an encounter would be easy, medium or hard, it was more likely to turn out to be easy, medium or hard. By contrast, 5e involves a number of abilities that can turn the tide of combat suddenly and against the odds. So while the XP thresholds might indicate that a planned encounter will be an easy one, a monster’s ability to petrify a character with just two failed saves could mean that the whole party goes down. Similarly for character abilities that can turn a powerful boss into a sitting duck for the entire combat. Shea points out that such abilities add an awful lot of fun and suspense to the game that might have been lacking from 4e.

I am not in a position to know whether 4e really lacked the kind of surprise and suspense described here. Regardless, Shea has identified something important about 5e. A great deal is added to combat by abilities on both sides that can quickly turn the tide. However, I find it misleading to say that this makes 5e unbalanced. Shea also uses the term ‘unpredictable’, and I think that’s a better way to refer to this phenomenon. For balance is more than determining an accurate challenge rating, and using this to pit the right number of monsters against the right number of players. In the absence of tide-turning abilities, that’s all balance is; however, balance is also a concept that applies to individual abilities, including tide-turning abilities.

I suggest that a very powerful ability, that has the potential to change the tide of a battle, is balanced by some combination of being (i) highly situational; (ii) very resource-depleting; and (iii) requires a saving throw, or similar, with the parameters set so that the full effect of the ability is unlikely to occur. Let me give a few examples. It’s been pointed out that the Fireball spell deals more damage than a multi-target 3rd level spell is meant to deal (DMG, p. 384). However, the spell is highly situational because it is highly likely to also hit your allies. (Evokers have a mitigation for this, but that is at the cost of a full class feature.) Power Word Kill might down a powerful enemy much sooner than expected. But there’s another enemy in the next room, and then that spell slot is gone.

We should conclude that 5e is not imbalanced by design, but unpredictable by design. In fact, I suggest that 5e spells and abilities are a mixture of the predictable and the unpredictable, and the concept of balance applies differently to these two kinds of abilities. A creature’s standard attack is predictable; balancing it is simply a matter of adjusting the to-hit bonus and the damage dice. Balancing its tide-turning ability is a matter of adjusting the factors I discussed in the previous paragraph, and possibly others. Playtesting will be necessary for both of these balancing efforts to succeed. Predictable abilities are unbalanced when they don’t do enough damage often enough, or too much damage too often, as compared with their CR. Unpredictable abilities are unbalanced when they offer systematic ways to change the tide of battle. Indeed, this makes them predictable.

Now that I’ve responded to Shea, I’ll say what I think the point of combat encounters is, and why this leads me to disallow content that has not been rigorously playtested. (My thinking here is very much informed by how Rodrigo Lopez runs D&D on the Critical Hit podcast, and what he says about running D&D in Q&A. Thank you Rodrigo!) Let me first set aside combat encounters that are meant to be a walkover, and combat encounters that are meant to end in multiple deaths or retreat. The purpose of walkover encounters is to set a particular tone in the story. It allows characters to know that certain things are not challenging to them, and this can be built into roleplaying (“we are among the most powerful denizens of the realm. That gives us a certain responsibility.”). The purpose of unwinnable combat encounters is to work through turning points in a campaign’s plot. The fact that an enemy cannot be defeated by the party is likely to drive a whole story arc; actually running that combat, rather than just saying that their attempt to fight the enemy fails, helps drive that home, and gives the characters points of reference (“you saw what happened when he turned his evil gaze upon you, Mazril. We’ve got to find another way!”).

Consider, then, other combat encounters. This is what I think they are all about. The GM creates an encounter that the rules say is winnable, or unwinnable but otherwise survivable. Then the GM and the players fight out that encounter within the rules, each side trying to fully exploit the resources available to them, though without doing anything that would not make sense from the points of view of the characters and the monsters. Rolls are not made in secret or fudged, and HP totals are not arbitrarily adjusted. The GM does not pull punches. There are no restrictions on tactical thinking; for example, it’s fine for players to deduce enemy ACs, openly discuss them and act accordingly. However, actions taken must have an in-character justification. The outcome of the battle depends on a combination of tactics and luck: unpredictable abilities can turn the tide suddenly, and that might be enough to win or lose, but most of the time good tactical decision-making on the part of the players is rewarded. (The nature of monster abilities means that less interesting tactics are possible; further, the players have multiple brains between them, so ought to be able to out-think the GM in most cases.)

The result is that combat is a kind of minigame within D&D. The GM takes on a different role. In particular, GM fiat is suspended. The rules of the game are in charge (except, of course, where the GM has to make a call about a situation not covered by the rules). But isn’t this to throw out the advantages tabletop roleplaying games have over video games? Isn’t the GM’s freedom to bend the rules what makes D&D more fun and flexible? My basic response is that the rules for combat are only interesting when they do not depend on GM fiat, or other forms of arbitrariness, and for the parts of the game where GM fiat works well, it is better to use ability checks, or skills challenges, or straight roleplaying.

The thought is that the complexity of the combat rules is justified only when those rules are not arbitrary. If the players must think tactically within a system that can change arbitrarily, there’s no longer much point in investing energy in that tactical thinking. It is not intellectually interesting, it is much less fun, and it does not significantly contribute to the story. Tabletop games have an important role for a combination of GM fiat and dice rolls—the chance of those rolls succeeding remaining under the GM’s control—but that can be leveraged with simpler rules than those for combat. Now, I think that the combat rules are fun, so it is good to include them alongside the parts of the game that are more straightforwardly a collaboration between the GM and the players. But they should be deployed differently in order to bring out their strengths.

It should be clear, based on this, why I put such a premium on balance in combat: imbalance introduces arbitrariness to the combat system. If my tactical thinking is nullified by the systematic advantage that another party member has over my character, there’s no point in my engaging in that tactical thinking. Unpredictable abilities nullify tactical thinking in ways that are fun, but only when they are balanced in the ways I described above.

All of this is a matter of degree. I don’t think that combat is fun only when the characters and monsters are restricted to the core rulebooks; I enjoy combat when I play in my friend’s game. My view is just that combat is more fun the less arbitrary it is. I have certainly experienced the sense that my attempt to intellectually engage with the combat is being undermined by certain house rules and the overpowered abilities of homebrew races. Fortunately, thus far this problem has only affected a few turns of combat at a time, rather than whole combats.

Another friend is of the view that the GM should try to convince the players that they really are treating combat as I’ve described, but still fudge dice rolls in order to prevent, e.g., uninteresting character deaths. In response, I’ll note that I don’t feel capable of making those judgements, in the heat of the moment, about whether a death would be interesting. Further, having to worry about this would make combat less fun for me as the GM, and GM fun is important too.

Cory Doctorow: A key to page-numbers in the Little Brother audiobook

Mary Kraus teaches my novel Little Brother to health science interns learning about cybersecurity; to help a student who has a print disability, Mary created a key that maps the MP3 files in the audiobook to the Tor paperback edition. She was kind enough to make her doc public to help other people move easily from the audiobook to the print edition — thanks, Mary!


Krebs on Security: Powerful New DDoS Method Adds Extortion

Attackers have seized on a relatively new method for executing distributed denial-of-service (DDoS) attacks of unprecedented disruptive power, using it to launch record-breaking DDoS assaults over the past week. Now evidence suggests this novel attack method is fueling digital shakedowns in which victims are asked to pay a ransom to call off crippling cyberattacks.

On March 1, DDoS mitigation firm Akamai revealed that one of its clients was hit with a DDoS attack that clocked in at 1.3 Tbps, which would make it the largest publicly recorded DDoS attack ever.

The type of DDoS method used in this record-breaking attack abuses a legitimate and relatively common service called “memcached” (pronounced “mem-cash-dee”) to massively amp up the power of DDoS attacks.

Installed by default on many Linux operating system versions, memcached is designed to cache data and ease the strain on heavier data stores, like disk or databases. It is typically found in cloud server environments and it is meant to be used on systems that are not directly exposed to the Internet.

Memcached communicates using the User Datagram Protocol or UDP, which allows communications without any authentication — pretty much anyone or anything can talk to it and request data from it.

Because memcached doesn’t support authentication, an attacker can “spoof” or fake the Internet address of the machine making that request so that the memcached servers responding to the request all respond to the spoofed address — the intended target of the DDoS attack.
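
This is easy to verify against a memcached instance you control: the service will answer a “stats” query from any source, with no credentials required. A rough sketch using bash and netcat (the eight-byte prefix is the frame header that memcached's UDP protocol expects; only probe your own servers):

# Request ID, sequence number, datagram count, reserved (2 bytes each),
# followed by the ordinary ASCII "stats" command.
echo -en '\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n' | nc -u -w1 127.0.0.1 11211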

Worse yet, memcached has a unique ability to take a small amount of attack traffic and amplify it into a much bigger threat. Most popular DDoS tactics that abuse UDP connections can amplify the attack traffic 10 or 20 times — allowing, for example, a 1 MB request to generate a response of between 10 MB and 20 MB of traffic.

But with memcached, an attacker can force the response to be thousands of times the size of the request. All of the responses get sent to the target specified in the spoofed request, and it requires only a small number of open memcached servers to create huge attacks using very few resources.

Akamai believes there are currently more than 50,000 known memcached systems exposed to the Internet that can be leveraged at a moment’s notice to aid in massive DDoS attacks.

Both Akamai and Qrator — a Russian DDoS mitigation company — published blog posts on Feb. 28 warning of the increased threat from memcached attacks.

“This attack was the largest attack seen to date by Akamai, more than twice the size of the September, 2016 attacks that announced the Mirai botnet and possibly the largest DDoS attack publicly disclosed,” Akamai said [link added]. “Because of memcached reflection capabilities, it is highly likely that this record attack will not be the biggest for long.”

According to Qrator, this specific possibility of enabling high-value DDoS attacks was disclosed in 2017 by a Chinese group of researchers from the cybersecurity 0Kee Team. The larger concept was first introduced in a 2014 Black Hat U.S. security conference talk titled “Memcached injections.”


On Thursday, KrebsOnSecurity heard from several experts from Cybereason, a Boston-based security company that’s been closely tracking these memcached attacks. Cybereason said its analysis reveals the attackers are embedding a short ransom note and payment address into the junk traffic they’re sending to memcached services.

Cybereason said it has seen memcached attack payloads that consist of little more than a simple ransom note requesting payment of 50 XMR (Monero virtual currency) to be sent to a specific Monero account. In these attacks, Cybereason found, the payment request gets repeated until the file reaches approximately one megabyte in size.

The ransom demand (50 Monero) found in the memcached attacks by Cybereason on Thursday.

Memcached can accept files and host files in temporary memory for download by others. So the attackers will place the 1 MB file full of ransom requests onto a server with memcached, and request that file thousands of times — all the while telling the service that the replies should all go to the same Internet address — the address of the attack’s target.

“The payload is the ransom demand itself, over and over again for about a megabyte of data,” said Matt Ploessel, principal security intelligence researcher at Cybereason. “We then request the memcached ransom payload over and over, and from multiple memcached servers to produce an extremely high volume DDoS with a simple script and any normal home office Internet connection. We’re observing people putting up those ransom payloads and DDoSsing people with them.”

Because it only takes a handful of memcached servers to launch a large DDoS, security researchers working to lessen these DDoS attacks have been focusing their efforts on getting Internet service providers (ISPs) and Web hosting providers to block traffic destined for the UDP port used by memcached (port 11211).
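
On an individual Linux machine, the equivalent stopgap is a one-line firewall rule; a minimal sketch, assuming iptables and the default memcached port:

# Drop unsolicited UDP traffic to the default memcached port.
iptables -A INPUT -p udp --dport 11211 -j DROP

Better still, memcached's -l option (for example in /etc/memcached.conf) can bind the service to 127.0.0.1 so it never listens on a public interface in the first place.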

Ofer Gayer, senior product manager at security firm Imperva, said many hosting providers have decided to filter port 11211 traffic to help blunt these memcached attacks.

“The big packets here are very easy to mitigate because this is junk traffic and anything coming from that port (11211) can be easily mitigated,” Gayer said.

Several different organizations are mapping the geographical distribution of memcached servers that can be abused in these attacks. Here’s the world at a glance:

The geographic distribution of memcached servers exposed to the Internet.

Here are the Top 20 networks hosting the largest number of publicly accessible memcached servers at this moment, according to data collected by Cybereason:

The global ISPs hosting the largest number of publicly available memcached servers.

A DDoS monitoring site run by Netlab 360 publishes a live, running list of the latest targets getting pelted with traffic in these memcached attacks.

What do those stats tell us? According to netlab@360, memcached attacks were not super popular as an attack method until very recently.

“But things have greatly changed since February 24th, 2018,” netlab wrote in a Mar. 1 blog post, noting that in just a few days memcached-based DDoS went from less than 50 events per day, up to 300-400 per day. “Today’s number has already reached 1484, with an hour to go.”

Hopefully, the global ISP and hosting community can come together to block these memcached DDoS attacks. I am encouraged by what I have heard and seen so far, and hope that can continue in earnest before these attacks start becoming more widespread and destructive.

Here’s the Cybereason video from which that image above with the XMR ransom demand was taken:

Cory Doctorow: I’m coming to the Adelaide Festival this weekend (and then to Wellington, NZ!)

I’m on the last two cities in my Australia/NZ tour for my novel Walkaway: today, I’m flying to Adelaide for the Adelaide Festival, where I’m appearing in several program items: Breakfast with Papers on Sunday at 8AM; a book signing on Monday at 10AM in Dymocks at Rundle Mall; “Dust Devils,” a panel followed by a signing on Monday at 5PM on the West Stage at Pioneer Women’s Memorial Garden; and “Craphound,” a panel/signing on Tuesday at 5PM on the East Stage at Pioneer Women’s Memorial Garden.

After Adelaide, I’m off to Wellington for Writers and Readers Week and then the NetHui one-day copyright event.

I’ve had a fantastic time in Perth, Melbourne and Sydney and it’s been such a treat to meet so many of you — I’m looking so forward to these last two stops!

Cryptogram: Friday Squid Blogging: Searching for Humboldt Squid with Electronic Bait

Video and short commentary.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian: John Goerzen: Emacs #3: More on org-mode

This is third in a series on Emacs and org-mode.

Todo tracking and keywords

When using org-mode to track your TODOs, each item can have multiple states. You can press C-c C-t for a quick shift between states. I have set this:

(setq org-todo-keywords '(
  (sequence "TODO(t!)" "NEXT(n!)" "STARTED(a!)" "WAIT(w@/!)" "OTHERS(o!)" "|" "DONE(d)" "CANCELLED(c)")))

Here, I set up 5 states that are for a task that is not yet done: TODO, NEXT, STARTED, WAIT, and OTHERS. Each has a single-character shortcut (t, n, a, etc). The states after the pipe symbol are ones that are considered “done”. I have two: DONE (for things that I have done) and CANCELED (for things that I haven’t done, but for whatever reason, won’t).

The exclamation mark means to log the time when an item was changed to a state. I don’t add this to the done states because those are already logged anyhow. The @ sign means to prompt for a reason; so when switching to WAIT, org-mode will ask me why and add this to the note.

Here’s an example of an entry that has had some state changes:

** DONE This is a test
   CLOSED: [2018-03-02 Fri 03:05]
   - State "DONE"       from "WAIT"       [2018-03-02 Fri 03:05]
   - State "WAIT"       from "TODO"       [2018-03-02 Fri 03:05] \\
     waiting for pigs to fly
   - State "TODO"       from "NEXT"       [2018-03-02 Fri 03:05]
   - State "NEXT"       from "TODO"       [2018-03-02 Fri 03:05]

Here, the most recent items are on top.

Agenda mode, schedules, and deadlines

When you’re in a todo item, C-c C-s or C-c C-d can set a schedule or a deadline for it, respectively. These show up in agenda mode. The difference is in intent and presentation. A schedule is something that you expect to work on at around a time, while a deadline is something that is due at a specific time. By default, the agenda view will start warning you about deadline items in advance.
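
For reference, here is roughly what a scheduled item with a deadline looks like in the file itself (a made-up example):

** TODO Prepare the quarterly report
   SCHEDULED: <2018-03-05 Mon> DEADLINE: <2018-03-09 Fri>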

And while we’re at it, the agenda view will show you the items that you have coming up, offers a nice way to search for items based on plain text or tags, and handles bulk manipulation of items even across multiple files. I covered setting the files for agenda mode in part 2 of this series.


Tags

Of course org-mode has tags. You can quickly set them with C-c C-q.

You can set shortcuts for tags you might like to use often. Perhaps something like this:

  (setq org-tag-persistent-alist 
        '(("@phone" . ?p) 
          ("@computer" . ?c) 
          ("@websurfing" . ?w)
          ("@errands" . ?e)
          ("@outdoors" . ?o)
          ("MIT" . ?m)
          ("BIGROCK" . ?b)
          ("CONTACTS" . ?C)
          ("INBOX" . ?i)))
You can also add tags to this list on a per-file basis, and also set tags for something on a per-file basis. I use that for my inbox files to set an INBOX tag. I can then review all items tagged INBOX from the agenda view each day, and the simple act of refiling them into other files will cause them to lose the INBOX tag.


Refiling

“Refiling” is moving things around, either within a file or elsewhere. It has completion using your headlines. C-c C-w does this. I like these settings:

(setq org-outline-path-complete-in-steps nil)         ; Refile in a single go
(setq org-refile-use-outline-path 'file)


Archiving

After a while, you’ll get your files all cluttered with things that are done. org-mode has an archive feature to move things out of your main .org files and into some other files for future reference. If you have your org files in git or something, you may wish to delete these other files since you’d have things in history anyhow, but I find them handy for grepping and searching.

I periodically want to go through and archive everything in my files. Based on a stackoverflow discussion, I have this code:

(defun org-archive-done-tasks ()
  (interactive)
  ;; Archive everything marked DONE, then everything marked CANCELLED.
  (org-map-entries
   (lambda ()
     (org-archive-subtree)
     (setq org-map-continue-from (outline-previous-heading)))
   "/DONE" 'file)
  (org-map-entries
   (lambda ()
     (org-archive-subtree)
     (setq org-map-continue-from (outline-previous-heading)))
   "/CANCELLED" 'file))

This is based on a particular answer — see the comments there for some additional hints. Now you can run M-x org-archive-done-tasks and everything in the current file marked DONE or CANCELLED will be pulled out into a different file.

Up next

I’ll wrap up org-mode with a discussion of automatically receiving emails into org, and syncing org between machines.

Resources to accompany this article

Planet Debian: Urvika Gola: BOB Konferenz’18 in Berlin

Recently Pranav Jain and I attended BOB Conference in Berlin, Germany. The conference started with a keynote on a very interesting topic, a language for making movies. Using a non-linear video editor for making movies was time consuming, of course. The speaker talked about the struggle of merging presentation, video and high quality sound for conferences. Clearly, automation was needed here, which could be achieved by 1. making a plugin for a non-linear video editor, 2. writing a UI automation tool like an operating system macro, or 3. using shell scripting. However, dealing with shell scripts for this purpose could be time consuming, no matter how great shell scripts are. The goal here was to edit videos using a language alone, without the language getting in the way of solving the problem; in other words, a DSL (Domain-Specific Language) was required, along with Syntax Parse. Video, a language for making movies, integrates with the Racket ecosystem. It combines the power of a traditional video editor with the capabilities of a full programming language.

The next session was about reactive streaming with Akka Streams. Streaming big data applications is a challenge in itself: processing must happen in near real time, i.e., there is no time to batch data and process it later. Streaming has to be done in a fault tolerant way; we have no time to deal with faults. Talking about streams, there are two types of streams, bounded and unbounded! A bounded stream basically means that the incoming stream is batched and processed to give some output, whereas an unbounded stream just keeps on flowing… just like that. Akka Streams makes it easy to model type-safe message processing pipelines. Type-safe means that at compile time, it checks that data definitions are compatible. Akka Streams has explicit semantics, which is quite important.
The basic building blocks of Akka Streams are Sources (produce elements of a type A), Sinks (take items of type A and consume them) and Flows (consume elements of type A and produce elements of type B). The source sends data via the flow to the sinks. There are situations where data is not consumed or produced. Materialized values are useful when we, for example, want to know whether the stream was successful or not, the result of which could be true/false. Another concept involved was that of backpressure. When we read things from a file, it's fast. If we split that file based on \n, it's faster. If we fetch via HTTP from somewhere, it can be slow due to network connectivity. What backpressure does is let any component say 'wooh! slow down, I need more time'. Everything is just as fast as the slowest component in the flow, which means that the slowest component in the chain determines the throughput. However, there are situations where we really don't want to, or can't, control the speed of the source. To have explicit control over backpressure we can use buffering: if incoming requests reach a limit, we can set a buffer after which requests are discarded, or we can push the backpressure further upstream when the buffer is full.
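
To make the Source/Flow/Sink vocabulary concrete, here is a minimal sketch using the Akka Streams Java DSL (Akka 2.5 era; the class name and the numbers are invented for illustration):

import akka.Done;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import java.util.concurrent.CompletionStage;

public class StreamDemo {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        Materializer mat = ActorMaterializer.create(system);

        // Source: produces elements of type Integer.
        Source<Integer, NotUsed> source = Source.range(1, 10);
        // Flow: consumes Integers and produces Integers (here, doubled).
        Flow<Integer, Integer, NotUsed> doubler =
            Flow.of(Integer.class).map(i -> i * 2);
        // Sink: consumes Integers; its materialized value reports
        // whether the whole stream completed successfully.
        Sink<Integer, CompletionStage<Done>> printer =
            Sink.foreach(System.out::println);

        // Backpressure propagates automatically from sink to source.
        source.via(doubler).runWith(printer, mat)
              .thenRun(system::terminate);
    }
}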

Next we saw a fun demo of GRiSP, bare metal functional programming. GRiSP allows you to run Erlang on bare metal hardware, without a kernel. The GRiSP board could be an alternative to a Raspberry Pi or an Arduino. The robot was stubborn; however, it was interesting to watch! Since Pranav and I have worked on real-time communications projects, we were inclined towards attending a talk on Understanding real time ecosystems, which was very informative. We learned about HTTP, AJAX polling, AJAX long polling, HTTP/2, pub/sub and other concepts which were relatable, and learned more about protocols/layers in the last talk of the conference, Engineering TCP/IP with logic.

This is just a summary of our experiences and what we were able to grasp at the conference; we also shared our individual experiences with Debian in GSoC and Outreachy.

Thank you Dr. Michael Sperber for the opportunity and the organizers for putting up the conference.

Cryptogram: Malware from Space

Since you don't have enough to worry about, here's a paper postulating that space aliens could send us malware capable of destroying humanity.

Abstract: A complex message from space may require the use of computers to display, analyze and understand. Such a message cannot be decontaminated with certainty, and technical risks remain which can pose an existential threat. Complex messages would need to be destroyed in the risk averse case.

I think we're more likely to be enslaved by malicious AIs.

Planet Debian: Petter Reinholdtsen: Debian used in the subway info screens in Oslo, Norway

Today I was pleasantly surprised to discover my operating system of choice, Debian, was used in the info screens on the subway stations. While passing Nydalen subway station in Oslo, Norway, I discovered the info screen booting with some text scrolling. I was not quick enough with my camera to be able to record a video of the scrolling boot screen, but I did get a photo from when the boot got stuck with a corrupt file system:

[photo of subway info screen]

While I am happy to see Debian used in more places, some details of the content on the screen worry me.

The image shows that the version booting is 'Debian GNU/Linux lenny/sid', indicating that this is based on code taken from Debian Unstable/Sid after Debian Etch (version 4) was released 2007-04-08 and before Debian Lenny (version 5) was released 2009-02-14. Since Lenny, Debian has released version 6 (Squeeze) 2011-02-06, 7 (Wheezy) 2013-05-04, 8 (Jessie) 2015-04-25 and 9 (Stretch) 2017-06-15, according to the Debian version history on Wikipedia. This means the system is running around 10 year old code, with no security fixes from the vendor for many years.

This is not the first time I have discovered the Oslo subway company, Ruter, running outdated software. In 2012, I discovered the ticket vending machines were running Windows 2000, and this was still the case in 2016. Given the response from the responsible people in 2016, I would assume the machines are still running unpatched Windows 2000. Thus, an unpatched Debian setup comes as no surprise.

The photo is made available under the license terms Creative Commons 4.0 Attribution International (CC BY 4.0).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Worse Than Failure: Error'd: I Don't Always Test my Code, but When I do...

"Does this mean my package is here or is it also in development?" writes Nariim.


Stuart L. wrote, "Who needs a development environment when you can just test in production on the 'Just In' feed?"


"It was so nice of Three to unexpectedly include me - a real user - in their User Acceptance Testing. Yeah, it's still not fixed," wrote Paul P.


"I found this great nearby hotel option that transcended into the complex plane," Rosenfield writes.


Stuart L. also wrote in, "I can't think of a better place for BoM to test out cyclone warnings than in production."


"The Ball Don't Lie blog at Yahoo! Sports seems to have run out of content during the NBA Finals so they started testing instead," writes Carlos S.



Cryptogram: Cellebrite Unlocks iPhones for the US Government

Forbes reports that the Israeli company Cellebrite can probably unlock all iPhone models:

Cellebrite, a Petah Tikva, Israel-based vendor that's become the U.S. government's company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department for Homeland Security back in November 2017, most likely with Cellebrite technology.


It also appears the feds have already tried out Cellebrite tech on the most recent Apple handset, the iPhone X. That's according to a warrant unearthed by Forbes in Michigan, marking the first known government inspection of the bleeding edge smartphone in a criminal investigation. The warrant detailed a probe into Abdulmajid Saidi, a suspect in an arms trafficking case, whose iPhone X was taken from him as he was about to leave America for Beirut, Lebanon, on November 20. The device was sent to a Cellebrite specialist at the DHS Homeland Security Investigations Grand Rapids labs and the data extracted on December 5.

This story is based on some excellent reporting, but leaves a lot of questions unanswered. We don't know exactly what was extracted from any of the phones. Was it metadata or data, and what kind of metadata or data was it?

The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents. There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.

EDITED TO ADD (3/1): Another article, with more information. It looks like there's an arms race going on between Apple and Cellebrite. At least, if Cellebrite is telling the truth -- which they may or may not be.

Planet Debian: François Marier: Redirecting an entire site except for the certbot webroot

In order to be able to use the webroot plugin for certbot and automatically renew the Let's Encrypt certificate for libravatar.org, I had to put together an Apache config that would do the following on port 80:

  • Let /.well-known/acme-challenge/* through on the bare domain (libravatar.org)
  • Redirect anything else to www.libravatar.org

The reason for this is that the main Libravatar service listens on www.libravatar.org and not on the bare domain, but certbot needs to ascertain control of the bare domain.

This is the configuration I ended up with:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>

    RewriteEngine on
    RewriteCond "/var/www/acme%{REQUEST_URI}" !-f
    RewriteRule ^(.*)$ https://www.libravatar.org$1 [last,redirect=301]
</VirtualHost>

The trick I used here is to make the redirection RewriteRule conditional on the requested file (%{REQUEST_URI}) not existing in the /var/www/acme directory, the one where I tell certbot to drop its temporary files.
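
For context, a certificate renewed this way would originally have been requested with the webroot plugin pointing at that same directory, along these lines (a sketch; the domain names are my reconstruction, matching the config above):

certbot certonly --webroot -w /var/www/acme \
    -d libravatar.org -d www.libravatar.org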

Here are the relevant portions of the renewal configuration under /etc/letsencrypt/renewal/:

authenticator = webroot
account = 

[[webroot_map]]
libravatar.org = /var/www/acme
www.libravatar.org = /var/www/acme


Planet Debian: Dirk Eddelbuettel: RcppArmadillo 0.8.400.0.0

[armadillo image]

RcppArmadillo release 0.8.400.0.0, originally prepared and uploaded on February 19, finally hit CRAN today (after having been available via the RcppCore drat repo for a number of days). A corresponding Debian release was prepared and uploaded as well. This RcppArmadillo release contains Armadillo release 8.400.0 with a number of nice changes (see below for details), and continues our normal bi-monthly CRAN release cycle (slight delays in CRAN processing notwithstanding).

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 450 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.400.0.0 (2018-02-19)

  • Upgraded to Armadillo release 8.400.rc2 (Entropy Bandit)

    • faster handling of sparse matrices by repmat()

    • faster loading of CSV files

    • expanded kron() to handle sparse matrices

    • expanded index_min() and index_max() to handle cubes

    • expanded randi(), randu(), randn(), randg() to output single scalars

    • added submatrix & subcube iterators

    • added normcdf()

    • added mvnrnd()

    • added chi2rnd()

    • added wishrnd() and iwishrnd()

  • The configure generated header settings for LAPACK and OpenMP can be overridden by the user.

  • This release was preceded by two release candidates which were tested extensively.

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Gregor Herrmann: trains & snow

last weekend I attended the Debian SnowCamp at the lago maggiore in north-western italy, a small DIY hacking & socialising Debian meeting in the tradition of Debian SunCamp. – some impressions:


I wasn't aware of how tedious it can be to travel less than 500 kilometers into a neighbouring country by train in 2018. first of all, no train company would sell me a ticket from innsbruck to laveno-mombello, so innsbruck–verona from öbb & verona–laveno-mombello from trenitalia it was.

from the eight trains (yes, 3 changes per direction) exactly 0 had no delays at departure or arrival or both. in the end I "only" lost one connection. – & one direction from-door-to-door took roughly 9 hours.


the ostello casa rossa was lovely (& might be even more lovely with warmer temperature which would allow for longer stays in the garden), & the food at the ristorante concordia was mostly excellent. trying new local specialties (like the pizzoccheri) is always fun. & no lunch or dinner lasted shorter than 2 hours.


unsurprisingly, my work was mostly focussed on Debian Perl Group stuff. we managed to move our repos from alioth to salsa during the weekend, which involved not only importing ~3500 repositories but also e.g. recreating our .mrconfig setup.

in practice it didn't help a lot for my contributions that I was at SnowCamp, as the others were not there, & coordination happened via IRC (unlike a team sprint); but at least I had more people who listened to my shouts of joy or frustration than what I would have had at home :)


  • on my way back I even had snow while waiting at the delayed train in the unexciting station of laveno-mombello.
  • mille grazie to elena & friends for the perfect organisation!
  • if you speak german: the respective volume of my archäologie des alltags (also contains links to tweets with photos).
  • I see quite some potential for other Debian *Camps, maybe even longer & with more pre-planned team sprints under one roof (or in one garden) …

Krebs on Security: Financial Cyber Threat Sharing Group Phished

The Financial Services Information Sharing and Analysis Center (FS-ISAC), an industry forum for sharing data about critical cybersecurity threats facing the banking and finance industries, said today that a successful phishing attack on one of its employees was used to launch additional phishing attacks against FS-ISAC members.

The fallout from the back-to-back phishing attacks appears to have been limited and contained, as many FS-ISAC members who received the phishing attack quickly detected and reported it as suspicious. But the incident is a good reminder to be on your guard, remember that anyone can get phished, and that most phishing attacks succeed by abusing the sense of trust already established between the sender and recipient.

The confidential alert FS-ISAC sent to members about a successful phishing attack that spawned phishing emails coming from the FS-ISAC.

Notice of the phishing incident came in an alert FS-ISAC shared with its members today and obtained by KrebsOnSecurity. It describes an incident on Feb. 28 in which an FS-ISAC employee “clicked on a phishing email, compromising that employee’s login credentials. Using the credentials, a threat actor created an email with a PDF that had a link to a credential harvesting site and was then sent from the employee’s email account to select members, affiliates and employees.”

The alert said while FS-ISAC was already planning and implementing a multi-factor authentication (MFA) solution across all of its email platforms, “unfortunately, this incident happened to an employee that was not yet set up for MFA. We are accelerating our MFA solution across all FS-ISAC assets.”

The FS-ISAC also said it upgraded its Office 365 email version to provide “additional visibility and security.”

In an interview with KrebsOnSecurity, FS-ISAC President and CEO Bill Nelson said his organization has grown significantly in new staff over the past few years to more than 75 people now, including Greg Temm, the FS-ISAC’s chief information risk officer.

“To say I’m disappointed this got through is an understatement,” Nelson said. “We need to accelerate MFA extremely quickly for all of our assets.”

Nelson observed that “The positive messaging out of this I guess is anyone can become victimized by this.” But according to both Nelson and Temm, the phishing attack that tricked the FS-ISAC employee into giving away email credentials does not appear to have been targeted — nor was it particularly sophisticated.

“I would classify this as a typical, routine, non-targeted account harvesting and phishing,” Temm said. “It did not affect our member portal, or where our data is. That’s 100 percent multifactor. In this case it happened to be an asset that did not have multifactor.”

In this incident, it didn’t take a sophisticated actor to gain privileged access to an FS-ISAC employee’s inbox. But attacks like these raise the question: How successful might such a phishing attack be if it were only slightly more professional and/or organized?

Nelson said his staff members all participate in regular security awareness training and testing, but that there is always room to fill security gaps and move the needle on how many people click when they shouldn’t with email.

“The data our members share with us is fully protected,” he said. “We have a plan working with our board of directors to make sure we have added security going forward,” Nelson said. “But clearly, recognizing where some of these softer targets are is something every company needs to take a look at.”

Planet Debian: Antoine Beaupré: February 2018 report: LTS, ...

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This month was exclusively dedicated to my frontdesk work. I actually forgot to do it the first week and had to play catchup during the weekend, so I brought up a discussion about how to avoid those problems in the future. I proposed an automated reminder system, but it turns out people found this was overkill. Instead, Chris Lamb suggested we simply send a ping to the next person in the list, which has proven useful the next time I was up. In the two weeks I was frontdesk, I ended up triaging the following notable packages:

  • isc-dhcp - remote code execution exploits - time to get rid of those root-level daemons?
  • simplesamlphp - under embargo, quite curious
  • golang - the return of remote code execution in go get (CVE-2018-6574, similar to CVE-2017-15041 and CVE-2018-7187) - ended up being marked as minor, unfortunately
  • systemd - CVE-2017-18078 was marked as unimportant as this was neutralized by kernel hardening and systemd was not really in use back in wheezy. Besides, CVE-2013-4392 was about a similar functionality which was claimed to not be supported in wheezy. I did, however, propose to forcibly enable the kernel hardening through default sysctl configurations (Debian bug #889098) so that custom kernels would be covered by the protection in stable suites; see the sketch after this list.
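
For the record, the hardening in question is the kernel's protected hardlinks/symlinks restriction; enabling it via sysctl would look roughly like this (my sketch of the idea behind #889098, not its exact text):

# /etc/sysctl.d/local-hardening.conf
fs.protected_hardlinks = 1
fs.protected_symlinks = 1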

There was more minor triage work not mentioned here; those are just the juicy ones...

Speaking of juicy, the other thing I did during the month was to help with the documentation on the Meltdown and Spectre attacks on Intel CPUs. Much has been written about this and I won't do yet another summary. However, it seems that no one had actually written even semi-official documentation on the state of fixes in Debian, which led to many questions for the (LTS) security team(s). Ola Lundqvist did a first draft of a page detailing the current status, and I expanded on the page to add formatting and more details. The page is visible here:

I'm still not fully happy with the results: we're missing some userland like Qemu and a timeline of fixes. In comparison, the Ubuntu page still looks much better in my opinion. But it's leagues ahead of what we had before, which was nothing... The next step for LTS is to backport the retpoline fixes back into a compiler. Roberto C. Sanchez is working on this, and the remaining question is whether we try to backport to GCC 4.7 or we backport GCC 4.9 itself into wheezy. In any case, it's a significant challenge and I'm glad I'm not the one dealing with such arcane code right now...

Other free software work

Not much to say this month, in no particular order:

  • did the usual linkchecker maintenance
  • finally got my Prometheus node exporter directory size sample merged
  • added some docs updating the Dat project comparison with IPFS after investigating Dat. Turns out Dat's security guarantees aren't as good as I hoped...
  • reviewed some PRs in the Git-Mediawiki project
  • found what I consider to be a security issue in the Borg backup software, but it was disregarded as such by upstream. This ended up as a simple issue that I do not expect much from.
  • so I got more interested in the Restic community as well. I proposed a code of conduct to test the waters, but the feedback so far has been mixed, unfortunately.
  • started working on a streams page for the Sigal gallery. Expect an article about Sigal soon.
  • published undertime in Debian, which brought a slew of bug reports (and consequent fixes).
  • started looking at alternative GUIs because GTK2 is going away and I need to port two projects. I have a list of "hello world" examples in various frameworks now, but am still not sure which one I'll use.
  • also worked on updating the Charybdis and Atheme-services packages with new co-maintainers (hi!)
  • worked with Darktable to try and render an exotic image out of my new camera. Might turn into a LWN article eventually as well.
  • started getting more involved in the local free software forum, a nice little community. In particular, I went to a "repair cafe" and wrote a full report on the experience there.

I'm trying to write more for LWN these days so it's taking more time. I'm also trying to turn those reports into articles to help ramping up that rhythm, which means you'll need to subscribe to LWN to get the latest goods before the 2 weeks exclusivity period.

Cryptogram: Russians Hacked the Olympics

Two weeks ago, I blogged about the myriad of hacking threats against the Olympics. Last week, the Washington Post reported that Russia hacked the Olympics network and tried to cast the blame on North Korea.

Of course, the evidence is classified, so there's no way to verify this claim. And while the article speculates that the hacks were a retaliation for Russia being banned due to doping, that doesn't ring true to me. If they tried to blame North Korea, it's more likely that they're trying to disrupt something between North Korea, South Korea, and the US. But I don't know.

Worse Than Failure: CodeSOD: What a Stream

In Java 8, they added the Streams API. Coupled with lambdas, this means that developers can write the concise and expressive code traditionally associated with functional programming. It’s the best bits of Java blended with the best bits of Clojure! The good news is that it allows you to write less code! The better news is that you can abuse it to write more code, if you’re so inclined.

Antonio inherited some code written by “Frenk”, who was thus inclined. Frenk wasn’t particularly happy with their job, but was one of the “rockstar programmers” in the eyes of management, so Frenk was given the impossible-to-complete tasks and complete freedom in the solution.

Frenk had a problem, though. Nothing Frenk was doing was actually all that impossible. If they solved everything with code that anyone else could understand, they wouldn’t look like an amazing genius. So Frenk purposefully obfuscated every line of code, ignoring indentation, favoring one-character variable names, and generally trying to solve each problem in the most obtuse way possible.

Which yielded this.

    Resource[] r; //@Autowired ndr
    Map<File, InputStream> m = null;
    if (file != null)
    {m.put(file, new FileInputStream(file));}else

    m = Arrays.stream(r).collect(Collectors.toMap(
    x -> { try { return x.getFile(); }
catch (Exception e) { throw new IllegalStateException(e);}},
    x -> {try{return x.getInputStream();}catch (Exception e){throw new IllegalStateException(e);}}));

As purposefully unreadable code, I’d say that Frenk fails. That’s not to say that it isn’t bad, but Frenk’s attempts to make it unreadable… just make it annoying. I understand what the code does, but I’m just left wondering why.

I can definitely say that this has never been tested in a case where the file variable is non-null, because that wouldn’t work. Antonio confirms that their IDE was throwing up plenty of warnings about calling a method on a variable that is probably null, on the m.put(…) line. It’s nice that they half-way protect against nulls: one variable is checked, but the other isn’t.

Frenk’s real artistry is in employing streams to convert an array to a map. On its surface, it’s not an objectively wrong approach – this is the kind of thing streams are theoretically good at. Examine each element in the array, apply a lambda that extracts the key and another lambda that extracts the value, and put the results into a map.
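
To be clear about the underlying idiom: used on a problem that actually calls for it, the stream-to-map conversion is compact and readable. A minimal self-contained sketch (with invented data, not Frenk's types):

    import java.util.Map;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class ToMapDemo {
        public static void main(String[] args) {
            // One lambda extracts the key, another extracts the value.
            Map<String, Integer> lengths = Stream.of("alpha", "beta", "gamma")
                .collect(Collectors.toMap(s -> s, String::length));
            System.out.println(lengths); // e.g. {beta=4, alpha=5, gamma=5}
        }
    }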

There are many real-world cases where I might use this exact technique. But in this case, Antonio refactored it to something a bit cleaner:

        Resource[] resources; //@Autowired again
        Map<File, InputStream> resourceMap = new HashMap<>();
        if (file != null)
            resourceMap.put(file, new FileInputStream(file));
        for (Resource res : resources)
            resourceMap.put(res.getFile(), res.getInputStream());

Here, the iterative approach is much simpler, and the intent of the code is clearer. Just because you have a tool doesn’t make it the right tool for the job. And before you wonder about the lack of exception handling: both the original block and the refactored version were already wrapped in an exception handling block that can handle the IOException that failed access to the files would throw.


Planet Debian: Russ Allbery: Review: Coding Freedom

Review: Coding Freedom, by E. Gabriella Coleman

Publisher: Princeton University Press
Copyright: 2013
ISBN: 0-691-14461-3
Format: Trade paperback
Pages: 223

Subtitled The Ethics and Aesthetics of Hacking, Coding Freedom is a rare beast in my personal reading: an academic anthropological study of a fairly new virtual community. It's possible that many books of this type are being written, but they're not within my normal reading focus. It's also a bit of an awkward review, since the community discussed here is (one of) mine. I'm going to have an insider's nitpicks and "well, but" reactions to the anthropology, which is a valid reaction but not necessarily the intended audience.

I'm also coming to this book about four years after everyone finished talking about it, and even longer after Coleman's field work in support of the book. I think Coding Freedom suffers from that lack of currency. If this book were written today, I suspect its focus would change, at least in part. More on that in a moment.

Coding Freedom's title is more subtle and layered than it may first appear. It is about the freedom to write code, and about free software as a movement, but not only that. It's also about how concepts of freedom are encoded in the culture and language of hacking communities, and about the concept of code as speech (specifically free speech in the liberal tradition). And the title also captures the idea of code switching, where a speaker switches between languages even in the middle of sentences. The free software community does something akin to code switching between the domains of technical software problems, legal problems, and political beliefs and ideologies. Coleman covers all of that ground in this book.

Apart from an introduction and conclusion, the book is divided into five chapters in three parts. The opening part talks about the typical life story and community involvement of a free software hacker and briefly sketches the legal history of free software licenses. The second part talks about the experience of hacking, with a particular focus on playful expression and the tension between collaboration, competitiveness, and proving one's membership in the group. The final part dives into software as speech, legal and political struggles against the DMCA and other attempts to restrict code through copyright law, and the free software challenge to the liberal regime of capitalism and private property, grounded in the also-liberal value of free speech.

There's a lot here to discuss, but it's also worth noting what's not here, and what I think would have been here if the same field work were done today. There's nothing about gender or inclusion, which have surpassed DMCA issues to become the political flash point du jour. (Coleman notes early in the book that she intentionally omitted that topic as one that deserves its own separate treatment.) The presentation of social norms and behaviors also felt strongly centered in an early 2000s attitude towards social testing, with low tolerance of people who haven't proven their competence. Coleman uses the term meritocracy with very few caveats and complications. I don't think one would do that in work starting today; the flaws, unwritten borders, and gatekeeping for who can participate in that supposed meritocracy are now more frequently discussed.

Those omissions left me somewhat uncomfortable throughout. Coleman follows the community self-image from a decade or more ago (which makes sense, given that's when most of her field research and the majority of examples she draws on in the book are from): valuing technical acumen and skilled play, devoted to free speech, and welcoming and valuing anyone with similar technical abilities. While this self-image is not entirely wrong, it hides a world of unspoken rules and vicious gatekeeping to control who gets to have free speech within the community, what types of people are valued, and who is allowed to not do emotional labor. And who isn't.

These are rather glaring gaps, and for me they limit the usefulness of Coding Freedom as an accurate analysis of the community.

That said, I do want to acknowledge that this wasn't Coleman's project. Her focus, instead, is on the way free software communities noticed and pushed into the open some hidden conflicts in the tradition of liberalism. Free political speech and democratic politics have gone hand-in-hand with capitalism and an overwhelming emphasis on private property, extended into purely virtual objects such as computer software. Free software questions that alliance, pokes at it, and at times even rips it apart.

The free software movement is deeply embedded in liberalism. Although it has members from anarchist, communist, and other political traditions, the general community is not very radical in its understanding of speech, labor, or politics. It has a long tradition of trying to avoid disruptive politics, apart from issues that touch directly on free software, to maximize its political alliances and avoid alienating any members. Free software is largely not a critique of liberalism from the outside; it's a movement that expresses a conflict inside the liberal tradition. It asks whether self-expression is consistent with, and more important than, private property, a question that liberalism otherwise attempts to ignore.

This is the part of the book I found fascinating: looking at my community from the outside, putting emergent political positions in that community into a broader context, and showing the complex and skillful ways that the community discusses, analyzes, and reaches consensus on those positions while retaining a broad base of support and growing membership. Coleman provides a sense of being part of something larger in the best and most complicated way: not a revolution, not an ideology, but a community with complex boundaries, rituals that are both scoffed at and followed, and gatekeeping behaviors that exist in part because any human community will create and enforce boundaries.

When one is deeply inside a culture, it's easy to get lost in the ethical debates over whether a particular community behavior is good or bad. It takes an anthropologist to recast all those behaviors, good and bad, as humans being human, and to ask curious questions about what social functions those behaviors serve. Coding Freedom gave me a renewed appreciation of the insight that can come from the disinterested observer. If nothing else, it might help me choose my battles more strategically, and have more understanding and empathy.

This is a very academic work, at least compared to what I normally read. I never lost the thread of Coleman's argument, but I found it hard going and heavy on jargon in a few places. If, like me, you're not familiar with current work in anthropology, you'll probably feel like part of the discussion is going over your head, and that some terms you're reading with their normal English meaning are actually terms of art with more narrow and specific definitions. This is a book rather than an academic paper, and it does try to be approachable, but it's more research than popularization.

I wish Coding Freedom were more engaged with the problems of free software today, instead of the problems of free software in 2002, the era of United States v. Elcom Ltd. and Free Dmitry. I wish that Coleman had been far more critical of the concept of a meritocracy, and had dug deeper into the gatekeeping and boundaries around who is allowed to participate and who is discouraged or excluded. And while I'm not going to complain about academic rigor, I wish the prose were a bit lighter and a bit more approachable, and that it hadn't taken me months to read this book.

But, that said, I'm not sorry to have finally read it. The perspective from the anthropological view of one's own community is quite valuable. The distance provides an opportunity for less judgmental analysis, and a reminder that human social structures are robust and complex attempts to balance contradictory goals.

Coleman made me feel more connected, not to an overarching ideology or political goal, but to a tangled, flawed, dynamic, and responsive community, whose primary shared purpose is to support that human complexity. Sometimes it's easy to miss that forest for the day-to-day trees.

If you want to get more of a feel for Coleman's work, her keynote on Anonymous at DebConf14 in Portland in 2014 is very interesting and consistent in tone and approach with this book (albeit on a somewhat more controversial topic).

Rating: 6 out of 10

Planet DebianPaul Wise: FLOSS Activities February 2018

  • myrepos: merge patches, triage bugs
  • Debian: forward domain expiry, discuss sensitive files with service maintainer
  • Debian QA: bug triage
  • Debian package tracker: deploy latest code
  • Debian mentors: check why package wasn't uploaded, restart importer after crash
  • Debian wiki: remove extraneous tmp file, fix user email address, unblacklist IP addresses, whitelist email addresses, whitelist email domain
  • Debian website: investigate translation update issue

The work on harmony and librecaptcha was sponsored by my employer. All other work was done on a volunteer basis.

Planet DebianNorbert Preining: Ten Mincho – Great font and ugly Adobe

I recently stumbled upon a very interesting article by Ken Lunde (well known for the CJKV Information Processing book) on a new Japanese typeface called Ten Mincho, designed by Ryoko Nishizuka and Robert Slimbach. Reading that the kanji and roman parts are well balanced, with the latter designed by Robert Slimbach, I was very tempted to get these fonts for my own publications and reports. But well, not with Adobe 🙁

The fonts are available at TypeKit, but a detailed study of the license terms and EULA gave me cold shivers:

These are not perpetual licenses. You won’t have direct access to the font files, so you will need to keep the Creative Cloud application running in order to keep using the Marketplace fonts you’ve synced.
A few things to know about fonts from Marketplace

So may I repeat:

  • You pay for the fonts but you can only use them while running Creative Cloud.
  • You have no way to use the fonts with any other application.
  • Don’t even think about using the fonts on Linux with TeX.
  • We (Adobe) can remove these fonts from your library at our own free will; that is what “not a perpetual license” means.

Also when you purchase the fonts you are warned in the last step that:

All sales are final. Sorry, no refunds. Please contact us if you have any questions before purchasing.

So not only can you not freely use the fonts you purchased, you also cannot ask for a refund.

Adobe, that is a shame.

Planet DebianJohn Goerzen: Emacs #2: Introducing org-mode

In my first post in my series on Emacs, I described returning to Emacs after over a decade of vim, and org-mode being the reason why.

I really am astounded at the usefulness, and simplicity, of org-mode. It is really a killer app.

So what exactly is org-mode?

I wrote yesterday:

It’s an information organization platform. Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.”

That’s true, but doesn’t quite capture it. org-mode is a toolkit for you to organize things. It has reasonable out-of-the-box defaults, but it’s designed throughout for you to customize.

To highlight a few things:

  • Maintaining TODO lists: items can be scattered across org-mode files, contain attachments, have tags, deadlines, schedules. There is a convenient “agenda” view to show you what needs to be done. Items can repeat.
  • Authoring documents: org-mode has special features for generating HTML, LaTeX, slides (with LaTeX beamer), and all sorts of other formats. It also supports direct evaluation of code in-buffer and literate programming in virtually any Emacs-supported language. If you want to bend your mind on this stuff, read this article on literate devops. The entire Worg website is made with org-mode.
  • Keeping notes: yep, it can do that too. With full-text search, cross-referencing by file (as a wiki), by UUID, and even into other systems (into mu4e by Message-ID, into ERC logs, etc, etc.)

Getting started

I highly recommend watching Carsten Dominik’s excellent Google Talk on org-mode. It is a fine introduction.

org-mode is included with Emacs, but you’ll often want a more recent version. Debian users can apt-get install org-mode, or it is available via the Emacs packaging system; M-x package-install RET org RET may do it for you.

Now, you’ll probably want to start with the org-mode compact guide’s introduction section, noting in particular to set the keybindings mentioned in the activation section.

A good tutorial…

I’ve linked to a number of excellent tutorials and introductory items; this post is not going to serve as a tutorial. There are two good videos linked at the end of this post, in particular.

Some of my configuration

I’ll document some of my configuration here, and go into a bit of what it does. This isn’t necessarily because you’ll want to copy all of this verbatim — but just to give you a bit of an idea of some of what can be configured, an idea of what to look up in the manual, and maybe a reference for “now how do I do that?”

First, I set up Emacs to work in UTF-8 by default.

(prefer-coding-system 'utf-8)
(set-language-environment "UTF-8")

org-mode can follow URLs. By default, it opens them in Firefox, but I use Chromium.

(setq browse-url-browser-function 'browse-url-chromium)

I set the basic key bindings as documented in the Guide, plus configure the M-RET behavior.

(global-set-key "\C-cl" 'org-store-link)
(global-set-key "\C-ca" 'org-agenda)
(global-set-key "\C-cc" 'org-capture)
(global-set-key "\C-cb" 'org-iswitchb)

(setq org-M-RET-may-split-line nil)

Configuration: Capturing

I can press C-c c from anywhere in Emacs. It will capture something for me, and include a link back to whatever I was working on.

You can define capture templates to set how this will work. I am going to keep two journal files for general notes about meetings, phone calls, etc. One for personal, one for work items. If I press C-c c j, then it will capture a personal item. The %a in all of these includes the link to where I was (or a link I had stored with C-c l).

(setq org-default-notes-file "~/org/")
(setq org-capture-templates
      '(("t" "Todo" entry (file+headline "" "Tasks")
         "* TODO %?\n  %i\n  %u\n  %a")
        ("n" "Note/Data" entry (file+headline "" "Notes/Data")
         "* %?   \n  %i\n  %u\n  %a")
        ("j" "Journal" entry (file+datetree "~/org/")
         "* %?\nEntered on %U\n %i\n %a")
        ("J" "Work-Journal" entry (file+datetree "~/org/")
         "* %?\nEntered on %U\n %i\n %a")))
(setq org-irc-link-to-logs t)

I like to link by UUIDs, which lets me move things between files without breaking locations. This helps generate UUIDs when I ask Org to store a link target for future insertion.

(require 'org-id)
(setq org-id-link-to-org-use-id 'create-if-interactive)

Configuration: agenda views

I like my week to start on a Sunday, and for org to note the time when I mark something as done.

(setq org-log-done 'time)
(setq org-agenda-start-on-weekday 0)

Configuration: files and refiling

Here I tell it what files to use in the agenda, and to add a few more to the plain text search. I like to keep a general inbox (from which I can move, or “refile”, content), and then separate tasks, journal, and knowledge base for personal and work items.

  (setq org-agenda-files (list "~/org/"))
  (setq org-agenda-text-search-extra-files
        (list "~/org/"))

  (setq org-refile-targets '((nil :maxlevel . 2)
                             (org-agenda-files :maxlevel . 2)
                             ("~/org/" :maxlevel . 2)
                             ("~/org/" :maxlevel . 2)))
(setq org-outline-path-complete-in-steps nil)         ; Refile in a single go
(setq org-refile-use-outline-path 'file)

Configuration: Appearance

I like a pretty screen. After you’ve gotten used to org a bit, you might try this.

(require 'org-bullets)
(add-hook 'org-mode-hook
          (lambda ()
            (org-bullets-mode t)))
(setq org-ellipsis "⤵")

Coming up next…

This hopefully showed a few things that org-mode can do. Coming up next, I’ll cover how to customize TODO keywords and tags, archiving old tasks, forwarding emails to org-mode, and using git to synchronize between machines.

You can also see a list of all articles in this series.

Resources to accompany this article

Planet DebianDirk Eddelbuettel: #17: Dependencies.

Dependencies are invitations for other people to break your package.
-- Josh Ulrich, private communication

Welcome to the seventeenth post in the relentlessly random R ravings series of posts, or R4 for short.

Dependencies. A truly loaded topic.

As R users, we are spoiled. Early in the history of R, Kurt Hornik and Friedrich Leisch built support for packages right into R, and started the Comprehensive R Archive Network (CRAN). And R and CRAN have had a fantastic run. Roughly twenty years later, we are looking at over 12,000 packages which can (generally) be installed with absolute ease and no surprises. No other (relevant) open source language has anything of comparable rigour and quality. This is a big deal.

And coding practices evolved and changed to play to this advantage. Packages are a near-unanimous recommendation, use of the install.packages() and update.packages() tooling is nearly universal, and most R users learned to their advantage to group code into interdependent packages. Obvious advantages are versioning and snap-shotting, attached documentation in the form of help pages and vignettes, unit testing, and of course continuous integration as a side effect of the package build system.

But the notion of 'oh, let me just build another package and add it to the pool of packages' can get carried away. A recent example I had was the work on the prrd package for parallel recursive dependency testing --- coincidentally, created entirely to allow for easier voluntary tests I do on reverse dependencies for the packages I maintain. It uses a job queue, for which I relied on the liteq package by Gabor, which does the job: enqueue jobs, reliably dequeue them (also in a parallel fashion), and more. It looks light enough:

R> tools::package_dependencies(package="liteq", recursive=FALSE, db=AP)$liteq
[1] "assertthat" "DBI"        "rappdirs"   "RSQLite"   

Two dependencies because it uses an internal SQLite database, one for internal tooling and one for configuration.

All good then? Not so fast. The devil here is the very innocuous and versatile RSQLite package because when we look at fully recursive dependencies all hell breaks loose:

R> tools::package_dependencies(package="liteq", recursive=TRUE, db=AP)$liteq
 [1] "assertthat" "DBI"        "rappdirs"   "RSQLite"    "tools"     
 [6] "methods"    "bit64"      "blob"       "memoise"    "pkgconfig" 
[11] "Rcpp"       "BH"         "plogr"      "bit"        "utils"     
[16] "stats"      "tibble"     "digest"     "cli"        "crayon"    
[21] "pillar"     "rlang"      "grDevices"  "utf8"      
R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=AP)$RSQLite
 [1] "bit64"      "blob"       "DBI"        "memoise"    "methods"   
 [6] "pkgconfig"  "Rcpp"       "BH"         "plogr"      "bit"       
[11] "utils"      "stats"      "tibble"     "digest"     "cli"       
[16] "crayon"     "pillar"     "rlang"      "assertthat" "grDevices" 
[21] "utf8"       "tools"     

Now we went from four to twenty-four, due to the twenty-two dependencies pulled in by RSQLite.

There, my dear friend, lies madness. The moment one of these packages breaks, we get potential side effects. And this is no laughing matter. Here is a tweet from Kieran, posted days before one of his book deadlines, when he was forced to roll back a CRAN package because it broke his entire setup. (The original tweet has by now been deleted; why people do that to their entire tweet histories is something I fail to comprehend too; in any case, the screenshot is from a private discussion I had with a few like-minded folks over Slack.)

That illustrates the quote by Josh at the top. As I too have "production code" (well, CRANberries for one relies on it), I was interested to see if we could easily amend RSQLite. And yes, we can. A quick fork and a few commits later, we have something we could call 'RSQLighter', as it reduces the dependencies quite a bit:

R> IP <- installed.packages()   # using my installed mod'ed version
R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=IP)$RSQLite
 [1] "bit64"     "DBI"       "methods"   "Rcpp"      "BH"        "bit"      
 [7] "utils"     "stats"     "grDevices" "graphics" 

That is less than half. I have not proceeded with the fork because I do not believe in needlessly splitting codebases. But this could be a viable candidate for an alternate or shadow repository with more minimal and hence more robust dependencies. Or, as Josh calls it, the tinyverse.

Another maddening aspect of dependencies is the ruthless application of what we could jokingly call Metcalfe's Law: the likelihood of breakage does of course increase with the number of edges in the dependency graph. A nice illustration is this post by Jenny trying to rationalize why one of the 87 (as of today) tidyverse packages now has the state "ORPHANED" at CRAN:

An invitation for other people to break your code. Well put indeed. Or to put rocks up your path.

But things are not all that dire. Most folks appear to understand the issue, some even do something about it. The DBI and RMySQL packages have saner strict dependencies, maybe one day things will improve for RMariaDB and RSQLite too:

R> tools::package_dependencies(package=c("DBI", "RMySQL", "RMariaDB"), recursive=TRUE, db=AP)
$DBI
[1] "methods"

$RMySQL
[1] "DBI"     "methods"

$RMariaDB
 [1] "bit64"     "DBI"       "hms"       "methods"   "Rcpp"      "BH"       
 [7] "plogr"     "bit"       "utils"     "stats"     "pkgconfig" "rlang"    


And to be clear, I do not believe in giving up and using everything via docker, or virtualenvs, or packrat, or ... A well-honed dependency system is wonderful and the right resource to get code deployed and updated. But it requires buy-in from everyone involved, and an understanding of the possible trade-offs. I think we can, and will, do better going forward.

Or else, there will always be the tinyverse ...

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cory DoctorowHey, Sydney! I’m coming to see you tonight (then Adelaide and Wellington!)

I’m just about to go to the airport to fly to Sydney for tonight’s event, What should we do about Democracy?

It’s part of the Australia/New Zealand tour for Walkaway, and from Sydney, I’m moving on to the Adelaide Festival and then to Wellington for Writers and Readers Week and the NetHui one-day event on copyright.

It feels like democracy is under siege, even in rich, peaceful countries like Australia that have escaped financial shocks and civil strife. Populist impulses have been unleashed in the UK and USA. There is a record lack of trust in the institutions of politics and government, exacerbated by the ways in which social media and digital technology can spread ‘fake news’ and are being harnessed by foreign powers to meddle in politics. Important issues that citizens care about, like climate change, are sidelined by professional politicians, enhancing the appeal of outsider figures. Do these problems add up to the failure of democracy? Are Brexit and Trump outliers, or the new normal? Join a lively panel of experts and commentators explore some big questions about the future of democracy, and think more clearly about what we ought to do.

Speakers Cory Doctorow, A.C. Grayling, Rebecca Huntley and Lenore Taylor

Chair Jeremy Moss

Cory DoctorowMy short story about better cities, where networks give us the freedom to schedule our lives to avoid heat-waves and traffic jams

I was lucky enough to be invited to submit a piece to Ian Bogost’s Atlantic series on the future of cities (previously: James Bridle, Bruce Sterling, Molly Sauter, Adam Greenfield); I told Ian I wanted to build on my 2017 Locus column about using networks to allow us to coordinate our work and play in a way that maximized our freedom, so that we could work outdoors on nice days, or commute when the traffic was light, or just throw an impromptu block party when the neighborhood needed a break.

The story is out today, with a gorgeous illustration by Molly Crabapple; the Atlantic called it “The City of Coordinated Leisure,” but in my heart it will always be “Coase’s Day Off: a microeconomics of coordinated leisure.”

There had been some block parties on Lima Street when Arturo had been too small to remember them, but then there had been a long stretch of unreasonably seasonable weather and no one had tried it, not until the year before, on April 18, a Thursday after a succession of days that vied to top each other for inhumane conditions, the weather app on the hallway wall showing 112 degrees before breakfast.

Mr. Papazian was the block captain for that party, and the first they’d known of it was when Arturo’s dad called out to his mom that Papazian had messaged them about a block party, and there was something funny in Dad’s tone, a weird mix of it’s so crazy and let’s do it.

That had been a day to remember, and Arturo had remembered, and watched the temperature.

The City of Coordinated Leisure [Cory Doctorow/The Atlantic]

Planet DebianChris Lamb: Free software activities in February 2018

Here is my monthly update covering what I have been doing in the free software world in February 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:

  • Add support for comparing Berkeley DB files. (Unfortunately this is currently incomplete because the libraries do not report metadata reliably!) (#890528)
  • Add support for comparing "XMLBeans" binary schemas. [...]
  • Drop spurious debugging code in Android tests. [...]


My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

Patches contributed

  • debian-policy: Replace dh_systemd_install with dh_installsystemd. (#889167)
  • juce: Missing build-depends on graphviz. (#890035)
  • roffit: debian/rules does not override targets as intended. (#889975)
  • Please add rel="canonical" to bug pages. (#890338)

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS).

Uploads

  • redis:
    • 4.0.8-1 — New upstream release and fix a potential hardlink vulnerability.
    • 4.0.8-2 — Also listen on ::1 (IPv6) by default. (#891432)
  • python-django:
    • 1.11.10-1 — New upstream security release.
    • 2.0.2-1 — New upstream security release.
  • redisearch:
    • 1.0.6-1 — New upstream release.
    • 1.0.7-1 — New upstream release & add Lintian overrides for package-does-not-install-examples.
    • 1.0.8-1 — New upstream release, which includes my reproducibility-related improvement.
  • adminer:
    • 4.6.1-1 — New upstream release and override debian-watch-does-not-check-gpg-signature as upstream do not release signatures.
    • 4.6.2-1 — New upstream release.
  • process-cpp:
    • 3.0.1-3 — Make the documentation reproducible.
    • 3.0.1-4 — Correct Vcs-Bzr to Vcs-Git.
  • sleekxmpp (1.3.3-3) — Make the build reproducible. (#890193)
  • python-redis (2.10.6-2) — Correct autopkgtest dependencies and misc packaging updates.
  • bfs (1.2.1-1) — New upstream release.

I also made misc packaging updates for docbook-to-man (1:2.0.0-41), gunicorn (19.7.1-4), installation-birthday (8) & python-daiquiri (1.3.0-3).

Finally, I performed the following sponsored uploads: check-manifest (0.36-2), django-ipware (2.0.1-1), nose2 (0.7.3-3) & python-keyczar (0.716+ds-2).

Debian bugs filed

  • zsh: Please make apt install completion work on "local" files. (#891140)
  • git-gui: Ignores git hooks. (#891552)
  • python-coverage:
    • Installs pyfile.html into wrong directory breaking HTML report generation. (#890560)
    • Document copyright information for bundled JavaScript source. (#890578)

FTP Team

As a Debian FTP assistant I ACCEPTed 123 packages: apticron, aseba, atf-allwinner, bart-view, binutils, browserpass, bulk-media-downloader, ceph-deploy, colmap, core-specs-alpha-clojure, ctdconverter, debos, designate, editorconfig-core-py, essays1743, fis-gtm, flameshot, flex, fontmake, fonts-league-spartan, fonts-ubuntu, gcc-8, getdns, glyphslib, gnome-keyring, gnome-themes-extra, gnome-usage, golang-github-containerd-cgroups, golang-github-go-debos-fakemachine, golang-github-mattn-go-zglob, haskell-regex-tdfa-text, https-everywhere, ibm-3270, ignition-fuel-tools, impass, inetsim, jboss-bridger, jboss-threads, jsonrpc-glib, knot-resolver, libctl, liblouisutdml, libopenraw, libosmo-sccp, libtest-postgresql-perl, libtickit, linux, live-tasks, minidb, mithril, mutter, neuron, node-acorn-object-spread, node-babel, node-call-limit, node-color, node-colormin, node-console-group, node-consolidate, node-cosmiconfig, node-css-color-names, node-date-time, node-err-code, node-gulp-load-plugins, node-html-comment-regex, node-icss-utils, node-is-directory, node-mdn-data, node-mississippi, node-mutate-fs, node-node-localstorage, node-normalize-range, node-postcss-filter-plugins, node-postcss-load-options, node-postcss-load-plugins, node-postcss-minify-font-values, node-promise-retry, node-promzard, node-require-from-string, node-rollup, node-rollup-plugin-buble, node-ssri, node-validate-npm-package-name, node-vue-resource, ntpsec, nvidia-cuda-toolkit, nyx, pipsi, plasma-discover, pokemmo, pokemmo-installer, polymake, privacybadger, proxy-switcher, psautohint, purple-discord, pytest-astropy, pytest-doctestplus, pytest-openfiles, python-aiomeasures, python-coverage, python-fitbit, python-molotov, python-networkmanager, python-os-service-types, python-pluggy, python-stringtemplate3, python3-antlr3, qpack, quintuple, r-cran-animation, r-cran-clustergeneration, r-cran-phytools, re2, sat-templates, sfnt2woff-zopfli, sndio, thunar, uhd, undertime, usbauth-notifier, vmdb2 & xymonq.

I additionally filed 15 RC bugs against packages that had incomplete debian/copyright files: browserpass, designate, fis-gtm, flex, gnome-keyring, ibm-3270, knot-resolver, libopenraw, libtest-postgresql-perl, mithril, mutter, ntpsec, plasma-discover, pytest-arraydiff & r-cran-animation.

Krebs on SecurityHow to Fight Mobile Number Port-out Scams

T-Mobile, AT&T and other mobile carriers are reminding customers to take advantage of free services that can block identity thieves from easily “porting” your mobile number out to another provider, which allows crooks to intercept your calls and messages while your phone goes dark. Tips for minimizing the risk of number porting fraud are available below for customers of all four major mobile providers, including Sprint and Verizon.

Unauthorized mobile phone number porting is not a new problem, but T-Mobile said it began alerting customers about it earlier this month because the company has seen a recent uptick in fraudulent requests to have customer phone numbers ported over to another mobile provider’s network.

“We have been alerting customers via SMS that our industry is experiencing a phone number port out scam that could impact them,” T-Mobile said in a written statement. “We have been encouraging them to add a port validation feature, if they’ve not already done so.”

Crooks typically use phony number porting requests when they have already stolen the password for a customer account (either for the mobile provider’s network or for another site), and wish to intercept the one-time password that many companies send to the mobile device to perform two-factor authentication.

Porting a number to a new provider shuts off the phone of the original user, and forwards all calls to the new device. Once in control of the mobile number, thieves can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.

In these cases, the fraudsters can call a customer service specialist at a mobile provider and pose as the target, providing the mark’s static identifiers like name, date of birth, social security number and other information. Often this is enough to have a target’s calls temporarily forwarded to another number, or ported to a different provider’s network.

“Port out fraud has been an industry problem for a long time, but recently we’ve seen an uptick in this illegal activity,” T-Mobile said. “We’re not providing specific metrics, but it’s been enough that we felt it was important to encourage customers to add extra security features to their accounts.”

In a blog post published Tuesday, AT&T said bad guys sometimes use illegal porting to steal your phone number, transfer the number to a device they control and intercept text authentication messages from your bank, credit card issuer or other companies.

“You may not know this has happened until you notice your mobile device has lost service,” reads a post by Brian Rexroad, VP of security relations at AT&T. “Then, you may notice loss of access to important accounts as the attacker changes passwords, steals your money, and gains access to other pieces of your personal information.”

Rexroad says in some cases the thieves just walk into an AT&T store and present a fake ID and your personal information, requesting to switch carriers. Porting allows customers to take their phone number with them when they change phone carriers.

The law requires carriers to provide this number porting feature, but there are ways to reduce the risk of this happening to you.

T-Mobile suggests adding its port validation feature to all accounts. To do this, call 611 from your T-Mobile phone or dial 1-800-937-8997 from any phone. The T-Mobile customer care representative will ask you to create a 6-to-15-digit passcode that will be added to your account.

“We’ve included alerts in the T-Mobile customer app and on, but we don’t want customers to wait to get an alert to take action,” the company said in its statement. “Any customer can call 611 at any time from their mobile phone and have port validation added to their accounts.”

Verizon requires a match on a password or a PIN associated with the account for a port to go through. Subscribers can set their PIN via their Verizon Wireless website account or by visiting a local shop.

Sprint told me that in order for a customer to port their number to a different carrier, they must provide the correct Sprint account number and PIN for the port to be approved. Sprint requires all customers to create a PIN during their initial account setup.

AT&T calls its two-factor authentication “extra security,” which involves creating a unique passcode on your AT&T account that requires you to provide that code before any changes can be made — including ports initiated through another carrier. Follow this link for more information. And don’t use something easily guessable like your SSN (the last four of your SSN is the default PIN, so make sure you change it quickly to something you can remember but that’s non-obvious).

Bigger picture, these porting attacks are a good reminder to use something other than a text message or a one-time code that gets read to you in an automated phone call. Whenever you have the option, choose the app-based alternative: Many companies now support third-party authentication apps like Google Authenticator and Authy, which can act as powerful two-factor authentication alternatives that are not nearly as easy for thieves to intercept.

Several of the mobile companies referred me to the work of a Mobile Authentication task force created by the carriers last fall. They say the issue of unauthorized ports to commit fraud is being addressed by this initiative.

For more on tightening your mobile security stance, see last year’s story, “Is Your Mobile Carrier Your Weakest Link?”

CryptogramApple to Store Encryption Keys in China

Apple is bowing to pressure from the Chinese government and storing encryption keys in China. While I would prefer it if it would take a stand against China, I really can't blame it for putting its business model ahead of its desires for customer privacy.

Two more articles.

Worse Than FailureCodeSOD: The Part Version

Once upon a time, there was a project. Like most projects, it was understaffed, under-budgeted, under-estimated, and under the gun. Death marches ensued, and 80 hour weeks became the norm. The attrition rate was so high that no one who was there at the start of the project was there at the end of the project. Like the Ship of Theseus, each person was replaced at least once, but it was still the same team.

Eric wasn’t on that team. He was, however, a consultant. When the project ended and nothing worked, Eric got called in to fix it. And then called back to fix it some more. And then called back to implement new features. And called back…

While diagnosing one problem, Eric stumbled across the method getPartVersions. A part number was always something like “123456-1”, where the first group of numbers is the part number itself, and the portion after the “-” is the version of that part.

So, getPartVersions, then, should be something like:

String getPartVersions(String part) {
    //sanity checks omitted
    return part.split("-")[1];
}

The first hint that things weren’t implemented in a sane way was the method’s signature:

    private List<Integer> getPartVersions(final String searchString)

Why was it returning a list? The calling code always used the first element in the list, and the list was always one element long.

    private List<Integer> getPartVersions(final String searchString) {
        final List<Integer> partVersions = new ArrayList<>();
        if (StringUtils.indexOfAny(searchString, DELIMITER) != -1) {
            final String[] splitString = StringUtils.split(searchString, DELIMITER);
            if (splitString != null && splitString.length > 1) {
                //this is the partIdentifier, we make it empty so it will not be parsed as a version
                splitString[0] = "";
                for (String s : splitString) {
                    s = s.trim();
                    try {
                        if (s.length() <= 2) {
                            partVersions.add(Integer.parseInt(s));
                        }
                    } catch (final NumberFormatException ignored) {
                        //Do nothing, probably not a partVersion
                    }
                }
            }
        }
        return partVersions;
    }

A part number is always in the form “{PART}-{VERSION}”. That is what the variable searchString should contain. So they do their basic sanity checks: is there a dash there, does it split into two pieces, and so on. Even these sanity checks hint at a WTF, as StringUtils is obviously just a wrapper around built-in string functions.

Things get really odd, though, with this:

                splitString[0] = "";
                for (String s : splitString) //…

Throw away the part number, then iterate across the entire series of strings we made by splitting. Check the length: if it’s less than or equal to two, it must be the part version. Parse it into an integer and put it in the list. The real “genius” element of this code is that since the first entry in the splitString array is set to an empty string, Integer.parseInt will throw an exception, thus ensuring we don’t accidentally put the part number in our list.
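
That behavior is easy to confirm in isolation; the emptied-out first element always fails to parse:

    // "" has length 0, so it passes the length check, but parsing it
    // fails immediately and lands in the ignored catch block:
    Integer.parseInt("");  // throws NumberFormatException: For input string: ""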

I’ve personally written methods that have this sort of tortured logic, and given what Eric tells us about the history of the project, I suspect I know what happened here. This method was written before the requirement it fulfilled was finalized. No one, including the business users, actually knew the exact format or structure of a part number. The developer got five different explanations, which turned out to be wrong in 15 different ways, and implemented a compromise that just kept getting tweaked until someone looked at the results and said, “Yes, that’s right.” The dev then closed out the requirement and moved onto the next one.

Eric left the method alone: he wasn’t being paid to refactor things, and too much downstream code depended on the method signature returning a List<Integer>.

Planet DebianJan Wagner: Deploying a (simple) docker container system

When a small platform for shipping containers is needed (not speaking about Kubernetes or anything of that scale), there are a couple of common things you might want to deploy first.

Usual things I have to roll out every time I deploy such a platform:

Bootstraping docker and docker-compose

Most services are built upon multiple containers. A useful tool for doing this is docker-compose, where you can describe your whole 'application'. So we need to deploy it beside docker itself.
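
The exact bootstrap depends on the distribution; as a sketch for a Debian-style host (package names vary between releases, and the convenience script is Docker's own):

    # Docker engine, either from the distribution...
    apt-get install
    # ...or via Docker's convenience script for a newer engine:
    curl -fsSL | sh

    # docker-compose, from the archive or from PyPI:
    apt-get install docker-compose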

Deploying Watchtower

An essential operational part is to keep your container images up to date.

Watchtower is an application that will monitor your running Docker containers and watch for changes to the images that those containers were originally started from. If watchtower detects that an image has changed, it will automatically restart the container using the new image.
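
Watchtower itself ships as a container; a minimal invocation along the lines of its documentation (image name as published at the time) only needs access to the Docker socket:

    docker run -d \
      --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      v2tec/watchtower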

Deploying http(s) reverse proxy Træfik

If you want to provide multiple (web)services on port 80 and 443, you have to think about how this should be solved. Usually you would use a http(s) reverse proxy, there are many of software implementations available.
The challenging part in such an environment is that services may appear and disappear frequently. (Re)configuration of the proxy service is the gap that needs to be closed.

Træfik (pronounced like traffic) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease [...] to manage its configuration automatically and dynamically.

Træfik has many interesting features for example 'Let's Encrypt support (Automatic HTTPS with renewal)'.
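
To sketch how that dynamic configuration looks with the Docker backend of Træfik 1.x: a service announces itself purely through container labels (hostname and image here are placeholders), and Træfik picks it up and drops it as the container comes and goes.

    # docker-compose.yml fragment for some web service behind Træfik
    version: '2'
    services:
      whoami:
        image: emilevauge/whoami
        labels:
          - "traefik.enable=true"
          - "traefik.frontend.rule=Host:whoami.example.org"
          - "traefik.port=80"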


Planet DebianJohn Goerzen: Emacs #1: Ditching a bunch of stuff and moving to Emacs and org-mode

I’ll admit it. After over a decade of vim, I’m hooked on Emacs.

I’ve long had this frustration over how to organize things. I’ve followed approaches like GTD and ZTD, but things like email or large files are really hard to organize.

I had been using Asana for tasks, Evernote for notes, Thunderbird for email, a combination of ikiwiki and some other items for a personal knowledge base, and various files in an archive directory on my PC. When my new job added Slack to the mix, that was finally the last straw.

A lot of todo-management tools integrate with email — poorly. When you want to do something like “remind me to reply to this in a week”, a lot of times that’s impossible because the tool doesn’t store the email in a fashion you can easily reply to. And that problem is even worse with Slack.

It was right around then that I stumbled onto Carsten Dominik’s Google Talk on org-mode. Carsten was the author of org-mode, and although the talk is 10 years old, it is still highly relevant.

I’d stumbled across org-mode before, but each time I didn’t really dig in because I had the reaction of “an outliner? But I need a todo list.” Turns out I was missing out. org-mode is all that.

Just what IS Emacs? And org-mode?

Emacs grew up as a text editor. It still is, and that heritage is definitely present throughout. But to say Emacs is an editor would be rather unfair.

Emacs is something more like a platform or a toolkit. Not only do you have source code to it, but the very configuration is a program, and there are hooks all over the place. It’s as if it was super easy to write a Firefox plugin. A couple lines, and boom, behavior changed.

org-mode is very similar. Yes, it’s an outliner, but that’s not really what it is. It’s an information organization platform. Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.”


If you’ve ever read productivity guides based on GTD, one of the things they stress is effortless capture of items. The idea is that when something pops into your head, get it down into a trusted system quickly so you can get on with what you were doing. org-mode has a capture system for just this. I can press C-c c from anywhere in Emacs, and up pops a spot to type my note. But, critically, automatically embedded in that note is a link back to what I was doing when I pressed C-c c. If I was editing a file, it’ll have a link back to that file and the line I was on. If I was viewing an email, it’ll link back to that email (by Message-Id, no less, so it finds it in any folder). Same for participating in a chat, or even viewing another org-mode entry.

So I can make a note that will remind me in a week to reply to a certain email, and when I click the link in that note, it’ll bring up the email in my mail reader — even if I subsequently archived it out of my inbox.

YES, this is what I was looking for!

The tool suite

Once you’re using org-mode, pretty soon you want to integrate everything with it. There are browser plugins for capturing things from the web. Multiple Emacs mail or news readers integrate with it. ERC (IRC client) does as well. So I found myself switching from Thunderbird and mairix+mutt (for the mail archives) to mu4e, and from xchat+slack to ERC.

And wouldn’t you know it, I liked each of those Emacs-based tools better than the standalone they replaced.

A small side tidbit: I’m using OfflineIMAP again! I even used it with GNUS way back when.

One Emacs process to rule them

I used to use Emacs extensively, way back. Back then, Emacs was a “large” program. (Now my battery status applet literally uses more RAM than Emacs). There was this problem of startup time back then, so there was a way to connect to a running Emacs process.

I like to spawn programs with Mod-p (an xmonad shortcut to a dzen menubar, but Alt-F2 in more traditional DEs would do the trick). It’s convenient to not run several emacsen with this setup, so you don’t run into issues with trying to capture to a file that’s open in another one. The solution is very simple: I created a script, named it em, and put it on my path. All it does is this:

exec emacsclient -c -a "" "$@"

It creates a new emacs process if one doesn’t already exist; otherwise, it uses what you’ve got. A bonus here: parameters such as -nw work just fine, so it really acts just as if you’d typed emacs at the shell prompt. It’s a suitable setting for EDITOR.
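
For instance, in a shell profile (a sketch; some tools want a single word in EDITOR, so wrap it in a script if needed):

    export EDITOR="em -nw"
    export VISUAL=em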

Up next…

I’ll be talking about my use of, and showing off configurations for:

  • org-mode, including syncing between computers, capturing, agenda and todos, files, linking, keywords and tags, various exporting (slideshows), etc.
  • mu4e for email, including multiple accounts, bbdb integration
  • ERC for IRC and IM

You can also see a list of all articles in this series.

Planet DebianRenata D'Avila: Woman. Not in tech.

Thank you, Livia Gabos, for helping me to improve this article by giving me feedback on it.

Before I became an intern with Outreachy, my Twitter bio read: "Woman. Not in tech." Well, if you didn't get the picture, let me explain what that meant.

It all began with a simple request I received almost a year ago:

Hey, do you want to join our [company] event and give a talk about being a women in tech?

I don't have a job in the tech industry. So, yes, while society does put me in the 'woman' column, I have to admit it's a little hard to give a talk about being 'in tech' when I'm not 'in tech'.

What I can talk about, though, it's about all the women who are not in tech. The many, many friends I have who come to Women in Tech events and meetings, who reach out to me by e-mail, Twitter or even in person, who are struggling to get into tech.

I can talk about the only other girl in my class who, besides me, managed to get an internship. And how we both only got the position because we had passed a written exam about informatics, instead of going through usual channels such as referrals, CV analysis or interviews.

I can talk about the women who are seen as lazy, or as just not getting the lessons in tech courses, because they don't have the same background or the same amount of time available to study and do homework at home as their male peers do, since they have to take care of relatives, take care of children, and take care of the housework for their family, often while working one or two jobs just to be able to study.

I can talk about the women and about the mothers who after many years being denied the possibility for a tech career are daring to change paths, but are denied junior positions in favor of younger men who "can be trained on the job" and have "so much more willingness to learn".

I can talk about the women who are seen as uninterested in one or more FLOSS technologies because they don't contribute to said technology, since the men in FLOSS projects have continuously failed in engage and - most importantly - keep them included (but maybe that's just because women lack role models).

A screenshot of the proposal made by the Brazilian community for DebConf19. Even though it lists a lot of women in tech groups, the all-male organizing team says "There is an expectation that the coming of women DDs may spark the interest of these female students by Debian." Even though there are so many Women in Tech communities in Curitiba, as listed above, the all-male 'core team' of the local Debian community itself couldn't find a single woman to work with them for the DebConf proposal. Go figure.

I can talk about the many women I met not at tech conferences, but at teachers' conferences, who have way more experience with computers and programming than I do. Women who after years working in the field have given up IT to become teachers, not because it was their lifelong dream, but because they didn't feel comfortable and well-integrated in a male-dominated, full-of-misogynistic-ideals tech industry. Because it was - and is - almost impossible for them to break the glass ceiling.

I can even talk about all the women who are lesbians that a certain community of Women In Tech could not find when they wanted someone to write an article about 'being homosexual in tech' to be published right on Brazil's Lesbian Visibility Day, so they had to go and ask a gay man to talk about his own experience. Well, it seems like those women aren't "in tech" either.

Tokenization can be especially apparent when the lone person in a minority group is not only asked to speak for the group, but is consistently asked to speak about being a member of that group. Geek Feminism - Tokenism

The thing is, a lot of people don't want to hear any of those stories. Companies in particular only want token women from outside the company (because, let's face it, most tech companies can't find the talent within) who will come up on stage and inspire other women by saying what a great experience it is to be in tech - and that "everyone should try it too!".

"Don't talk about diversity unless you're also commited to inclusion." Naomi Ceder

I do believe all women should try and get knowledge about tech and that is what I work towards. We shouldn't have to rely only on the men in our life to get things done with our computers or our cell phones or our digital life.

But to tell other women they should get into the tech industry? I guess not.

After all, who am I to tell other women they should come to tech - and to stay in tech - when I know we are bound to face all this?


For Brazilian women not in tech, I'm organizing a crowdfunding campaign to get at least five of them the opportunity to attend MiniDebConf in Curitiba, Parana, in April. None of these girls can afford the trip and they don't have a company to sponsor them. If you are willing to help, please get in touch or check this link: Women in MiniDebConf.

More on the subject:

Planet DebianBenjamin Mako Hill: XORcise

XORcise (ɛɡ.zɔʁ.siz) verb 1. To remove observations from a dataset if they satisfy one of two criteria, but not both. [e.g., After XORcising adults and citizens, only foreign children and adult citizens were left.]

Krebs on SecurityBot Roundup: Avalanche, Kronos, NanoCore

It’s been a busy few weeks in cybercrime news, justifying updates to a couple of cases we’ve been following closely at KrebsOnSecurity. In Ukraine, the alleged ringleader of the Avalanche malware spam botnet was arrested after eluding authorities in the wake of a global cybercrime crackdown there in 2016. Separately, a case that was hailed as a test of whether programmers can be held accountable for how customers use their product turned out poorly for 27-year-old programmer Taylor Huddleston, who was sentenced to almost three years in prison for making and marketing a complex spyware program.

First, the Ukrainian case. On Nov. 30, 2016, authorities across Europe coordinated the arrest of five individuals thought to be tied to the Avalanche crime gang, in an operation that the FBI and its partners abroad described as an unprecedented global law enforcement response to cybercrime. Hundreds of malicious web servers and hundreds of thousands of domains were blocked in the coordinated action.

The global distribution of servers used in the Avalanche crime machine. Source:

The alleged leader of the Avalanche gang — 33-year-old Russian Gennady Kapkanov — did not go quietly at the time. Kapkanov allegedly shot at officers with a Kalashnikov assault rifle through the front door as they prepared to raid his home, and then attempted to escape off of his 4th floor apartment balcony. He was later released, after police allegedly failed to file proper arrest records for him.

But on Monday Agence France-Presse (AFP) reported that Ukrainian authorities had once again collared Kapkanov, who was allegedly living under a phony passport in Poltava, a city in central Ukraine. No word yet on whether Kapkanov has been charged, which was supposed to happen Monday.

Kapkanov’s drivers license. Source:


Lawyers for Taylor Huddleston, a 27-year-old programmer from Hot Springs, Ark., originally asked a federal court to believe that the software he sold on the sprawling hacker marketplace Hackforums — a “remote administration tool” or “RAT” designed to let someone administer one or many computers remotely — was just a benign tool.

The bad things done with Mr. Huddleston’s tools, the defendant argued, were not Mr. Huddleston’s doing. Furthermore, no one had accused Mr. Huddleston of even using his own software.

The Daily Beast first wrote about Huddleston’s case in 2017, and at the time suggested his prosecution raised questions of whether a programmer could be held criminally responsible for the actions of his users. My response to that piece was “Dual-Use Software Criminal Case Not So Novel.”

Photo illustration by Lyne Lucien/The Daily Beast

The court was swayed by evidence that yes, Mr. Huddleston could be held criminally responsible for those actions. It sentenced him to 33 months in prison after the defendant acknowledged that he knew his RAT — a Remote Access Trojan dubbed “NanoCore RAT” — was being used to spy on webcams and steal passwords from systems running the software.

Of course Huddleston knew: He didn’t market his wares on some Craigslist software marketplace ad, or via video promos on his local cable channel: He marketed the NanoCore RAT and another software licensing program called Net Seal exclusively on Hackforums[dot]net.

This sprawling, English language forum has a deep bench of technical forum discussions about using RATs and other tools to surreptitiously record passwords and videos of “slaves,” the derisive term for systems secretly infected with these RATs.

Huddleston knew what many of his customers were doing because many NanoCore users also used Huddleston’s Net Seal program to keep their own RATs and other custom hacking tools from being disassembled or “cracked” and posted online for free. In short: He knew what programs his customers were using Net Seal on, and he knew what those customers had done or intended to do with tools like NanoCore.

The sentencing suggests that where you choose to sell something online says a lot about what you think of your own product and who’s likely buying it.

Daily Beast author Kevin Poulsen noted in a July 2017 story that Huddleston changed his tune and pleaded guilty. The story pointed to an accompanying plea in which Huddleston stipulated that he “knowingly and intentionally aided and abetted thousands of unlawful computer intrusions” in selling the program to hackers and that he “acted with the purpose of furthering these unauthorized computer intrusions and causing them to occur.”


Bleeping Computer’s Catalin Cimpanu observes that Huddleston’s case is similar to another being pursued by U.S. prosecutors against Marcus “MalwareTech” Hutchins, the security researcher who helped stop the spread of the global WannaCry ransomware outbreak in May 2017. Prosecutors allege Hutchins was the author and proprietor of “Kronos,” a strain of malware designed to steal online banking credentials.

Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm. Image:

On Sept. 5, 2017, KrebsOnSecurity published “Who is Marcus Hutchins?”, a breadcrumbs research piece on the public user profiles known to have been wielded by Hutchins. The data did not implicate him in the Kronos trojan, but it chronicles the evolution of a young man who appears to have sold and published online quite a few unique and powerful malware samples — including several RATs and custom exploit packs (as well as access to hacked PCs).

MalwareTech declined to be interviewed by this publication in light of his ongoing prosecution. But Hutchins has claimed he never had any customers because he didn’t write the Kronos trojan.

Hutchins has pleaded not guilty to all four counts against him, including conspiracy to distribute malicious software with the intent to cause damage to 10 or more affected computers without authorization, and conspiracy to distribute malware designed to intercept protected electronic communications.

Hutchins said through his @MalwareTechBlog account on Twitter Feb. 26 that he wanted to publicly dispute my Sept. 2017 story. But he didn’t specify why other than saying he was “not allowed to.”

MWT wrote: “mrw [my reaction when] I’m not allowed to debunk the Krebs article so still have to listen to morons telling me why I’m guilty based on information that isn’t even remotely correct.”

Hutchins’ tweet on Feb. 26, 2018.

According to a story at BankInfoSecurity, the evidence submitted by prosecutors for the government includes:

  • Statements made by Hutchins after he was arrested.
  • A CD containing two audio recordings from a county jail in Nevada where he was detained by the FBI.
  • 150 pages of Jabber chats between the defendant and an individual.
  • Business records from Apple, Google and Yahoo.
  • Statements (350 pages) by the defendant from another internet forum, which were seized by the government in another district.
  • Three to four samples of malware.
  • A search warrant executed on a third party, which may contain some privileged information.

The case against Hutchins continues apace in Wisconsin. A scheduling order for pretrial motions filed Feb. 22 suggests the court wishes to have a speedy trial that concludes before the end of April 2018.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #148

Here's what happened in the Reproducible Builds effort between Sunday February 18 and Saturday February 24 2018:

Logo and Outreachy/GSoC

Reproducible work in other projects

There were a number of blog posts related to reproducible builds published this week:

Development and fixes in Debian key packages

Norbert Preining added calls to dh_stripnondeterminism to a number of TexLive packages which should let them become reproducible in Debian (#886988).

"Y2K-bug reloaded"

As part of the work on reproducible builds for openSUSE, Bernhard M. Wiedemann built packages 15 years in the future and discovered a widespread systematic error in how Perl's Time::Local functions are used.

This affected a diverse set of software - including git and our own strip-nondeterminism (via Archive::Zip).

grep was run on 16,896 tarballs in openSUSE's devel:languages:perl project and 102 of them contained timegm or timelocal calls. Of those, over 30 were problematic and some more still need to be analyzed.
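
To make the failure mode concrete, here is a minimal illustrative sketch (not code from any affected package): Time::Local resolves two-digit years against a rolling 50-year window around the current date, so the very same call silently changes meaning as the clock advances.

use strict;
use warnings;
use Time::Local qw(timegm);

# Midnight UTC, Jan 1 of the two-digit year "70":
my $t = timegm(0, 0, 0, 1, 0, 70);
print scalar gmtime($t), "\n";
# Run in 2018, "70" resolves to 1970. Run 15 years in the future, as in
# the openSUSE experiment, the rolling window has moved past 1970 and the
# same call resolves to 2070 - so timestamps, and anything derived from
# them, no longer reproduce.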

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

60 package reviews have been added, 32 have been updated and 30 have been removed this week, adding to our knowledge about identified issues.

Two new toolchain issue types have been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (41)
  • Andreas Beckmann (1)
  • Boyuan Yang (1)


This week's edition was written by Bernhard M. Wiedemann, kpcyrd, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianNorbert Preining: CafeOBJ 1.5.7 released

Yesterday we released CafeOBJ 1.5.7 with lots of changes concerning the inductive theorem prover CITP, as well as fixes to make CafeOBJ work with current SBCL. The documentation has gained a few more documents (albeit in Japanese); please see the Documentation pages for the full list. The reference manual has been updated and is available as PDF, Html, or Wiki.


To quote from our README:

CafeOBJ is a new generation algebraic specification and programming language. As a direct successor of OBJ, it inherits all its features (flexible mix-fix syntax, powerful typing system with sub-types, and sophisticated module composition system featuring various kinds of imports, parameterised modules, views for instantiating the parameters, module expressions, etc.) but it also implements new paradigms such as rewriting logic and hidden algebra, as well as their combination.


Binary packages for Linux, MacOS, and Windows are already available, both in 32 and 64 bit and based on Allegro CL and SBCL (with some exceptions). All downloads can be found at the CafeOBJ download page. The source code can also be found on the download page, or directly from here: cafeobj-1.5.7.tar.gz.

The CafeOBJ Debian package is already updated.

The Macports file has also been updated; please see the above download/install page for details on how to add our sources to your MacPorts.

Bug reports

If you find a bug, have suggestions, or complaints, please open an issue at the Github issue page.

For other inquiries, please use

TEDFollow your dreams without fear: 4 questions with Zubaida Bai

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with women’s health advocate and TED Fellow Zubaida Bai about what inspires her work to improve the health and livelihoods of women worldwide.

TED: Tell us who you are.
Zubaida Bai: I am a women’s health advocate, a mother, a designer and innovator of health and livelihood solutions for underserved women and girls. I’ve traveled to the poorest communities in the world, listened compassionately to women and observed their challenges and indignities. As an entrepreneur and thought leader, I’m putting my passion into a movement that will address market failures, break taboos, and elevate the health of women and girls as a core topic in the world.

TED: What’s a bold move you’ve made in your career?
ZB: The decision I made with my husband and co-founder to make our company a for-profit venture. We wanted to prove that the poor are not poor in mind, and that if you offer them a quality product that they need, and can afford, they will buy it. We also wanted to show that our business model — serving the bottom of the pyramid — was scalable. Being a social sustainable enterprise is tough, especially if you serve women and children. But relying on non-profit donations, especially for women’s health, comes with a price. And that price is often an endless cycle of fundraising that makes it hard to create jobs and economically lift up the very communities being served. We are proud that every woman in our facilities in Chennai receives healthcare in addition to her salary.

TED: Tell us about a woman who inspires you.
ZB: My mother. She worked very hard under social constraints in India that were not favorable towards women. She was always working side jobs and creating small enterprises to help keep our family going, and I learned a lot from her. She also pushed me and believed in me and always created opportunities for me that she was denied and didn’t have access to.

TED: If you could go back in time, what would you tell your 18-year-old self?
ZB: To believe in your true potential. To follow your dreams without fear, as success is believing in your dreams and having the courage to pursue them — not the end result.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

TEDYou are here for a reason: 4 questions with Halla Tómasdóttir

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with financier, entrepreneur and onetime candidate for president of Iceland, Halla Tómasdóttir, about what influences, inspires and drives her to be bold.

TED: Tell us who you are.
Halla Tómasdóttir: I think of myself first and foremost as a change catalyst who is passionate about good leadership and a gender-balanced world. My leadership career started in corporate America with Mars and Pepsi Cola, but since then I have served as an entrepreneur, educator, investor, board director, business leader and presidential candidate. I am married, a proud mother of two teenagers and a dog and am perhaps best described by the title given to me by the New Yorker: “A Living Emoji of Sincerity.”

TED: What’s a bold move you’ve made in your career?
HT: I left a high-profile position as the first female CEO of the Iceland Chamber of Commerce to become an entrepreneur with the vision to incorporate feminine values into finance. I felt the urge to show a different way in a sector that felt unsustainable to me, and I longed to work in line with my own values.

TED: Tell us about a woman who inspires you.
HT: The women of Iceland inspired me at an early age, when they showed incredible courage, solidarity and sisterhood, “took the day off” (went on strike) and literally brought the country to its knees — as nothing worked when women didn’t do any work. Five years later, Iceland was the first country in the world to democratically elect a woman as president. I was 11 years old at the time, and her leadership has inspired me ever since. Her clarity on what she cares about and her humble way of serving those causes is truly remarkable.

TED: If you could go back in time, what would you tell your 18-year-old self?
HT: I would say: Halla, just be you and know that you are enough. People will frequently tell you things like: “This is the way we do things around here.” Don’t ever take that as a valid answer if it doesn’t feel right to you. We are not here to continue to do more of the same if it doesn’t work or feel right anymore. We are here to grow, ourselves and our society. You are here for a reason: make your life and leadership matter.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

Worse Than Failure-0//

In software development, there are three kinds of problems: small, big and subtle. The small ones are usually fairly simple to track down: a misspelled label, a math error, and so on. The large ones usually take longer to find: a race condition that you just can't reproduce, an external system randomly feeding you garbage, and so forth.

Internet word cloud

The subtle problems are an entirely different beast. It can be as simple as somebody entering 4321 instead of 432l (432L), or similar mix-ups with 'i', 'l', '1', '0' and 'O'. It can be an interchanged comma and period. It can be something more complex, such as an unsupported third-party library that throws back errors for undefined conditions, but randomly provides so little information that it is useful to neither user nor developer.

Brujo B encountered such a beast back in 2003 in a sub-equatorial bank that had been especially fond of VB6. This bank had tried to implement standards. In particular, they wanted all of their error messages to appear consistently for their users. To this end, they put a great deal of time and effort into building a library to display error messages in a consistent format. Specifically:

  <Error Name> - <Error Number> / <Error Description> / <Error Source>

An example error message might be:

  File Not Found - 127 / File 'your.file' could not be found / FileImporter

Unfortunately, the designers of this routine could not compensate for all of the third party tools and libraries that did NOT set some/most/all of those variables. This led to interesting presentations of errors to both users and developers:

  - 34 / Network Connection Lost /
  Unauthorized - 401 //

Crystal Reports was particularly unhelpful, in that it refused to populate any field from which error details could be obtained, leading to the completely unhelpful:

  -0//

...which could only be interpreted as Something really bad happened, but we don't know what that is and you have no way to figure it out. It didn't matter what Brujo and his peers did; everything they tried in order to cajole Crystal Reports into giving context information failed to varying degrees. They could only patch specific instances of errors, and the Ever-Useless™ -0// error kept popping up to bite them in the arse.

After way too much time trying to slay the beast, they gave up, accepted it as one of their own and tried their best to find alternate ways of figuring out what the problems were.

Several years after moving on to saner pastures, Brujo returned to visit old friends. On the wall, they had added a cool painting with many words that "describe the company culture". Layered in were management-approved words, like "Trust" and "Loyalty". Some were more specific in-jokes, names of former employees, or references to big achievements the organization had made.

One of them was -0//

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

Don MartiWhat I don't get about Marketing

I want to try to figure out something I still don't understand about Marketing.

First, read this story by Sarah Vizard at Marketing Week: Why Google and Facebook should heed Unilever’s warnings.

All good points, right?

With the rise of fake news and revelations about how the Russians used social platforms to influence both the US election and EU referendum, the need for change is pressing, both for the platforms and for the advertisers that support them.

We know there's a brand equity crisis going on. Brand-unsafe placements are making mainstream brands increasingly indistinguishable from scams. So the story makes sense so far. But here's what I don't get.

For the call to action to work, Unilever really needs other brands to rally round but these have so far been few and far between.

Other brands? Why?

If brands are worth anything, they can at least help people tell one product apart from another.

Think Small VW ad

Saying that other brands need to participate in saving Unilever's brands from the three-ring shitshow of brand-unsafe advertising is like saying that Volkswagen really needs other brands to get into simple layouts and natural-sounding copy just because Volkswagen's agency did.

Not everybody has to make the same stuff and sell it the same way. Brands being different from each other is a good thing. (Right?)

generic food

Sometimes a problem on the Internet isn't a "let's all work together" kind of problem. Sometimes it's an opportunity for one brand to get out ahead of another.

What if every brand in a category kept on playing in the trash fire except one?

Planet Linux AustraliaLev Lafayette: Drupal "Access denied" Message

It happens rarely enough, but on occasion (such as after an upgrade to a database system (e.g., MySQL, MariaDB) or to the system version of a web-scripting language (e.g., PHP)), you can end up with your Drupal site failing to load, displaying only an error message similar to:

PDOException: SQLSTATE[HY000] [1044] Access denied for user 'username'@'localhost' to database 'database' in lock_may_be_available() (line 167 of /website/includes/
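
This error usually means the database server is refusing the credentials Drupal has stored in settings.php. As a first check - a sketch only, with 'username', 'localhost', 'database' and the password being placeholders to take from your settings.php - one can re-create the grant from a database administrative account:

-- Placeholders: substitute the user, host, database and password from
-- your site's settings.php. Run as a MySQL/MariaDB administrative user.
GRANT ALL PRIVILEGES ON `database`.* TO 'username'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;

If the grant looks right, the upgrade notes of the new database or PHP version are the next place to look.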


Planet DebianRenata D'Avila: Working with git branches was the best decision I made

This is a short story about how choosing to use git branches saved me from some trouble.

How did I decide to use a new branch?

Up until a certain point, I was just committing all the code (and notes) I wrote into the master branch, which is the default for Git. No big deal: if I broke something, I could just go back and revert one or more commits and it would be okay.

It got to the point, though, that I would have to send the code to people other than the mentors who had been watching me break things: I would have to submit it for other developers to test. While broken code was fine for my mentors, who could see it and help me figure out how to fix it, it wouldn't be useful for people trying it in production and giving me feedback and suggestions for improvement. I had to send them code that worked, and I had to do that while I kept working on the last piece of functionality needed, the recurrence rule for events.

After working on the recurrence rule for a few hours, I realized that, since it wasn't really functional yet, I couldn't simply commit it on top of the rest of the code that had already been committed and published. Sure, I could have commented out the function and committed the code that way, but it would be just too much trouble to have to comment/uncomment it every time I worked on that part.

That is how I chose to create a new git branch instead, and started committing the changes I had made there.

I created a new branch called "devel" and asked git to change to it:

git checkout -b devel

Then, I did a "git status" to check everything was as expected and the branch had been changed to devel:

git status
renata@debian:~/foss_events$ git status
On branch devel

Next: staging the files and creating the commit:

git add --all .
git commit -m "<describe the changes>"

Because the branch was created on my local machine, if I simply try to push the code upstream, it will give an error, because there is no "devel" branch on GitHub yet. So let's give some arguments to the git push command, asking it to set an upstream branch in the origin, which will receive the code:

git push --set-upstream origin devel

How did this save me from some trouble?

Simply because, before I sent the code to the moin-devel list, I decided to clean the old and repeated code out of the repository... by deleting those files. I wanted to do that so anyone who came to try it out would be able to spot the macro easily and not worry about whether the other files had to be installed anywhere for the code to work.

Once I had deleted those files using rm -r on the command line, and right before I committed, I did a "git status" to check whether the delete action had been recorded... and that was when I noticed that I was still on the devel branch, not on the master branch where I wanted this change to take place! My development branch should stay the mess it was, because it was stuff I used to try things out.

I had used "rm -r", though, so how did I get those files back to the devel branch? I mean, the easy way, not the downloading-the-repo-again way.

Simple! I would have to discard the changes (deletes) I had made on the devel branch and change to the master branch to do it again:


git checkout -f

This will throw away the changes. Then, it's possible to move back to the master branch:

git checkout master

And, yup, I'm totally writing this article for future reference to myself.



Planet DebianNorbert Preining: Debian updates: CafeOBJ, Calibre, Html5-parser, LaTeXML, TeX Live

The last few days have seen a somehow quite unusual frenzy of uploads to Debian from my side. Mostly due to the fact that while doing my tax declaration (btw, a huge pain here in Japan) I had some spare time and dedicated it to long-overdue package maintenance work, as well as some new requests.

So what has happened (in alphabetic order):

  • CafeOBJ has been out of Debian/testing due to build errors on some platforms for quite some time. We (upstream) have fixed these incompatibilities, which arose in minor version changes of SBCL; very disappointing. Anyway, we hope that the latest release of CafeOBJ will soon enter Debian/testing again.
  • Calibre gets the usual 2-3 weekly updates following upstream. Not much to report here – and in fact also not much work. There are a few items I still want to fix, in particular Rar support, but the maintainer of unrar, Martin Meredith, is completely unresponsive, although I have submitted patches to reinstate the shared library. That means that CBR support etc. is still missing from Debian’s Calibre.
  • Html5-parser is a support library for Calibre which saw an update that I have finally packaged. I haven’t had any complications with the previous version, though.
  • LaTeXML hasn’t been updated in nearly 3 years in Debian, despite the fact that a new upstream has been available for quite some time. I got contacted by upstream about this, and realized I had been in contact with the maintainer of LaTeXML back in 2015. He isn’t using or maintaining LaTeXML anymore, and kindly agreed that I take it over under the Debian TeX Maintainers umbrella. So I have updated the packaging for the new release.
  • TeX Live got the usual monthly update I reported about the other day.

I thought with all that done I could rest a bit and concentrate on my bread-job (software R&D engineer) or my sweets-job (Researcher in Mathematical Logic), but out of the blue an RC bug of the TeX Live packages just flew in. That will be another evening.


CryptogramE-Mail Leaves an Evidence Trail

If you're going to commit an illegal act, it's best not to discuss it in e-mail. It's also best to Google tech instructions rather than asking someone else to do it:

One new detail from the indictment, however, points to just how unsophisticated Manafort seems to have been. Here's the relevant passage from the indictment. I've bolded the most important bits:

Manafort and Gates made numerous false and fraudulent representations to secure the loans. For example, Manafort provided the bank with doctored [profit and loss statements] for [Davis Manafort Inc.] for both 2015 and 2016, overstating its income by millions of dollars. The doctored 2015 DMI P&L submitted to Lender D was the same false statement previously submitted to Lender C, which overstated DMI's income by more than $4 million. The doctored 2016 DMI P&L was inflated by Manafort by more than $3.5 million. To create the false 2016 P&L, on or about October 21, 2016, Manafort emailed Gates a .pdf version of the real 2016 DMI P&L, which showed a loss of more than $600,000. Gates converted that .pdf into a "Word" document so that it could be edited, which Gates sent back to Manafort. Manafort altered that "Word" document by adding more than $3.5 million in income. He then sent this falsified P&L to Gates and asked that the "Word" document be converted back to a .pdf, which Gates did and returned to Manafort. Manafort then sent the falsified 2016 DMI P&L .pdf to Lender D.

So here's the essence of what went wrong for Manafort and Gates, according to Mueller's investigation: Manafort allegedly wanted to falsify his company's income, but he couldn't figure out how to edit the PDF. He therefore had Gates turn it into a Microsoft Word document for him, which led the two to bounce the documents back-and-forth over email. As attorney and blogger Susan Simpson notes on Twitter, Manafort's inability to complete a basic task on his own seems to have effectively "created an incriminating paper trail."

If there's a lesson here, it's that the Internet constantly generates data about what people are doing on it, and that data is all potential evidence. The FBI is 100% wrong that they're going dark; it's really the golden age of surveillance, and the FBI's panic is really just its own lack of technical sophistication.

Krebs on SecurityUSPS Finally Starts Notifying You by Mail If Someone is Scanning Your Snail Mail Online

In October 2017, KrebsOnSecurity warned that ne’er-do-wells could take advantage of a relatively new service offered by the U.S. Postal Service that provides scanned images of all incoming mail before it is slated to arrive at its destination address. We advised that stalkers or scammers could abuse this service by signing up as anyone in the household, because the USPS wasn’t at that point set up to use its own unique communication system — the U.S. mail — to alert residents when someone had signed up to receive these scanned images.

Image: USPS

The USPS recently told this publication that beginning Feb. 16 it started alerting all households by mail whenever anyone signs up to receive these scanned notifications of mail delivered to that address. The notification program, dubbed “Informed Delivery,” includes a scan of the front of each envelope destined for a specific address each day.

The Postal Service says consumer feedback on its Informed Delivery service has been overwhelmingly positive, particularly among residents who travel regularly and wish to keep close tabs on any bills or other mail being delivered while they’re on the road. It has been available to select addresses in several states since 2014 under a targeted USPS pilot program, but it has since expanded to include many ZIP codes nationwide. U.S. residents can find out if their address is eligible by visiting the Informed Delivery page on the USPS site.

According to the USPS, some 8.1 million accounts have been created via the service so far (on Oct. 7, 2017, the last time I wrote about Informed Delivery, there were 6.3 million subscribers, so the program has grown more than 28 percent in five months).

Roy Betts, a spokesperson for the USPS’s communications team, says post offices handled 50,000 Informed Delivery notifications the week of Feb. 16, and are delivering an additional 100,000 letters to existing Informed Delivery addresses this coming week.

Currently, the USPS allows address changes via the USPS Web site or in-person at any one of more than 35,000 USPS retail locations nationwide. When a request is processed, the USPS sends a confirmation letter to both the old address and the new address.

If someone already signed up for Informed Delivery later posts a change of address request, the USPS does not automatically transfer the Informed Delivery service to the new address: Rather, it sends a mailer with a special code tied to the new address and to the username that requested the change. To resume Informed Delivery at the new address, that code needs to be entered online using the account that requested the address change.

A review of the methods used by the USPS to validate new account signups last fall suggested the service was wide open to abuse by a range of parties, mainly because of weak authentication and because it is not easy to opt out of the service.

Signing up requires an eligible resident to create a free user account on the USPS site, which asks for the resident’s name, address and an email address. The final step in validating residents involves answering four so-called “knowledge-based authentication” or KBA questions.

The USPS told me it uses two ID proofing vendors — Lexis Nexis and, naturally, recently breached big-three credit bureau Equifax — to ask the magic KBA questions, rotating between them randomly.

KrebsOnSecurity has assailed KBA as an unreliable authentication method because so many answers to the multiple-guess questions are available on sites like Spokeo and Zillow, or via social networking profiles.

It’s also nice when Equifax gives away a metric truckload of information about where you’ve worked, how much you made at each job, and what addresses you frequented when. See: How to Opt Out of Equifax Revealing Your Salary History for how much leaks from this lucrative division of Equifax.

All of the data points in an employee history profile from Equifax will come in handy for answering the KBA questions, or at least whittling away those that don’t match salary ranges or dates and locations of the target identity’s previous addresses.

Once signed up, a resident can view scanned images of the front of each piece of incoming mail in advance of its arrival. Unfortunately, anyone able to defeat those automated KBA questions from Equifax and Lexis Nexis — be they stalkers, jilted ex-partners or private investigators — can see who you’re communicating with via the Postal mail.

Maybe this is much ado about nothing: Maybe it’s just a reminder that people in the United States shouldn’t expect more than a post card’s privacy guarantee (which can leak the “who” and “when” of any correspondence, and sometimes the “what” and “why” of the communication). We’d certainly all be better off if more people kept that guarantee in mind for email in addition to snail mail. At least now the USPS will deliver to your address a piece of paper letting you know when someone signs up to look at those W’s in your snail mail online.

Planet DebianJo Shields: EOL notification – Debian 7, Ubuntu 12.04

Mono packages will no longer be built for these ancient distribution releases, starting from when we add Ubuntu 18.04 to the build matrix (likely early to mid April 2018).

Unless someone with a fat wallet screams, and throws a bunch of money at Azure, anyway.

Planet DebianAndrea Veri: Adding reCAPTCHA v2 support to Mailman

As a follow-up to the reCAPTCHA v1 post published back in 2014, here comes an updated version for migrating your Mailman instance off version 1 (being decommissioned on the 31st of March 2018) to version 2. The original python-recaptcha library was forked and made compatible with reCAPTCHA version 2.

The relevant changes against the original library can be summarized as follows:

  1. Added a ‘version=2’ parameter to the displayhtml and load_script functions
  2. Introduced the v2submit function (alongside submit, kept for backwards compatibility) to support reCAPTCHA v2
  3. The updated library is backwards compatible with version 1 to avoid unexpected code breakages for instances still running version 1
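
Put together, using the forked library looks roughly like this (a sketch only: the keys and the client IP are placeholders, and the actual Mailman wiring is in the diffs below):

# Placeholder keys and client IP; see the diffs below for the real hooks.
from recaptcha.client import captcha

# Render the v2 widget and its accompanying script tag:
widget = captcha.displayhtml('RECAPTCHA_PUBLIC_KEY', use_ssl=True, version=2)
script = captcha.load_script(version=2)

# Validate a submission (the g-recaptcha-response form field):
result = captcha.v2submit('g-recaptcha-response-value',
                          'RECAPTCHA_PRIVATE_KEY',
                          '203.0.113.1')
if not result.is_valid:
    print('Invalid captcha: %s' % result.error_code)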

The required changes are located on the following files:


---	2018-02-26 14:56:48.000000000 +0000
+++ /usr/lib/mailman/Mailman/Cgi/	2018-02-26 14:08:34.000000000 +0000
@@ -31,6 +31,7 @@
 from Mailman import i18n
 from Mailman.htmlformat import *
 from Mailman.Logging.Syslog import syslog
+from recaptcha.client import captcha
 # Set up i18n
 _ = i18n._
@@ -244,6 +245,10 @@
     replacements[''] = mlist.FormatFormStart('listinfo')
     replacements[''] = mlist.FormatBox('fullname', size=30)
+    # Captcha
+    replacements[''] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=True, version=2)
+    replacements[''] = captcha.load_script(version=2)
     # Do the expansion.
     doc.AddItem(mlist.ParseTags('listinfo.html', replacements, lang))
     print doc.Format()


---	2018-02-26 14:56:38.000000000 +0000
+++ /usr/lib/mailman/Mailman/Cgi/	2018-02-26 14:08:18.000000000 +0000
@@ -32,6 +32,7 @@
 from Mailman.UserDesc import UserDesc
 from Mailman.htmlformat import *
 from Mailman.Logging.Syslog import syslog
+from recaptcha.client import captcha
 SLASH = '/'
 ERRORSEP = '\n\n<p>'
@@ -165,6 +166,17 @@
     _('There was no hidden token in your submission or it was corrupted.'))
             results.append(_('You must GET the form before submitting it.'))
+    # recaptcha
+    captcha_response = captcha.v2submit(
+        cgidata.getvalue('g-recaptcha-response', ""),
+        mm_cfg.RECAPTCHA_PRIVATE_KEY,
+        remote,
+    )
+    if not captcha_response.is_valid:
+        results.append(_('Invalid captcha: %s' % captcha_response.error_code))
     # Was an attempt made to subscribe the list to itself?
     if email == mlist.GetListEmail():
         syslog('mischief', 'Attempt to self subscribe %s: %s', email, remote)


--- listinfo.html	2018-02-26 15:02:34.000000000 +0000
+++ /usr/lib/mailman/templates/en/listinfo.html	2018-02-26 14:18:52.000000000 +0000
@@ -3,7 +3,7 @@
     <TITLE><MM-List-Name> Info Page</TITLE>
+    <MM-Recaptcha-Script> 
   <BODY BGCOLOR="#ffffff">
@@ -116,6 +116,11 @@
+      <tr>
+        <td>Please fill out the following captcha</td>
+        <td><mm-recaptcha-javascript></TD>
+      </tr>
+      <tr>
 	<td colspan="3">

The updated RPMs are being rolled out to Fedora, EPEL 6 and EPEL 7. In the meantime you can find them here.

If Mailman complains about not being able to load recaptcha.client follow these steps:

cd /usr/lib/mailman/pythonlib
ln -s /usr/lib/python2.6/site-packages/recaptcha/client recaptcha

And then on {subscribe,listinfo}.py:

import recaptcha

Planet DebianMartín Ferrari: Report from SnowCamp #2

Snow! After a lovely car journey through the Alps yesterday, I had a good sleep and I am now in the airport waiting to fly back to Dublin.

I think most attendees will agree that the SnowCamp was a success; I was certainly sad to leave. It always feels too short!

After my first report, I spent a few hours fixing a long-standing bug in the KGB bot which caused it to take several minutes to sync channels and start emitting notifications. I also used the salsa merge requests feature for the first time! The next release of the bot will include this patch, and will take just a few seconds to be up and running.

I also worked on another RC bug opened on a package I had fixed only the day before, due to another test failure (#891356: golang-google-cloud FTBFS); I finished that fix and uploaded it a few minutes ago.

Finally, I had some more talks which can't be reported upon, and then the Camp was over :-(


Cory DoctorowPodcast: The Man Who Sold the Moon, Part 05

Here’s part five of my reading (MP3) (part four, part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.


Worse Than FailureCodeSOD: Waiting for the Future

One of the more interesting things about human psychology is how bad we are at thinking about the negative consequences of our actions if those consequences are in the future. This is why the death penalty doesn’t deter crime, why we dump massive quantities of greenhouse gases into the atmosphere, why the Y2K bug happened in the first place, and why we’re going to do it again when every 32-bit Unix system explodes in 2038. If the negative consequence happens well after the action which caused it, humans ignore the obvious cause and effect and go on making problems that have to be fixed later.

Fran inherited a bit of technical debt. Specifically, there’s an auto-numbered field in the database. Due to their business requirements, when the field hits 999,999, it needs to wrap back around to 000,001. Many many years ago, the original developer “solved” that problem thus:

function getstan($callingMethod = null)
{
    $sequence = 1;

    // get insert id back
    $rs = db()->insert("sequence", array(
        'processor' => 'widgetsinc',
        'RID'       => $this->data->RID,
        'method'    => $callingMethod,
        'card'      => $this->data->cardNumber
    ), false, false);
    if ($rs) { // if query succeeded...
        $sequence = $rs;
        if ($sequence > 999999) {
            db()->q("delete from sequence where processor='widgetsinc'");
            db()->insert("sequence",
                array('processor' => 'widgetsinc', 'RID' => $this->data->RID, 'card' => $this->data->cardNumber),
                false, false);
            $sequence = 1;
        }
    }

    return (substr(str_pad($sequence, 6, "0", STR_PAD_LEFT), -6));
}

The sequence table uses an auto-numbered column. They insert a row into the table, which returns the generated ID used. If that ID is greater than 999,999, they… delete the old rows. They then insert a new row. Then they return “000001”.

Unfortunately, sequences don’t work this way in MySQL, or honestly any other database. They keep counting up unless you alter or otherwise reset the sequence. So, the counter keeps ticking up, and this method keeps deleting the old rows and returning “000001”. The original developer almost certainly never tested what this code does when the counter breaks 999,999, because that day was so far out into the future that they could put off the problem.
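
For the record, the wrap-around the original developer wanted requires an explicit counter reset on top of the delete. A minimal sketch, using the table from the snippet above:

-- Deleting the rows alone does not rewind the AUTO_INCREMENT counter...
DELETE FROM sequence WHERE processor = 'widgetsinc';
-- ...an explicit reset is also needed:
ALTER TABLE sequence AUTO_INCREMENT = 1;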

Speaking of putting off solving problems, Fran also adds:

For the past 2 years this function has been returning 000001 and is starting to screw up reports.

Broken for at least two years, but only now is it screwing up reports badly enough that anyone wants to do anything to fix it.

[Advertisement] Easily create complex server configurations and orchestrations using both the intuitive, drag-and-drop editor and the text/script editor.  Find out more and download today!

Planet DebianNorbert Preining: Disappointing visitors

I recently realized that one of my blog post on Writing Japanese in LaTeX gets a disproportionate number of visitors. It turned out that most of them are not really interested in LaTeX, but more in Latex …

This is a screenshot of one of the search engines that points to my blog. I still cannot grasp how I made it to the top of the list, though 😉 Maybe I should open a different business; the pay would definitely be better than what I get now.

Planet Linux AustraliaOpenSTEM: At Mercy of the Weather

It is the time of year when Australia often experiences extreme weather events. February is renowned as the hottest month and, in some parts of the country, also the wettest month. It often brings cyclones to our coasts and storms, which conversely enough, may trigger fires as lightening strikes the hot, dry bush. Aboriginal people […]


Planet DebianNorbert Preining: Debian/TeX Live 2017.20180225-1

To my big surprise, the big rework didn’t create any havoc at all, not one bug report regarding the change. That is good. OTOH, I took some time off due to various surprising (and sometimes disturbing) things that have happened in the last month, so the next release took a bit longer than expected.

I am giving here the list of new and updated packages extracted from my local tlmgr.log file, though I am having some doubts: the list seems in both cases a bit too long to me 😉 Anyway, it was a very busy month for TeX packages.

We are also moving at high speed to TeX Live 2018. There will be probably one more release of Debian packages before we switch to the 2018 branch, which aligns nicely with the release planning of TeX Live.


New packages

abnt, adigraph, algobox, algolrevived, aligned-overset, alkalami, amscls-doc, authorarchive, axodraw2, babel-azerbaijani, beamertheme-saintpetersburg, beilstein, bib2gls, biblatex-enc, biblatex-oxref, biochemistry-colors, blowup, bredzenie, bxcalc, bxjaprnind, bxorigcapt, bxtexlogo, cesenaexam, cheatsheet, childdoc, cje, cm-mf-extra-bold, cmsrb, coelacanth, collection-plaingeneric, combofont, context-handlecsv, crossreftools, ctan-o-mat, currency, dejavu-otf, dijkstra, draftfigure, ducksay, dviinfox, dynkin-diagrams, endofproofwd, eqnnumwarn, fancyhandout, fetchcls, fixjfm, fontawesome5, fontloader-luaotfload, forms16be, glossaries-finnish, gotoh, graphicxpsd, gridslides, hackthefootline, hagenberg-thesis, hecthese, hithesis, hlist, hyphen-belarusian, ifptex, ifxptex, invoice2, isopt, istgame, jfmutil, knowledge, komacv-rg, ku-template, labelschanged, ladder, latex-mr, latex-refsheet, lccaps, limecv, llncsconf, luapackageloader, lyluatex, maker, marginfit, mathfam256, mathfixs, mcexam, mensa-tex, modernposter, modular, mptrees, multilang, musicography, na-box, na-position, niceframe-type1, nicematrix, notestex, numnameru, octave, outlining, pdfprivacy, pdfreview, pixelart, plex, plex-otf, pm-isomath, poetry, polexpr, pst-antiprism, pst-calculate, pst-dart, pst-geometrictools, pst-poker, pst-rputover, pst-spinner, pst-vehicle, pxufont, rutitlepage, scientific-thesis-cover, scratch, scratchx, sectionbreak, sesstime, sexam, shobhika, short-math-guide, simpleinvoice, simplekv, spark-otf, spark-otf-fonts, spectralsequences, stealcaps, termcal-de, textualicomma, thaienum, thaispec, theatre, thesis-gwu, tikz-feynhand, tikz-karnaugh, tikz-ladder, tikz-layers, tikz-relay, tikz-sfc, tikzcodeblocks, tikzducks, timbreicmc, translator, typewriter, typoaid, uhhassignment, unitn-bimrep, univie-ling, upzhkinsoku, wallcalendar, witharrows, xechangebar, xii-lat, xltabular, xsim, xurl, zebra-goodies, zhlipsum.

Updated packages

ESIEEcv, FAQ-en, GS1, HA-prosper, IEEEconf, IEEEtran, MemoirChapStyles, academicons, achemso, acro, actuarialsymbol, adobemapping, afm2pl, aleph, algorithm2e, amiri, amscls, amsldoc-it, amsmath, amstex, amsthdoc-it, animate, aomart, apa6, appendixnumberbeamer, apxproof, arabi, arabluatex, arara, archaeologie, arsclassica, autosp, awesomebox, babel, babel-english, babel-french, babel-georgian, babel-hungarian, babel-latvian, babel-russian, babel-ukrainian, bangorcsthesis, bangorexam, baskervillef, bchart, beamer, beamerswitch, beebe, besjournals, beuron, bgteubner, biber, biblatex, biblatex-abnt, biblatex-anonymous, biblatex-apa, biblatex-archaeology, biblatex-arthistory-bonn, biblatex-bookinother, biblatex-cheatsheet, biblatex-chem, biblatex-chicago, biblatex-fiwi, biblatex-gb7714-2015, biblatex-gost, biblatex-iso690, biblatex-manuscripts-philology, biblatex-philosophy, biblatex-publist, biblatex-realauthor, biblatex-sbl, biblatex-shortfields, biblatex-source-division, biblatex-trad, biblatex-true-citepages-omit, bibleref, bibletext, bibtex, bibtexperllibs, bibtexu, bidi, bnumexpr, bookcover, bookhands, bpchem, br-lex, bxbase, bxjscls, bxnewfont, bxpapersize, bytefield, c90, callouts, calxxxx-yyyy, catechis, cbfonts-fd, ccicons, cdpbundl, cellspace, changebar, checkcites, chemfig, chemmacros, chemschemex, chet, chickenize, chktex, circuitikz, citeall, cjk-gs-integrate, cjkutils, classicthesis, cleveref, cm, cmexb, cmpj, cns, cochineal, collref, complexity, comprehensive, computational-complexity, context, context-filter, context-fullpage, context-letter, context-title, context-vim, contracard, cooking-units, correctmathalign, covington, cquthesis, crossrefware, cslatex, csplain, csquotes, css-colors, ctan-o-mat, ctanify, ctex, ctie, curves, cweb, cyrillic-bin, cyrplain, datatool, datetime2-bahasai, datetime2-german, datetime2-spanish, datetime2-ukrainian, dccpaper, dejavu-otf, delimset, detex, dnp, doclicense, docsurvey, dox, dozenal, drawmatrix, dtk, dtl, dvi2tty, dviasm, dvicopy, dvidvi, dviljk, dvipdfmx, dvipng, dvipos, dvips, e-french, easyformat, ebproof, eledmac, elements, elpres, elzcards, embrac, emisa, enotez, eplain, epstopdf, eqparbox, esami, etoc, etoolbox, europasscv, euxm, exam, expex, factura, fancyhdr, fancylabel, fbb, fei, feyn, fibeamer, fira, fithesis, fmtcount, fnspe, fontinst, fontname, fontools, fonts-tlwg, fontspec, fonttable, fontware, footnotehyper, forest, fvextra, genealogytree, genmisc, gfsdidot, glossaries, glossaries-extra, glyphlist, gost, graphbox, graphics, graphics-def, graphics-pln, gregoriotex, gsftopk, gtl, guide-to-latex, gustlib, gustprog, halloweenmath, handout, hepthesis, hobby, hvfloat, hvindex, hyperref, hyperxmp, hyph-utf8, hyphen-base, hyphen-churchslavonic, hyphen-german, hyphen-latin, ifluatex, ifplatform, ijsra, impatient-cn, inconsolata, ipaex, iscram, jadetex, japanese-otf, japanese-otf-uptex, jlreq, jmlr, jmn, jsclasses, kantlipsum, karnaugh-map, ketcindy, keyfloat, kluwer, koma-script, komacv, kpathsea, l3build, l3experimental, l3kernel, l3packages, lacheck, lambda, langsci, latex-bin, latex2e-help-texinfo, latex2e-help-texinfo-fr, latex2e-help-texinfo-spanish, latex2man, latex2nemeth, latexbug, latexconfig, latexdiff, latexindent, latexmk, lato, lcdftypetools, leadsheets, leipzig, lettre, libertine, libertinegc, libertinus, libertinust1math, limap, lion-msc, listofitems, lithuanian, lni, lollipop, lt3graph, lua-check-hyphen, lualatex-math, luamplib, luatex, luatexja, luatexko, luatodonotes, luaxml, lwarp, m-tx, macros2e, make4ht, 
makedtx, makeindex, mandi, manfnt-font, marginnote, markdown, math-into-latex-4, mathpunctspace, mathtools, mcf2graph, media9, metafont, metapost, mex, mfirstuc, mflua, mfnfss, mfware, mhchem, microtype, minted, mltex, morewrites, mpostinl, mptopdf, msu-thesis, multiexpand, musixtex, mwcls, mwe, nddiss, ndsu-thesis, newpx, newtx, newtxtt, nlctdoc, noto, novel, numspell, nwejm, oberdiek, ocgx2, omegaware, oplotsymbl, optidef, ot-tableau, otibet, overlays, overpic, pagecolor, patgen, pdflatexpicscale, pdfpages, pdftex, pdftools, pdfwin, pdfx, perfectcut, pgf, pgfgantt, pgfplots, phfqit, philokalia, phonenumbers, phonrule, pkgloader, placeat, platex, platex-tools, platexcheat, poemscol, polski, polynom, powerdot, prerex, presentations, preview, probsoln, program, ps2pk, pst-barcode, pst-cie, pst-circ, pst-dart, pst-eucl, pst-exa, pst-fit, pst-fractal, pst-func, pst-geo, pst-ghsb, pst-node, pst-ode, pst-ovl, pst-pdf, pst-pdgr, pst-plot, pst-pulley, pst-solarsystem, pst-solides3d, pst-tools, pst2pdf, pstool, pstools, pstricks, pstricks-add, psutils, ptex, ptex-base, ptex-fontmaps, ptex-fonts, ptex2pdf, pxbase, pxchfon, pxjahyper, pxrubrica, pythontex, qpxqtx, quran, ran_toks, randomlist, randomwalk, rec-thy, refenums, reledmac, repere, resphilosophica, revtex4, robustindex, roex, rubik, sasnrdisplay, screenplay-pkg, scsnowman, seetexk, siunitx, skak, skrapport, songs, spreadtab, srbook-mem, stage, struktex, svg, sympytexpackage, synctex, systeme, t1utils, tcolorbox, testidx, tetex, tex, tex-refs, tex4ebook, tex4ht, texconfig, texcount, texdef, texdirflatten, texdoc, texfot, texosquery, texshade, texsis, textgreek, texware, texworks, thalie, thesis-ekf, thuthesis, tie, tikz-kalender, tikz-timing, tikzmark, tikzpeople, tikzsymbols, tlcockpit, tlshell, tocloft, toptesi, tpic2pdftex, tqft, tracklang, translation-biblatex-de, translations, ttfutils, tudscr, tugboat, turabian-formatting, ucharclasses, udesoftec, ulthese, unfonts-core, unfonts-extra, unicode-data, unicode-math, uowthesistitlepage, updmap-map, uplatex, upmethodology, uptex-base, uptex-fonts, variablelm, varsfromjobname, velthuis, visualtikz, vlna, web, widetable, wordcount, xassoccnt, xcharter, xcjk2uni, xcntperchap, xdvi, xecjk, xepersian, xetex, xetexconfig, xetexko, xetexref, xgreek, xii, xindy, xint, xmltex, xmltexconfig, xpinyin, xsavebox, ycbook, yhmath, zhnumber, zxjatype.

Planet Linux AustraliaChris Samuel: Vale Dad

[I’ve been very quiet here for over a year for reasons that will become apparent in the next few days when I finish and publish a long post I’ve been working on for a while – difficult to write, hence the delay]

It’s 10 years ago today that my Dad died, and Alan and I lost the father who had meant so much to both of us. It’s odd realising that it’s over 1/5th of my life since he died, it doesn’t seem that long.

Vale dad, love you…


Planet DebianDaniel Lange: Debian Gitlab (salsa.debian.org) tricks

Debian is moving the git hosting from alioth.debian.org, an instance of FusionForge, to salsa.debian.org, which is a Gitlab instance.

There is some background reading available, including pointers to an import script to ease migration for people that move repositories. It's definitely worth hanging out in #alioth on OFTC, too, to learn more about salsa / gitlab, in case you have a persistent IRC connection.

As of now() salsa has 15,320 projects, 2,655 users in 298 groups.
Alioth has 29,590 git repositories (a repository is roughly equivalent to a project in Gitlab) and 30,498 users in 1,154 projects (a FusionForge project is roughly equivalent to a group in Gitlab).

So we currently have 50% of the git repositories migrated. One month after leaving beta. This is very impressive.
As Alioth has naturally accumulated some cruft, Alexander Wirt (formorer) estimates that 80% of the repositories in use have already been migrated.

So it's time to update your local .git/config URLs!

Mehdi Dogguy has written nice scripts to ease handling salsa / gitlab via the (extensive and very well documented) API. Among them is list_projects, which gets you a nice overview of the projects in a specific group. This is especially useful for the "Debian" group that contains the former collab-maint repositories, i.e. source code that can and shall be maintained by Debian Developers collectively.

Finding migrated repositories

Salsa can search quite quickly via the Web UI: https://salsa.debian.org/search?utf8=✓&search=htop

Salsa search screenshot

but finding the URL to clone the repository from is more clicks and ~4MB of data each time (yeah, the modern web), so

$ curl --silent "https://salsa.debian.org/api/v4/projects?search=htop" | jq .
[
  {
    "id": 9546,
    "description": "interactive processes viewer",
    "name": "htop",
    "name_with_namespace": "Debian / htop",
    "path": "htop",
    "path_with_namespace": "debian/htop",
    "created_at": "2018-02-05T12:44:35.017Z",
    "default_branch": "master",
    "tag_list": [],
    "ssh_url_to_repo": "git@salsa.debian.org:debian/htop.git",
    "http_url_to_repo": "https://salsa.debian.org/debian/htop.git",
    "web_url": "https://salsa.debian.org/debian/htop",
    "avatar_url": null,
    "star_count": 0,
    "forks_count": 0,
    "last_activity_at": "2018-02-17T18:23:05.550Z"
  }
]

is a bit nicer.
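
And if all you want is the clone URL, a jq filter gets it directly (the search can match more than one project, hence the select on the exact path):

$ curl --silent "https://salsa.debian.org/api/v4/projects?search=htop" \
    | jq -r '.[] | select(.path_with_namespace == "debian/htop") | .ssh_url_to_repo'
git@salsa.debian.org:debian/htop.git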

Please notice the git URL format is a bit odd: it's either git@salsa.debian.org:debian/htop.git (ssh) or https://salsa.debian.org/debian/htop.git (https).

Notice the ":" -> "/" after the hostname. Bit me once.

Finding repositories to update

At this time I found it useful to check which of the repositories I have cloned had not yet been updated in the local .git/config:

find ~/debconf ~/my_sources ~/shared -ipath '*.git/config' -exec grep -H 'url.*git\.debian' '{}' \;

Thanks to Jörg Jaspert (Ganneff) the Debconf repositories have all been moved to Salsa now.
Hint: Bug him for his scripts if you need to do complex moves.

Updating the URLs has been an hour's work on my side, and there is little you can do to speed that up if - as in the Debconf case - teams have used the opportunity to clean up and things are not as easy as using sed -i.
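
For the simple cases, though - a repository that kept its name and merely moved, say, from collab-maint into the salsa "debian" group - a bulk rewrite can still help. This is a hypothetical sketch: adapt the old-URL pattern and the group mapping to your checkouts, and verify the rewritten URLs before pushing anywhere:

# Hypothetical: rewrite old collab-maint URLs to the salsa "debian" group
# in all clones below ~/my_sources. Check the result before you push!
find ~/my_sources -ipath '*.git/config' -exec sed -i \
    's|git\.debian\.org[:/]git/collab-maint|salsa.debian.org/debian|g' '{}' \;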

But there is no reason to do this more than once, so for the laptops...

Speeding up migration on multiple devices

rsync -armuvz --existing --include="*/" --include=".git/config" --exclude="*" ~/debconf/ laptop:debconf/

will rsync the .git/config files that you changed to other systems where you keep partial copies.

On these, a simple git pull to get up to the remote HEAD, or a git_pull_all one-liner, will suffice.

Git short URL

Stefano Rivera (tumbleweed) shared this clever trick:

git config --global url."ssh://".insteadOf salsa:

This way you can git clone salsa:debian/htop.

Planet DebianEnrico Zini: Automatic deploy from gitlab/salsa CI

At SnowCamp I migrated Front Desk-related repositories to Salsa gitlab and worked on setting up Continuous Integration for the web applications I maintain in Debian.

The result is a reusable Django app that integrates with gitlab's webhooks

It is currently working for and I'll soon reuse it for and

The only setup needed on DSA side is to enable systemd linger on the deploy user.
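
Enabling linger is a one-liner (the user name is a placeholder); it allows that user's systemd user units to keep running without an open login session:

loginctl enable-linger deployuser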

The CI/deploy workflow is this:

  • gitlab runs tests in the CI
  • gitlab notifies pipeline status changes via a webhook
  • when a selected pipeline changes status to success, the application queues a deploy for that shasum by creating a shasum.deploy file in a queue directory
  • a systemd .path unit running as the deploy user triggers when the new file is created and runs deploy as the deploy user

And deploy does this:

  • git fetch
  • abort if the shasum of the head of the deploy branch does not match one of the .deploy files in the queue directory
  • abort if the head of the deploy branch is not signed by a gpg key present in a deploy keyring
  • abort if the head of the deploy branch is not a successor of the currently deployed commit
  • update the working copy
  • run a deploy script
  • remove all .deploy files seen when the script was called
  • send an email to the site admins with a log of the whole deploy process, whether it succeeded or it was aborted
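
To make the trigger mechanism concrete, here is a minimal sketch of the two systemd user units involved. The unit names and paths are hypothetical (the app's README is authoritative); the pattern is a .path unit that starts its same-named .service whenever a *.deploy file appears:

# ~/.config/systemd/user/deploy.path (hypothetical name and paths)
[Path]
# fire whenever a <shasum>.deploy file shows up in the queue directory
PathExistsGlob=%h/deploy-queue/*.deploy

[Install]
WantedBy=default.target

# ~/.config/systemd/user/deploy.service
[Service]
Type=oneshot
# runs as the deploy user; linger (see above) lets it run with no session open
ExecStart=%h/bin/deploy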

For more details, see the app's README.

I find it wonderful that we got to a stage where we can have this in Debian, and I am very grateful to all the work that has been done and is being done in setting up and maintaining Salsa.

Planet DebianJunichi Uekawa: My chrome extension became useful for me.

My chrome extension became useful for me. It's nice. The chrome.tabs API is a little weird: I want to open and manage tabs and then auto-process some things, but the executeScript interface is strange. Also, I don't know how I would detect page transitions without polling.

Planet DebianNicolas Dandrimont: Report from Debian SnowCamp: day 3

[Previously: day 1, day 2]

Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

As a starter, and on request from Valhalla, please enjoy an attempt at a group picture (unfortunately, missing a few people). Yes, the sun even showed itself for a few moments today!

One of the numerous SnowCamp group pictures

As for today’s activities… I’ve cheated a bit by doing stuff after sending yesterday’s report and before sleep: I reviewed some of Stefano’s dc18 pull requests; I also “fixed” (read: papered over) the debexpo uscan bug.

After keeping eyes closed for a few hours, the day was then spent tickling the python-gitlab module, packaged by Federico, in an attempt to resolve in a generic way.

The features I intend to implement are mostly inspired from jcowgill’s multimedia-cli:

  • per-team yaml configuration of “expected project state” (access level, hooks and other integrations, enablement of issues, merge requests, CI, …)
  • new repository creation (according to a team config or a personal config, e.g. for collab-maint-style repositories in the Debian group)
  • audit of project configurations
  • mass-configuration changes for projects

There could also be some use for bits of group management, e.g. to handle the access control of the DebConf group and its subgroups, although I hear Ganneff prefers shell scripts.

My personal end goal is to (finally) do the 3D printer team repository migration, but e.g. the Python team would like to update configuration of all repos to use the new KGB hook instead of irker, so some generic interest in the tool exists.

As the tool has a few dependencies (because I really have better things to do than reimplement another wrapper over the GitLab API) I’m not convinced devscripts is the right place for it to live… We’ll see when I have something that does more than print a list of projects to show!

In the meantime, I have the feeling Stefano has lined up a new batch of DebConf website pull requests for me, so I guess that’s what I’m eating for breakfast “tomorrow”… Stay tuned!

My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.


Rondam RamblingsDevin Nunes doesn't realize that he's part of the government

I was reading about the long anticipated release of the Democratic rebuttal to the famous Republican dossier memo.  I've been avoiding writing about this, or any aspect of the Russia investigation, because there is just so much insanity going on there and I didn't want to get sucked into that tar pit.  But I could not let this slide: [O]n Saturday, committee chairman Devin Nunes (R-Calif.)

Planet DebianJohn Goerzen: Remembering Tom Wallis, The System Administrator That Made The World Better

I never asked Tom why he hired me.

I was barely 17 at the time – already a Debian developer, full of more enthusiasm than experience, and Tom offered me a job. It was my first real sysadmin job, and to my delight, I got to work with Unix. For two years, I was the part-time assistant systems administrator for the Computer Science department at Wichita State University. And Tom was my boss, mentor, and old friend. Tom was walking proof that a system administrator can make the world a better place.

That amazing time was two decades ago now. And in the time since, every so often Tom and I would exchange emails. I enjoyed occasionally dropping by his office at work and surprising him.

So it was a shock to get an email this week that Tom had married for the first time at age 54, and passed away four days later due to a boating accident while on his honeymoon.

Tom was a man with a big laugh and an even bigger heart. When I started a Linux Users Group (LUG) on campus, there was Tom – helping to arrange a place to meet and Internet access when we needed it, and giving his evenings to simply be present as a supporter.

I had (and still have) a passion for Free/Open Source software. Linux was just new at the time, and was barely present in the department when I started. I was fortunate that CS was the “little dept. that could” back then, with wonderful people but not a lot of money, so a free operating system helped with a lot of problems. Tom supported me in my enthusiasm to introduce Debian all over the place. His trust meant much, and brought out the best in me.

I learned a lot from Tom, and more than just technology. A state university can be a heavily bureaucratic place at times. Tom was friends with every “can-do” person on campus, it seemed, and they all managed to pull through and get things done – sometimes working around policies that were a challenge.

I have sometimes wondered if I am doing enough, making a big enough difference in the world. Does a programmer really make a difference in people’s lives?

Tom Wallis is proof that the answer is yes. From the stories I heard at his funeral today, I can only guess how many other lives he touched.

This week, Tom gave me one final gift: a powerful reminder that sysadmins and developers can make the world a better place, can touch people’s lives. I hope Tom knew how much I appreciated him. If I find a way to make a difference in someone’s life — maybe an intern I’ve hired, or someone I take flying — then I will have found a way to pass on Tom’s gift to another, and I hope I can.


(This penguin was sitting out on the table of memorabilia from Tom today. I remember it from a shelf in his office.)

Planet DebianVincent Bernat: OPL2LPT: an AdLib sound card for the parallel port

The AdLib sound card was the first popular sound card for the IBM PC—prior to that, we were pampered by the sound of the PC speaker. Connected to an 8-bit ISA slot, it is powered by a Yamaha YM3812 chip, also known as OPL2. This chip can drive 9 sound channels whose characteristics can be fine-tuned through 244 write-only registers.

AdLib sound card

I had one but I am unable to locate it anymore. Models on eBay are quite rare and expensive. It is possible to build one yourself (either Sergey’s one or this faithful reproduction). However, you still need an ISA port. The limitless imagination of some hackers can still help here. For example, you can combine Sergey’s Xi 8088 processor board, with his ISA 8-bit backplane, his Super VGA card and his XT-CF-Lite card to get your very own modernish IBM PC compatible. Alternatively, you can look at the AdLib sound card on a parallel port from Raphaël Assénat.

The OPL2LPT sound card🔗

Recently, the 8-Bit Guy released a video about an AdLib sound card for the parallel port, the OPL2LPT. While current motherboards don’t have a parallel port anymore, it’s easy to add a PCI-Express one. So, I bought a pre-soldered OPL2LPT and a few days later, it was at my doorstep:

OPL2LPT sound card

UPDATED (2018.03): You can also buy a 3D-printed enclosure for the OPL2LPT. After writing this article, I have been offered one:

OPL2LPT enclosure

The expected mode of operation for such a device is to plug it into an ISA parallel port (accessible through I/O port 0x378), load a DOS driver to intercept calls to AdLib’s address and run some AdLib-compatible game. While correctly supported by Linux, the PCI-Express parallel port doesn’t operate like an ISA one. QEMU comes with a parallel port emulation but, due to timing issues, cannot correctly drive the OPL2LPT. However, VirtualBox emulation is good enough.1

On Linux, the OPL2LPT can be programmed almost like an actual AdLib. The following code writes a value to a register:

static void lpt_write(uint8_t data, uint8_t ctrl) {
  ieee1284_write_data(port, data);
  /* Toggle the nInit control line around the write to clock the byte in. */
  ieee1284_write_control(port, (ctrl | C1284_NINIT) ^ C1284_INVERTED);
  ieee1284_write_control(port,  ctrl                ^ C1284_INVERTED);
  ieee1284_write_control(port, (ctrl | C1284_NINIT) ^ C1284_INVERTED);
}

void opl_write(uint8_t reg, uint8_t value) {
  /* Select the target register, then give the OPL2 time to latch it. */
  lpt_write(reg, C1284_NSELECTIN | C1284_NSTROBE);
  usleep(4); // 3.3 microseconds

  /* Write the value; data writes need a longer settling delay on the OPL2. */
  lpt_write(value, C1284_NSELECTIN);
  usleep(23); // 23 microseconds
}

To “natively” use the OPL2LPT, I have modified the following applications:

  • ScummVM, an emulator for classic point-and-click adventure games, including many LucasArts games—patch
  • QEMU, a quick generic emulator—patch with a minimal emulation for timers and hard-coded sleep delays 🙄
  • DOSBox, an x86 emulator bundled with DOS—patch with a complete emulation for timers and a dedicated working thread2

You can compare the results in the following video, with the introduction of Indiana Jones and the Last Crusade, released in 1989:3

  • 0:00, DOSBox with an emulated PC speaker
  • 0:58, DOSBox with an emulated AdLib
  • 1:51, VirtualBox with the OPL2LPT (on an emulated parallel port)
  • 2:42, patched ScummVM with the OPL2LPT (native)
  • 3:33, patched QEMU with the OPL2LPT (native)
  • 4:24, patched DOSBox with the OPL2LPT (native)
  • 5:17, patched DOSBox with an improved OPL3 emulator (Nuked OPL3)
  • 6:10, ScummVM with the CD track (FM Towns version)

I’ll let you judge how good each option is! There are two ways to buy an OPL2LPT: in Europe, from Serdashop or in North America, from the 8-Bit Guy.


Indiana Jones and the Fate of Atlantis🔗

Here is another video featuring Indiana Jones and the Fate of Atlantis, released in 1992, running in DOSBox with the OPL2LPT. It’s the second game using the iMUSE sound system: music is synchronized with actions and transitions are done seamlessly. Quite a feat at the time!

Monkey Island 2🔗

The first game featuring iMuse is Monkey Island 2, released in 1991. The video below displays the first minutes of the game, running in DOSBox with the OPL2LPT.

Notably, at 5:33, when Guybrush is in Woodtick, a small town on Scabb Island, the music plays variations of a basic theme, with a different instrument for each building, without any interruption.

How the videos were recorded🔗

With a VGA adapter, many games use Mode 13h, a 256-color mode with a 320×200 resolution. On a 4:3 display, this mode doesn’t feature square pixels: they are stretched vertically by a factor of 1.2.

The above videos were recorded with FFmpeg (and edited with Blender). It packs a lot of useful filters making it easy to automate video capture. Here is an example:

FONT="font=Monkey Island 1991 refined:
ffmpeg -y \
 -thread_queue_size 64 \
 -f x11grab -draw_mouse 0 -r 30 -s 640x400 -i :0+844,102 \
 -thread_queue_size 64 \
 -f pulse -ac 1 -i default \
 -filter_complex "[0:v]pad=854:400:0:0,
      drawtext=${FONT}:y= 10:text=Indiana Jones 3,
      drawtext=${FONT}:y= 34:text=Intro,
      drawtext=${FONT}:y=148:text=PC speaker,
      [game][vis]overlay=x=640:y=280" \
 -pix_fmt yuv420p -c:v libx264 -qp 0 -preset ultrafast \

The interesting part is the filter_complex argument. The input video is padded from 640×400 to 854×400 as a first step to a 16:9 aspect ratio.4 Using The Secret Font of Monkey Island, some text is added to the right of the video. The result is then scaled to 854×480 to get the final aspect ratio while stretching pixels to the expected 1.2 factor. The video up to this point is sent to a stream named game. As a second step, from the input audio, we build two visualisations: a waveform and a spectrum. They are stacked vertically and the result is a stream named vis. The last step is to overlay the visualisation stream over the gameplay stream.

  1. There is no dialog to configure a parallel port. This needs to be done from the command-line after the instance creation:

    $ VBoxManage modifyvm "FreeDOS (games)" --lptmode1 /dev/parport0
    $ VBoxManage modifyvm "FreeDOS (games)" --lpt1 0x378 7


  2. With QEMU or DOSBox, it should be the responsibility of the executing game to respect the required delays for the OPL2 to process the received bytes. However, QEMU doesn’t seem to try to emulate I/O delays while DOSBox seems to not be precise enough. For the latter, to overcome this shortcoming, the OPL2LPT is managed from a dedicated thread receiving the writes and ensuring the required delays are met. ↩︎

  3. Indiana Jones and the Last Crusade was the first game I tried after plugging in the brand new AdLib sound card I compelled my parents to buy on a trip to Canada in 1992. At the time, no brick and mortar retailer sold this card in my French city and online purchases (through the Minitel) were limited to consumer goods (like a VHS player). Hard times. 😏 ↩︎

  4. A common method to extend a 4:3 video to a 16:9 aspect ratio without adding black bars is to add a blurred background using the same video as a source. I didn’t do this here but it is also possible with FFmpeg. ↩︎

Planet DebianDima Kogan: Vnlog integration with feedgnuplot

This is mostly a continuation of the last post, but it's so nice!

As feedgnuplot reads data, it interprets it into separate datasets with IDs that can be used to refer to these datasets. For instance you can pass feedgnuplot --autolegend to create a legend for each dataset, labelling each with its ID. Or you can set specific directives for one dataset but not another: feedgnuplot --style position 'with lines' --y2 temperature would plot the position data with lines, and the temperature data on the second y axis.

Let's say we were plotting a data stream

1 1
2 4
3 9
4 16
5 25

Without --domain this data would be interpreted like this:

  • without --dataid. This stream would be interpreted as two datasets: IDs 0 and 1. There are 5 points in each one
  • with --dataid. This stream would be interpreted as 5 different datasets with IDs 1, 2, 3, 4 and 5. Each of these datasets would contain one point

This is a silly example for --dataid, obviously. You'd instead have a dataset like

temperature 34 position 4
temperature 35 position 5
temperature 36 position 6
temperature 37 position 7

and this would mean two datasets: temperature and position. This is nicely flexible because it can be as sparse as you want: each row doesn't need to have one temperature and one position, although in many datasets you would have exactly this. Real datasets are often more complex:

1 temperature1 34 temperature2 35 position 4
2 temperature1 35 temperature2 36
3 temperature1 36 temperature2 33
4 temperature1 37 temperature2 32 position 7

Here the first column could be a domain of some sort, time for instance. And we have two different temperature sensors. And we don't always get a position report for whatever reason. This works fine, but is verbose, and usually the data is never stored in this way; I'd use awk to convert the data from its original form into this form for plotting. Now that vnlog is a thing, feedgnuplot has direct support for it, and this works as a third way to get dataset IDs: vnlog headers. I'd represent the above like this:

# time temperature1 temperature2 position
1 34 35 4
2 35 36 -
3 36 33 -
4 37 32 7

This would be the working representation; I'd log directly to this format, and work with this data even before plotting it. But I can now plot this directly:

$ < data.vnl 
  feedgnuplot --domain --vnlog --autolegend --with lines 
              --style position 'with points pt 7' --y2 position

I think the command above makes it clear what was intended. It looks like this:


The input data is now much more concise, I don't need a different format for plotting, and the amount of typing has been greatly reduced. And I can do the normal vnlog things. What if I want to plot only temperatures:

$ < data.vnl 
  vnl-filter -p time,temp |
  feedgnuplot --domain --vnlog --autolegend --with lines


Planet DebianMartín Ferrari: Report from SnowCamp #1

As Nicolas already reported, a bunch of Debian folk gathered in the North of Italy for a long weekend of work and socialisation.

Valhalla had the idea of taking the SunCamp concept and doing it in another location, and along with people from LIFO they made it happen. Thanks to all who worked on this!

I arrived late on Wednesday, after a very relaxed car journey from Lyon. Sadly, on Thursday I had to attend to some unexpected personal issues, and it was not very productive for Debian work. Luckily, between Friday and today, I managed to get back on track.

I uploaded new versions of Prometheus-related packages to stretch-backports, so they are in line with the current versions in testing:

  • prometheus-alertmanager, which also provides a fix for #891202: False owner/group for /var/lib/prometheus.
  • python-prometheus-client, carrying some useful updates for users.

I fixed two RC bugs in important Go packages, both caused by the recent upload of Golang 1.10:

  • #890927: golang-golang-x-tools: FTBFS and Debci failure with golang-1.10-go
  • #890938: golang-google-cloud FTBFS: FAIL: TestAgainstJSONEncodingNoTags

I also had useful chats about continuous testing of Go packages, and improvements to git-buildpackage to better support our workflow. I plan to try and write some code for it.

Finally, I had some long discussions about joining an important team in Debian, but I still can't report on that :-)


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main March 2018 Meeting: Unions - Hacking society's operating system


Tuesday, March 6, 2018

6:30 PM to 8:30 PM
Mail Exchange Hotel
688 Bourke St, Melbourne VIC 3000



Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet DebianNicolas Dandrimont: Report from Debian SnowCamp: day 2

[Previously: day 1]

Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

Today’s pièce de résistance was the long overdue upgrade of the machine hosting mentors.debian.net to (jessie, then) stretch. We’ve spent most of the afternoon doing the upgrades with Mattia.

The first upgrade to jessie was a bit tricky because we had to clean up a lot of cruft that accumulated over the years. I even managed to force an unexpected database restore test 😇. After a few code fixes, and getting annoyed at apache2.4 for ignoring VirtualHost configs that don’t end with .conf (and losing an hour of debugging time in the process…), we managed to restore the functionality of the website.

We then did the stretch upgrade, which was somewhat smooth sailing in comparison… We had to remove some functionality which depended on packages that didn’t make it to stretch: fedmsg, and the SOAP interface. We also noticed that the gpg2 transition completely broke the… “interesting” GPG handling of mentors… An install of gnupg1 later everything should be working as it was before.

We’ve also tried to tackle our current need for a patched FTP daemon. To do so, we’re switching the default upload queue directory from / to /pub/UploadQueue/. Mattia has submitted bugs for dput and dupload, and will upload an updated dput-ng to switch the default. Hopefully we can do the full transition by the next time we need to upgrade the machine.
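
Until those updates land, uploaders can point dput at the new queue directory by hand. A sketch of the relevant ~/.dput.cf stanza (the host details here are assumptions, not the final defaults):

[mentors]
fqdn = mentors.debian.net
incoming = /pub/UploadQueue/
method = ftp
login = anonymous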

Known bugs: the uscan plugin now fails to parse the uscan output… But at least it “supports” version=4 now 🙃

Of course, we’re still sorely lacking volunteers who would really care about mentors.debian.net; the codebase is a pile of hacks upon hacks upon hacks, all relying on an old version of a deprecated Python web framework. A few attempts have been made at a smooth transition to a more recent framework, without really panning out, mostly for lack of time on the part of the people running the service. I’m still convinced things should restart from scratch, but I don’t currently have the energy or time to drive it… Ugh.

More stuff will happen tomorrow, but probably not on mentors.debian.net. See you then!

My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.


Sam VargheseJoyce affair: incestuous relationship between pollies and journos needs some exposure

Barnaby Joyce has come (no pun intended) and Barnaby Joyce has gone, but one issue that is intimately connected with the circus that surrounded him for the last three weeks has yet to be subjected to any scrutiny.

And that is the highly incestuous relationship that exists between Australian journalists and politicians and often results in news being concealed from the public.

The Australian media examined the scandal around Deputy Prime Minister Joyce from many angles, ever since a picture of his pregnant mistress, Vikki Campion, appeared on the front page of The Daily Telegraph.

Various high-profile journalists tried to offer mea culpas to justify their non-reporting of the affair.

This is not the first time that journalists in Canberra have known about newsworthy stories connected to politicians and kept quiet.

In 2005, journalists Michael Brissenden, Tony Wright and Paul Daley were at a dinner with former treasurer Peter Costello at which he told them he had set next April (2006) as the absolute deadline “that is, mid-term,” for John Howard to stand aside; if not, he would challenge him.

Costello was said by Brissenden to have declared that a challenge “will happen then” if “Howard is still there”. “I’ll do it,” he said. He said he was “prepared to go to the backbench”. He said he’d “carp” at Howard’s leadership “from the backbench” and “destroy it” until he “won” the leadership.

But the three journalists kept mum about what would have been a big scoop, because Costello’s press secretary asked them not to write the yarn.

There was a great deal of speculation in the run-up to the 2007 election as to whether Howard would step down; one story in July 2006 said there had been an unspoken 1994 agreement between him and Costello to vacate the PM’s seat and make way for Costello to get the top job.

Had the three journalists at that 2005 dinner gone ahead and reported the story — as journalists are supposed to do — it is unlikely that Howard would have been able to carry on as he did. It would have forced Costello to challenge for the leadership or quit. In short, it would have changed the course of politics.

But Brissenden, Daley and Wright kept mum.

In the case of Joyce, it has been openly known since at least April 2017 that he was schtupping Campion. Indeed, the picture of Campion on the front page of the Telegraph indicates she was at least seven months pregnant — later it became known that the baby is due in April — which means Joyce must have been sleeping with her at least from June onwards.

The story was in the public interest, because Joyce and Campion are both paid from the public purse. When their affair became an issue, Joyce had her moved around to the offices of his National Party mates, Matt Canavan and Damian Drum, at salaries that went as high as $190,000. Joyce is also no ordinary politician – he is the deputy prime minister and thus acts as the head of the country whenever the prime minister is overseas. Thus anything that affects his functioning is of interest to the public as he can make decisions that affect them.

But journalists like Katharine Murphy of the Guardian and Jacqueline Maley of the Sydney Morning Herald kept mum. A female journalist who is not part of this clique, Sharri Markson, broke the story. She was roundly criticised by many who belong to the Murphy-Maley school of thinking.

Chris Uhlmann kept mum. So did Malcolm Farr and a host of others like Fran Bailey.

Both Murphy and Maley cited what they called “ethics” to justify keeping mum. But after the story broke, they leapt on it with claws extended. Another journalist, Julia Baird, tried to spin the story as one that showed how a woman in Joyce’s position would have been treated – much worse, was her opinion. She chose former prime minister Julia Gillard as her case study but did not offer the fact that Gillard was also a highly incompetent prime minister and that the flak she earned was also due to this aspect of her character.

Baird once was a columnist for Fairfax’s Weekend magazine and her profile pic in the publication at the time showed her in Sass & Bide jeans – the very business in which her husband was involved. Given that, when she moralises, one needs to take it with a kilo of salt.

But the central point is that, though she has a number of platforms to break a story, Baird never wrote a word about Joyce’s philandering. He promoted himself as a man who espoused family values by being photographed with his wife and four daughters repeatedly. He moralised more times than any other about the sanctity of marriage. Thus, he was fair game. Or so common sense would dictate.

Why do these journalists and many others keep quiet and try to stay in the good books of politicians? The answer is simple: though the jobs of journalists and public relations people are diametric opposites, journalists have no qualms about crossing the divide because the money in PR is much more.

Salaries are much higher if a journalist gets onto the PR team of a senior politician. And with jobs in journalism disappearing at a rate of knots year after year, journalists like Murphy, Maley and Baird hedge their bets in order to stay in politicians’ good books. Remember Mark Simkin, a competent news reporter at the ABC? He joined the staff of — hold your breath — Tony Abbott when the man was prime minister. Simkin is rarely seen in public these days.

Nobody calls journalists on this deception and fraud. It emboldens them to continue to pose as people who act in the public interest when in reality they are no different from the average worker. Yet they climb on pulpits week after week and pontificate to the masses.

It has been said that journalists are like prostitutes: first, they do it for the fun of it, then they do it for a few friends, and finally they end up doing it for money. You won’t find too many arguments from me about that characterisation.

CryptogramFriday Squid Blogging: The Symbiotic Relationship Between the Bobtail Squid and a Particular Microbe

This is the story of the Hawaiian bobtail squid and Vibrio fischeri.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesDigital Drag?

Screenshot used with permission

As I was scrolling through Facebook a few weeks ago, I noticed a new trend: Several friends posted pictures (via an app) of what they would look like as “the opposite sex.” Some of them were quite funny—my female-identified friends sported mustaches, while my male-identified friends revealed long flowing locks. But my sociologist-brain was curious: What makes this app so appealing? How does it decide what the “opposite sex” looks like? Assuming it grabs the users’ gender from their profiles, what would it do with users who listed their genders as non-binary, trans, or genderqueer? Would it assign them male or female? Would it crash? And, on a basic level, why are my friends partaking in this “game?”

Gender is deeply meaningful for our social world and for our identities—knowing someone’s gender gives us “cues” about how to categorize and connect with that person. Further, gender is an important way our social world is organized, for better or worse. Those who use the app engage with a part of their own identities and the world around them that is extremely significant and meaningful.

Gender is also performative. We “do” gender through the way we dress, talk, and take up space. In the same way, we read gender on people’s bodies and in how they interact with us. The app “changes people’s gender” by changing their gender performance; it alters their hair, face shape, eyes, and eyebrows. The app is thus an outlet to “play” with gender performance. In other words, it’s a way of doing digital drag. Drag is a term that is often used to refer to male-bodied people dressing in a feminine way (“drag queens”) or female-bodied people dressing in a masculine way (“drag kings”), but not all people who do drag fit this definition. Drag is ultimately about assuming and performing a gender. Drag is increasingly coming into the mainstream, as the popular reality TV series RuPaul’s Drag Race has been running for almost a decade now. As more people are exposed to the idea of playing with gender, we might see more of them trying it out in semi-public spaces like Facebook.

While playing with gender may be more common, it’s not all fun and games. The Facebook app in particular assumes a gender binary with clear distinctions between men and women, and this leaves many people out. While data on individuals outside of the gender binary is limited, a 2016 report from The Williams Institute estimated that 0.6% of the U.S. adult population — 1.4 million people — identify as transgender. Further, a Minnesota study of high schoolers found about 3% of the student population identify as transgender or gender nonconforming, and researchers in California estimate that 6% of adolescents are highly gender nonconforming and 20% are androgynous (equally masculine and feminine) in their gender performances.

The problem is that the stakes for challenging the gender binary are still quite high. Research shows people who do not fit neatly into the gender binary can face serious negative consequences, like discrimination and violence (including at least 28 killings of transgender individuals in 2017 and 4 already in 2018).  And transgender individuals who are perceived as gender nonconforming by others tend to face more discrimination and negative health outcomes.

So, let’s all play with gender. Gender is messy and weird and mucking it up can be super fun. Let’s make a digital drag app that lets us play with gender in whatever way we please. But if we stick within the binary of male/female or man/woman, there are real consequences for those who live outside of the gender binary.


Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.


Planet DebianBenjamin Mako Hill: “Stop Mang Fun of Me”

Somebody recently asked me if I am the star of quote #75514 (a snippet of online chat from a large collaboratively built collection):

<mako> my letter "eye" stopped worng
<luca> k, too?
<mako> yeah
<luca> sounds like a mountain dew spill
<mako> and comma
<mako> those three
<mako> ths s horrble
<luca> tme for a new eyboard
<luca> 've successfully taen my eyboard apart
       and fxed t by cleanng t wth alcohol
<mako> stop mang fun of me
<mako> ths s a laptop!!

It was me. A circuit on my laptop had just blown out my I, K, ,, and 8 keys. At the time I didn’t think it was very funny.

I had no idea anyone had saved a log, and I had forgotten about the experience until I saw the quote. I appreciate it now so I’m glad somebody did!

This was unrelated to the time that I poured water into two computers in front of 1,500 people and the time that I carefully placed my laptop into a full bucket of water.

Planet DebianGunnar Wolf: Material for my UNL course, «Security in application development», available on GitLab

I have left this blog to linger without much activity... My life has got quite busy. So, I'll try to put some life back here ☺

During the last trimester of last year, I was invited as a distance professor to teach «Security in application development» in the «TUSL (Technical University degree on Free Software)» short career taught by the online studies branch of Universidad Nacional del Litoral, based in Santa Fé, Argentina. The career is a three-year-long program that provides a facilitating, professional, terminal degree according to current Argentinian regulations (which demand that people providing professional services in informatics be "matriculated"). It is not a full Bachelors degree, as it does not allow graduated students to continue with a postgraduate; I have sometimes seen such programs offered as Associate degrees in some USA circles.

Anyway - I am most proud to say I already had a bit of experience giving traditional university courses, but this is my first time actually designing a course that's completely taken in writing; I have distance-taught once before, but it was completely video-based, with forums used mostly for student participation.

So, I wrote quite a bit of material for my course. And, not to brag, but I think I did it nicely. The material is completely in Spanish, but some of you might be interested in it. And the most natural venue to share it with is, of course, the TUSL group in GitLab.

The TUSL group is quite interesting; when I made my yearly pilgrimage to Argentina in December, we met and chatted, even had a small conference for students and interested people in the region. I hope to continue to be involved in their efforts.

Anyway, as for my material — Strange as it might seem, I wrote mostly using the Moodle editor. I have been translating my writings to a more flexible Markdown, but you will find parts of it are still just HTML dumps taken with wget (dumped because I don't want the course to be cleaned up and forgotten!). The repository is split between the reading materials I gave the students (links to external material and to material written by myself) and the activities, where I basically just mirrored/statified the interactions through the forums.

I hope this material is interesting to some of you. And, of course, feel free to fix my errors and send merge requests ☺

Planet Linux AustraliaTim Serong: Strange Bedfellows

The Tasmanian state election is coming up in a week’s time, and I’ve managed to do a reasonable job of ignoring the whole horrible thing, modulo the promoted tweets, the signs on the highway, the junk the major (and semi-major) political parties pay to dump in my letterbox, and occasional discussions with friends and neighbours.

Promoted tweets can be blocked. The signs on the highway can (possibly) be re-purposed for a subsequent election, or can be pulled down and used for minor windbreak/shelter works for animal enclosures. Discussions with friends and neighbours are always interesting, even if one doesn’t necessarily agree. I think the most irritating thing is the letterbox junk; at best it’ll eventually be recycled, at worst it becomes landfill or firestarters (and some of those things do make very satisfying firestarters).

Anyway, as I live somewhere in the wilds division of Franklin, I thought I’d better check to see who’s up for election here. There’s no independents running this time, so I’ve essentially got the choice of four parties; Shooters, Fishers and Farmers Tasmania, Tasmanian Greens, Tasmanian Labor and Tasmanian Liberals (the order here is the same as on the TEC web site; please don’t infer any preference based on the order in which I list parties in this blog post).

I feel like I should be setting party affiliations aside and voting for individuals, but of the sixteen candidates listed, to the best of my knowledge I’ve only actually met and spoken with two of them. Another I noticed at random in a cafe, and I was ignored by a fourth who was milling around with some cronies at a promotional stand out the front of Woolworths in Huonville a few weeks ago. So, party affiliations it is, which leads to an interesting thought experiment.

When you read those four party names above, what things came most immediately to mind? For me, it was something like this:

  • Shooters, Fishers & Farmers: Don’t take our guns. Fuck those bastard Greenies.
  • Tasmanian Greens: Protect the natural environment. Renewable energy. Try not to kill anything. Might collaborate with Labor. Liberals are big money and bad news.
  • Tasmanian Labor: Mellifluous babble concerning health, education, housing, jobs, pokies and something about workers rights. Might collaborate with the Greens. Vehemently opposed to the Liberals.
  • Tasmanian Liberals: Mellifluous babble concerning jobs, health, infrastructure, safety and the Tasmanian way of life, peppered with something about small business and family values. Vehemently opposed to Labor and the Greens.

And because everyone usually automatically thinks in terms of binaries (e.g. good vs. evil, wrong vs. right, one vs. zero), we tend to end up imagining something like this:

  • Shooters, Fishers & Farmers vs. Greens
  • Labor vs. Liberal
  • …um. Maybe Labor and the Greens might work together…
  • …but really, it’s going to be Labor or Liberal in power (possibly with some sort of crossbench or coalition support from minor parties, despite claims from both that it’ll be majority government all the way).

It turns out that thinking in binaries is remarkably unhelpful, unless you’re programming a computer (it’s zeroes and ones all the way down), or are lost in the wilderness (is this plant food or poison? is this animal predator or prey?) The rest of the time, things tend to be rather more colourful (or grey, depending on your perspective), which leads back to my thought experiment: what do these “naturally opposed” parties have in common?

According to their respective web sites, the Shooters, Fishers & Farmers and the Greens have many interests in common, including agriculture, biosecurity, environmental protection, tourism, sustainable land management, health, education, telecommunications and addressing homelessness. There are differences in the policy details of course (some really are diametrically opposed), but in broad strokes these two groups seem to care strongly about – and even agree on – many of the same things.

Similarly, Labor and Liberal are both keen to tell a story about putting the people of Tasmania first, about health, education, housing, jobs and infrastructure. Honestly, for me, they just kind of blend into one another; sure there’s differences in various policy details, but really if someone renamed them Labal and Liberor I wouldn’t notice. These two are the status quo, and despite fighting it out with each other repeatedly, are, essentially, resting on their laurels.

Here’s what I’d like to see: a minority Tasmanian state government formed from a coalition of the Tasmanian Greens plus the Shooters, Fishers & Farmers party, with the Labor and Liberal parties together in opposition. It’ll still be stuck in that irritating Westminster binary mode, but at least the damn thing will have been mixed up sufficiently that people might actually talk to each other rather than just fighting.

Planet DebianAndrew Shadura: How to stop gnome-settings-daemon messing with keyboard layouts

In case you, just like me, want to have a heavily customised keyboard layout configuration, possibly with different layouts on different input devices (I recommend inputplug to make that work), you probably don’t want your desktop environment to mess with your settings or, worse, re-set them to some default from time to time. Unfortunately, that’s exactly what gnome-settings-daemon does by default in GNOME and Unity. While I could modify inputplug to detect that and undo the changes immediately, it turned out this behaviour can be disabled with an underdocumented option:

gsettings set org.gnome.settings-daemon.plugins.keyboard active false
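
Setting the key back to true should restore the stock behaviour, and the current value can be checked at any time with gsettings get org.gnome.settings-daemon.plugins.keyboard active.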

Thanks to Sebastien Bacher for helping me with this two years ago.

Planet DebianJo Shields: Update on MonoDevelop Linux releases

Once upon a time, the Mono project had two package repositories – one for RPM files, one for Deb files. This, as it turned out, was untenable – just building on an old distribution was insufficient to offer “works on everything” packages, due to dependent library APIs not being necessarily forward-compatible. For example, openSUSE users could not install MonoDevelop, because the versions of libgcrypt, libssl, and libcurl on their systems were simply incompatible with those on CentOS 7. MonoDevelop packages were essentially abandoned as unmaintainable.

Then, nearly 2 years ago, a reprieve – a trend towards development of cross-distribution packaging systems made it viable to offer MonoDevelop in a form which did not care about openSUSE or CentOS or Ubuntu or Debian having incompatible libraries. A release was made using Flatpak (born xdg-app). And whilst this solved a host of distribution problems, it introduced new usability problems. Flatpak means sandboxing, and without explicit support for sandbox escape at the appropriate moment, users would be faced with a different experience than the one they expected (e.g. not being able to P/Invoke libraries in /usr/lib, as the sandbox’s /usr/lib is different).

In 2 years of on-off development (mostly off – I have a lot of responsibilities and this was low priority), I wasn’t able to add enough sandbox awareness to the core of MonoDevelop to make the experience inside the sandbox feel as natural as the experience outside it. The only community contribution to make the process easier was this pull request against DBus#, which helped me make a series of improvements, but not at a sufficient rate to make a “fully Sandbox-capable” version any time soon.

In the interim between giving up on MonoDevelop packages and now, I built infrastructure within our CI system for building and publishing packages targeting multiple distributions (not the multi-distribution packages of yesteryear). And so to today, when recent MonoDevelop .debs and .rpms are or will imminently be available in our Preview repositories. Yes it’s fully installed in /usr, no sandboxing. You can run it as root if that’s your deal.

MonoDevelop on CentOS 6

Where are the ARM builds?

Where are the ARM64 builds?

Why aren’t you offering builds for $DISTRIBUTION?

It’s already an inordinate amount of work to support the 10(!) distributions I already do. Especially when, due to an SSL state engine bug in all versions of Mono prior to 5.12, nuget restore in the MonoDevelop project fails about 40% of the time. With 12 (currently) builds running concurrently, the likelihood of a successful publication of a known-good release is about 0.2% (a 60% success rate per build: 0.6^12 ≈ 0.002). I’m on build attempt 34 since my last packaging fix, at time of writing.

Can this go into my distribution now?

Oh God no. make dist should generate tarballs which at least work now, but they’re very much not distribution-quality. See here.

What about Xamarin Studio/Visual Studio for Mac for Linux?

Probably dead, for now. Not that it ever existed, of course. *cough*. But if it did exist, a major point of concern for making something capital-S-Supportable (VS Enterprise is about six thousand dollars) is being able to offer a trustworthy, integration-tested product. There are hundreds of lines of patches applied to “the stack” in Mac releases of Visual Studio for Mac, Xamarin.Whatever, and Mono. Hundreds to Gtk+2 alone. How can we charge people money for a product which might glitch randomly because the version of Gtk+2 in the user’s distribution behaves weirdly in some circumstances? If we can’t control the stack, we can’t integration test, and if we can’t integration test, we can’t make a capital-P Product. The frustrating part of it all is that the usability issues of MonoDevelop in a sandbox don’t apply to the project types used by Xamarin Studio/VSfM developers. Android development end-to-end works fine. Better than Mac/Windows in some cases, in fact (e.g. virtualization on AMD processors). But because making Gtk#2 apps sucks in MonoDevelop, users aren’t interested. And without community buy-in on MonoDevelop, there’s just no scope for making MonoDevelop-plus-proprietary-bits.

Why does the web stuff not work?

WebkitGtk dropped support for Gtk+2 years ago. It worked in Flatpak MonoDevelop because we built an old WebkitGtk, for use by widgets.

Aren’t distributions talking about getting rid of Gtk+2?

Yes 😬

CryptogramElection Security

I joined a letter supporting the Secure Elections Act (S. 2261):

The Secure Elections Act strikes a careful balance between state and federal action to secure American voting systems. The measure authorizes appropriation of grants to the states to take important and time-sensitive actions, including:

  • Replacing insecure paperless voting systems with new equipment that will process a paper ballot;

  • Implementing post-election audits of paper ballots or records to verify electronic tallies;

  • Conducting "cyber hygiene" scans and "risk and vulnerability" assessments and supporting state efforts to remediate identified vulnerabilities.

The legislation would also create needed transparency and accountability in elections systems by establishing clear protocols for state and federal officials to communicate regarding security breaches and emerging threats.

Worse Than FailureError'd: Everybody's Invited!

"According to Outlook, it seems that I accidentally invited all of the EU and US citizens combined," writes Wouter.


"Just an array a month sounds like a pretty good deal to me! And I do happen to have some arrays to spare..." writes Rutger W.


Lucas wrote, "VMWare is on the cutting edge! They can support TWICE as much Windows 10 as their competitors!"


"I just wish it was CurrentMonthName so that I could take advantage of the savings!" Ken wrote.


Mark B. "I had no idea that Redboxes were so cultured."


"I'm a little uncomfortable about being connected to an undefined undefined," writes Joel B.



Krebs on SecurityChase ‘Glitch’ Exposed Customer Accounts

Multiple Chase.com customers have reported logging in to their bank accounts, only to be presented with another customer’s bank account details. Chase has acknowledged the incident, saying it was caused by an internal “glitch” Wednesday evening that did not involve any kind of hacking attempt or cyber attack.

Trish Wexler, director of communications for the retail side of JP Morgan Chase, said the incident happened Wednesday evening, for “a pretty limited number of customers” between 6:30 pm and 9 pm ET who “sporadically during that time while logged in to Chase.com could see someone else’s account details.”

“We know for sure the glitch was on our end, not from a malicious actor,” Wexler said, noting that Chase is still trying to determine how many customers may have been affected. “We’re going through Tweets from customers and making sure that if anyone is calling us with issues we’re working one on one with customers. If you see suspicious activity you should give us a call.”

Wexler urged customers to “practice good security hygiene” by regularly reviewing their account statements, and promptly reporting any discrepancies. She said Chase is still working to determine the precise cause of the mix-up, and that there have been no reports of JPMC commercial customers seeing the account information of other customers.

“This was all on our side,” Wexler said. “I don’t know what did happen yet but I know what didn’t happen. What happened last night was 100 percent not the result of anything malicious.”

The account mix-up was documented on Wednesday by Fly & Dine, an online publication that chronicles the airline food industry. Fly & Dine included screenshots of one of their writer’s spouses logged into the account of a fellow Chase customer with an Amazon and Chase card and a balance of more than $16,000.

Kenneth White, a security researcher and director of the Open Crypto Audit Project, said the reports he’s seen on Twitter and elsewhere suggested the screwup was somehow related to the bank’s mobile apps. He also said the Chase retail banking app offered an update first thing Thursday morning.

Chase says the oddity occurred for both Chase.com users and users of the Chase mobile app.

“We don’t have any evidence it was related to any update,” Wexler said.

“There’s only so many kind of logic errors where Kenn logs in and sees Brian’s account,” White said.  “It can be a devil to track down because every single time someone logs in it’s a roll of the dice — maybe they get something in the warmed up cache or they get a new hit. It’s tricky to debug, but this is like as bad as it gets in terms of screwup of the app.”

White said the incident is reminiscent of a similar glitch at online game giant Steam, which caused many customers to see account information for other Steam users for a few hours. He said he suspects the problem was a configuration error someplace within “caching servers,” which are designed to ease the load on a Web application by periodically storing some common graphical elements on the page — such as images, videos and GIFs.

“The images, the site banner, all that’s fine to be cached, but you never want to cache active content or raw data coming back,” White said. “If you’re CNN, you’re probably caching all the content on the homepage. But for a banking app that has access to live data, you never want that to be cached.”

“It’s fairly easy to fix once you identify the problem,” he added. “I can imagine just getting the basics of the core issue [for Chase] would be kind of tricky and might mean a lot of non techies calling your Tier 1 support people.”
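
The fix usually boils down to marking per-user responses as uncacheable at the application layer. Here is a minimal sketch of the idea in Python with Flask (purely illustrative; this is not Chase’s actual stack, and the endpoint and data are made up):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/account")
def account():
    resp = jsonify(balance="16000.00")  # hypothetical per-user data
    # Static assets may be cached by intermediaries; live account data must not be.
    resp.headers["Cache-Control"] = "no-store, private"
    return resp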

Update, 8:10 p.m. ET: Added comment from Chase about the incident affecting both mobile device and Web browser users.


Planet DebianNicolas Dandrimont: Report from Debian SnowCamp: day 1

Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

This morning, I arrived in Milan at “omfg way too early” (5:30AM, thanks to a 30 minute early (!) night train), and used the opportunity to walk the empty streets around the Duomo while the Milanese .oO(mapreri) were waking up. This gave me the opportunity to take very nice pictures of monuments without people, which is always appreciated!


After a short train ride to Laveno, we arrived at the Hostel at around 10:30. Some people had already arrived the day before, so there already was a hacking kind of mood in the air.  I’d post a panorama but apparently my phone generated a corrupt JPEG 🙄

After rearranging the tables in the common spaces to handle power distribution correctly (♥ Gaffer Tape), we could start hacking!

Today’s efforts were focused on the DebConf website: there were a bunch of pull requests made by Stefano that I reviewed and merged.

I’ve also written a modicum of code.

Finally, I have created the Debian 3D printing team on salsa in preparation for migrating our packages to git. But now it’s time to do the sleep thing. See you tomorrow?

My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.

Planet DebianJonathan Dowland: A Nice looking Blog

I stumbled across this rather nicely-formatted blog by Alex Beal and thought I'd share it. It's a particular kind of minimalist style that I like, because it puts the content first. It reminds me of Mark Pilgrim's old blog.

I can't remember which post in particular I came across first, but the one that I thought I would share was this remarkably detailed personal research project on tracking mood.

That would have been the end of it, but I then stumbled across this great review of "Type Driven Development with Idris", a book by Edwin Brady. I bought this book during the Christmas break but I haven't had much of a chance to deep dive into it yet.

Google AdsenseIntroducing AdSense Auto ads

Finding the time to create great content for your users is an essential part of growing your publishing business. Today we are introducing AdSense Auto ads, a powerful new way to place ads on your site. Auto ads use machine learning to make smart placement and monetization decisions on your behalf, saving you time. Place one piece of code just once to all of your pages, and let Google take care of the rest.
Some of the benefits of Auto ads include:
  • Optimization: Using machine learning, Auto ads show ads only when they are likely to perform well and provide a good user experience.
  • Revenue opportunities: Auto ads will identify any available ad space and place new ads there, potentially increasing your revenue.
  • Easy to use: With Auto ads you only need to place the ad code on your pages once. When you’re ready to use new features and ad formats, simply turn them on and off with the flick of a switch -- there’s no need to change the code again.

How do Auto ads work?

  1. Select the ad formats you want to show on your pages by switching them on with a simple toggle.
  2. Place the Auto ads code on your pages.

Auto ads will now start working for you by analyzing your pages, finding potential ad placements, and showing new ads when they’re likely to perform well and provide a good user experience.
And if you want to have different formats on different pages, you can use the new Advanced URL settings feature (e.g. you can choose to place In-feed ads on some of your pages but not on others).
Getting started with AdSense Auto ads
Auto ads can work equally well on new sites and on those already showing ads.
Have you manually placed ads on your page?
There’s no need to remove them if you don’t want to. Auto ads will take into account all existing Google ads on your pages.

Already using Anchor or Vignette ads?
Auto ads include Anchor and Vignette ads and many more additional formats such as Text and display, In-feed, and Matched content. Note that all users that used Page-level ads are automatically migrated over to Auto ads without any need to add code to their pages again.

To get started with AdSense Auto ads:
  1. Sign in to your AdSense account.
  2. In the left navigation panel, visit My ads and select Get Started.
  3. On the "Choose your global settings" page, select the ad formats that you'd like to show and click Save.
  4. On the next page, click Copy code.
  5. Paste the ad code between the <head> and </head> tags of each page where you want to show Auto ads.
  6. Auto ads will start to appear on your pages in about 10-20 minutes.

We'd love to hear what you think about Auto ads in the comments section below this post.

Posted by:
Tom Long, AdSense Engineering Manager
Violetta Kalathaki, AdSense Product Manager

Planet DebianRussell Coker: Dell PowerEdge T30

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment, it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM, that’s unusually small but it wouldn’t be the first time a vendor crippled a low end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM. It apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only has 4*SATA connectors on the motherboard. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature, it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002 so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low end tower servers from Dell didn’t have built in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225 which is slower than some systems people are throwing out nowadays) the system would be a decent gaming system.

It has lots of USB ports which is handy for a file server, I can attach lots of backup devices. Also most of the ports support “super speed”, I haven’t yet tested out USB devices that support such speeds but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, this is really great for a system on or under your desk, any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room. That increases the effective cost of the system by 20%. It has a PC serial port on the motherboard which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration has an option displayed for enabling charging devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for a non-laptop or have the BIOS detect at run-time that it’s not on laptop hardware and hide that.


The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors, this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system not run in any sort of server room. However it is cheaper than any rack mounted systems from Dell so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms, all it takes is a small business moving to a serviced office that has a proper server room and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door and you just can’t have the noise of a rack mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then instead of starting with a tower server and ending up with tower systems in racks a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

CryptogramHarassment By Package Delivery

People harassing women by delivering anonymous packages purchased from Amazon.

On the one hand, there is nothing new here. This could have happened decades ago, pre-Internet. But the Internet makes this easier, and the article points out that using prepaid gift cards makes this anonymous. I am curious how much these differences make a difference in kind, and what can be done about it.

Worse Than FailureCodeSOD: Functional IsFunction

Julio S recently had to attempt to graft a third-party document viewer onto an internal web app. The document viewer was from a company which specialized in enterprise “document solutions”, which can be purchased for enterprise-sized licensing fees.

Gluing the document viewer onto their internal app didn’t go terribly well. While debugging, and browsing through the vendor’s javascript, he saw a lot of calls to a function called IsFunction. It was loaded from a “utilities.js”-type do-everything library file. Curious, Julio pulled up the implementation.

function IsFunction ( func ) {
    var bChk=false;
    if (func != "undefined") bChk=true;
    else bChk=false;
    return bChk;
}

I cannot emphasize enough how beautiful this block of code is, by the standards of bad code. There’s so much there. One variable, bChk, uses Hungarian notation. Nothing else seems to. It’s a totally superfluous variable, as we could just do return func != "undefined".

Then again, why would we even do that? The real beauty, though, is how the name of the function and its implementation have no relationship to each other, and the implementation is utterly useless. For example:

IsFunction("Hello World"); //true
IsFunction({spam: "eggs"}); //true
IsFunction(function() {}); //true, but it was probably an accident
IsFunction(undefined); //true
IsFunction("undefined"); //false

Yes, the only time this function returns false is the specific case where you pass it the string “undefined”. Everything else IsFunction, apparently. The useless function sounds important. Someone wrote it, probably as a quick attempt at vaguely defensive programming. “I should make sure my inputs are valid”. They didn’t test it. They certainly didn’t think about it. But they wrote it. And then someone else saw the function in use, and said, “Oh… I should probably use that, too.” Somewhere, there’s probably a “Style Guide”, which mandates that, before attempting to invoke a variable that should contain a function, you use IsFunction to confirm it does. It comes up in code reviews, and code has been held from going into production because someone didn't use IsFunction.

And Julio probably is the first person to actually check the implementation since it was first written.
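For what it’s worth, the check the name promises is a one-liner in JavaScript. A minimal sketch (the corrected function name here is ours, not the vendor’s):

    // typeof evaluates to the string "function" only for callable values,
    // which is presumably the test IsFunction was meant to perform.
    function IsReallyFunction(func) {
        return typeof func === "function";
    }

    IsReallyFunction(function () {}); // true
    IsReallyFunction("Hello World");  // false
    IsReallyFunction(undefined);      // false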

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!


Planet DebianRenata D'Avila: How to use the EventCalendar ical


If you follow this blog, you should probably know by now that I have been working with my mentors to contribute to the MoinMoin EventCalendar macro, adding the possibility to export the events' data to an icalendar file.

A screenshot of the code, with the function definition for creating the ical file from events from the macro

The code (which can be found on this GitHub repository) isn't quite ready yet, because I'm still working to convert the recurrence rule to the icalendar format, but other than that, it should be working. Hopefully.

This guide assumes that you have the EventCalendar macro installed on the wiki and that the macro is called on a determined wikipage.

The icalendar file is now generated as an attachment the moment the macro is loaded. I created an "ical" link at the bottom of the calendar. When activated, this link prompts the download of the ical attachment of the page. Being an attachment, there is still the possibility to just view the ical file using the "attachment" menu if the user wishes to do so.

Wiki page showing the calendar, with the 'ical' link at the bottom

There are two ways of importing this calendar on Thunderbird. The first one is to download the file by clicking on the link and then proceeding to import it manually to Thunderbird.

Thunderbird screenshot, with the menus "Events and Tasks" and "Import" selected

The second option is to "Create a new calendar / On the network" and to use the URL address from the ical link as the "location", as it is shown below:

Thunderbird screenshot, showing the new calendar dialog and the ical URL pasted into the "location" textbox

As usual, it's possible to customize the name for the calendar, the color for the events and such...

Thunderbird screenshot, showing the new calendar with its events

I noticed a few wikis that use the EventCalendar, such as the Debian wiki itself and the FSFE wiki. The Python wiki also seems to be using MoinMoin and EventCalendar, but it appears that they use a Google service to export the event data to iCal.

If you read this and are willing to try the code in your wiki and give me feedback, I would really appreciate it. You can find the ways to contact me in my Debian Wiki profile.

Planet DebianJonathan McDowell: Getting Debian booting on a Lenovo Yoga 720

I recently got a new work laptop, a 13” Yoga 720. It proved difficult to install Debian on; pressing F12 would get a boot menu allowing me to select a USB stick I have EFI GRUB on, but after GRUB loaded the kernel and the initrd it would just sit there never outputting anything else that indicated the kernel was even starting. I found instructions about Ubuntu 17.10 which helped but weren’t the complete picture. What seems to be the situation is that the kernel won’t happily boot if “Legacy Support” is not enabled - enabling this (and still booting as EFI) results in a happier experience. However in order to be able to enable legacy boot you have to switch the SATA controller from RAID to AHCI, which can cause Windows to get unhappy about its boot device going away unless you warn it first.

  • Fire up an admin shell in Windows (right click on the start menu)
  • bcdedit /set safeboot minimal
  • Reboot into the BIOS
  • Change the SATA Controller mode from RAID to AHCI (there are dire warnings that “All data will be erased”. It’s not true, but you’ve backed up first, right?). Set “Boot Mode” to “Legacy Support”.
  • Save changes and let Windows boot to Safe Mode
  • Fire up an admin shell in Windows (right click on the start menu again)
  • bcdedit /deletevalue safeboot
  • Reboot again and Windows will load in normal mode with the AHCI drivers

Additionally I had problems getting the GRUB entry added to the BIOS; efibootmgr shows it fine but it never appears in the BIOS boot list. I ended up using Windows to add it as the primary boot option using the following (<guid> gets replaced with whatever the new “Debian” section guid is):

bcdedit /enum firmware
bcdedit /copy "{bootmgr}" /d "Debian"
bcdedit /set "{<guid>}" path \EFI\Debian\grubx64.efi
bcdedit /set "{fwbootmgr}" displayorder "{<guid>}" /addfirst

Even with that, at one point the BIOS managed to “forget” about the GRUB entry and required me to re-do the final “displayorder” command.
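For reference, the entry can be checked and re-created from the Linux side with efibootmgr; a sketch, assuming the EFI system partition is the first partition on an NVMe disk (adjust -d and -p to match your layout):

    # List the current EFI boot entries and the boot order
    efibootmgr -v
    # Re-create the Debian entry pointing at GRUB on the EFI system partition
    efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Debian" -l '\EFI\Debian\grubx64.efi'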

Once you actually have the thing installed and booting it seems fine - I’m running Buster because it’s a Skylake machine with lots of bits that seem to want a newer kernel, but claimed battery life is impressive, the screen is very shiny (though sometimes a little too shiny and reflective) and the NVMe SSD seems pretty nippy, as you’d expect.

TEDRemembering pastor Billy Graham, and more news in brief

Behold, your recap of TED-related news:

Remembering Billy Graham. For more than 60 years, pastor Billy Graham inspired countless people around the world with his sermons. On Wednesday, February 21, he passed away at his home in North Carolina after struggling with numerous illnesses over the past few years. He was 99 years old. Raised on a dairy farm in N.C., Graham used the power of new technologies, like radio and television, to spread his message of personal salvation to an estimated 215 million people globally, while simultaneously reflecting on technology’s limitations. Reciting the story of King David to audiences at TED1998, “David found that there were many problems that technology could not solve. There were many problems still left. And they’re still with us, and you haven’t solved them, and I haven’t heard anybody here speak to that,” he said, referring to human evil, suffering, and death. To Graham, the answer to these problems was to be found in God. Even after his death, through the work of the Billy Graham Evangelistic Association, led by his son Franklin, his message of personal salvation will live on. (Watch Graham’s TED Talk)

Fashion inspired by Black Panther. TED Fellow and fashion designer Walé Oyéjidé draws on aesthetics from around the globe to create one-of-a-kind pieces that dismantle bias and celebrate often-marginalized groups. For New York Fashion Week, Oyéjidé designed a suit with a coat and scarf for a Black Panther-inspired showcase, sponsored by Marvel Studios. One of Oyéjidé’s scarves is also worn in the movie by its protagonist, King T’Challa. “The film is very much about the joy of seeing cultures represented in roles that they are generally not seen in. There’s villainy and heroes, tech genius and romance,” Oyéjidé told the New York Times. “People of color are generally presented as a monolithic image. I’m hoping it smashes the door open to show that people can occupy all these spaces.” (Watch Oyéjidé’s TED Talk)

Nuclear energy advocate runs for governor. Environmentalist and nuclear energy advocate Michael Shellenberger has launched his campaign for governor of California as an independent candidate. “I think both parties are corrupt and broken. We need to start fresh with a fresh agenda,” he says. Shellenberger intends to run on an energy and environmental platform, and he hopes to involve student environmental activists in his campaign. California’s gubernatorial election will be held in November 2018. (Watch Shellenberger’s TED Talk)

Can UV light help us fight the flu? Radiation scientist David Brenner and his research team at Columbia University’s Irving Medical Center are exploring whether a type of ultraviolet light known as far-UVC could be used to kill the flu virus. To test their theory, they released a strain of the flu virus called H1N1 in an enclosed chamber and exposed it to low doses of UVC. In a paper published in Nature’s Scientific Reports, they report that far-UVC successfully deactivated the virus. Previous research has shown that far-UVC doesn’t penetrate the outer layer of human skin or eyes, unlike conventional UV rays, which means that it appears to be safe to use on humans. Brenner suggests that far-UVC could be used in public spaces to fight the flu. “Think about doctors’ waiting rooms, schools, airports and airplanes—any place where there’s a likelihood for airborne viruses,” Brenner told Time. (Watch Brenner’s TED Talk.)

A beautiful sculpture for Madrid. For the 400th anniversary of Madrid’s Plaza Mayor, artist Janet Echelman created a colorful, fibrous sculpture, which she suspended above the historic space. The sculpture, titled “1.78 Madrid,” aims to provoke contemplation of the interconnectedness of time and our spatial reality. The title refers to the number of microseconds by which a day on Earth was shortened as a result of the 2011 earthquake in Japan, which was so strong it caused the planet’s rotation to accelerate. At night, colorful lights are projected onto the sculpture, which makes it an even more dynamic, mesmerizing sight for the city’s residents. (Watch Echelman’s TED Talk)

A graduate program that doesn’t require a high school degree. Economist Esther Duflo’s new master’s program at MIT is upending how we think about graduate school admissions. Rather than requiring the usual test scores and recommendation letters, the program allows anyone to take five rigorous, online courses for free. Students only pay to take the final exam, the cost of which ranges from $100 to $1,000 depending on income. If they do well on the final exam, they can apply to MIT’s master’s program in data, economics and development policy. “Anybody could do that. At this point, you don’t need to have gone to college. For that matter, you don’t need to have gone to high school,” Duflo told WBUR. Already, more than 8,000 students have enrolled online. The program intends to raise significant aid to cover the cost of the master’s program and living in Cambridge, with the first class arriving in 2020. (Watch Duflo’s TED Talk)

Have a news item to share? Write us at and you may see it included in this weekly round-up.

Planet DebianMJ Ray: How hard can typing æ, ø and å be?

Petter Reinholdtsen: How hard can æ, ø and å be? comments on the rubbish state of till printers and their mishandling of foreign characters.

Last week, I was trying to type an email, on a tablet, in Dutch. The tablet was running something close to Android and I was using a Bluetooth keyboard, which seemed to be configured correctly for my location in England.

Dutch doesn’t even have many accents. I wanted an e acute (é). If you use the on screen keyboard, this is actually pretty easy, just press and hold e and slide to choose the accented one… but holding e on a Bluetooth keyboard? eeeeeeeeeee!

Some guides suggest Alt and e, then e. Apparently that works, but not on keyboards set to Great British… because, I guess, we don’t want any of that foreign muck since the Brexit vote, or something(!)

Even once you figure out that madness and switch the keyboard back to international, which also enables alt i, u, n and so on to do other accents, I still couldn’t find grave, check, breve or several other accents. I managed to send the emails in Dutch but I’d struggle with various other languages.

Have I missed a trick or what are the Android developers thinking? Why isn’t there a Compose key by default? Is there any way to get one?

CryptogramNew Spectre/Meltdown Variants

Researchers have discovered new variants of Spectre and Meltdown. The software mitigations for Spectre and Meltdown seem to block these variants, although the eventual CPU fixes will have to be expanded to account for these new attacks.

Worse Than FailureShiny Side Up


It feels as though disc-based media have always been with us, but the 1990s were when researchers first began harvesting these iridescent creatures from the wild in earnest, pressing data upon them to create the beast known as CD-ROM. Click-and-point adventure games, encyclopedias, choppy full-motion video ... in some cases, ambition far outweighed capability. Advances in technology made the media cheaper and more accessible, often for the worst. There are some US households that still burn America Online 7.0 CDs for fuel.

But we’re not here to delve into the late-90s CD marketing glut. We’re nestling comfortably into the mid-90s, when the Internet was too slow and unreliable for anyone to upload installers onto a customer portal and call it a day. Software had to go out on physical media, and it had to be as bug-free as possible before shipping.

Chris, a developer fresh out of college, worked on product catalog database applications that were mailed to customers on CDs. It was a small shop with no Tech Support department, so he and the other developers had to take turns fielding calls from customers having issues with the admittedly awful VB4 installer. It was supposed to launch automatically, but if the auto-play feature was disabled in Windows 95, or the customer canceled the installer pop-up without bothering to read it, Chris or one of his colleagues was likely to hear about it.

And then came the caller who had no clue what Chris meant when he suggested, "Why don't we open up the CD through the file system and launch the installer manually?"

These were the days before remote desktop tools, and the caller wasn't the savviest computer user. Talking him through minimizing his open programs, double-clicking on My Computer, and browsing into the CD drive took Chris over half an hour.

"There's nothing here," the caller said.

So close to the finish line, and yet so far. Chris stifled his exasperation. "What do you mean?"

"I opened the CD like you said, and it's completely empty."

This was new. Chris frowned. "You're definitely looking at the right drive? The one with the shiny little disc icon?"

"Yes, that's the one. It's empty."

Chris' frown deepened. "Then I guess you got a bad copy of the CD. I'm sorry about that! Let me copy down your name and address, and I'll get a new one sent out to you."

The customer provided his mailing address accordingly. Chris finished scribbling it onto a Post-it square. "OK, lemme read that back to—"

"The shiny side is supposed to be turned upwards, right?" the customer blurted. "Like a gramophone record?"

Chris froze, then slapped the mute button before his laughter spilled out over the line. After composing himself, he returned to the call as the model of professionalism. "Actually, it should be shiny-side down."

"Really? Huh. The little icon's lying, then."

"Yeah, I guess it is," Chris replied. "Unfortunately, that's on Microsoft to fix. Let's turn the disc over and try again."

[Advertisement] Incrementally adopt DevOps best practices with BuildMaster, ProGet and Otter, creating a robust, secure, scalable, and reliable DevOps toolchain.

Planet Linux AustraliaColin Charles: MariaDB Developer’s unconference & M|18

Been a while since I wrote anything MySQL/MariaDB related here, but there’s the column on the Percona blog, which has weekly updates.

Anyway, I’ll be at the developer’s unconference this weekend in NYC. Even managed to snag a session on the schedule, MySQL features missing in MariaDB Server (Sunday, 12.15–13.00). Signup on meetup?

Due to the prevalence of “VIP tickets”, I too signed up for M|18. If you need a discount code, I’ll happily offer them up to you to see if they still work (though I’m sure a quick Google will solve this problem for you). I’ll publish notes, probably in my weekly column.

If you’re in New York and want to say hi, talk shop, etc. don’t hesitate to drop me a line.

Planet DebianSam Hartman: Tools of Love

From my spiritual blog

I have been quiet lately. My life has been filled with gentle happiness, work, and less gentle wedding planning. How do you write about quiet happiness without sounding like the least contemplative aspects of Facebook? How do I share this part of the journey in a way that others can learn from? I was offering thanks the other day and was reminded of one of my early experiences at Fires of Venus. Someone was talking about how they were there working to do the spiritual work they needed in order to achieve their dream of opening a restaurant. I'll admit that when I thought of going to a multi-day retreat focused on spiritual connection to love, opening a restaurant had not been at the forefront of my mind. And yet, this was their dream, and surely dreams are the stuff of love. As they continued, they talked about finding self love deep enough to have the confidence to believe in dreams.

As I recalled this experience, I offered thanks for all the tools I've found to use as a lover. Every time I approach something with joy and awe, I gain new insight into the beauty of the world around us. In my work within the IETF I saw the beauty of the digital world we're working to create. Standing on sacred land, I can find the joy and love of nature and the moment.

I can share the joy I find and offer it to others. I’ve been mentoring someone at work. They’re at a point where they’re appreciating some of the great mysteries of computing like “Reflections on Trusting Trust” or two’s complement arithmetic. I’ve had the pleasure of watching their moments of discovery and also helping them understand the complex history in how we’ve built the digital world we have. Each moment of delight reinforces the idea that we live in a world where we expect to find this beauty and connect with it. Each experience reinforces the idea that we live in a world filled with things to love.

And so, I’ve turned even my experiences as a programmer into tools for teaching love and joy. I’ve been learning another new tool lately. I’ve been putting together the dance mix for my wedding. Between that and a project last year, I’ve learned a lot about music. I will never be a professional DJ or song producer. However, I have always found joy in music and dance, and I absolutely can be good enough to share that with my friends. I can be good enough to let music and rhythm be tools I use to tell stories and share joy. In learning skills and improving my ability with music, I better appreciate the music I hear.

The same is true with writing: both my work here and my fiction. I’m busy enough with other things that I am unlikely to even attempt writing as my livelihood. Even so, I have more tools for sharing the love I find and helping people find the love and joy in their world.

These are all just tools. Words and song won’t suddenly bring us all together any more than physical affection and our bodies. However, words, song, and the joy we find in each other and in the world we build can help us find connection and empathy. We can learn to see the love that is there between us. All these tools can help us be vulnerable and open together. And that—the changes we work within ourselves using these tools—can bring us to a path of love. And so how do I write about happiness? I give thanks for the things it allows me to explore. I find value in growing and trying new things. In my best moments, each seems a lens through which I can grow as a lover as I walk Venus’s path.


CryptogramFacebook Will Verify the Physical Location of Ad Buyers with Paper Postcards

It's not a great solution, but it's something:

The process of using postcards containing a specific code will be required for advertising that mentions a specific candidate running for a federal office, Katie Harbath, Facebook's global director of policy programs, said. The requirement will not apply to issue-based political ads, she said.

"If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States," Harbath said at a weekend conference of the National Association of Secretaries of State, where executives from Twitter Inc and Alphabet Inc's Google also spoke.

"It won't solve everything," Harbath said in a brief interview with Reuters following her remarks.

But sending codes through old-fashioned mail was the most effective method the tech company could come up with to prevent Russians and other bad actors from purchasing ads while posing as someone else, Harbath said.

It does mean a several-days delay between purchasing an ad and seeing it run.

Krebs on SecurityMoney Laundering Via Author Impersonation on Amazon?

Patrick Reames had no idea why Amazon sent him a 1099 form saying he’d made almost $24,000 selling books via Createspace, the company’s on-demand publishing arm. That is, until he searched the site for his name and discovered someone has been using it to peddle a $555 book that’s full of nothing but gibberish.

The phony $555 book sold more than 60 times on Amazon using Patrick Reames’ name and Social Security number.

Reames is a credited author on Amazon by way of several commodity industry books, although none of them made anywhere near the amount Amazon is reporting to the Internal Revenue Service. Nor does he have a personal account with Createspace.

But that didn’t stop someone from publishing a “novel” under his name. That word is in quotations because the publication appears to be little more than computer-generated text, almost like the gibberish one might find in a spam email.

“Based on what I could see from the ‘sneak peek’ function, the book was nothing more than a computer generated ‘story’ with no structure, chapters or paragraphs — only lines of text with a carriage return after each sentence,” Reames said in an interview with KrebsOnSecurity.

The impersonator priced the book at $555 and it was posted to multiple Amazon sites in different countries. The book — which has been removed from most Amazon country pages as of a few days ago — is titled “Lower Days Ahead,” and was published on Oct 7, 2017.

Reames said he suspects someone has been buying the book using stolen credit and/or debit cards, and pocketing the 60 percent that Amazon gives to authors. At $555 a pop, it would only take approximately 70 sales over three months to rack up the earnings that Amazon said he made.

“This book is very unlikely to ever sell on its own, much less sell enough copies in 12 weeks to generate that level of revenue,” Reames said. “As such, I assume it was used for money laundering, in addition to tax fraud/evasion by using my Social Security number. Amazon refuses to issue a corrected 1099 or provide me with any information I can use to determine where or how they were remitting the royalties.”

Reames said the books he has sold on Amazon under his name were done through his publisher, not directly via a personal account (the royalties for those books accrue to his former employer) so he’d never given Amazon his Social Security number. But the fraudster evidently had, and that was apparently enough to convince Amazon that the imposter was him.

Reames said after learning of the impersonation, he got curious enough to start looking for other examples of author oddities on Amazon’s Createspace platform.

“I have reviewed numerous Createspace titles and it’s clear to me that there may be hundreds if not thousands of similar fraudulent books on their site,” Reames said. “These books contain no real content, only dozens of pages of gibberish or computer generated text.”

For example, searching Amazon for the name Vyacheslav Grzhibovskiy turns up dozens of Kindle “books” that appear to be similar gibberish works — most of which have the words “quadrillion,” “trillion” or a similar word in their titles. Some retail for just one or two dollars, while others are inexplicably priced between $220 and $320.

Some of the “books” for sale on Amazon attributed to a Vyacheslav Grzhibovskiy.

“It’s not hard to imagine how these books could be used to launder money using stolen credit cards or facilitating transactions for illicit materials or funding of illegal activities,” Reames said. “I cannot believe Amazon is unaware of this and is unwilling to intercede to stop it. I also believe they are not properly vetting their new accounts to limit tax fraud via stolen identities.”

Reames said Amazon refuses to send him a corrected 1099, or to discuss anything about the identity thief.

“They say all they can do at this point is send me a letter acknowledging that I’m disputing ever having received the funds, because they said they couldn’t prove I didn’t receive the funds. So I told them, ‘If you’re saying you can’t say whether I did receive the funds, tell me where they went?’ And they said, ‘Oh, no, we can’t do that.’ So I can’t clear myself and they won’t clear me.”

Amazon said in a statement that the security of customer accounts is one of its highest priorities.

“We have policies and security measures in place to help protect them. Whenever we become aware of actions like the ones you describe, we take steps to stop them. If you’re concerned about your account, please contact Amazon customer service immediately using the help section on our website.”

Beware, however, if you plan to contact Amazon customer support via phone. Performing a simple online search for Amazon customer support phone numbers can turn up some dubious and outright fraudulent results.

Earlier this month, KrebsOnSecurity heard from a fraud investigator for a mid-sized bank who’d recently had several customers who got suckered into scams after searching for the customer support line for Amazon. She said most of these customers were seeking to cancel an Amazon Prime membership after the trial period ended and they were charged a $99 fee.

The fraud investigator said her customers ended up calling fake Amazon support numbers, which were answered by people with a foreign accent who proceeded to request all manner of personal data, including bank account and credit card information. In short order, the customers’ accounts were used to set up new Amazon accounts as well as accounts at a service that facilitates the purchase of virtual currencies like Bitcoin.

This Web site does a good job documenting the dozens of phony Amazon customer support numbers that are hoodwinking unsuspecting customers. Amazingly, many of these numbers seem to be heavily promoted using Amazon’s own online customer support discussion forums, in addition to third-party sites.

Interestingly, clicking on the Customer Help Forum link from the Amazon Support Options and Contact Us page currently sends visitors to the page pictured below, which displays a “Sorry, We Couldn’t Find That Page” error. Perhaps the company is simply cleaning things up after being notified last week by KrebsOnSecurity about the bogus phone numbers being promoted on the forum.

In any case, it appears some of these fake Amazon support numbers are being pimped by a number of dubious-looking e-books for sale on Amazon that are all about — you guessed it — how to contact Amazon customer support.

If you wish to contact Amazon by phone, the only numbers you should use are:

U.S. and Canada: 1-866-216-1072

International: 1-206-266-2992

Amazon’s main customer help page is here.

Update, 11:44 a.m. ET: Not sure when it happened exactly, but this notice says Amazon has closed its discussion boards.

Update, 4:02 p.m. ET: Amazon just shared the following statement, in addition to their statement released earlier urging people to visit a help page that didn’t exist (see above):

“Anyone who believes they’ve received an incorrect 1099 form or a 1099 form in error can contact and we will investigate.”

“This is the general Amazon help page:”

Update, 4:01 p.m. ET: Reader zboot has some good stuff. What makes Amazon a great cashout method for cybercrooks as opposed to, say, bitcoin cashouts, is that funds can be deposited directly into a bank account. He writes:

“It’s not that the darkweb is too slow, it’s that you still need to cash out at the end. Amazon lets you go from stolen funds directly to a bank account. If you’ve set it up with stolen credentials, that process may be faster than getting money out of a bitcoin exchange which tend to limit fiat withdraws to accounts created with the amount of information they managed to steal.”

Worse Than FailureCodeSOD: The Telltale Snippet

True! nervous, very, very dreadfully nervous I had been and am; but why will you say that I am mad? The disease had sharpened my senses, not destroyed, not dulled them. Above all was the sense of hearing acute. I heard all things in the heaven and in the earth. I heard many things in hell. How then am I mad? Hearken! and observe how healthily, how calmly I can tell you the whole story. - “The Telltale Heart” by Edgar Allan Poe

Today’s submitter is too afraid to say who they are, and so is credited only as Too Afraid To Say (TATS). Why? Because like a steady “thump thump” from beneath the floorboards, they are haunted by their crimes. The haunting continues to this very day.

It is impossible to say how the idea entered TATS’s brain, but as a fresh-faced junior developer, they set out to write a flexible web-control in JavaScript. What they wanted was to dynamically add items to the control. Each item was a set of fields: an ID, a tooltip, a description, etc.

Think about how you might pass a list of objects to a method.

    ObjectLookupField.prototype._AddItems = function _AddItems(objItems) {
        if (objItems && objItems.length > 0) {
            var objItemIDs = [];
            var objTooltips = [];
            var objImages = [];
            var objTypes = [];
            var objDeleted = [];
            var objDescriptions = [];
            var objParentTreeCodes = [];
            var objHasChilderen = [];
            var objPath = [];
            var objMarked = [];
            var objLocked = [];

            var blnSkip;

            for (var intI = 0; intI < objItems.length; intI++) {
                objImages.push((objItems[intI].TypeIconURL ? objItems[intI].TypeIconURL : objItems[intI].IconURL));
                objTooltips.push(objItems[intI].Tooltip ? objItems[intI].Tooltip : '');
                objMarked.push(objItems[intI].Marked ? 'Marked' : '');

                // SNIP, not really related
            }

            // TATS also implemented `addItems` which requires all these arrays
            window[this._strControlID].addItems([objItemIDs, objImages, objPath, objTooltips, objLocked, objMarked, objParentTreeCodes, objHasChilderen]);
        }
    }

TATS used the infamous “Arrject” pattern. Instead of having a list of objects, where each object has all of the fields it needs, the Arrject pattern has one array per field, and then we’ll hope that each index holds all the related data for a given item. For example:

    var arrNames = ["Joebob", "Sallybob", "Suebob"];
    var arrAddresses = ["123 Street St", "234 Road Rd", "345 Lane Ln"];
    var arrPhones = ["555-1234", "555-2345", "555-3456"];

The 0th index of every array contains everything you want to know about Joebob.
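For contrast, the conventional shape keeps each record together in one object, so the fields can never fall out of step between arrays. A minimal sketch using the same invented sample data:

    var people = [
        { name: "Joebob",   address: "123 Street St", phone: "555-1234" },
        { name: "Sallybob", address: "234 Road Rd",   phone: "555-2345" },
        { name: "Suebob",   address: "345 Lane Ln",   phone: "555-3456" }
    ];
    // Everything about Joebob travels together in people[0]:
    console.log(people[0].name + " lives at " + people[0].address);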

Most uses of the Arrject pattern end up in code that doesn’t use objects at all, but TATS adds their own little twist. They explode an object into a set of arrays, and then pass those arrays to their own method which creates the necessary DOM elements.

TATS smiled, for what did they have to fear? They bade the senior developers welcome: use my code. And they did.

Before long, this little bit of code propagated throughout their entire codebase; copied, pasted, dropped in, loaded as a JS dependency, hosted on a private CDN. It was everywhere. Time passed, and careers changed. TATS got promoted up to senior. Other seniors left and handed their code off to TATS. And that’s when the thumping beneath the floorboards became intolerable. That is why they are “Too Afraid to Say”. This little ghost, this reminder of their mistakes as a junior dev is always there, waiting beneath their feet, and it keeps. getting. louder.

“Villains!” I shrieked, “dissemble no more! I admit the deed!—tear up the planks!—here, here!—it is the beating of his hideous heart!”

[Advertisement] Onsite, remote, bare-metal or cloud – create, configure and orchestrate 1,000s of servers, all from the same dashboard while continually monitoring for drift and allowing for instantaneous remediation. Download Otter today!


CryptogramOn the Security of Walls

Interesting history of the security of walls:

Dún Aonghasa presents early evidence of the same principles of redundant security measures at work in 13th century castles, 17th century star-shaped artillery fortifications, and even "defense in depth" security architecture promoted today by the National Institute of Standards and Technology, the Nuclear Regulatory Commission, and countless other security organizations world-wide.

Security advances throughout the centuries have been mostly technical adjustments in response to evolving weaponry. Fortification -- the art and science of protecting a place by imposing a barrier between you and an enemy -- is as ancient as humanity. From the standpoint of theory, however, there is very little about modern network or airport security that could not be learned from a 17th century artillery manual. That should trouble us more than it does.

Fortification depends on walls as a demarcation between attacker and defender. The very first priority action listed in the 2017 National Security Strategy states: "We will secure our borders through the construction of a border wall, the use of multilayered defenses and advanced technology, the employment of additional personnel, and other measures." The National Security Strategy, as well as the executive order just preceding it, are just formal language to describe the recurrent and popular idea of a grand border wall as a central tool of strategic security. There's been a lot said about the costs of the wall. But, as the American finger hovers over the Hadrian's Wall 2.0 button, whether or not a wall will actually improve national security depends a lot on how walls work, but moreso, how they fail.

Lots more at the link.

Krebs on SecurityIRS Scam Leverages Hacked Tax Preparers, Client Bank Accounts

Identity thieves who specialize in tax refund fraud have been busy of late hacking online accounts at multiple tax preparation firms, using them to file phony refund requests. Once the Internal Revenue Service processes the return and deposits money into bank accounts of the hacked firms’ clients, the crooks contact those clients posing as a collection agency and demand that the money be “returned.”

In one version of the scam, criminals are pretending to be debt collection agency officials acting on behalf of the IRS. They’ll call taxpayers who’ve had fraudulent tax refunds deposited into their bank accounts, claim the refund was deposited in error, and threaten recipients with criminal charges if they fail to forward the money to the collection agency.

This is exactly what happened to a number of customers at a half dozen banks in Oklahoma earlier this month. Elaine Dodd, executive vice president of the fraud division at the Oklahoma Bankers Association, said many financial institutions in the Oklahoma City area had “a good number of customers” who had large sums deposited into their bank accounts at the same time.

Dodd said the bank customers received hefty deposits into their accounts from the U.S. Treasury, and shortly thereafter were contacted by phone by someone claiming to be a collections agent for a firm calling itself DebtCredit and using the Web site name debtcredit[dot]us.

“We’re having customers getting refunds they have not applied for,” Dodd said, noting that the transfers were traced back to a local tax preparer who’d apparently gotten phished or hacked. Those banks are now working with affected customers to close the accounts and open new ones, Dodd said. “If the crooks have breached a tax preparer and can send money to the client, they can sure enough pull money out of those accounts, too.”

Several of the Oklahoma bank’s clients received customized notices from a phony company claiming to be a collections agency hired by the IRS.

The domain debtcredit[dot]us hasn’t been active for some time, but an exact copy of the site to which the bank’s clients were referred by the phony collection agency can be found at jcdebt[dot]com — a domain that was registered less than a month ago. The site purports to be associated with a company in New Jersey called Debt & Credit Consulting Services, but according to a record (PDF) retrieved from the New Jersey Secretary of State’s office, that company’s business license was revoked in 2010.

“You may be puzzled by an erroneous payment from the Internal Revenue Service but in fact it is quite an ordinary situation,” reads the HTML page shared with people who received the fraudulent IRS refunds. It includes a video explaining the matter, and references a case number, the amount and date of the transaction, and provides a list of personal “data reported by the IRS,” including the recipient’s name, Social Security Number (SSN), address, bank name, bank routing number and account number.

All of these details no doubt are included to make the scheme look official; most recipients will never suspect that they received the bank transfer because their accounting firm got hacked.

The scammers even supposedly assign the recipients an individual “appointed debt collector,” complete with a picture of the employee, her name, telephone number and email address. However, the emails to the domain used in the email address from the screenshot above (debtcredit[dot]com) bounced, and no one answers at the provided telephone number.

Along with the Web page listing the recipient’s personal and bank account information, each recipient is given a “transaction error correction letter” with IRS letterhead (see image below) that includes many of the same personal and financial details on the HTML page. It also gives the recipient instructions on the account number, ACH routing and wire number to which the wayward funds are to be wired.

A phony letter from the IRS instructing recipients on how and where to wire the money that was deposited into their bank account as a result of a fraudulent tax refund request filed in their name.

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

On Feb. 2, 2018, the IRS issued a warning to tax preparers, urging them to step up their security in light of increased attacks. On Feb. 13, the IRS warned that phony refunds through hacked tax preparation accounts are a “quickly growing scam.”

“Thieves know it is more difficult to identify and halt fraudulent tax returns when they are using real client data such as income, dependents, credits and deductions,” the agency noted in the Feb. 2 alert. “Generally, criminals find alternative ways to get the fraudulent refunds delivered to themselves rather than the real taxpayers.”

The IRS says taxpayers who receive fraudulent transfers from the IRS should contact their financial institution, as the account may need to be closed (because the account details are clearly in the hands of cybercriminals). Taxpayers receiving erroneous refunds also should consider contacting their tax preparers immediately.

If you go to file your taxes electronically this year and the return is rejected, it may mean fraudsters have beat you to it. The IRS advises taxpayers in this situation to follow the steps outlined in the Taxpayer Guide to Identity Theft. Those unable to file electronically should mail a paper tax return along with Form 14039 (PDF) — the Identity Theft Affidavit — stating they were victims of a tax preparer data breach.

Worse Than FailureCousin of ITAPPMONROBOT

Logitech Quickcam Pro 4000

Every year, Initrode Global was faced with further and further budget shortages in their IT department. This wasn't because the company was doing poorly—on the contrary, the company overall was doing quite well, hitting record sales every quarter. The only way to spin that into a smaller budget was to dream bigger. Thus, every quarter, the budget demanded greater and greater increases in sales, and the exceptional growth was measured against the desired phenomenal growth and found wanting.

IT, being a cost center, was always hit by budget cuts the hardest. What did they need money for? The lights were still on, the mainframes still churning; any additional funds would only encourage them to take wild risks and break things.

One of the things people were worried about breaking were the thin clients. These had been purchased some years ago from Smyrt, who had been acquired the previous year by Hell Computers. There would be no tech support or patching, not from Hell. The IT department was on their own to ensure the clients kept running.

Unfortunately, the things seemed to have a will of their own—and that will did not include remaining up for weeks on end. Every once in a while, when booting Linux on the thin clients, the Thin Film Transistor screen would turn dark as soon as the X server started. They would remain dark after that; however, when the helpdesk SSH'd into the system, the screen would of course render perfectly on their end. So there was nothing to do to troubleshoot except lug a thin client to their work area and test workarounds from there.

The worst part of this kind of troubleshooting is when the problem is an intermittent one. The only way they could think to reproduce the problem was to spend hours in front of the client, turning it off and back on again. In the face of budget cuts, the already understaffed desk had no manpower to do something so trivial and dull.

Tedium is the mother of invention. Many of the most ingenious pieces of automation were put in place when an enterprising programmer was faced with performing a mind-numbing task over and over for the foreseeable future. Such is the case in this instance. Lacking the support staff to power cycle the machine over and over, the staff instead built a robot.

A webcam was found in the back room, dusty and abandoned, the last vestige of a proposed work-from-home solution that never quite came to fruition years before. A sticker of transparent rubber someone found in their desk was placed over the metal rim of the camera so it wouldn't leave any scratches on the glass of the TFT screen. The webcam was placed up close against one strategically chosen corner of the screen, and attached to a Raspberry Pi someone brought from home.

The Pi was programmed to run a bash script, which in turn called a CLI image-grabbing tool and then applied some ImageMagick filters to determine the brightness value of the patch of screen it could see. This brightness value was compared against a known list of brightnesses to determine which state the machine was in: the boot menu, the Linux kernel messages scrolling past, the colorful login screen, or the solid black screen representing the problem. When the Pi detected a login screen, it would run a scripted reboot on the thin client using SSH and a keypair. If, instead, the screen remained dark for a long period of time, it would send an IM through the company messaging solution to alert the staff that they could begin their testing, then exit.
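The script itself isn’t reproduced here, but the description maps onto only a few lines of shell. A minimal sketch, assuming fswebcam for the capture and ImageMagick’s convert for the measurement; the hostname, key path, thresholds, sleep times and messaging command are all invented for illustration:

    #!/bin/bash
    # Watch one corner of the thin client's screen through the webcam and
    # map its mean brightness onto the machine's boot state.
    dark=0
    while true; do
        fswebcam --no-banner /tmp/corner.jpg
        brightness=$(convert /tmp/corner.jpg -colorspace Gray \
            -format '%[fx:int(100*mean)]' info:)
        if [ "$brightness" -gt 60 ]; then
            # Colorful login screen reached: power cycle and keep trying
            dark=0
            ssh -i /home/pi/.ssh/testkey root@thinclient reboot
        elif [ "$brightness" -lt 5 ]; then
            # Solid black; only alert once it has stayed dark for a while
            # (send-company-im stands in for whatever IM gateway was used)
            dark=$((dark + 1))
            if [ "$dark" -gt 10 ]; then
                send-company-im "Thin client went dark, testing can begin"
                exit 0
            fi
        else
            dark=0
        fi
        sleep 30
    done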

We've seen machines with the ability to manipulate physical servers. Now, we have machines seeing and evaluating the world in front of them. How long before we reach peak Skynet potential here at TDWTF? And what would the robot revolution look like, with founding members such as these?

[Advertisement] Incrementally adopt DevOps best practices with BuildMaster, ProGet and Otter, creating a robust, secure, scalable, and reliable DevOps toolchain.


Don MartiThe tracker will always get through?

(I work for Mozilla. None of this is secret. None of this is Mozilla policy. Not speaking for Mozilla here.)

A big objection to tracking protection is the idea that the tracker will always get through. Some people suggest that as browsers give users more ability to control how their personal information gets leaked across sites, things won't get better for users, because third-party tracking will just keep up. On this view, today's easy-to-block third-party cookies will be replaced by techniques such as passive fingerprinting where it's hard to tell if the browser is succeeding at protecting the user or not, and users will be stuck in the same place they are now, or worse.

I doubt this is the case because we’re playing a more complex game than just trackers vs. users. The game has at least five sides, and some of the fastest-moving players with the best understanding of the game are the adfraud hackers. Right now adfraud is losing in some areas where it had been winning, and the resulting shift in adfraud is likely to shift the risks and rewards of tracking techniques.

Data center adfraud

Fraudbots, running in data centers, visit legit sites (with third-party ads and trackers) to pick up a realistic set of third-party cookies to make them look like high-value users. Then the bots visit dedicated fraudulent "cash out" sites (whose operators have the same third-party ads and trackers) to generate valuable ad impressions for those sites. If you wonder why so many sites made a big deal out of "pivot to video" but can't remember watching a video ad, this is why. Fraudbots are patient enough to get profiled as, say, a car buyer, and watch those big-money ads. And the money is good enough to motivate fraud hackers to make good bots, usually based on real browser code. When a fraudbot network gets caught and blocked from high-value ads, it gets recycled for lower and lower value forms of advertising. By the time you see traffic for sale on fraud boards, those bots are probably only getting past just enough third-party anti-fraud services to be worth running.

This version of adfraud has minimal impact on real users. Real users don’t go to fraud sites, and fraudbots do their thing in data centers (doesn’t everyone do their Christmas shopping while chilling out in the cold aisle at an Amazon AWS data center? Seems legit to me) and don’t touch users’ systems. The companies that pay for it are legit publishers, who not only have to serve pages to fraudbots—remember, a bot needs to visit enough legit sites to look like a real user—but also end up competing with adfraud for ad revenue. Adfraud has only really been a problem for legit publishers. The adtech business is fine with it, since they make more money from fraud than the fraud hackers do, and the advertisers are fine with it because fraud is priced in, so they pay the fraud-adjusted price even for real impressions.

What's new for adfraud

So what's changing? More fraudbots in data centers are getting caught, just because the adtech firms have mostly been shamed into filtering out the embarassingly obvious traffic from IP addresses that everyone can tell probably don't have a human user on them. So where is fraud going now? More fraud is likely to move to a place where a bot can look more realistic but probably not stay up as long—your computer or mobile device. Expect adfraud concealed within web pages, as a payload for malware, and of course in lots and lots of cheesy native mobile apps.The Google Play Store has an ongoing problem with adfraud, which is content marketing gold for Check Point Software, if you like "shitty app did WHAT?" stories. Adfraud makes way more money than cryptocurrency mining, using less CPU and battery.

So the bad news is that you’re going to have to reformat your uncle’s computer a lot this year, because more client-side fraud is coming. Data center IPs don’t get by the ad networks as well as they once did, so adfraud is getting personal. The good news is, hey, you know all that big, scary passive fingerprinting that’s supposed to become the harder-to-beat replacement for the third-party cookie? Client-side fraud has to beat it in order to get paid, so they’ll beat it. As a bonus, client-side bots are way better at attribution fraud (where a fraudulent ad gets credit for a real sale) than data center bots.

Users don't have to get protected from every possible tracking technique in order to shift the web advertising game from a hacking contest to a reputation contest. It often helps simply to shift the advertiser's ROI from negative-externality advertising below the ROI of positive-externality advertising.

Advertisers have two possible responses to adfraud: either try to out-hack it, or join the "flight to quality" and cut back on trying to follow big-money users to low-reputation sites in the first place. Hard-to-detect client-side bots, by making creepy fingerprinting techniques less trustworthy, tend to increase the uncertainty of the hacking option and make flight to quality relatively more attractive.


Planet Linux AustraliaPia Waugh: An optimistic future

This is my personal vision for an event called “Optimistic Futures” to explore what we could be aiming for and figure out the possible roles for government in future.

Technology is both an enabler and a disruptor in our lives. It has ushered in an age of surplus, with decentralised systems enabled by highly empowered global citizens, all creating increasing complexity. It is imperative that we transition into a more open, collaborative, resilient and digitally enabled society that can respond exponentially to exponential change whilst empowering all our people to thrive. We have the means now by which to overcome our greatest challenges including poverty, hunger, inequity and shifting job markets but we must be bold in collectively designing a better future, otherwise we may unintentionally reinvent past paradigms and inequities with shiny new things.

Technology is only as useful as it affects actual people, so my vision starts, perhaps surprisingly for some, with people. After all, if people suffer, the system suffers, so the well being of people is the first and foremost priority for any sustainable vision. But we also need to look at what all sectors and communities across society need and what part they can play:

  • People: I dream of a future where the uniqueness of local communities, cultures and individuals is amplified, where diversity is embraced as a strength, and where all people are empowered with the skills, capacity and confidence to thrive locally and internationally. A future where everyone shares in the benefits and opportunities of a modern, digital and surplus society/economy with resilience, and where everyone can meaningfully contribute to the future of work, local communities and the national/global good.
  • Public sectors: I dream of strong, independent, bold and highly accountable public sectors that lead, inform, collaborate, engage meaningfully and are effective enablers for society and the economy. A future where we invest as much time and effort on transformational digital public infrastructure and skills as we do on other public infrastructure like roads, health and traditional education, so that we can all build on top of government as a platform. Where everyone can have confidence in government as a stabilising force of integrity that provides a minimum quality of life upon which everyone can thrive.
  • The media: I dream of a highly effective fourth estate which is motivated systemically with resilient business models that incentivise behaviours to both serve the public and hold power to account, especially as “news” is also arguably becoming exponential. Actionable accountability that doesn’t rely on the linearity and personal incentives of individuals to respond will be critical with the changing pace of news and with more decisions being made by machines.
  • Private, academic and non-profit sectors: I dream of a future where all sectors can more freely innovate, share, adapt and succeed whilst contributing meaningfully to the public good and being accountable to the communities affected by decisions and actions. I also see a role for academic institutions in particular, given their systemic motivation for high veracity outcomes without being attached to one side, as playing a role in how national/government actions are measured, planned, tested and monitored over time.
  • Finally, I dream of a world where countries are not celebrated for being just “digital nations” but rather are engaged in a race to the top in using technology to improve the lives of all people and to establish truly collaborative democracies where people can meaningfully participate in shaping optimistic and inclusive futures.

Technology is a means, not an ends, so we need to use technology to both proactively invent the future we need (thank you Alan Kay) and to be resilient to change including emerging tech and trends.

Let me share a few specific optimistic predictions for 2070:

  • Automation will help us redesign our work expectations. We will have a 10-20 hour work week supported by machines, freeing up time for family, education, civic duties and innovation. People will have less pressure to simply survive and will have more capacity to thrive (this is a common theme, but something I see as critical).
  • 3D printing of synthetic foods and nanotechnology to deconstruct and reconstruct molecular materials will address hunger, access to medicine, clothes and goods, and community hubs (like libraries) will become even more important as distribution, education and social hubs, with drones and other aerial travel employed for those who can’t travel. Exoskeletons will replace scooters :)
  • With rocket travel normalised, and only an hour to get anywhere on the planet, nations will see competitive citizenships where countries focus on the best quality of life to attract and retain people, rather than largely just trying to attract and retain companies as we do today. We will also likely see the emergence of more powerful transnational communities that have nationhood status to represent the aspects of people’s lives that are not geopolitically bound.
  • The public service will have highly professional, empathetic and accountable multi-disciplinary experts working on responsive collaborative policy, digital legislation, societal modeling, and identifying necessary public digital infrastructure for investment, along with well controlled but openly available data, rules and transactional functions of government to enable dynamic and third-party services across myriad channels, provided to people based on their needs but under their control. We will also have a large number of citizens working 1 or 2 days a week in paid civic duties in areas where they have passion, skills or experience to contribute.
  • The Paralympics will become the main game, as it were, with no limits on human augmentation. We will do the 100m sprint with rockets, judo with cyborgs, rock climbing with tentacles. We will have access to medical capabilities to address any form of disease or discomfort, but we won’t use the technologies just to conform to a normative view of a human. People will be free to choose their form, and we will culturally value diversity and experimentation as critical attributes of a modern adaptable community.

I’ve only been living in New Zealand a short time but I’ve been delighted and inspired by what I’ve learned from Kiwi and Māori cultures, so I’d like to share a locally inspired analogy.

Technology is on one hand, just a waka (canoe), a vehicle for change. We all have a part to play in the journey and in deciding where we want to go. On the other hand, technology is also the winds, the storms, the thunder, and we have to continually work to understand and respond to emerging technologies and trends so we stay safely on course. It will take collaboration and working towards common goals if we are to chart a better future for all.

Don Marti: This is why we can't have nice brands.

What if I told you that there was an Internet ad technology that...

  • can reach the same user on mobile and desktop

  • uses open-standard persistent identifiers for users

  • can connect users to their purchase history

  • reaches the users that the advertiser chooses, at the time the advertiser chooses

  • and doesn't depend on the Google/Facebook duopoly?

Don't go looking for it on the Lumascape.

I'm describing email spam.

Every feature that adtech is bragging on, or working toward? Email spam had it in the 1990s.

So why didn't brand advertisers jump all over spam? Why did they mostly leave it to low-reputation brands and scammers?

To be honest, it probably wasn't a decision decision in most cases, just corporate sloth. But staying away from spam was the right answer. In the email inbox, spam from a high-reputation brand doesn't look any different from spam that any fly-by-night operation can send. All spammers can do the same stuff:

They can sell to people...for a fraction of what marketing used to cost. And they can collect data on these consumers, track what they buy, what they love and hate about the experience, and market to them directly much more effectively.

Oh, wait. That one isn't about spam in the 1990s. That's about targeted advertising on social media sites today. The CEO of digital advertising's biggest trade group says most big marketers are screwed unless they completely change their business models.

It's the direct consumer relationships, and the use of consumer data, that is completely game-changing for the marketing world. And most big marketers, such as Procter & Gamble and Unilever, are not ready for this new reality, the IAB says.

But of course they're ready. The difference is that those established brand advertisers aren't any more ready than some guy who watched a YouTube video series on "growth hacking" and is ready to start buying targeted ads and drop-shipping.

The "new reality," the targeted advertising business that the IAB wants brands to join them in, is a place where you win based not on how much the audience trusts you, but on how well you can out-hack the competition. And like any information space organized by hacking skill, it's a hellscape of deceptive crap. Read The Strange Brands in Your Instagram Feed by Alexis C. Madrigal.

Some Instagram retailers are legit brands with employees and products. Others are simply middlemen for Chinese goods, built in bedrooms, and launched with no capital or inventory. All of them have been pulled into existence by the power of Instagram and Facebook ads combined with a suite of e-commerce tools based around Shopify.

Of course, not every brand that buys a social media ad or other targeted ad is crap.

But a social media ad is useless for telling crap brands from non-crap ones. It doesn't carry economic signal. There's no such thing as a free watch. (PDF)

Rory Sutherland writes, in Reducing activities to their core misses the point,

Many billions of pounds of advertising expenditure have been shifted from conventional media, most notably newspapers, and moved into digital media in a quest for targeted efficiency. If advertising simply works by the conveyance of messages, this would be a sensible thing to do. However, it is beginning to become apparent that not all, perhaps not even most, advertising works this way. It seems that a large part of advertising creates trust and conviction in its audience precisely because it is perceived to be costly.

If anyone knows that any seller can watch a few YouTube videos and do a certain activity, does that activity really help the audience distinguish a high-reputation seller from a low-reputation one?

And how does it affect a legit brand when its ads show up on the same medium with all the crappy ones?

Twitter has a solution that keeps its ads saleable: just don't show any ads to important people. I'm surprised they can get away with this, but given the mix of rip-off and real brand ads I keep seeing there, it seems to be working.

Extremists and state-sponsored misinformation campaigns aren't "abusing" targeted advertising. They're just taking advantage of a system optimized for deception and using it normally.

Now, I don't want to blame targeted advertising for all of the problems of brand equity. When you put high-fructose corn syrup in your product, brand equity suffers. When you outsource or de-skill the customer support function, brand equity suffers. All the half-ass "looks good this quarter" stuff that established brands are doing is bad for brand equity. It just turns out that the kinds of advertising that you can do on the Internet today are all half-ass "looks good this quarter" stuff. If you want to send a credible economic signal, buy TV time or put a flagship store on some expensive real estate. The Internet's got nothing for you.

Failure to create signal-carrying ad units should be more of a concern for people who want to earn ad money on the Internet than it is. See Bob Hoffman's "refrigerator test." All that work that went into building the most complicated ad medium ever? It went into building an ad medium optimized for low-reputation advertisers. And that kind of ad medium tends to see rates go down over time. It doesn't hold value.

And the medium can't gain value until the users trust it, which means they have to trust the browser. In-browser tracking protection is going to have to enable the legit web advertising industry the same way that spam filters enable the legit email newsletter industry.

Here’s why the epidemic of malicious ads grew so much worse last year

Facebook and Google could lose $2B in ad revenue over ‘toxic content’

How I Cracked Facebook’s New Algorithm And Tortured My Friends

Wanted: Console Text Editor for Windows

Where Did All the Advertising Jobs Go?

Facebook patents tech to determine social class

The Mozilla Blog: A Perspective: Firefox Quantum’s Tracking Protection Gives Users The Right To Be Curious

Breaking up with Facebook: users confess they're spending less time

Survey: Facebook is the big tech company that people trust least

The Perils of Paid Content


Unilever pledges to cut ties with ‘platforms that create division’

Content recommendation services Outbrain and Taboola are no longer a guaranteed source of revenue for digital publishers

The House That Spied on Me

Why Facebook's Disclosure to the City of Seattle Doesn't Add Up

Debunking common blockchain-saving-advertising myths

SF tourist industry struggles to explain street misery to horrified visitors

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

How Facebook Helped Ruin Cambodia's Democracy

Planet Linux Australia: Donna Benjamin: Site building with Drupal

What even is "Site Building"?

At DrupalDownunder some years back, the wonderful Erica Bramham named her talk "All node, no code". Nodes were the fundamental building blocks in Drupal; they were like single drops of content. These days though, it's all about entities.

But hang on a minute, I'm using lots of buzz words, and worse, I'm using words that mean different things in different contexts. Jargon is one of the first hurdles you need to jump to understand the diverse worlds of the web. People who grow up multi-lingual learn that the meanings of words are somewhat arbitrary. They learn that the same thing has different names. This is true for the web too. So the first thing to know about Site Building is that it means different things to different people.

To me, it means being able to build a website without knowing how to code. I also believe it means I can build a website without having to set up my own development environment. I know people who vehemently disagree with me about this. But that's ok. This is my blog, and these are my rules.

So - this is a post about site building, using SimplyTest.Me and Drupal 8 out of the box.

1. Go to SimplyTest.Me

2. Type Drupal Core in the search field, and select "Drupal core" from the list

3. Choose the latest development branch, right at the bottom of the list.


For me, right now, that's 8.6.x, and here's a screenshot of what that looks like.

SimplyTest Me Screenshot, showing drop down fields described in the text.


4. Click "Launch sandbox".

Now wait.

In a few moments, you should see a fresh shiny Drupal 8 site, ready for you to explore.

For me today, it looks like this.  

Drupal 8.6.x front page screenshot


In the top right of the window, you should see a "Log in" link.

Click that, and enter admin/admin to log in.

You're now ready to practice some site building!

First, you'll need to create some content to play with.  Here's a short screencast that shows you how to log in, add an article, and change the title using Quick Edit.

A guide to what's next

Follow the Drupal User guide to start building your site!

If you want to start at the beginning, you'll get a great overview of Drupal, and some important info on how to plan your site. But if you want to roll up your sleeves and get building, you can skip the chapter on site installation and jump straight to chapter 4, and dive into basic site configuration.



You have 24 hours to experiment with the sandbox - after that it disappears.


Get in touch

If you want something more permanent, you might want to "try drupal" or contact us to discuss our Drupal services.

Planet Linux Australia: Linux Users of Victoria (LUV) Announce: LUV February 2018 Workshop: Installing an Open Source OS on your tablet or phone

Feb 24 2018 12:30
Feb 24 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Installing an Open Source OS on your tablet or phone

Andrew Pam will demonstrate how to install LineageOS, previously known as CyanogenMod and based on the Android Open Source Project, on tablets and phones.  Feel free to bring your own tablets and phones and have a go, but please ensure you back them up if there is anything you still need stored on them!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.



Cryptogram: Friday Squid Blogging: Squid Pin

There's a squid pin on Kickstarter.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Rondam Ramblings: Yes, code is data, but that's not what makes Lisp cool

There has been some debate on Hacker News lately about what makes Lisp cool, in particular about whether the secret sauce is homo-iconicity, or the idea that "code is data", or something else.  I've read through a fair amount of the discussion, and there is a lot of misinformation and bad pedagogy floating around.  Because this is a topic that is near and dear to my heart, I thought I'd take a

Cryptogram: New National Academies Report on Crypto Policy

The National Academies has just published "Decrypting the Encryption Debate: A Framework for Decision Makers." It looks really good, although I have not read it yet.

Not much news or analysis yet. Please post any links you find in the comments, and I will summarize them here.

Planet Linux Australia: OpenSTEM: Australia at the Olympics

The modern Olympic games were started by Frenchman Pierre de Coubertin to promote international understanding. The first games of the modern era were held in 1896 in Athens, Greece. Australia has competed in all the Olympic games of the modern era, although our participation in the first one was almost by chance. Of course, the […]

Worse Than Failure: Error'd: Preparing for the Future

George B. wrote, "Wait, so is it done...or not done?"


George B. (a different George, but in good company) is seeing nearly the same thing with Crash Plan Pro, where the backup is done ...maybe.


"I swear, that's the last time that I'm flying with Icarus Airlines" Allison V. writes.


"The best I can figure, someone wanted to see what the simulation app would do if executed in some far flung future where months don't matter and nothing makes any sense," writes M.C.


Joel C. wrote "I can't help it - Next time my train is late, I'm going to immediately think that it's because someone didn't click to dismiss a popup."


"I'm not sure what this means, but I guess it's to point out that there are website buttons, and then there are buttons on the website," Brian R. wrote.




Cory Doctorow: Do We Need a New Internet?

I was one of the interview subjects on an episode of BBC’s Tomorrow’s World called Do We Need a New Internet? (MP3); it’s a fascinating documentary, including some very thoughtful commentary from Edward Snowden.

Cory Doctorow: The 2018 Locus Poll is open: choose your favorite science fiction of 2017!

Following the publication of its editorial board’s long-list of the best science fiction of 2017, science fiction publishing trade-journal Locus now invites its readers to vote for their favorites in the annual Locus Award. I’m honored to have won this award in the past, and doubly honored to see my novel Walkaway on the short list, and in very excellent company indeed.

While you’re thinking about your Locus List picks, you might also use the list as an aide-memoire in picking your nominees for the Hugo Awards.

Krebs on Security: New EU Privacy Law May Weaken Security

Companies around the globe are scrambling to comply with new European privacy regulations that take effect a little more than three months from now. But many security experts are worried that the changes being ushered in by the rush to adhere to the law may make it more difficult to track down cybercriminals and less likely that organizations will be willing to share data about new online threats.

On May 25, 2018, the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires technology companies to get affirmative consent for any information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — is poised to propose changes to the rules governing how much personal information Web site name registrars can collect and who should have access to the data.

Specifically, ICANN has been seeking feedback on a range of proposals to redact information provided in WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. (Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free).

In a bid to help domain registrars comply with the GDPR, ICANN has floated several proposals, all of which would redact some of the registrant data from WHOIS records. Its mildest proposal would remove the registrant’s name, email, and phone number, while allowing self-certified third parties to request access to said data with the approval of a higher authority — such as the registrar used to register the domain name.

The most restrictive proposal would remove all registrant data from public WHOIS records, and would require legal due process (such as a subpoena or court order) to reveal any information supplied by the domain registrant.

ICANN’s various proposed models for redacting information in WHOIS domain name records.

The full text of ICANN’s latest proposed models (from which the screenshot above was taken) can be found here (PDF). A diverse ICANN working group made up of privacy activists, technologists, lawyers, trademark holders and security experts has been arguing about these details since 2016. For the curious and/or intrepid, the entire archive of those debates up to the current day is available at this link.


To drastically simplify the discussions into two sides, those in the privacy camp say WHOIS records are being routinely plundered and abused by all manner of ne’er-do-wells, including spammers, scammers, phishers and stalkers. In short, their view seems to be that the availability of registrant data in the WHOIS records causes more problems than it is designed to solve.

Meanwhile, security experts are arguing that the data in WHOIS records has been indispensable in tracking down and bringing to justice those who seek to perpetrate said scams, spams, phishes and….er….stalks.

Many privacy advocates seem to take a dim view of any ICANN system by which third parties (and not just law enforcement officials) might be vetted or accredited to look at a domain registrant’s name, address, phone number, email address, etc. This sentiment is captured in public comments made by the Electronic Frontier Foundation‘s Jeremy Malcolm, who argued that — even if such information were only limited to anti-abuse professionals — this also wouldn’t work.

“There would be nothing to stop malicious actors from identifying as anti-abuse professionals – neither would we want to have a system to ‘vet’ anti-abuse professionals, because that would be even more problematic,” Malcolm wrote in October 2017. “There is no added value in collecting personal information – after all, criminals are not going to provide correct information anyway, and if a domain has been compromised then the personal information of the original registrant isn’t going to help much, and its availability in the wild could cause significant harm to the registrant.”

Anti-abuse and security experts counter that there are endless examples of people involved in spam, phishing, malware attacks and other forms of cybercrime who include details in WHOIS records that are extremely useful for tracking down the perpetrators, disrupting their operations, or building reputation-based systems (such as anti-spam and anti-malware services) that seek to filter or block such activity.

Moreover, they point out that the overwhelming majority of phishing is performed with the help of compromised domains, and that the primary method for cleaning up those compromises is using WHOIS data to contact the victim and/or their hosting provider.

Many commentators observed that, in the end, ICANN is likely to proceed in a way that covers its own backside, and that of its primary constituency — domain registrars. Registrars pay a fee to ICANN for each domain a customer registers, although revenue from those fees has been falling of late, forcing ICANN to make significant budget cuts.

Some critics of the WHOIS privacy effort have voiced the opinion that registrars generally view public WHOIS data as a nuisance issue for their domain registrant customers and an unwelcome cost center, since short-staffed registrars must field a constant stream of abuse complaints from security experts, researchers and others in the anti-abuse community.

“Much of the registrar market is a race to the bottom, and the ability of ICANN to police the contractual relationships in that market effectively has not been well-demonstrated over time,” commenter Andrew Sullivan observed.

In any case, sources close to the debate tell KrebsOnSecurity that ICANN is poised to recommend a WHOIS model loosely based on Model 1 in the chart above.

Specifically, the system that ICANN is planning to recommend, according to sources, would ask registrars and registries to display just the domain name, city, state/province and country of the registrant in each record; the public email addresses would be replaced by a form or message relay link that allows users to contact the registrant. The source also said ICANN plans to leave it up to the registries/registrars to apply these changes globally or only to natural persons living in the European Economic Area (EEA).

In addition, sources say non-public WHOIS data would be accessible via a credentialing system to identify law enforcement agencies and intellectual property rights holders. However, it’s unlikely that such a system would be built and approved before the May 25, 2018 effectiveness date for the GDPR, so the rumor is that ICANN intends to propose a self-certification model in the meantime.

ICANN spokesman Brad White declined to confirm or deny any of the above, referring me instead to a blog post published Tuesday evening by ICANN CEO Göran Marby. That post does not, however, clarify which way ICANN may be leaning on the matter.

“Our conversations and work are on-going and not yet final,” White wrote in a statement shared with KrebsOnSecurity. “We are converging on a final interim model as we continue to engage, review and assess the input we receive from our stakeholders and Data Protection Authorities (DPAs).”

But with the GDPR compliance deadline looming, some registrars are moving forward with their own plans on WHOIS privacy. GoDaddy, one of the world’s largest domain registrars, recently began redacting most registrant data from WHOIS records for domains that are queried via third-party tools. And it seems likely that other registrars will follow GoDaddy’s lead.


For my part, I can say without hesitation that few resources are as critical to what I do here at KrebsOnSecurity as the data available in the public WHOIS records. WHOIS records are incredibly useful signposts for tracking cybercrime, and they frequently allow KrebsOnSecurity to break important stories about the connections between and identities behind various cybercriminal operations and the individuals/networks actively supporting or enabling those activities. I also very often rely on WHOIS records to locate contact information for potential sources or cybercrime victims who may not yet be aware of their victimization.

In a great many cases, I have found that clues about the identities of those who perpetrate cybercrime can be found by following a trail of information in WHOIS records that predates their cybercriminal careers. Also, even in cases where online abusers provide intentionally misleading or false information in WHOIS records, that information is still extremely useful in mapping the extent of their malware, phishing and scamming operations.

Anyone looking for copious examples of both need only search this Web site for the term “WHOIS,” which yields dozens of stories and investigations that simply would not have been possible without the data currently available in the global WHOIS records.

Many privacy activists involved in the WHOIS debate have argued that other data related to domain and Internet address registrations — such as name servers, Internet (IP) addresses and registration dates — should also be considered private information. My chief concern, if this belief becomes more widely held, is that security companies might stop sharing such information for fear of violating the GDPR, thus hampering the important work of anti-abuse and security professionals.

This is hardly a theoretical concern. Last month I heard from a security firm based in the European Union regarding a new Internet of Things (IoT) botnet they’d discovered that was unusually complex and advanced. Their outreach piqued my curiosity because I had already been working with a researcher here in the United States who was investigating a similar-sounding IoT botnet, and I wanted to know if my source and the security company were looking at the same thing.

But when I asked the security firm to share a list of Internet addresses related to their discovery, they told me they could not do so because IP addresses could be considered private data — even after I assured them I did not intend to publish the data.

“According to many forums, IPs should be considered personal data as it enters the scope of ‘online identifiers’,” the researcher wrote in an email to KrebsOnSecurity, declining to answer questions about whether their concern was related to provisions in the GDPR specifically.  “Either way, it’s IP addresses belonging to people with vulnerable/infected devices and sharing them may be perceived as bad practice on our end. We consider the list of IPs with infected victims to be private information at this point.”

Certainly as the Internet matures and big companies develop ever more intrusive ways to hoover up data on consumers, we also need to rein in the most egregious practices while giving Internet users more robust tools to protect and preserve their privacy. In the context of Internet security and the privacy principles envisioned in the GDPR, however, I’m worried that cybercriminals may end up being the biggest beneficiaries of this new law.

Cryptogram: Election Security

Good Washington Post op-ed on the need to use voter-verifiable paper ballots to secure elections, as well as risk-limiting audits.

Worse Than Failure: It's Called Abstraction, and It's a Good Thing

Steven worked for a company that sold “big iron” to big companies, for big bucks. These companies didn’t just want the machines, though, they wanted support. They wanted lots of support. With so many systems, processing so many transactions, installed at so many customer sites, Steven’s company needed a better way to analyze when things went squirrelly.

Thus was born a suite of applications called “DICS”- the Diagnostic Investigation Console System. It was, at its core, a processing pipeline. On one end, it would reach out to a customer’s site and download log files. The log files would pass through a series of analytic steps, and eventually reports would come out the other end. Steven mostly worked on the reporting side of things.

While working on reports, he’d sometimes hear about hiccups in the downloader portion of the pipeline, but as it was “not his circus, not his monkeys”, he didn’t pry too deeply. At least, he didn’t until one day, when his boss knocked on his cubicle divider.

“Hey, Steven. You know Perl, right?”

“Uh… sure.”

“And you’ve worked with XML files, right?”

“I… yes?”

“Great. Bob’s leaving. You’re going to need to take over the downloader portion of DICS. Talk to him ASAP. Great, thanks!”

Perl gets a reputation for being a “write only language”, which is at least partially undeserved. Bob was quite sensitive about that reputation, so he stressed, “I’ve worked really, really hard to keep the code as clean and clear as possible. Everything in the design is object oriented.”

Bob wasn’t kidding. Everything was wrapped up as a class. Everything. It was so class-happy it made the Spring framework jealous. JEE consultants would look at it and say, “Whoa, maybe slow down with the classes there.” A UML diagram of the architecture would drain ten printers worth of toner. The config file was stored in XML, and just for parsing out that file and storing the results, Bob had written 25 different classes, some as small as three lines. All in all, the whole downloader weighed in at about 5,000 lines of Perl code.

In the whirlwind tour, Steven asked Bob about the complexity. “It’s not complex. Each class is extremely simple. Well, aside from the config file wrapper, but it needs to have lots of methods because it has lots of data! There are so many fields in the XML file, and I needed to create getters and setters for them all! That way we can have Data Abstraction! That’s important! Data Abstraction is how we keep this project maintainable. What if the XML file format changes? It’s happened, you know. This will make it easy to keep our code in sync!”

Steven marveled at Bob’s ability to pronounce “data abstraction” as if it were in bold face, and resolved to touch the downloader script as little as possible. That resolution failed pretty much a week after Bob left, when the script fell down in production, leaving the DICS pipeline empty. Steven had to roll up his sleeves and get hands on with the code.

Now, one of Perl’s selling points is its rich library. While CPAN may have its own issues as a package manager, if you want to do something like parse an XML file, there’s a library that does it. There are a dozen libraries that’ll do it. And they all follow a vaguely Perl-ish idiom: instead of classes, they favor associative arrays. That way, when you want to get something like the contents of the ip_addr tag from the config file, you could write code like this:

$ip_addr = $config->{hosts}[$n]{ip_addr};
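For instance, here is a minimal sketch of that idiom using the CPAN module XML::Simple (the file name and layout below are invented for illustration; XML::LibXML would work similarly):

use strict;
use warnings;
use XML::Simple qw(XMLin);

# Hypothetical config.xml:
#   <config>
#     <host><ip_addr>10.0.0.1</ip_addr><port>8080</port></host>
#     <host><ip_addr>10.0.0.2</ip_addr><port>8080</port></host>
#   </config>
# ForceArray keeps <host> a list even when only one is present;
# KeyAttr => [] disables XML::Simple's key-folding magic.
my $config = XMLin('config.xml', ForceArray => ['host'], KeyAttr => []);

# The hash mirrors the file, so the lookup reads like the XML itself:
my $ip_addr = $config->{host}[0]{ip_addr};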

This makes it easy to understand how the structure of the XML file relates to the Perl data structure, but that kind of mapping means that there isn’t any Data Abstraction, and thus was utterly the wrong approach. Instead, everything was done as a getter/setter method.

$ip_addr = $Config_object->host($n)->get_addr();

That doesn’t look too different, perhaps, but the devil is in the details. First, 90% of the getters were “thin”, so get_addr might look something like this:

sub get_addr { return $self->{Addr}; }

That raises questions about the value of these getters/setters for fetching config values, but the bigger problem was this: there was nothing in the config file called “Addr”. Does this method return the IP address? Or a string in the form “$ip_addr:$port”? Or maybe even an array, like [$ip_addr, $port]?

Throughout the whole API, it was a bit of a crapshoot as to what any given method might return. And as for checking the documentation- they’d created a system that provided Data Abstraction, they didn’t need documentation, did they?

To track any given getter back to the actual field in the XML file it was getting, Steven had to trace through half a dozen different classes. It was frustrating and tedious, and Steven had half a mind to just throw the whole thing out and start over, consequences be damned. When he saw the “Translation” subsystem, he decided that it really did need to be thrown out, entirely.

You see, Bob’s goal with Data Abstraction was to make it so that, if the XML file changed, it would be easy to adapt the code. But the code was a mess. So when the XML file did change a few years back, Bob couldn’t update the config handling classes in any way that worked. So he did the next best thing- he wrote a “translation” module that would, using regular expressions, convert the new-style XML files back into the old-style XML files. Then his config-file classes could load and parse the old-style files.

Steven sums it up perfectly:

Bob’s classes weren’t data abstraction. It was just… data abstracturbation.

When Steven was done reimplementing Bob's work, he had about 500 lines of code, and the downloader stopped failing every few days.



Sociological Images: What’s Trending? Feeling the Love

Valentine’s Day is upon us, but in a world of hookups and breakups many people are concerned about the state of romance. Where do Americans actually stand on sex and relationships? We took a look at some trends from the General Social Survey. They highlight an important point: while Americans are more accepting of things like divorce and premarital sex, that doesn’t necessarily mean that both are running rampant in society.

For example, since the mid 1970s, Americans have become much more accepting of sex before marriage. Today more than half of respondents say it isn’t wrong at all.

However, these attitudes don’t necessarily mean people are having more sex. Younger Americans today actually report having no sexual partners more frequently than people of the same age in earlier surveys.

And what about marriage? Americans are more accepting of divorce now, with more saying a divorce should be easier to obtain.

But again, this doesn’t necessarily mean everyone is flying the coop. While self-reported divorce rates had been on the rise since the mid 1970s, they have largely leveled off in recent years.

It is important to remember that for core social practices like love and marriage, we are extra susceptible to moral panics when faced with social change. These trends show how changes in attitudes don’t always line up with changes in behavior, and they remind us that sometimes we can save the drama for the rom-coms.

Inspired by demographic facts you should know cold, “What’s Trending?” is a post series at Sociological Images featuring quick looks at what’s up, what’s down, and what sociologists have to say about it.

Ryan Larson is a graduate student from the Department of Sociology, University of Minnesota – Twin Cities. He studies crime, punishment, and quantitative methodology. He is a member of the Graduate Editorial Board of The Society Pages, and his work has appeared in Poetics, Contexts, and Sociological Perspectives.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Cryptogram: Can Consumers' Online Data Be Protected?

Everything online is hackable. This is true for Equifax's data and the federal Office of Personnel Management's data, which was hacked in 2015. If information is on a computer connected to the Internet, it is vulnerable.

But just because everything is hackable doesn't mean everything will be hacked. The difference between the two is complex, and filled with defensive technologies, security best practices, consumer awareness, the motivation and skill of the hacker and the desirability of the data. The risks will be different if an attacker is a criminal who just wants credit card details (and doesn't care where he gets them from) or the Chinese military looking for specific data from a specific place.

The proper question isn't whether it's possible to protect consumer data, but whether a particular site protects our data well enough for the benefits provided by that site. And here, again, there are complications.

In most cases, it's impossible for consumers to make informed decisions about whether their data is protected. We have no idea what sorts of security measures Google uses to protect our highly intimate Web search data or our personal e-mails. We have no idea what sorts of security measures Facebook uses to protect our posts and conversations.

We have a feeling that these big companies do better than smaller ones. But we're also surprised when a lone individual publishes personal data hacked from the infidelity site, or when the North Korean government does the same with personal information in Sony's network.

Think about all the companies collecting personal data about you -- the websites you visit, your smartphone and its apps, your Internet-connected car -- and how little you know about their security practices. Even worse, credit bureaus and data brokers like Equifax collect your personal information without your knowledge or consent.

So while it might be possible for companies to do a better job of protecting our data, you as a consumer are in no position to demand such protection.

Government policy is the missing ingredient. We need standards and a method for enforcement. We need liabilities and the ability to sue companies that poorly secure our data. The biggest reason companies don't protect our data online is that it's cheaper not to. Government policy is how we change that.

This essay appeared as half of a point/counterpoint with Priscilla Regan, in a CQ Researcher report titled "Privacy and the Internet."

Worse Than Failure: CodeSOD: All the Rest Have Thirty One…

Aleksei received a bunch of notifications from their CI system, announcing a build failure. This was interesting, because no code had changed recently, so what was triggering the failure?

        private BillingRun CreateTestBillingRun(int billingRunGroupId, DateTime? billingDate, int? statusId)
        {
            return new BillingRun
            {
                BillingRunGroupId = billingRunGroupId,
                PeriodStart = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 1),
                BillingDate = billingDate ?? new DateTime(DateTime.Today.Year, DateTime.Today.Month, 15),
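                // note: February has no day 30, so this next line throws an
                // ArgumentOutOfRangeException whenever DateTime.Today falls in February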
                CreatedDate = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 30),
                ItemsPreparedDate = new DateTime(2017, 4, 7),
                CompletedDate = new DateTime(2017, 4, 8),
                DueDate = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 13),
                StatusId = statusId ?? BillingRunStatusConsts.Completed,
                ErrorCode = "ERR_CODE",
                Error = "Full error description",
                ModifiedOn = new DateTime(2017, 1, 1)
            };
        }

Take a look at the instantiation of CreatedDate. I imagine the developer’s internal monologue went something like this:

Okay, the Period Start is the beginning of the month, the Billing Date is the middle of the month, and Created Date is the end of the month. Um… okay, well, beginning is easy. That’s the 1st. Phew. Okay, but the middle of the month. That’s hard. Oh, wait, wait a second! It’s billing, so I bet the billing department has a day they always send out the bills. Let me send an email to Steve in billing… oh, look at that. It’s always the 15th. Great. Boy. This programming stuff is easy. Whew. Okay, so now the end of the month. This one’s tricky, because months have different lengths, sometimes 30 days, and sometimes 31. Let me ask Steve again, if they have any specific requirements there… oh, look at that. They don’t really care so long as it’s the last day or two of the month. Great. I’ll just use 30, then. Good thing there aren’t any months with a shorter length.
Y’know, I vaguely remember reading a thing that said tests should always use the same values, so that every run tests exactly the same combination of inputs. I think I saved a bookmark to read it later. Should I read it now? No! I should commit this code, let the CI build run, and then mark the requirement as complete.
Boy, this programming stuff is easy.


Sky Croeser: Motherhood, hope, and 76% less snark

Oh, hi.

I had that baby I was growing in my last post. She’s an amazing little person. She’s learned to clap her hands in the last week, and I am full of wonder and delight. She’s been sick, and I fretted for hours about her rash. (Should I call the doctor? Should I not? Is it a purple rash? Is it getting worse?)

I’m back at work, sitting in my office, relieved to have time to read and write and teach, and missing her fiercely. I feel this all at once: the relief of time and space away, and the missing. I think about her all the time, but also get bored by the way motherhood enfolds me.

At home, we walk in endless circles around the house as she holds out a hand for mine, demands the other hand, then drags me off to open cupboards or visit each room in turn. (At the same time, I love to see her do this: so clearly show me what she wants, so clearly refuse if I put my right hand in her left, or give her only one hand.)


Motherhood has changed me, and I don’t know how I feel about that. (I don’t have much time to work out how I feel about anything.) It is almost physically painful to think of parents losing children to war or violence. Of wanting to feed a hungry child and not being able to. I have the luxury of being able to look away, to take a break from imagining these scenes.

For the last few months the change to my work has been in the time and energy available. Everything needs to be broken up into smaller, more digestible chunks, to manage in nap times and evenings and while so very tired most of the time.

As I finished my undergraduate degree, I decided to focus on researching movements that gave me hope. Imperfect, complex movements with many flaws, but nevertheless full of people trying to change things for the better. I wanted, and want, to believe that we have the potential to change this. That hungry children can be fed, that we can look after our neighbours, that we can resist and fight back against tides of hatred and fear.

Last year, I found myself writing a presentation and a book chapter that shifted to focusing on the flaws in these movements. I was tired, and I got snarky and impatient with the imperfection of activists (particularly white men) who didn’t listen and who tried to define what counts as ‘radical’ and what doesn’t. I still feel that impatience, but that work was depressing. The snark of it was satisfying, but I’m not sure of the use of it, and frankly I am subject to many of the same critiques.

As I try to find my way back into research and writing, I’m trying to recommit to finding threads of hope. Critique is important, especially the critiques I need to listen to from the margins of academia and activism: of white women’s role in feminism(s), of settler societies, of academic power structures. In my own writing I want to be finding materials to stitch into alternatives. I want to be finding spaces where my voice can be useful, rather than just adding more noise.

And it’s a terrible cliche, but the urgency of it comes through when I look at this tiny person and imagine other parents doing the same, hoping for safety and flourishing and care for these wonders we are trying to nourish.



Krebs on Security: Microsoft Patch Tuesday, February 2018 Edition

Microsoft today released a bevy of security updates to tackle more than 50 serious weaknesses in Windows, Internet Explorer/Edge, Microsoft Office and Adobe Flash Player, among other products. A good number of the patches issued today ship with Microsoft’s “critical” rating, meaning the problems they fix could be exploited remotely by miscreants or malware to seize complete control over vulnerable systems — with little or no help from users.

February’s Patch Tuesday batch includes fixes for at least 55 security holes. Some of the scarier bugs include vulnerabilities in Microsoft Outlook, Edge and Office that could let bad guys or bad code into your Windows system just by getting you to click on a booby trapped link, document or visit a compromised/hacked Web page.

As per usual, the SANS Internet Storm Center has a handy rundown on the individual flaws, neatly indexing them by severity rating, exploitability and whether the problems have been publicly disclosed or exploited.

One of the updates addresses a pair of serious vulnerabilities in Adobe Flash Player (which ships with the latest version of Internet Explorer/Edge). As KrebsOnSecurity warned last week, there are active attacks ongoing against these Flash vulnerabilities.

Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability. Chrome also bundles Flash, but blocks it from running on all but a handful of popular sites, and then only after user approval.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis. Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits.

The latest standalone version of Flash that addresses these bugs is for Windows, Mac, Linux and Chrome OS. But most users probably would be better off manually hobbling or removing Flash altogether, since so few sites actually require it still. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

People running Adobe Reader or Acrobat also need to update, as Adobe has shipped new versions of these products that fix at least 39 security holes. Adobe Reader users should know there are alternative PDF readers that aren’t so bloated or full of security issues. Sumatra PDF is a good, lightweight alternative.

Experience any issues, glitches or problems installing these updates? Sound off about it in the comments below.

TED: New podcast alert: WorkLife with Adam Grant, a TED original, premieres Feb. 28

Adam Grant to Explore the Psychology of Unconventional Workplaces as Host of Upcoming New TED Original Podcast “WorkLife”

Organizational psychologist, professor, bestselling author and TED speaker Adam Grant is set to host a new TED original podcast series titled WorkLife with Adam Grant, which will explore unorthodox work cultures in search of surprising and actionable lessons for improving listeners’ work lives.

Beginning Wednesday, February 28, each weekly episode of WorkLife will center around one extraordinary workplace—from an award-winning TV writing team racing against the clock, to a sports team whose culture of humility propelled it to unexpected heights. In immersive interviews that take place in both the field and the studio, Adam brings his observations to vivid life – and distills useful insights in his friendly, accessible style.

“We spend a quarter of our lives in our jobs. This show is about making all that time worth your time,” says Adam, the bestselling author of Originals, Give and Take, and Option B with Sheryl Sandberg. “In WorkLife, we’ll take listeners inside the minds of some fascinating people in some truly unusual workplaces, and mix in fresh social science to reveal how we can lead more creative, meaningful, and generous lives at work.”

Produced by TED in partnership with Pineapple Street Media and Transmitter Media, WorkLife is TED’s first original podcast created in partnership with a TED speaker. Its immersive, narrative format is designed to offer audiences a new way to explore TED speaker ideas in depth. Adam’s talks “Are you a giver or a taker?” and “The surprising habits of original thinkers” have together been viewed more than 11 million times in the past two years.

The show marks TED’s latest effort to test new content formats beyond the nonprofit’s signature first-person TED talk. Other recent TED original content experiments include Sincerely, X, an audio series featuring talks delivered anonymously;  Small Thing Big Idea, a Facebook Watch video series about everyday designs that changed the world; and the Indian prime-time live-audience television series TED Talks India: Nayi Soch, hosted by Bollywood star and TED speaker Shah Rukh Khan.

“We’re aggressively developing and testing a number of new audio and video programs that support TED’s mission of ‘Ideas Worth Spreading,’” said TED head of media and WorkLife co-executive producer Colin Helms. “In every case, our speakers and their ideas remain the focus, but with fresh formats, styles and lengths, we can reach and appeal to even more curious audiences, wherever they are.”

WorkLife debuts Wednesday, February 28 on Apple Podcasts, the TED Android app, or wherever you like to listen to podcasts. Season 1 features eight episodes, roughly 30 minutes each, plus two bonus episodes. It’s sponsored by Accenture, Bonobos, JPMorgan Chase & Co., and Warby Parker. New episodes will be made available every Wednesday.

Cryptogram: Jumping Air Gaps

Nice profile of Mordechai Guri, who researches a variety of clever ways to steal data over air-gapped computers.

Guri and his fellow Ben-Gurion researchers have shown, for instance, that it's possible to trick a fully offline computer into leaking data to another nearby device via the noise its internal fan generates, by changing air temperatures in patterns that the receiving computer can detect with thermal sensors, or even by blinking out a stream of information from a computer hard drive LED to the camera on a quadcopter drone hovering outside a nearby window. In new research published today, the Ben-Gurion team has even shown that they can pull data off a computer protected by not only an air gap, but also a Faraday cage designed to block all radio signals.

Here's a page with all the research results.

BoingBoing post.

Worse Than Failure: Budget Cuts

Xavier was the head of a 100+ person development team. Like many enterprise teams, they had to support a variety of vendor-specific platforms, each with their own vendor-specific development environment and its own licensing costs. All the licensing costs were budgeted for at year’s end, when Xavier would submit the costs to the CTO. The approval was a mere formality, ensuring his team would have everything they needed for another year.

Unfortunately, that CTO left to pursue another opportunity. Enter Greg, a new CTO who joined the company from the financial sector. Greg was a penny-pincher on a level that would make the novelty coin-smasher you find at zoos and highway rest-stops jealous. Greg started cutting costs left and right immediately. When the time came for budgeting development tool licensing, Greg threw down the gauntlet on Xavier’s “wild” spending.

Alan Rickman in Galaxy Quest, delivering the line "By Grabthar's Hammer, what a savings" while looking like his soul is dying forever.

“Have a seat, X-man,” Greg offered, in a faux-friendly voice. “Let’s get to the point. I looked at your proposal for all of these tools your team supposedly ‘needs’. $40,000 is absurd! Do you think we print money? If your team were any good, they should be able to do everything they need without these expensive, gold-plated programs!”

Xavier was taken aback by Greg’s brashness, but he was prepared for a fight. “Greg, these tools are vital to our development efforts. There are maybe a few products we could do without, but most of them are absolutely required. Even the more ‘optional’ ones, like our refactoring and static analysis tools, they save us money and time and improve code quality. Not having them would be more expensive than the license.”

Greg scowled and tented his fingers. “There is no chance I’m approving this as it stands. Go back and figure out what you can do without. If you don’t cut this cost down, I’ll find an easier way to reduce expenses… like by cutting bonuses… or staff.”

Xavier spent the next few days having an extensive tool review with his lead developers. Many of the vendor-specific tools had no alternative, but there were a few third party tools they could do without, or use an open-source equivalent. Across the team of 100+ developers, the net cost savings would be $4,000, or 10%.

Xavier didn’t expect that to make Greg happy, but it was the best they could do. The following morning, Xavier presented his findings in Greg’s office, and it went smoother than expected. “Listen, X. I want this cost down even more, but we’re running out of time to approve this year’s budget. Since I did so much work cutting costs in other ways, I’ll submit this to finance. But enjoy your last year of all these fancy tools! Next year, things will be different!”

Xavier was relieved he didn’t have to fight further. Perhaps, over the next year, he could further demonstrate the necessity of their tooling. With the budget resolved, Xavier had some much-overdue vacation time. He had saved up enough PTO to spend a month in the Australian Outback. Development tools and budgets would be the furthest thing from his mind.

Three great weeks in the land down under were enhanced by being mostly cut off from communications from anyone in the company. During a trip through a town with cell phone reception, Xavier decided to check his voicemail, to make sure the sky wasn’t falling. Dave, his #2 in command, had left an urgent message two days prior.

“Xavier!” Dave shouted on the other end. “You need to get back here soon. Greg never paid the invoices for anything in our stack. We’re sitting here with a huge pile of unlicensed stuff. We’ve been racking up unlicensed usage and support costs, and Greg is going to flip when he sees our monthly statements.” With deep horror, Dave added, “One of the licenses he didn’t pay was for Oracle!”

Xavier reluctantly left the land of dingoes and wallabies to head back home. He arrived just about the same time the first vendor calls demanding payment did. The costs from just three weeks of unlicensed usage of enterprise software was astronomical. Certainly more than just buying the licenses would have been in the first place. Xavier scheduled a meeting with Greg to decide what to do next.

The following Monday, the dreaded meeting was on. “Sit,” Greg said. “I have some good news, and some bad news. The good news is that I’ve found a way to pay these ridiculous charges your team racked up.” Xavier leaned forward in his chair, eager to learn how Greg had pulled it off. “The bad news is that I’ve identified a redundant position- yours.”

Xavier slumped into his chair.

Greg continued. “While you were gone, I realized we were in quite capable hands with Dave, and his salary is quite a bit lower than yours. Coincidentally, the original costs and these ridiculous penalties add up to an amount just a little less than your annual salary. I guess you’re getting your wish: the development team can keep the tools you insist they need to do their jobs. It seems you were right about saving money in the long run, too.”

Xavier left Greg’s office, stunned. On his way out for the last time, he stopped by Dave to congratulate him on the new promotion.

“Oh,” Dave said, sourly, “it’s not a promotion. They’re just eliminating your position. What, you think Greg would give me a raise?”


Don Marti: Two visions of GDPR

As far as I can tell, there are two sets of ambitious predictions about GDPR.

One is the VRM vision. Doc Searls writes, on ProjectVRM:

I am sure Google, Facebook and lesser purveyors of advertising online will find less icky ways to stay in business; but it is becoming clear that next May 25, when the GDPR goes into full effect, will be an extinction-level event for tracking-based advertising (aka adtech) as a business model.

Big impact? Not so fast. There's also a "business as usual" story, and that one, you'll find at Digital Advertising Consent.

Our complex ecosystem of companies must cooperate more closely than ever before to meet the transparency and consent requirements of European data protection law.

According to the adtech firms, well, maybe there will be more Bürokratie, more pointless dialogs that users have to click through, and one more line item, "GDPR compliance", to come out of the publisher's share, of course, but the second vision of GDPR is essentially just adtech/adfraud as usual. Upgrade to the new version of OpenRTB, and move along, nothing to see here.

Personally, I'm not buying either one of these GDPR visions. Because, just for fun and also because reasons, I run my own mail server.

And every little decision I have to make about how to configure the damn thing is based on playing a game with email spammers. Regulation is a part of my complete breakfast, but it's not the whole story.

The government doesn't give you freedom from spam. You have to take it for yourself, one filtering rule at a time. Or, do what most people do, and find a company that does it for you, but it has to be a company that you trust with your information.

A mail sender's decision to comply, or not comply, with some regulation is a bit of information. That feeds into the software that makes the final decision: inbox, spam folder, or reject. When a spam message complies with the regulations of some country, my mail server doesn't say, "Oh, wow, compliant! I can skip all the other checks and send this one straight to the inbox!" It uses the regulation compliance along with other information to make that decision.
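
To make that concrete, here is a toy sketch of a filter folding compliance in as one signal among many (illustrative only: the signal names, weights, and thresholds are invented, not taken from any real filter):

// Toy spam scoring: regulatory compliance is a weighted signal, never a bypass.
type Signal = { name: string; score: number };

function classify(signals: Signal[]): "inbox" | "spam" | "reject" {
  const total = signals.reduce((sum, s) => sum + s.score, 0);
  if (total <= -5) return "reject";
  return total >= 0 ? "inbox" : "spam";
}

// A compliant but otherwise shady sender still lands in the spam folder:
console.log(classify([
  { name: "regulation-compliant", score: 1 },   // nice, but only worth a point
  { name: "listed-on-dnsbl", score: -3 },
  { name: "forged-received-header", score: -2 },
])); // "spam" (total is -4)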

So whatever extra consent forms that surveillance marketers are required to send by GDPR? They're not the final decision on What The User Must See. They're just data, coming over the network.

Some of that data will be interpreted to mean that this request is an obvious mismatch with how the user chooses to share their info. The user might not even see those consent forms, or the browser might pop up a notification:

4 requests to do creepy shit, that's obviously against your preferences, already denied. Isn't this the best browser ever?

(No, I don't write copy for browser notifications. But you get the idea.)

Browsers that implement tracking protection might end up with a feature where they detect requests for permission to do things that the user has already said no to—by turning on tracking protection in the first place—and auto-deny them.

Legit email senders had to learn "deliverability," the art and science of making legit mail look legit so that it can get past email spam filters. Legit advertisers will have to learn that users aren't identical and spherical, that users choose tools to implement their data sharing preferences, and that regulatory compliance is only part of the job.

Should web browsers adopt Google’s new selective ad blocking tech?


Content recommendation services Outbrain and Taboola are no longer a guaranteed source of revenue for digital publishers

CryptogramCabinet of Secret Documents from Australia

This story of leaked Australian government secrets is unlike any other I've heard:

It begins at a second-hand shop in Canberra, where ex-government furniture is sold off cheaply.

The deals can be even cheaper when the items in question are two heavy filing cabinets to which no-one can find the keys.

They were purchased for small change and sat unopened for some months until the locks were attacked with a drill.

Inside was the trove of documents now known as The Cabinet Files.

The thousands of pages reveal the inner workings of five separate governments and span nearly a decade.

Nearly all the files are classified, some as "top secret" or "AUSTEO", which means they are to be seen by Australian eyes only.

Yes, that really happened. The person who bought and opened the file cabinets contacted the Australian Broadcasting Corp, who is now publishing a bunch of it.

There's lots of interesting (and embarrassing) stuff in the documents, although most of it is local politics. I am more interested in the government's reaction to the incident: they're pushing for a law making it illegal for the press to publish government secrets it received through unofficial channels.

"The one thing I would point out about the legislation that does concern me particularly is that classified information is an element of the offence," he said.

"That is to say, if you've got a filing cabinet that is full of classified information ... that means all the Crown has to prove if they're prosecuting you is that it is classified ­ nothing else.

"They don't have to prove that you knew it was classified, so knowledge is beside the point."


Many groups have raised concerns, including media organisations, who say such laws unfairly target journalists trying to do their job.

But really anyone could be prosecuted just for possessing classified information, regardless of whether they know about it.

That might include, for instance, if you stumbled across a folder of secret files in a regular skip bin while walking home and handed it over to a journalist.

This illustrates a fundamental misunderstanding of the threat. The Australian Broadcasting Corp gets their funding from the government, and was very restrained in what they published. They waited months before publishing as they coordinated with the Australian government. They allowed the government to secure the files, and then returned them. From the government's perspective, they were the best possible media outlet to receive this information. If the government makes it illegal for the Australian press to publish this sort of material, the next time it will be sent to the BBC, the Guardian, the New York Times, or Wikileaks. And since people no longer read their news from newspapers sold in stores but on the Internet, the result will be just as many people reading the stories with far fewer redactions.

The proposed law is older than this leak, but the leak is giving it new life. The Australian opposition party is being cagey on whether they will support the law. They don't want to appear weak on national security, so I'm not optimistic.

EDITED TO ADD (2/8): The Australian government backed down on that new security law.

EDITED TO ADD (2/13): Excellent political cartoon.

CryptogramPoor Security at the UK National Health Service

The Guardian is reporting that "every NHS trust assessed for cyber security vulnerabilities has failed to meet the standard required."

This is the same NHS that was debilitated by WannaCry.

EDITED TO ADD (2/13): More news.

And don't think that US hospitals are much better.

Cory DoctorowThe Man Who Sold the Moon, Part 04 [FIXED]

Here’s part four of my reading (MP3) (part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.



Sociological ImagesWhat’s That Fact? A Tricky Graph on Terror

The Star Tribune recently ran an article about a new study from George Washington University tracking cases of Americans who traveled to join jihadist groups in Syria and Iraq since 2011. The print version of the article was accompanied by a graph showing that Minnesota has the highest rate of cases in the study. TSP editor Chris Uggen tweeted the graph, noting that this rate represented a whopping seven cases in the last six years.

Here is the original data from the study next to the graph that the paper published:


Social scientists often focus on rates when reporting events, because rates make cases easier to compare. If one county has 300 cases of the flu, and another has 30,000, you wouldn't panic about an epidemic in the second county if it had a city with many more people. But relying on rates to describe extremely rare cases can be misleading.

For example, the data show this graph misses some key information. California and Texas had more individual cases than Minnesota, but their large populations hide this difference in the rates. Sorting by rates here makes Minnesota look a lot worse than other states, while the number of cases is not dramatically different. 
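
To see the arithmetic behind that, here is a small sketch (the Minnesota count of seven comes from the article; the populations are rough 2017 figures, and the California count is purely illustrative):

// Cases per million inhabitants: with very rare events, a tiny numerator
// can produce a dramatic-looking rate.
function perMillion(cases: number, population: number): number {
  return (cases / population) * 1_000_000;
}

console.log(perMillion(7, 5_600_000).toFixed(2));   // Minnesota: "1.25"
console.log(perMillion(10, 39_500_000).toFixed(2)); // California, assumed count: "0.25"

A state with several more cases can still show a rate five times lower, which is exactly the sorting trap in the printed graph.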

As far as I can tell, this chart only appeared in the print newspaper photographed above and not on the online story. If so, this chart only went to print audiences. Today we hear a lot of concern about the impact of “filter bubbles,” especially online, and the spread of misleading information. What concerns me most about this graph is how it shows the potential impact of offline filter bubbles in local communities, too.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Krebs on SecurityDomain Theft Strands Thousands of Web Sites

Newtek Business Services Corp. [NASDAQ:NEWT], a Web services conglomerate that operates more than 100,000 business Web sites and some 40,000 managed technology accounts, had several of its core domain names stolen over the weekend. The theft shut off email and stranded Web sites for many of Newtek’s customers.

An email blast Newtek sent to customers late Saturday evening made no mention of a breach or incident, saying only that the company was changing domains due to “increased” security. A copy of that message can be read here (PDF).

In reality, three of their core domains were hijacked by a Vietnamese hacker, who replaced the login page many Newtek customers used to remotely manage their Web sites (webcontrolcenter[dot]com) with a live Web chat service. As a result, Newtek customers seeking answers to why their Web sites no longer resolved correctly ended up chatting with the hijacker instead.

The PHP Web chat client that the intruder installed on Webcontrolcenter[dot]com, a domain that many Newtek customers used to manage their Web sites with the company. The perpetrator can be seen in this chat using the name “admin.”

In a follow-up email sent to customers 10 hours later (PDF), Newtek acknowledged the outage was the result of a “dispute” over three domains, webcontrolcenter[dot]com, thesba[dot]com, and crystaltech[dot]com.

“We strongly request that you eliminate these domain names from all your corporate or personal browsers, and avoid clicking on them,” the company warned its customers. “At this hour, it has become apparent that as a result over the dispute for these three domain names, we do not currently have control over the domains or email coming from them.”

The warning continued: “There is an unidentified third party that is attempting to chat and may engage with clients when visiting the three domains. It is imperative that you do not communicate or provide any sensitive data at these locations.”

Newtek did not respond to requests for comment.

Domain hijacking is not a new problem, but it can be potentially devastating to the victim organization. In control of a hijacked domain, a malicious attacker could seamlessly conduct phishing attacks to steal personal information, or use the domain to foist malicious software on visitors.

Newtek is not just a large Web hosting firm: It aims to be a one-stop shop for almost any online service a small business might need. As such, it’s a mix of very different business units rolled up into one since its founding in 1998, including lending solutions, HR, payroll, managed cloud solutions, group health insurance and disaster recovery solutions.

“NEWT’s tentacles go deep into their client’s businesses through providing data security, human resources, employee benefits, payments technology, web design and hosting, a multitude of insurance solutions, and a suite of IT services,” reads a Sept. 2017 profile of the company at SeekingAlpha, a crowdsourced market analysis publication.

Newtek’s various business lines. Source: Newtek.

Reached via the Web chat client he installed at webcontrolcenter[dot]com, the person who claimed responsibility for the hijack said he notified Newtek five days ago about a “bug” he found in the company’s online operations, but that he received no reply.

A Newtek customer who resells the company’s products to his clients said he had to spend much of the weekend helping clients regain access to email accounts and domains as a result of the incident. The customer, who asked to remain anonymous, said he was shocked that Newtek made little effort to convey the gravity of the hijack to its customers — noting that the company’s home page still makes no mention of the incident.

“They also fail to make it clear that any data sent to any host under the domain could be recorded (email passwords, web credentials, etc.) by the attacker,” he said. “I’m floored at how bad their communication was to their users. I’m not surprised, but concerned, that they didn’t publish the content in the emails directly on their website.”

The source said that at a minimum Newtek should have expired all passwords immediately and required resets through non-compromised hosts.

“And maybe put a notice about this on their home page instead of relying on email, because a lot of my customers can’t get email right now as a result of this,” the source said.

There are a few clues that suggest the perpetrator of these domain hijacks is indeed being truthful about both his nationality and his claim to have located a bug in Newtek's service. Two of the hijacked domains were moved to a Vietnamese domain registrar.

This individual gave me an email address to contact him at — — although he has so far not responded to questions beyond promising to reply in Vietnamese. The email is tied to two different Vietnamese-language social networking profiles.

A search at Domaintools indicates that this address is linked to the registration records for four domains, including one (giakiemnew[dot]com) that was recently hosted on a dedicated server operated by Newtek’s legacy business unit CrystalTech [full disclosure: Domaintools is an advertiser on this site]. Recall that crystaltech[dot]com was among the three hijacked domains.

In addition, the domain giakiemnew[dot]com was registered through Newtek Technology Services, a domain registration service offered by Newtek. This suggests that the perpetrator was in fact a customer of Newtek, and perhaps did discover a vulnerability while using the service.

CryptogramInternet Security Threats at the Olympics

There are a lot:

The cybersecurity company McAfee recently uncovered a cyber operation, dubbed Operation GoldDragon, attacking South Korean organizations related to the Winter Olympics. McAfee believes the attack came from a nation state that speaks Korean, although it has no definitive proof that this is a North Korean operation. The victim organizations include ice hockey teams, ski suppliers, ski resorts, tourist organizations in Pyeongchang, and departments organizing the Pyeongchang Olympics.

Meanwhile, a Russia-linked cyber attack has already stolen and leaked documents from other Olympic organizations. The so-called Fancy Bear group, or APT28, began its operations in late 2017 -- according to Trend Micro and ThreatConnect, two private cybersecurity firms -- eventually publishing documents in 2018 outlining the political tensions between IOC officials and World Anti-Doping Agency (WADA) officials who are policing Olympic athletes. It also released documents specifying exceptions to anti-doping regulations granted to specific athletes (for instance, one athlete was given an exception because of his asthma medication). The most recent Fancy Bear leak exposed details about a Canadian pole vaulter's positive results for cocaine. This group has targeted WADA in the past, specifically during the 2016 Rio de Janeiro Olympics. Assuming the attribution is right, the action appears to be Russian retaliation for the punitive steps against Russia.

A senior analyst at McAfee warned that the Olympics may experience more cyber attacks before closing ceremonies. A researcher at ThreatConnect asserted that organizations like Fancy Bear have no reason to stop operations just because they've already stolen and released documents. Even the United States Department of Homeland Security has issued a notice to those traveling to South Korea to remind them to protect themselves against cyber risks.

One presumes the Olympics network is sufficiently protected against the more pedestrian DDoS attacks and the like, but who knows?

EDITED TO ADD: There was already one attack.

Worse Than FailureCoded Smorgasbord: If It's Stupid and It Works

On a certain level, if code works, it can only be so wrong. For today, we have a series of code blocks that work… mostly. Despite that, each one leaves you scratching your head, wondering how, exactly, this happened.

Lisa works at a web dev firm that just picked up a web app from a client. They didn’t have much knowledge about what it was or how it worked beyond, “It uses JQuery?”

Well, they’re technically correct:

if ($(document.getElementById("really_long_id_of_client_side_element")).checked) {
    $(document.getElementById("xxxx1")).css({ "background-color": "#FFFFFF", "color": "Black" });
    $(document.getElementById("xxxx2")).css({ "background-color": "#FFFFFF", "color": "Black" });
    $(document.getElementById("xxxx3")).css({ "background-color": "#FFFFFF", "color": "Black" });
    $(document.getElementById("xxxx4")).css({ "background-color": "#FFFFFF", "color": "Black" });
}

In this case, they’re ignoring the main reason people use jQuery: the ability to easily and clearly fetch DOM elements with CSS selectors. But they do use the css function as intended, giving them an object-oriented way to control styles. Then again, one probably shouldn’t set style properties directly from JS anyway; that’s what CSS classes are for. Then again, why mix #FFFFFF and Black, when you could use white or #000000?
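
For comparison, a sketch of the more conventional jQuery idiom (hypothetical: it assumes the elements could share a class in the markup, and the class names are invented):

// In a stylesheet: .plain-cell { background-color: white; color: black; }
if ($("#really_long_id_of_client_side_element").prop("checked")) {
    $(".xxxx-cell").addClass("plain-cell"); // one selector instead of four lookups
}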

Regardless, it does in fact use JQuery.

Dave A was recently trying to debug a test in Ruby, and found this unique construct:

if status == status = 1 || status = 2 || status = 3
  @msg.stubs(:is_reply?).returns true
else
  @msg.stubs(:is_reply?).returns false
end

This is an interesting case of syntactically correct nonsense that looks incorrect. Thanks to operator precedence, the right-hand side parses as status = (1 || (status = 2) || (status = 3)); since 1 is “truthy”, the || short-circuits, and the whole thing reduces to status = 1, which assigns 1 and evaluates to 1. The condition therefore compares the old value of status against 1: it is true if status was already 1, false the rest of the time, and either way status ends up set to 1.

What the developer meant to do was check if status was 1, 2 or 3, e.g. if status == 1 || status == 2…, or, to use a more Ruby idiom: if [1, 2, 3].include? status. Still, given the setup for the test, the code actually worked until Dave changed the pre-conditions.

Meanwhile, Leonardo Scur came across this JavaScript reinvention of an array:

tags = {
  "tags": {
    "0": {"id": "asdf"},
    "1": {"id": "1234"},
    "2": {"id": "etc"}
  },
  "tagsCounter": 3,
  // … below this are reimplementations of common array methods built to work on `tags`
};

This was part of a trendy front-end framework he was using, and it’s obvious that arrays indexed by integers are simply too mainstream. Strings are where it’s at.

This library is in wide use, meant to add simple tagging widgets to an AngularJS application. It also demonstrates a strange way to reinvent the array.
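
For reference, a plain array already provides everything that structure reimplements, counter included (a sketch, not the library's actual code):

const tags = [{ id: "asdf" }, { id: "1234" }, { id: "etc" }];
tags.push({ id: "new" });          // no hand-maintained "tagsCounter"
console.log(tags.length);          // 4
console.log(tags.map(t =>;  // the "reimplemented" methods come free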

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Cory DoctorowHey, Australia and New Zealand, I’m coming to visit you!

I’m about to embark on a tour of Australia and New Zealand to support my novel Walkaway, with stops in Perth, Melbourne, Sydney, Adelaide, and Wellington! I really hope you’ll come out and say hello!

Perth: Feb 24-25, Perth Festival

Melbourne: Feb 27: An expansive conversation about the imperfect present and foreseeable future with C.S. Pacat, St Kilda Town Hall, 19h

Melbourne: Feb 28: How do writers get paid?, Wheeler Centre, 1815h

Sydney: Mar 1: What should we do about democracy?, City Recital Hall, 1930h

Adelaide: Mar 4-6: Adelaide Festival

Wellington: Mar 9-11: Writers and Readers Week

Wellington: Mar 12: NetHui one-day event on copyright


Don MartiTeam A vs. Team B

Let's run a technical challenge on the Internet. Team A vs. Team B.

Team A gets to work where they want, when they want. Team B has to work in an open-plan office, with people walking behind them, talking on the phone, doing all that annoying office stuff.

Members of Team A get paid for successful work within weeks or months. Members of Team B get a base salary that they have to spend on rent in an expensive location, but just might get paid extra for successful work in four years.

Team A will let anyone try to join, and those who aren't successful have to drop out quickly. Team B will only let members who are a "good cultural fit" join, and it takes a while to get rid of an unsuccessful member.

Team A can deploy unproven work for real-world testing, using infrastructure that they get for free on the Internet. Team B can only deploy their work when production-ready, on infrastructure they have to pay for.

If Team A breaks the rules, the penalty is that they have to spend a little money to register new domain names. If Team B breaks the rules, they risk lengthy regulatory and/or legal consequences.

Team A scores a win any time they can beat whoever is the weakest member of Team B at that time. Team B can only score a win when they can consistently defeat all of the most active members of Team A.

Team A is adfraud.

Why is so much marketing money being bet on Team B?


Rondam RamblingsA Multilogue on Free Will

[Inspired by this comment thread.] The Tortoise is standing next to a railroad track when Achilles, an ancient Greek warrior, happens by.  In the distance, a train whistle sounds. Tortoise: Greetings, friend Achilles.  You have impeccable timing.  I could use your assistance. Achilles: Hello, Mr. T.  Always happy to help.  What seems to be the trouble? Tortoise: Look there. Achilles: Why, it

Don MartiFOSDEM videos

Check it out. The videos from the Mozilla room at FOSDEM are up, and here's me, talking about bug futures.

All FOSDEM videos

And, yes, the video link Just Works. Bonus link to some background on that: The Fight For Patent-Unencumbered Media Codecs Is Nearly Won by Robert O'Callahan

Another bonus link: FOSDEM video project, including what those custom boxes do.


CryptogramCalling Squid "Calamari" Makes It More Appetizing

Research shows that what a food is called affects how we think about it.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Google AdsenseAdSense now supports Tamil

Continuing our commitment to support more languages and encourage content creation on the web, we’re excited to announce the addition of Tamil, a language spoken by millions of Indians, to the family of AdSense supported languages.

AdSense provides an easy way for publishers to monetize the content they create in Tamil, and helps advertisers connect with a Tamil-speaking audience through relevant ads.

To start monetizing your Tamil content website with Google AdSense:

  1. Check the AdSense program policies and make sure your website is compliant.
  2. Sign up for an AdSense account.
  3. Add the AdSense code to start displaying relevant ads to your users.

Welcome to AdSense! Sign Up now.

Posted by: The AdSense Internationalization Team

CryptogramLiving in a Smart Home

In "The House that Spied on Me," Kashmir Hill outfits her home to be as "smart" as possible and writes about the results.

Worse Than FailureError'd: Whatever Happened to January 2nd?

"Skype for Business is trying to tell me something...but I'm not sure exactly what," writes Jeremy W.


"I was looking for a tactile switch. And yes, I absolutely do want an operating switch," writes Michael B.


Chris D. wrote, "While booking a hair appointment online, I found that the calendar on the website was a little confused as to how calendars work."


"Don't be fooled by the image on the left," wrote Dan D., "If you get caught in the line of fire, you will assuredly get soaked!"


Jonathan G. writes, "My local bar's Facebook ad shows that, depending on how the viewer frames it, even an error message can look appealing."


"I'll have to check my calendar - I may or may not have plans on the Nanth," wrote Brian.


[Advertisement] Easily create complex server configurations and orchestrations using both the intuitive, drag-and-drop editor and the text/script editor.  Find out more and download today!

Planet Linux AustraliaOpenSTEM: Australia Day in the early 20th century

Australia Day and its commemoration on 26 January, has long been a controversial topic. This year has seen calls once again for the date to be changed. Similar calls have been made for a long time. As early as 1938, Aboriginal civil rights leaders declared a “Day of Mourning” to highlight issues in the Aboriginal […]


Krebs on SecurityU.S. Arrests 13, Charges 36 in ‘Infraud’ Cybercrime Forum Bust

The U.S. Justice Department announced charges on Wednesday against three dozen individuals thought to be key members of ‘Infraud,” a long-running cybercrime forum that federal prosecutors say cost consumers more than a half billion dollars. In conjunction with the forum takedown, 13 alleged Infraud members from the United States and six other countries were arrested.

A screenshot of the Infraud forum, circa Oct. 2014. Like most other crime forums, it had special sections dedicated to vendors of virtually every kind of cybercriminal goods or services imaginable.

Started in October 2010, Infraud was short for “In Fraud We Trust,” and collectively the forum referred to itself as the “Ministry of Fraudulently [sic] Affairs.” As a mostly English-language fraud forum, Infraud attracted nearly 11,000 members from around the globe who sold, traded and bought everything from stolen identities and credit card accounts to ATM skimmers, botnet hosting and malicious software.

“Today’s indictment and arrests mark one of the largest cyberfraud enterprise prosecutions ever undertaken by the Department of Justice,” said John P. Cronan, acting assistant attorney general of the Justice Department’s criminal division. “As alleged in the indictment, Infraud operated like a business to facilitate cyberfraud on a global scale.”

The complaint released by the DOJ lists 36 Infraud members — some only by their hacker nicknames, others by their alleged real names and handles, and still others just as “John Does.” Having been a fairly regular lurker on Infraud over the past seven years who has sought to independently identify many of these individuals, I can say that some of these names and nick associations sound accurate but several do not.

The government says the founder and top member of Infraud was Svyatoslav Bondarenko, a hacker from Ukraine who used the nicknames “Rector” and “Helkern.” The first nickname is well supported by copies of the forum obtained by this author several years back; indeed, Rector’s profile listed him as an administrator, and Rector can be seen on countless Infraud discussion threads vouching for sellers who had paid the monthly fee to advertise their services in “sticky” threads on the forum.

However, I’m not sure the Helkern association with Bondarenko is accurate. In December 2013, just days after breaking the story about the theft of some 40 million credit and debit cards from retail giant Target, KrebsOnSecurity posted a lengthy investigation into the identity of “Rescator” — the hacker whose cybercrime shop was identified as the primary vendor of cards stolen from Target.

That story showed that Rescator changed his nickname from Helkern after Helkern’s previous cybercrime forum (Darklife) got massively hacked, and it presented clues indicating that Rescator/Helkern was a different Ukrainian man named Andrey Hodirevski. For more on that connection, see Who’s Selling Cards from Target.

Also, Rescator was a separate vendor on Infraud, and there are no indications that I could find suggesting that Rector and Rescator were the same people. Here is Rescator’s most recent sales thread for his credit card shop on Infraud — dated almost a year after the Target breach. Notice the last comment on that thread alleges that Rescator had recently been arrested and that his shop was being run by law enforcement officials: 

Another top administrator of Infraud used the nickname “Stells.” According to the Justice Department, Stells’ real name is Sergey Medvedev. The government doesn’t describe his exact role, but it appears to have been administering the forum’s escrow service (see screenshot below).

Most large cybercrime forums have an escrow service, which holds the buyer’s virtual currency until forum administrators can confirm the seller has consummated the transaction acceptably to both parties. The escrow feature is designed to cut down on members ripping one another off — but it also can add considerably to the final price of the item(s) for sale.

In April 2016, Medvedev would take over as the “admin and owner” of Infraud, after he posted a note online saying that Bondarenko had gone missing, the Justice Department said.

One defendant in the case, a well-known vendor of stolen credit and debit cards who goes by the nickname “Zo0mer,” is listed as a John Doe. But according to a New York Times story from 2006, Zo0mer’s real name is Sergey Kozerev, and he hails from St. Petersburg, Russia.

The indictments also list two other major vendors of stolen credit and debit cards: hackers who went by the nicknames “Unicc” and “TonyMontana” (the latter being a reference to the fictional gangster character played by Al Pacino in the 1983 movie Scarface). Both hackers have long operated their own carding shops, and both shops remain active to this day:

Unicc shop, which sells stolen credit card data as well as Social Security numbers and other consumer information that can be used for identity theft.

The government says Unicc’s real name is Andrey Sergeevich Novak. TonyMontana is listed in the complaint as John Doe #1.

TonyMontana’s carding shop.

Perhaps the most successful vendor of skimming devices made to be affixed to ATMs and fuel pumps was a hacker known on Infraud and other crime forums as “Rafael101.” Several of my early stories about new skimming innovations came from discussions with Rafael in which this author posed as an interested buyer and asked for videos, pictures and technical descriptions of his skimming devices.

A confidential source who asked not to be named told me a few years back that Rafael had used the same password for his skimming sales accounts on multiple competing cybercrime forums. When one of those forums got hacked, it enabled this source to read Rafael’s emails (Rafael evidently used the same password for his email account as well).

The source said the emails showed Rafael was ordering the parts for his skimmers in bulk from Chinese e-commerce giant Alibaba, and that he charged a significant markup on the final product. The source said Rafael had the packages all shipped to a Jose Gamboa in Norwalk, Calif — a suburb of Los Angeles. Sure enough, the indictment unsealed this week says Rafael’s real name is Jose Gamboa and that he is from Los Angeles.

A private message from the skimmer vendor Rafael101, sent on a competing cybercrime forum in 2012.

The Justice Department says the arrests in this case took place in Australia, France, Italy, Kosovo, Serbia, the United Kingdom and the United States. The defendants face a variety of criminal charges, including identity theft, bank fraud, wire fraud and money laundering. A copy of the indictment is available here.

CryptogramWater Utility Infected by Cryptocurrency Mining Software

A water utility in Europe has been infected by cryptocurrency mining software. This is a relatively new attack: hackers compromise computers and force them to mine cryptocurrency for them. This is the first time I've seen it infect SCADA systems, though.

It seems that this mining software is benign, and doesn't affect the performance of the hacked computer. (A smart virus doesn't kill its host.) But that's not going to always be the case.

Worse Than FailureCodeSOD: I Take Exception

We've all seen code that ignores errors. We've all seen code that simply rethrows an exception. We've all seen code that wraps one exception for another. The submitter, Mr. O, took exception to this exceptionally exceptional exception handling code.

I was particularly amused by the OutOfMemoryException handler that allocates another exception object, and if it fails, another layer of exception trapping catches that and attempts to allocate yet another exception object. If that fails, it doesn't even try. So that makes this an exceptionally unexceptional exception handler?! (ouch, my head hurts)

It contains a modest amount of fairly straightforward code to read config files and write assorted XML documents. And it handles exceptions in all of the above ways.

You might note that the exception handling code was unformatted, unaligned and substantially larger than the code it is attempting to protect. To help you out, I've stripped out the fairly straightforward code being protected, and formatted the exception handling code to make it easier to see this exceptional piece of code (you may need to go full-screen to get the full impact).

After all, it's not like exceptions can contain explanatory text, or stack context information...
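
(For contrast, here is a minimal sketch of the boring alternative: a single wrapper that preserves both an explanatory message and the original stack. TypeScript with the ES2022 "cause" option is used for brevity, and the names are hypothetical rather than from the submission.)

function loadSettings(path: string): unknown {
  try {
    // ... read and parse the config file ...
    return {};
  } catch (err) {
    // One wrapper suffices: the message says what failed, and the original
    // error, stack and all, rides along in `cause`.
    throw new Error(`failed to load settings from ${path}`, { cause: err });
  }
}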

namespace HotfolderMerger {
  public class Merger : IDisposable {
    public Merger() {
      try {
          object section = ConfigurationManager.GetSection("HFMSettings/DataSettings");
          if (section == null) throw new MergerSetupException();
          _settings = (DataSettings)section;
      } catch (MergerSetupException) {
      } catch (ConfigurationErrorsException ex){
        throw new MergerSetupException("Error in configuration", ex);
      } catch (Exception ex) {
        throw new MergerSetupException("Unexpected error while loading configuration", ex);
      }
    }

    // A whole bunch of regex about as complex as this one...
    private readonly Regex _fileNameRegex = new Regex(@"^(?<System>[A-Za-z0-9]{1,10})_(?<DesignName>[A-Za-z0-9]{1,})_(?<DocumentID>\d{1,})_(?<FileTimeUTC>\d{1,})(_(?<BAMID>\d+))?\.(?<extension>\w{0,3})$");

    public void MergeFiles() {
      try {
          foreach (FileElement filElement in _settings.Filelist) {
            // Lots of declarations here...
            foreach (FileInfo fi in hotfolder.GetFiles()) {
              try {
                  // 35 lines of innocuous code..
              } catch (ArgumentException ex) {
                throw new BasisException(ex, int.Parse(ErrorCodes.MergePreRunArgumentException),     ErrorMessages.MergePreRunArgumentException);
              } catch (ConfigurationException ex) {
                throw new BasisException(ex, int.Parse(ErrorCodes.MergePreRunConfigurationException),ErrorMessages.MergePreRunConfigurationException);
              } catch (Exception ex) {
                throw new UnexpectedMergerException("Unexpected exception while setting up for merge!", ex);
              }
              try {
                  // 23 lines of StreamReader code to load some XML from a file...
              } catch (OutOfMemoryException ex) {
                // OP: so if we're out of memory, how is this new exception going to be allocated? 
                //     Maybe in the wrapping "try/catch Exception" - which allocates a new UnexpectedMergerException object??? Oh, wait...
                throw new BasisException(  ex,int.Parse(ErrorCodes.MergeRunOutOfMemoryException),   ErrorMessages.MergeRunOutOfMemoryException);
              } catch (ConfigurationException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunConfigurationException),ErrorMessages.MergeRunConfigurationException);
              } catch (FormatException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunFormatException),       ErrorMessages.MergeRunFormatException);
              } catch (ArgumentException ex) { 
                throw new BasisException(    ex, int.Parse(ErrorCodes.MergeRunArgumentException),   ErrorMessages.MergeRunArgumentException);
              } catch (SecurityException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunSecurityException),     ErrorMessages.MergeRunSecurityException);
              } catch (IOException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunIOException),           ErrorMessages.MergeRunIOException);
              } catch (NotSupportedException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunNotSupportedException), ErrorMessages.MergeRunNotSupportedException);
              } catch (Exception ex) {
                throw new UnexpectedMergerException("Unexpected exception while merging!", ex);
              }
            // ... (loop bodies and closing braces elided)
      } catch (UnexpectedMergerException) {
      } catch (BasisException ex) {
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected error while attempting to parse settings prior to merge", ex);
      }
    }

    private static void prepareNewMergeFile(ref XmlTextWriter xtw, string filename, int numfiles) {
      if (string.IsNullOrEmpty(filename))
         throw new BasisException(    int.Parse(ErrorCodes.MergeSetupNullReferenceException),       ErrorMessages.MergeSetupNullReferenceException, "filename parameter was null or empty");
      try {
          // Use XmlTextWriter to concatenate ~30 lines of canned XML...
      } catch (InvalidOperationException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupInvalidOperationException),     ErrorMessages.MergeSetupInvalidOperationException);
      } catch (ArgumentException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupArgumentException),             ErrorMessages.MergeSetupArgumentException);
      } catch (IOException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupIOException),                   ErrorMessages.MergeSetupIOException);
      } catch (UnauthorizedAccessException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupUnauthorizedAccessException),   ErrorMessages.MergeSetupUnauthorizedAccessException);
      } catch (SecurityException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupSecurityException),             ErrorMessages.MergeSetupSecurityException);
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected exception while setting up for merge!", ex);
      }
    }

    private void closeMergeFile(ref XmlTextWriter xtw, ref List<FileInfo> filesComplete, string filename, double i) {
      if (xtw == null)
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeCleanupNullReferenceException, "xtw ref parameter was null");
      if (filesComplete == null)
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeSetupNullReferenceException,   "filesComplete ref parameter was null");
      if (string.IsNullOrEmpty(filename))
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeSetupNullReferenceException,   "filename parameter was null or empty");

      try {
          // ~ 30 lines of XmlTextWriter, StreamWriter and File IO...
      } catch (ArgumentException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupArgumentException),           ErrorMessages.MergeCleanupArgumentException);
      } catch (InvalidOperationException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupInvalidOperationException),   ErrorMessages.MergeCleanupInvalidOperationException);
      } catch (UnauthorizedAccessException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupUnauthorizedAccessException), ErrorMessages.MergeCleanupUnauthorizedAccessException);
      } catch (IOException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupIOException),                 ErrorMessages.MergeCleanupIOException);
      } catch (NullReferenceException ex) {
        throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeCleanupNullReferenceException, "unknown exception details");
      } catch (NotSupportedException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupNotSupportedException),       ErrorMessages.MergeCleanupNotSupportedException);
      } catch (MergerException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupMergerException),             ErrorMessages.MergeCleanupMergerException);
      } catch (SecurityException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupSecurityException),           ErrorMessages.MergeCleanupSecurityException);
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected exception while merging!", ex);
      }
    }
  }
}

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet Linux AustraliaRussell Coker: Thinkpad X1 Carbon

I just bought a Thinkpad X1 Carbon to replace my Thinkpad X301 [1]. It cost me $289 with free shipping from an eBay merchant, which is a great deal; a new battery for the Thinkpad X301 alone would have cost about $100.

It seems that laptops aren’t depreciating in value as much as they used to. Grays Online used to reliably have refurbished Thinkpads with manufacturer’s warranty selling for about $300. Now they only have IdeaPads (a cheaper low-end line from Lenovo) at good prices, admittedly $100 to $200 for an IdeaPad is a very nice deal if you want a cheap laptop and don’t need something too powerful. But if you want something for doing software development on the go then you are looking at well in excess of $400. So I ended up buying a second-hand system from an eBay merchant.


I was quite excited to read in the specs that it has an i7 CPU, but now that I have it I’ve discovered that the i7-3667U CPU scores 3990 according to passmark [2]. While that is much better than the U9400 in the Thinkpad X301 that scored 968, it’s only slightly better than the i5-2520M in my Thinkpad T420 that scored 3582 [3]. I bought the Thinkpad T420 in August 2013 [4], and I had hoped that Moore’s Law would result in me getting a system at least twice as fast as my last one. But buying second-hand meant I got a slower CPU. Also the small form factor of the X series limits the heat dissipation and therefore limits the CPU performance.


Thinkpads have traditionally had the best keyboards, but they are losing that advantage. This system has a keyboard that feels like an Apple laptop keyboard, not like a traditional Thinkpad. It still has the Trackpoint, which is a major feature if you like it (I do). The biggest downside is that they rearranged the keys. The PgUp/PgDn keys are now by the arrow keys; this could end up being useful if you like the SHIFT-PgUp/SHIFT-PgDn combinations used in the Linux VC and some Xterms like Konsole. But I like to keep my keys by the home keys and I can’t do that unless I use the little finger of my right hand for PgUp/PgDn. They also moved the Home, End, and Delete keys, which is really annoying. It’s not just that the positions are different to previous Thinkpads (including X series like the X301); they are different to desktop keyboards. So every time I move between my Thinkpad and a desktop system I need to change key usage.

Did Lenovo not consider that touch typists might use their products?

The keyboard moved the PrtSc key, and lacks ScrLk and Pause keys, but I hardly ever use the PrtSc key, and never use the other 2. The lack of those keys would only be of interest to people who have mapped them to useful functions and people who actually use PrtSc. It’s impractical to have a key as annoying to accidentally press as PrtSc between the Ctrl and Alt keys.

One significant benefit of the keyboard in this Thinkpad is that it has a backlight instead of having a light on the top of the screen that shines on the keyboard. It might work better than the light above the keyboard and looks much cooler! As an aside I discovered that my Thinkpad X301 has a light above the keyboard, but the key combination to activate it sometimes needs to be pressed several times.


X1 Carbon 1600*900
T420 1600*900
T61 1680*1050
X301 1440*900

Above are the screen resolutions for all my Thinkpads of the last 8 years. The X301 is an anomaly as I got it from a rubbish pile and it was significantly older than Thinkpads usually are when I get them. It’s a bit disappointing that laptop screen resolution isn’t increasing much over the years. I know some people have laptops with resolutions as high as 2560*1600 (as high as a high-end phone), but it seems that most laptops are below phone resolution.

Kogan is currently selling the Agora 8+ phone new for $239, including postage that would still be cheaper than the $289 I paid for this Thinkpad. There’s no reason why new phones should have lower prices and higher screen resolutions than second-hand laptops. The Thinkpad is designed to be a high-end brand, other brands like IdeaPad are for low end devices. Really 1600*900 is a low-end resolution by today’s standards, 1920*1080 should be the minimum for high-end systems. Now I could have bought one of the X series models with a higher screen resolution, but most of them have the lower resolution and hunting for a second hand system with the rare high resolution screen would mean missing the best prices.

I wonder if there’s an Android app to make a phone run as a second monitor for a Linux laptop, that way you could use a high resolution phone screen to display data from a laptop.

This display is unreasonably bright by default. So bright it hurt my eyes. The xbacklight program doesn’t support my display, but the command “xrandr --output LVDS-1 --brightness 0.4” sets the brightness to 40%. The Fn key combination to set brightness doesn’t work. Below a brightness of about 70% the screen looks grainy.


This Thinkpad has a 180G SSD that supports contiguous reads at 500MB/s. It has 8G of RAM which is the minimum for a usable desktop system nowadays and while not really fast the CPU is fast enough. Generally this is a nice system.

It doesn’t have an Ethernet port, which is really annoying. Now I have to pack a USB Ethernet device whenever I go anywhere. It also has mini-DisplayPort as the only video connector; as that is almost never available at a conference venue (VGA and HDMI are the common ones), I’ll have to pack an adaptor when I give a lecture. It also only has 2 USB ports, while the X301 has 3. I know that not having HDMI, VGA, and Ethernet ports allows designing a thinner laptop. But I would be happier with a slightly thicker laptop that has more connectivity options. The Thinkpad X301 has about the same mass, is only slightly thicker, and has all those ports. I blame Apple for starting this trend of laptops lacking IO options.

This might be the last laptop I own that doesn’t have USB-C. Currently not having USB-C is not a big deal, but devices other than phones supporting it will probably be released soon and fast phone charging from a laptop would be a good feature to have.

This laptop has no removable battery. I don’t know if it will be practical to replace the battery if the old one wears out. But given that replacing the battery may be more than the laptop is worth this isn’t a serious issue. One significant issue is that there’s no option to buy a second battery if I need to have it run without mains power for a significant amount of time. When I was travelling between Australia and Europe often I used to pack a second battery so I could spend twice as much time coding on the plane. I know it’s an engineering trade-off, but they did it with the X301 and could have done it again with this model.


This isn’t a great laptop. The X1 Carbon is described as a flagship for the Thinkpad brand and the display is letting down the image of the brand. The CPU is a little disappointing, but it’s a trade-off that I can deal with.

The keyboard is really annoying and will continue to annoy me for as long as I own it. The X301 managed to fit a better keyboard layout into the same space, there’s no reason that they couldn’t have done the same with the X1 Carbon.

But it’s great value for money and works well.

Geek FeminismBringing the blog to a close

We’re bringing the Geek Feminism blog to a close.

First, some logistics; then some reasons and reminiscences; then, some thanks.


The site will still be up for at least several years, barring Internet catastrophe. We won’t post to it anymore and comments will be closed, but we intend to keep the archives up and available at their current URLs, or to have durable redirects from the current URLs to the archive.

This doesn’t affect the Geek Feminism wiki, which will keep going.

There’s a Twitter feed and a Facebook page; after our last blog post, we won’t post to those again.

We don’t have a definite date yet for when we’ll post for the last time. It’ll almost certainly be this year.

I might add to this, or post in the comments, to add stuff. And this isn’t the absolute last post on the blog; it’d be nice to re-run a few of our best-of posts, for instance, like the ones Tim Chevalier linked to here. We’re figuring that out.

Reasons and reminiscences

Alex Bayley and a bunch of their peers — myself included — started posting on this blog in 2009. We coalesced around feminist issues in scifi/fantasy fandom, open culture projects like Wikipedia, gaming, the sciences, the tech industry and open source software development, Internet culture, and so on. Alex gave a talk at Open Source Bridge 2014 about our history to that point, and our meta tag has some further background on what we were up to over those years.

You’ve probably seen a number of these kinds of volunteer group efforts end. People’s lives shift, our priorities change as we adapt to new challenges, and so on. And we’ve seen the birth or growth of other independent media; there are now quite a lot of places to go for a feminist take on the issues I mentioned.

We did some interesting, useful, and cool stuff for several years; I try to keep myself from dwelling too much in the sad half of “bittersweet” by thinking of the many communities that have already been carrying on without waiting for us to pass any torches.


Thanks of course to all our contributors, past and present, and those who provided the theme, logo, and technical support and built or provided infrastructure, social and digital and financial, for this blog. Thanks to our readers and commenters. Thanks to everyone who did neat stuff for us to write about. And thanks to anyone who used things we said to go make the world happier.

More later; thanks.

Sociological ImagesBeyond Racial Binaries: How ‘White’ Latinos Can Experience Racism

Recent reports indicated that FEMA was cutting, and then not cutting, hurricane relief aid to Puerto Rico. When Donald Trump recently slandered Puerto Ricans as lazy and too dependent on aid after Hurricane Maria, Fox News host Tucker Carlson stated that Trump’s criticism could not be racist because “Puerto Rico is 75 percent white, according to the U.S. Census.”

Photo Credit: Coast Guard News, Flickr CC

This statement presents racism as a false choice between nonwhite people who experience racism and white people who don’t. It ignores the fact that someone can be classed as white by one organization but treated as non-white by another, due to the way ‘race’ is socially constructed across time, regions and social contexts.

Whiteness for Puerto Ricans is a contradiction. Racial labels that developed in Puerto Rico were much more fluid than on the U.S. mainland, with at least twenty categories. But the island came under U.S. rule at the height of American nativism and biological racism, which relied on a dichotomy between a privileged white race and a stigmatized black one that was designed to protect the privileges of slavery and segregation. So the U.S. portrayed the islanders with racist caricatures in cartoons like this one:

Clara Rodriguez has shown how Puerto Ricans who migrated to the mainland had to conform to this white-black duality that bore no relation to their self-identifications. The Census only gave two options, white or non-white, so respondents who would have identified themselves as “indio, moreno, mulato, prieto, jabao, and the most common term, trigueño (literally, ‘wheat-colored’)” chose white by default, simply to avoid the disadvantage and stigma of being seen as black bodied.

Choosing the white option did not protect Puerto Ricans from discrimination. Those who came to the mainland to work in agriculture found themselves cast as ‘alien labor’ despite their US citizenship. When the federal government gave loans to white home buyers after 1945, Puerto Ricans were usually excluded on zonal grounds, being subjected to ‘redlining’ alongside African Americans. Redlining was also found to be operating on Puerto Rico itself in the insurance market as late as 1998, suggesting it may have even contributed to the destitution faced by islanders after natural disasters.

The racist treatment of Puerto Ricans shows how it is possible to “be white” without white privilege. There have been historical advantages in being “not black” and “not Mexican”, but they have not included the freedom to seek employment, housing and insurance without fear of exclusion or disadvantage. When a hurricane strikes, Puerto Rico finds itself closer to New Orleans than to Florida.

An earlier version of this post appeared at History News Network

Jonathan Harrison, PhD, is an adjunct Professor in Sociology at Florida Gulf Coast University, Florida SouthWestern State College and Hodges University whose PhD was in the field of racism and antisemitism.



Worse Than FailureCodeSOD: How To Creat Socket?

JR earned a bit of a reputation as the developer who could solve anything. Like most reputations, this was worse than it sounded, and it meant he got the weird heisenbugs. The weirdest and the worst heisenbugs came from Gerry, a developer who had worked for the company for many, many years, and left behind many, many landmines.

Once upon a time, in those Bad Old Days, Gerry wrote a C++ socket-server. In those days, the socket-server would crash any time there was an issue with network connectivity. Crashing services were bad, so Gerry “fixed” it. Whatever Gerry did fell into the category of “good enough”, but it had one problem: after any sort of network hiccup, the server wouldn’t crash, but it would take a very long time to start servicing requests again. Long enough that other processes would sometime fail. It was infrequent enough that the bug had stuck around for years, but finally, someone wanted Something Done™.

JR got Something Done™, and he did it by looking at the CreatSocket method, buried deep in a "God" class of 15,000 lines.

void UglyClassThatDidEverything::CreatSocket() {
    while (true) {
        try {
            m_pSocket = new Socket((ip + ":55043").c_str());
            if (m_pSocket != null) {
                //"Creat socket");
                return;
            } else {
                //"Creat socket failed");
                // usleep(1000);
                // sleep(1);
                sleep(5);
            }
        } catch (...) {
            if (m_pSocket == null) {
                //"Creat socket failed");
                sleep(5);
                CreatSocket();
                sleep(5); // the "extra" nap, even after the recursive call succeeds
            }
        }
    }
}
The try portion of the code provides an… interesting take on handling socket creation. Create a socket, and grab a handle. If you don’t get a socket for some reason, sleep for 5 seconds, and then the infinite while loop means that it’ll try again. Eventually, this will hopefully get a socket. It might take until the heat death of the universe, or at least until the half-created-but-never-cleaned-up sockets consume all the filehandles on the OS, but eventually.

Unless of course, there’s an exception thrown. In that case, we drop down into the catch, where we sleep for 5 seconds, and then call CreatSocket recursively. If that succeeds, we still have that extra call to sleep which guarantees a little nap, presumably to congratulate ourselves for finally creating a socket.

JR had a simple fix for this code: burn it to the ground and replace it with a more normal approach to creating sockets. Unfortunately, management was a bit gun-shy about making any major changes to Gerry’s work. That recursive call might be more important than anyone imagined.
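For the record, a “more normal approach” would presumably look something like the sketch below: a bounded retry loop, a backoff only on the failure path, and no recursion. This is a sketch under assumptions, not JR’s actual fix; the real Socket wrapper’s API never appears in the article, so the constructor here is assumed to throw on failure.

#include <stdexcept>
#include <string>
#include <unistd.h>

// Hypothetical stand-in for the original wrapper class, which isn't shown.
// Assumption: the constructor throws if the socket cannot be created.
class Socket {
public:
    explicit Socket(const char* endpoint);
};

Socket* createSocketWithRetry(const std::string& endpoint, int maxAttempts = 10) {
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        try {
            return new Socket(endpoint.c_str()); // success: return at once, no nap
        } catch (...) {
            sleep(5); // back off only on failure, then let the bounded loop retry
        }
    }
    throw std::runtime_error("could not create socket after retries");
}

Called as createSocketWithRetry(ip + ":55043"), this surfaces a persistent failure after a bounded number of attempts instead of looping until the heat death of the universe, and a success never sleeps at all.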

JR had a simpler, if stupider fix: remove the final call to sleep(5) after creating the socket in the exception handler. It wouldn’t make this code any less terrible, but it would mean that it wouldn’t spend all that time waiting to proceed even after it had created a socket, thus solving the initial problem: that it takes a long time to recover after failure.

Unfortunately, management balked at removing a line of code. “It wouldn’t be there if it weren’t important. Instead of removing it, can you just comment it out?”

JR commented it out, closed VIM, and hoped never to touch this service again.


CryptogramSensitive Super Bowl Security Documents Left on an Airplane

A CNN reporter found some sensitive -- but, technically, not classified -- documents about Super Bowl security in the front pocket of an airplane seat.


TEDThe Big Idea: How to find and hire the best employees

So, you want to hire the best employee for the job? Or perhaps you’re the employee looking to be hired. Here’s some counterintuitive and hyper-intuitive advice that could get the right foot in the door.

Expand your definition of the “right” resume

Here’s the hypothetical situation: a position opens up at your company, applications start rolling in and qualified candidates are identified. Who do you choose? Person A: Ivy League, flawless resume, great recommendations — or Person B: state school, fair amount of job hopping, with odd jobs like cashier and singing waitress thrown in the mix. Both are qualified — but have you already made up your mind?

Well, you might want to take a second look at Person B.

Human resources executive Regina Hartley describes these candidates as “The Silver Spoon” (Person A), the one who clearly had advantages and was set up for success, and “The Scrapper” (Person B), who had to fight tremendous odds to get to the same point.

“To be clear, I don’t hold anything against the Silver Spoon; getting into and graduating from an elite university takes a lot of hard work and sacrifice,” she says. But if it so happens that someone’s whole life has been engineered toward success, how will that person handle the low times? Do they seem like they’re driven by passion and purpose?


Take this resume. This guy never finishes college. He job-hops quite a bit, goes on a sojourn to India for a year, and to top it off, he has dyslexia. Would you hire this guy? His name is Steve Jobs.

That’s not to say every person who has a similar story will ultimately become Steve Jobs, but it’s about extending opportunity to those whose lives have resulted in transformation and growth. Companies that are committed to diversity and inclusive practices tend to support Scrappers and outperform their peers. According to DiversityInc, its top 50 companies for diversity outperformed the S&P 500 by 25 percent.

(Check out Regina’s TED Talk: Why the best hire might not have the perfect resume for more advice and a fantastic suggested reading list full of helpful resources.)

Shake up the face-to-face time

Once you choose candidates to meet in-person, scrap that old hand-me-down list of interview questions — or if you can’t simply toss them, think about adding a couple more.


Generally, these conversations ping-pong between two basic questions: one of competency or one of character. To identify the candidates who have substance and not just smarts, business consultant Anthony Tjan recommends that interviewers ask these five questions to illuminate not just skills and abilities, but intrinsic values and personality traits too.

  1. What are the one or two traits from your parents that you most want to ensure you and your kids have for the rest of your life? A rehearsal is not the result you want. This question calls for a bit more thought on the applicant’s end and sheds light on the things they most value. After hearing the person’s initial response, Tjan says you should immediately follow up with “Can you tell me more?” This is essential if you want to elicit an answer with real depth and substance.
  2. What is 25 times 25? Yes, it sounds ridiculous, but trust us — the math adds up. It puts the candidate under real-time pressure, and their response can show you how they’ll approach challenging or awkward situations. “It’s about whether they can roll with the embarrassment and discomfort and work with me. When a person is in a job, they’re not always going to be in situations that are in their alley,” he says.
  3. Tell me about three people whose lives you positively changed. What would they say if I called them tomorrow? If a person can’t think of a single person, that may say a lot, whatever the role you’re trying to fill. Organizations need employees who can lift each other up. When a person is naturally inclined toward compassionate mentorship, it can have a domino effect in an institution.
  4. After an interview, ask yourself (and other team members, if relevant) “Can I imagine taking this person home with me for the holidays?” This may seem overly personal (because, yes, it is), but you’ll most likely trigger a gut reaction.
  5. After an interview, ask security or the receptionist: “How was the candidate’s interaction with you?” How a person treats someone they don’t feel they need to impress is important and telling. It speaks to whether they act with compassion and openness and view others as equals.

(Maybe ask them if they’ve played a lot of Dungeons & Dragons?)

The New York Times’ Adam Bryant suggests getting away from the standard job interview entirely. Reject the played-out choreography — the conference room, the resume, the “Where do you want to be in five years?” — and feel free to shake it up. Instead, get up and move about to observe how they behave in (and out of) the workplace wild.

Take them on a tour of the office (if you can’t take them out for a meal), he proposes, and if you feel so inclined, introduce them to some colleagues. Shake off that stress, walk-and-talk (as TED speaker Nilofer Merchant also advises) and most important, pay attention!

Are they curious about how everything happens? Do they show interest in what your colleagues do? These markers could be the difference between someone you work with and someone you want to work with. Monster has a series of good questions to ask yourself after meeting potential candidates.

Ultimately, Tjan and Bryant seem to agree, the art of the interview is a tricky but not impossible balance to strike.

Hire for your company’s values, not its internal culture

Culture fit is important, of course, but it can also be used as a shield. The bottom line: hire for diversity — in all its forms.

There’s a chance you may be tired of reading about diversity and inclusion, that you get the point and we don’t need to keep addressing it. Well, tough. Suck it up. Because we do need to talk about it until there’s literally no need to talk about it, until this fundamental issue becomes an overarching non-issue (and preferably before we all sink into the sea). This is a concept that can’t just exist in science-fictional universes.

Example A: a sci-fi universe featuring a group of people that could be seen working together in a non-fictional universe.


MIT Media Lab director Joi Ito and writer Jeff Howe explain that the best way to prepare for a future of unknown complexity is to build on the strength of our differences. Race, gender, sexual orientation, socioeconomic background and disciplinary training are all important, as are life experiences that produce cognitive diversity (aka different ways of thinking).

Thanks to an increasing body of research, diversity is becoming a strategic imperative for schools, firms and other institutions. It may be good politics and good PR and, depending on an individual’s commitment to racial and gender equity, good for the soul, say Ito and Howe. But in an era in which your challenges are likely to feature maximum complexity as well, it’s simply good management — which marks a striking departure from an age when diversity was presumed to come at the expense of ability.

As TED speaker Mellody Hobson (TED Talk: Color blind or color brave?) says: “I’m actually asking you to do something really simple.  I’m asking you to look at the people around you purposefully and intentionally. Invite people into your life who don’t look like you, don’t think like you, don’t act like you, don’t come from where you come from, and you might find that they will challenge your assumptions.”

So, in conclusion, go out and hire someone and give them the opportunity to change the world. Or at least, give them the opportunity to prove that they have the wherewithal to change something for the better.



Krebs on SecurityWould You Have Spotted This Skimmer?

When you realize how easy it is for thieves to compromise an ATM or credit card terminal with skimming devices, it’s difficult not to inspect or even pull on these machines when you’re forced to use them personally — half expecting something will come detached. For those unfamiliar with the stealth of these skimming devices and the thieves who install them, read on.

Police in Lower Pottsgrove, PA are searching for a pair of men who’ve spent the last few months installing card and PIN skimmers at checkout lanes inside of Aldi supermarkets in the region. These are “overlay” skimmers, in that they’re designed to be installed in the blink of an eye just by placing them over top of the customer-facing card terminal.

The top of the overlay skimmer models removed from several Aldi grocery store locations in Pennsylvania over the past few months.

The underside of the skimmer hides the brains of this little beauty, which is configured to capture the personal identification number (PIN) of shoppers who pay for their purchases with a debit card. That likely describes a great many loyal Aldi customers: the discount grocery chain only started accepting credit cards in 2016, and previously took only cash, debit cards, SNAP, and EBT cards.

The underside of this skimmer found at Aldi is designed to record PINs.

The Lower Pottsgrove police have been asking local citizens for help in identifying the men spotted on surveillance cameras installing the skimming devices, noting that multiple victims have seen their checking accounts cleaned out after paying at compromised checkout lanes.

Local police released the following video footage showing one of the suspects installing an overlay skimmer exactly like the one pictured above. The man is clearly nervous and fidgety with his feet, but the cashier can’t see his little dance and certainly doesn’t notice the half second or so that it takes him to slip the skimming device over top of the payment terminal.

I realize a great many people use debit cards for everyday purchases, but I’ve never been interested in assuming the added risk and so pay for everything with cash or a credit card. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

The Lower Pottsgrove Police have been admonishing people for blaming Aldi for the incidents, saying the thieves are extremely stealthy and that this type of crime could hit virtually any grocery chain.

While Aldi payment terminals in the United States are capable of accepting more secure chip-based card transactions, the company has yet to enable chip payments (although it does accept mobile contactless payment methods such as Apple Pay and Google Pay). This is important because these overlay skimmers are designed to steal card data stored on the magnetic stripe when customers swipe their cards.

However, many stores that have chip-enabled terminals are still forcing customers to swipe the stripe instead of dip the chip.

Want to learn more about self-checkout skimmers? Check out these other posts:

How to Spot Ingenico Self-Checkout Skimmers

Self-Checkout Skimmers Go Bluetooth

More on Bluetooth Ingenico Overlay Skimmers

Safeway Self-Checkout Skimmers Up Close

Skimmers Found at Wal-Mart: A Closer Look

Worse Than FailureFor Want of a CR…

A few years ago I was hired as an architect to help design some massive changes to a melange of existing systems so that a northern foreign bank could meet some new regulatory requirements. For a development team, they gave me one junior developer with almost a year of experience. There were very few requirements, and most of it would be guesswork to fill in the blanks. OK, typical Wall Street BS.

Horseshoe nails, because 'for want of a nail, the shoe was lost…'

The junior developer was, well, junior, but bright, and he remembered what you taught him, so there was a chance we could succeed.

The setup was that what little requirements there were would come from the Almighty Project Architect down to me and a few of my peers. We would design our respective pieces in as generic a way as possible, and then oversee and help with the coding.

One day, my boss+1 had my boss assign the junior guy to develop a web service, something the guy had never done before. Since I was busy, it was deemed unnecessary to tell me about it. The guy Googled a bit and put something together. However, he was unsure of how the response should be sent back to the browser (e.g., what sort of line endings to use) and admitted he had questions. Our boss said not to worry about it and had him install it on the dev server so boss+1 could demo it to users.
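For context on the kind of detail that bit him: HTTP/1.1 is specific about line endings. Header lines end with CRLF ("\r\n"), and a lone CRLF terminates the header section before the body. The actual service code never appears in this story, so the following is purely an illustrative sketch of a well-formed response:

#include <string>

// Illustrative sketch only; not the code from the story.
// HTTP/1.1 header lines end with CRLF, and a blank CRLF line
// separates the headers from the body.
std::string buildResponse(const std::string& body) {
    std::string resp;
    resp += "HTTP/1.1 200 OK\r\n";
    resp += "Content-Type: text/plain\r\n";
    resp += "Content-Length: " + std::to_string(body.size()) + "\r\n";
    resp += "\r\n"; // end of headers
    resp += body;
    return resp;
}

Send bare "\n" where a client expects "\r\n", or vice versa, and the output can render in subtly ugly ways, which is exactly the class of bug a five-minute review would have caught.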

Demo time came, and the resulting output lines needed an extra newline between them to make the output look nice.

The boss+1 was incensed and started telling the users and other teams that our work was crap, inferior and not to be trusted.


When this got back to me, I went to have a chat with him about a) going behind my back and leaving me entirely out of the loop, b) having a junior developer do something in an unfamiliar technology and then deploying it without having someone more experienced even look at it, c) running his mouth with unjustified caustic comments ... to the world.

He was not amused and informed us that the work should be perfect every time! I pointed out that while everyone strives for just that, his was an unreasonable expectation, and one that didn’t do much to foster team morale or cooperation.

This went back and forth for a while until I decided that this idiot simply wasn't worth my time.

A few days later, I heard one of my peers having the same conversation with our boss+1. A few days after that, it was someone else. Each time, the architect had been bypassed, a junior developer had missed something, and it was always some ridiculously trivial facet of the implementation.

I got together with my peers and discussed possibly instituting mandatory testing - by US - to prevent them from bypassing us to get junior developers to do stuff and then having it thrown into a user-visible environment. We agreed, and were promptly overruled by boss+1. Apparently, all programmers, even juniors, were expected to produce perfect code (even without requirements) every time, without exception, and anyone who couldn't cut it should be exposed as incompetent.

We just shot each other the expected Are you f'g kidding me? looks.

After a few weeks of this, we had all had enough of the abuse and went to boss+2, who was totally uninterested.

We all found other jobs, and made sure to bring the better junior devs with us.
