Planet Russell


Cryptogram: Apple to Store Encryption Keys in China

Apple is bowing to pressure from the Chinese government and storing encryption keys in China. While I would prefer that it take a stand against China, I really can't blame it for putting its business model ahead of its desire for customer privacy.

Two more articles.

Worse Than Failure: CodeSOD: The Part Version

Once upon a time, there was a project. Like most projects, it was understaffed, under-budgeted, under-estimated, and under the gun. Death marches ensued, and 80-hour weeks became the norm. The attrition rate was so high that no one who was there at the start of the project was there at the end. Like the Ship of Theseus, each person was replaced at least once, but it was still the same team.

Eric wasn’t on that team. He was, however, a consultant. When the project ended and nothing worked, Eric got called in to fix it. And then called back to fix it some more. And then called back to implement new features. And called back…

While diagnosing one problem, Eric stumbled across the method getPartVersions. A part number was always something like “123456-1”, where the first group of digits was the part number itself, and the portion after the “-” was the version of that part.

So, getPartVersions, then, should be something like:

String getPartVersions(String part) {
    //sanity checks omitted
    return part.split("-")[1];
}

The first hint that things weren’t implemented in a sane way was the method’s signature:

    private List<Integer> getPartVersions(final String searchString)

Why was it returning a list? The calling code always used the first element in the list, and the list was always one element long.

    private List<Integer> getPartVersions(final String searchString) {
        final List<Integer> partVersions = new ArrayList<>();
        if (StringUtils.indexOfAny(searchString, DELIMITER) != -1) {
            final String[] splitString = StringUtils.split(searchString, DELIMITER);
            if (splitString != null && splitString.length > 1) {
                //this is the partIdentifier, we make it empty it so it will not be parsed as a version
                splitString[0] = "";
                for (String s : splitString) {
                    s = s.trim();
                    try {
                        if (s.length() <= 2) {
                            partVersions.add(Integer.parseInt(s));
                        }
                    } catch (final NumberFormatException ignored) {
                        //Do nothing probably not an partVersion
                    }
                }
            }
        }
        return partVersions;
    }

A part number is always in the form “{PART}-{VERSION}”. That is what the variable searchString should contain. So they do their basic sanity checks: is there a dash, does it split into two pieces, and so on. Even these sanity checks hint at a WTF, as StringUtils is obviously just a wrapper around built-in string functions.

Things get really odd, though, with this:

                splitString[0] = "";
                for (String s : splitString) //…

Throw away the part number, then iterate across the entire series of strings we made by splitting. Check the length: if it’s less than or equal to two, it must be the part version. Parse it into an integer and put it in the list. The real “genius” element of this code is that since the first entry in the splitString array is set to an empty string, Integer.parseInt will throw an exception, thus ensuring we don’t accidentally put the part number in our list.

I’ve personally written methods that have this sort of tortured logic, and given what Eric tells us about the history of the project, I suspect I know what happened here. This method was written before the requirement it fulfilled was finalized. No one, including the business users, actually knew the exact format or structure of a part number. The developer got five different explanations, which turned out to be wrong in 15 different ways, and implemented a compromise that just kept getting tweaked until someone looked at the results and said, “Yes, that’s right.” The dev then closed out the requirement and moved on to the next one.

Eric left the method alone: he wasn’t being paid to refactor things, and too much downstream code depended on the method signature returning a List<Integer>.
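If it ever does get refactored, a signature-preserving rewrite could be as small as this sketch (Collections comes from java.util; the empty-list fallback for malformed input is an assumption, as is keeping the original StringUtils/DELIMITER plumbing):

private List<Integer> getPartVersions(final String searchString) {
    // "123456-1" -> ["123456", "1"]
    final String[] pieces = StringUtils.split(searchString, DELIMITER);
    if (pieces == null || pieces.length < 2) {
        return new ArrayList<>(); // no version suffix present
    }
    // Preserve the existing contract: a one-element list holding the version.
    // A malformed version still throws NumberFormatException, here visibly.
    return Collections.singletonList(Integer.valueOf(pieces[1].trim()));
}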


Planet Debian: Jan Wagner: Deploying a (simple) docker container system

When a small platform for shipping containers is needed (nothing like Kubernetes), there are a couple of common things you might want to deploy first.

Usual things that I have to roll out every time I deploy such a platform:

Bootstrapping docker and docker-compose

Most services are built from multiple containers. A useful tool for doing this is docker-compose, where you can describe your whole 'application'. So we need to deploy it beside docker itself.
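A minimal bootstrap on a fresh host might look like this sketch (the docker-compose version number is only an example; use whatever release is current):

# install docker via the official convenience script
curl -fsSL https://get.docker.com | sh

# fetch a docker-compose release binary and make it executable
curl -L "https://github.com/docker/compose/releases/download/1.19.0/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose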

Deploying Watchtower

An essential operational part is to keep your container images up to date.

Watchtower is an application that will monitor your running Docker containers and watch for changes to the images that those containers were originally started from. If watchtower detects that an image has changed, it will automatically restart the container using the new image.
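A sketch of a docker-compose service for it (the v2tec/watchtower image name and the mounted Docker socket path reflect how Watchtower was commonly run at the time; treat them as assumptions):

version: '2'
services:
  watchtower:
    image: v2tec/watchtower
    restart: always
    volumes:
      # Watchtower talks to the Docker daemon through its socket
      - /var/run/docker.sock:/var/run/docker.sock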

Deploying http(s) reverse proxy Træfik

If you want to provide multiple (web)services on ports 80 and 443, you have to think about how this should be solved. Usually you would use an http(s) reverse proxy; there are many software implementations available.
The challenging part in such an environment is that services may appear and disappear frequently. (Re)configuration of the proxy service is the gap that needs to be closed.

Træfik (pronounced like traffic) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease [...] to manage its configuration automatically and dynamically.

Træfik has many interesting features, for example 'Let's Encrypt support (Automatic HTTPS with renewal)'.
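A minimal sketch of running it with the Docker backend (Træfik 1.x syntax; the domain, image tags and example service are placeholders):

version: '2'
services:
  traefik:
    image: traefik:1.5
    command: --docker --docker.domain=example.com
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Træfik watches Docker events to reconfigure itself on the fly
      - /var/run/docker.sock:/var/run/docker.sock

  whoami:
    image: emilevauge/whoami
    labels:
      # routing rule picked up dynamically by Træfik
      - "traefik.frontend.rule=Host:whoami.example.com"

Services then come and go with their labels, and the proxy follows without a restart.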


Planet Debian: John Goerzen: Emacs #1: Ditching a bunch of stuff and moving to Emacs and org-mode

I’ll admit it. After over a decade of vim, I’m hooked on Emacs.

I’ve long had this frustration over how to organize things. I’ve followed approaches like GTD and ZTD, but things like email or large files are really hard to organize.

I had been using Asana for tasks, Evernote for notes, Thunderbird for email, a combination of ikiwiki and some other items for a personal knowledge base, and various files in an archive directory on my PC. When my new job added Slack to the mix, that was finally the last straw.

A lot of todo-management tools integrate with email — poorly. When you want to do something like “remind me to reply to this in a week”, a lot of times that’s impossible because the tool doesn’t store the email in a fashion you can easily reply to. And that problem is even worse with Slack.

It was right around then that I stumbled onto Carsten Dominik’s Google Talk on org-mode. Carsten was the author of org-mode, and although the talk is 10 years old, it is still highly relevant.

I’d stumbled across org-mode before, but each time I didn’t really dig in because I had the reaction of “an outliner? But I need a todo list.” Turns out I was missing out. org-mode is all that.

Just what IS Emacs? And org-mode?

Emacs grew up as a text editor. It still is, and that heritage is definitely present throughout. But to say Emacs is an editor would be rather unfair.

Emacs is something more like a platform or a toolkit. Not only do you have source code to it, but the very configuration is a program, and there are hooks all over the place. It’s as if it was super easy to write a Firefox plugin. A couple lines, and boom, behavior changed.

org-mode is very similar. Yes, it’s an outliner, but that’s not really what it is. It’s an information organization platform. Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.”


If you’ve ever read productivity guides based on GTD, one of the things they stress is effortless capture of items. The idea is that when something pops into your head, get it down into a trusted system quickly so you can get on with what you were doing. org-mode has a capture system for just this. I can press C-c c from anywhere in Emacs, and up pops a spot to type my note. But, critically, automatically embedded in that note is a link back to what I was doing when I pressed C-c c. If I was editing a file, it’ll have a link back to that file and the line I was on. If I was viewing an email, it’ll link back to that email (by Message-Id, no less, so it finds it in any folder). Same for participating in a chat, or even viewing another org-mode entry.
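The wiring for that is small. A sketch (the keybinding is the one the org manual suggests; the inbox.org path and the template itself are placeholders):

(global-set-key (kbd "C-c c") 'org-capture)

(setq org-capture-templates
      '(("t" "Todo" entry (file+headline "~/org/inbox.org" "Tasks")
         ;; %a expands to a link back to wherever capture was invoked:
         ;; a file position, an email (by Message-Id), a chat buffer...
         "* TODO %?\n  %U\n  %a")))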

So I can make a note that will remind me in a week to reply to a certain email, and when I click the link in that note, it’ll bring up the email in my mail reader — even if I subsequently archived it out of my inbox.

YES, this is what I was looking for!

The tool suite

Once you’re using org-mode, pretty soon you want to integrate everything with it. There are browser plugins for capturing things from the web. Multiple Emacs mail or news readers integrate with it. ERC (IRC client) does as well. So I found myself switching from Thunderbird and mairix+mutt (for the mail archives) to mu4e, and from xchat+slack to ERC.

And wouldn’t you know it, I liked each of those Emacs-based tools better than the standalone tools they replaced.

A small side tidbit: I’m using OfflineIMAP again! I even used it with GNUS way back when.

One Emacs process to rule them

I used to use Emacs extensively, way back. Back then, Emacs was a “large” program. (Now my battery status applet literally uses more RAM than Emacs). There was this problem of startup time back then, so there was a way to connect to a running Emacs process.

I like to spawn programs with Mod-p (an xmonad shortcut to a dzen menubar, but Alt-F2 in more traditional DEs would do the trick). It’s convenient to not run several emacsen with this setup, so you don’t run into issues with trying to capture to a file that’s open in another one. The solution is very simple: I created a script, named it em, and put it on my path. All it does is this:

#!/bin/bash
exec emacsclient -c -a "" "$@"

It creates a new emacs process if one doesn’t already exist; otherwise, it uses what you’ve got. A bonus here: parameters such as -nw work just fine, so it really acts just as if you’d typed emacs at the shell prompt. It’s a suitable setting for EDITOR.
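For instance (assuming the em wrapper above is on your PATH):

export EDITOR=em
# or, to stay in the terminal:
export EDITOR="em -nw"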

Up next…

I’ll be talking about my use of, and showing off configurations for:

  • org-mode, including syncing between computers, capturing, agenda and todos, files, linking, keywords and tags, various exporting (slideshows), etc.
  • mu4e for email, including multiple accounts, bbdb integration
  • ERC for IRC and IM

Planet Debian: Renata D'Avila: Woman. Not in tech.

Thank you, Livia Gabos, for helping me to improve this article by giving me feedback on it.

Before I became an intern with Outreachy, my Twitter bio read: "Woman. Not in tech." Well, if you didn't get the picture, let me explain what that meant.

It all began with a simple request I received almost a year ago:

Hey, do you want to join our [company] event and give a talk about being a women in tech?

I don't have a job in the tech industry. So, yes, while society does put me in the 'woman' column, I have to admit it's a little hard to give a talk about being 'in tech' when I'm not 'in tech'.

What I can talk about, though, is all the women who are not in tech. The many, many friends I have who come to Women in Tech events and meetings, who reach out to me by e-mail, Twitter or even in person, who are struggling to get into tech.

I can talk about the only other girl in my class who, besides me, managed to get an internship. And how we both only got the position because we had passed a written exam about informatics, instead of going through the usual channels such as referrals, CV analysis or interviews.

I can talk about the women who are seen as lazy, or as just not getting the lessons in tech courses, because they don't have the same background and the same amount of time available to study or to do homework at home as their male peers do, since they have to take care of relatives, take care of children, take care of the housework for their family, most of the time while working one or two jobs just to be able to study.

I can talk about the women and the mothers who, after many years of being denied the possibility of a tech career, are daring to change paths, but are denied junior positions in favor of younger men who "can be trained on the job" and have "so much more willingness to learn".

I can talk about the women who are seen as uninterested in one or more FLOSS technologies because they don't contribute to said technology, since the men in FLOSS projects have continuously failed to engage and - most importantly - keep them included (but maybe that's just because women lack role models).

A screenshot of the proposal made by the Brazilian community for DebConf19. Even though it lists a lot of women in tech groups, the all-male organizing team says "There is an expectation that the coming of women DDs may spark the interest of these female students by Debian." Even though there are so many Women in Tech communities in Curitiba, as listed above, the all-male 'core team' of the local Debian community itself couldn't find a single woman to work with them for the DebConf proposal. Go figure.

I can talk about the many women I met not at tech conferences, but at teachers' conferences, who have way more experience with computers and programming than I do. Women who, after years working in the field, have given up IT to become teachers, not because it was their lifelong dream, but because they didn't feel comfortable and well-integrated in a male-dominated, full-of-misogynistic-ideals tech industry. Because it was - and is - almost impossible for them to break the glass ceiling.

I can even talk about all the women who are lesbians that a certain community of Women In Tech could not find when they wanted someone to write an article about 'being homosexual in tech' to be published right on Brazil's Lesbian Visibility Day, so they had to go and ask a gay man to talk about his own experience. Well, it seems like those women aren't "in tech" either.

Tokenization can be especially apparent when the lone person in a minority group is not only asked to speak for the group, but is consistently asked to speak about being a member of that group. Geek Feminism - Tokenism

The thing is, a lot of people don't want to hear any of those stories. Companies in particular only want token women from outside the company (because, let's face it, most tech companies can't find the talent within) who will come up to the stage and inspire other women saying what a great experience it is to be in tech - and that "everyone should try it too!".

"Don't talk about diversity unless you're also committed to inclusion." Naomi Ceder

I do believe all women should try and get knowledge about tech and that is what I work towards. We shouldn't have to rely only on the men in our life to get things done with our computers or our cell phones or our digital life.

But to tell other women they should get into the tech industry? I guess not.

After all, who am I to tell other women they should come to tech - and to stay in tech - when I know we are bound to face all this?


For Brazilian women not in tech, I'm organizing a crowdfunding campaign to get at least five of them the opportunity to attend MiniDebConf in Curitiba, Parana, in April. None of these girls can afford the trip and they don't have a company to sponsor them. If you are willing to help, please get in touch or check this link: Women in MiniDebConf.

More on the subject:

Planet Debian: Benjamin Mako Hill: XORcise

XORcise (ɛɡ.zɔʁ.siz) verb 1. To remove observations from a dataset if they satisfy one of two criteria, but not both. [e.g., After XORcising adults and citizens, only foreign children and adult citizens were left.]

Krebs on Security: Bot Roundup: Avalanche, Kronos, NanoCore

It’s been a busy few weeks in cybercrime news, justifying updates to a couple of cases we’ve been following closely at KrebsOnSecurity. In Ukraine, the alleged ringleader of the Avalanche malware spam botnet was arrested after eluding authorities in the wake of a global cybercrime crackdown there in 2016. Separately, a case that was hailed as a test of whether programmers can be held accountable for how customers use their product turned out poorly for 27-year-old programmer Taylor Huddleston, who was sentenced to almost three years in prison for making and marketing a complex spyware program.

First, the Ukrainian case. On Nov. 30, 2016, authorities across Europe coordinated the arrest of five individuals thought to be tied to the Avalanche crime gang, in an operation that the FBI and its partners abroad described as an unprecedented global law enforcement response to cybercrime. Hundreds of malicious web servers and hundreds of thousands of domains were blocked in the coordinated action.

The global distribution of servers used in the Avalanche crime machine.

The alleged leader of the Avalanche gang — 33-year-old Russian Gennady Kapkanov — did not go quietly at the time. Kapkanov allegedly shot at officers with a Kalashnikov assault rifle through the front door as they prepared to raid his home, and then attempted to escape off of his 4th floor apartment balcony. He was later released, after police allegedly failed to file proper arrest records for him.

But on Monday Agence France-Presse (AFP) reported that Ukrainian authorities had once again collared Kapkanov, who was allegedly living under a phony passport in Poltava, a city in central Ukraine. No word yet on whether Kapkanov has been charged, which was supposed to happen Monday.

Kapkanov’s driver’s license.


Lawyers for Taylor Huddleston, a 27-year-old programmer from Hot Springs, Ark., originally asked a federal court to believe that the software he sold on the sprawling hacker marketplace Hackforums — a “remote administration tool” or “RAT” designed to let someone administer one or many computers remotely — was just a benign tool.

The bad things done with Mr. Huddleston’s tools, the defendant argued, were not Mr. Huddleston’s doing. Furthermore, no one had accused Mr. Huddleston of even using his own software.

The Daily Beast first wrote about Huddleston’s case in 2017, and at the time suggested his prosecution raised questions of whether a programmer could be held criminally responsible for the actions of his users. My response to that piece was “Dual-Use Software Criminal Case Not So Novel.”

Photo illustration by Lyne Lucien/The Daily Beast

The court was swayed by evidence that yes, Mr. Huddleston could be held criminally responsible for those actions. It sentenced him to 33 months in prison after the defendant acknowledged that he knew his RAT — a Remote Access Trojan dubbed “NanoCore RAT” — was being used to spy on webcams and steal passwords from systems running the software.

Of course Huddleston knew: He didn’t market his wares on some Craigslist software marketplace ad, or via video promos on his local cable channel: He marketed the NanoCore RAT and another software licensing program called Net Seal exclusively on Hackforums[dot]net.

This sprawling, English-language forum has a deep bench of technical discussions about using RATs and other tools to surreptitiously record passwords and videos of “slaves,” the derisive term for systems secretly infected with these RATs.

Huddleston knew what many of his customers were doing because many NanoCore users also used Huddleston’s Net Seal program to keep their own RATs and other custom hacking tools from being disassembled or “cracked” and posted online for free. In short: He knew what programs his customers were using Net Seal on, and he knew what those customers had done or intended to do with tools like NanoCore.

The sentencing suggests that where you choose to sell something online says a lot about what you think of your own product and who’s likely buying it.

Daily Beast author Kevin Poulsen noted in a July 2017 story that Huddleston changed his tune and pleaded guilty. The story pointed to an accompanying plea in which Huddleston stipulated that he “knowingly and intentionally aided and abetted thousands of unlawful computer intrusions” in selling the program to hackers and that he “acted with the purpose of furthering these unauthorized computer intrusions and causing them to occur.”


Bleeping Computer’s Catalin Cimpanu observes that Huddleston’s case is similar to another being pursued by U.S. prosecutors against Marcus “MalwareTech” Hutchins, the security researcher who helped stop the spread of the global WannaCry ransomware outbreak in May 2017. Prosecutors allege Hutchins was the author and proprietor of “Kronos,” a strain of malware designed to steal online banking credentials.

Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm.

On Sept. 5, 2017, KrebsOnSecurity published “Who is Marcus Hutchins?”, a breadcrumbs research piece on the public user profiles known to have been wielded by Hutchins. The data did not implicate him in the Kronos trojan, but it chronicles the evolution of a young man who appears to have sold and published online quite a few unique and powerful malware samples — including several RATs and custom exploit packs (as well as access to hacked PCs).

MalwareTech declined to be interviewed by this publication in light of his ongoing prosecution. But Hutchins has claimed he never had any customers because he didn’t write the Kronos trojan.

Hutchins has pleaded not guilty to all four counts against him, including conspiracy to distribute malicious software with the intent to cause damage to 10 or more affected computers without authorization, and conspiracy to distribute malware designed to intercept protected electronic communications.

Hutchins said through his @MalwareTechBlog account on Twitter Feb. 26 that he wanted to publicly dispute my Sept. 2017 story. But he didn’t specify why other than saying he was “not allowed to.”

MWT wrote: “mrw [my reaction when] I’m not allowed to debunk the Krebs article so still have to listen to morons telling me why I’m guilty based on information that isn’t even remotely correct.”

Hutchins’ tweet on Feb. 26, 2018.

According to a story at BankInfoSecurity, the evidence submitted by prosecutors for the government includes:

  • Statements made by Hutchins after he was arrested.
  • A CD containing two audio recordings from a county jail in Nevada where he was detained by the FBI.
  • 150 pages of Jabber chats between the defendant and an individual.
  • Business records from Apple, Google and Yahoo.
  • Statements (350 pages) by the defendant from another internet forum, which were seized by the government in another district.
  • Three to four samples of malware.
  • A search warrant executed on a third party, which may contain some privileged information.

The case against Hutchins continues apace in Wisconsin. A scheduling order for pretrial motions filed Feb. 22 suggests the court wishes to have a speedy trial that concludes before the end of April 2018.

Planet Debian: Reproducible builds folks: Reproducible Builds: Weekly report #148

Here's what happened in the Reproducible Builds effort between Sunday February 18 and Saturday February 24 2018:

Logo and Outreachy/GSoC

Reproducible work in other projects

There were a number of blog posts related to reproducible builds published this week:

Development and fixes in Debian key packages

Norbert Preining added calls to dh_stripnondeterminism to a number of TexLive packages which should let them become reproducible in Debian (#886988).

"Y2K-bug reloaded"

As part of the work on reproducible builds for openSUSE, Bernhard M. Wiedemann built packages 15 years in the future and discovered widespread systematic errors in how Perl's Time::Local functions are used.

This affected a diverse set of software, including git and our own strip-nondeterminism (via Archive::Zip).

grep was run on 16,896 tarballs in openSUSE's devel:languages:perl project and 102 of them contained timegm or timelocal calls. Of those, over 30 were problematic and more still need to be analyzed.
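The failure mode is easy to demonstrate (a sketch; the dates in the comments follow Time::Local's documented 50-year sliding window for two-digit years):

use Time::Local;

# A year in the range 0..99 is resolved to within 50 years of the
# date the code runs, so the result depends on *when* it runs:
my $epoch = timegm(0, 0, 0, 1, 0, 70);
# run in 2018 -> Jan 1 1970; run in 2033 -> Jan 1 2070

# Unambiguous alternative: pass the full four-digit year instead.
my $safe = timegm(0, 0, 0, 1, 0, 1970);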

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

60 package reviews have been added, 32 have been updated and 30 have been removed in this week, adding to our knowledge about identified issues.

Two new toolchain issue types have been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (41)
  • Andreas Beckmann (1)
  • Boyuan Yang (1)


This week's edition was written by Bernhard M. Wiedemann, kpcyrd, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Debian: Norbert Preining: CafeOBJ 1.5.7 released

Yesterday we released CafeOBJ 1.5.7 with lots of changes concerning the inductive theorem prover CITP, as well as fixes to make CafeOBJ work with current SBCL. The documentation has gained a few more documents (albeit in Japanese); please see the Documentation pages for the full list. The reference manual has been updated and is available as PDF, Html, or Wiki.


To quote from our README:

CafeOBJ is a new generation algebraic specification and programming language. As a direct successor of OBJ, it inherits all its features (flexible mix-fix syntax, powerful typing system with sub-types, and sophisticated module composition system featuring various kinds of imports, parameterised modules, views for instantiating the parameters, module expressions, etc.) but it also implements new paradigms such as rewriting logic and hidden algebra, as well as their combination.


Binary packages for Linux, MacOS, and Windows are already available, both in 32 and 64 bit and based on Allegro CL and SBCL (with some exceptions). All downloads can be found at the CafeOBJ download page. The source code can also be found on the download page, or directly from here: cafeobj-1.5.7.tar.gz.

The CafeOBJ Debian package is already updated.

The Macports file has also been updated; please see the above download/install page for details on how to add our sources to your MacPorts.

Bug reports

If you find a bug, have suggestions, or complains, please open an issue at the Github issue page.

For other inquiries, please use

TED: Follow your dreams without fear: 4 questions with Zubaida Bai

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with women’s health advocate and TED Fellow Zubaida Bai about what inspires her work to improve the health and livelihoods of women worldwide.

TED: Tell us who you are.
Zubaida Bai: I am a women’s health advocate, a mother, a designer and innovator of health and livelihood solutions for underserved women and girls. I’ve traveled to the poorest communities in the world, listened compassionately to women and observed their challenges and indignities. As an entrepreneur and thought leader, I’m putting my passion into a movement that will address market failures, break taboos, and elevate the health of women and girls as a core topic in the world.

TED: What’s a bold move you’ve made in your career?
ZB: The decision I made with my husband and co-founder to make our company a for-profit venture. We wanted to prove that the poor are not poor in mind, and if you offer them a quality product that they need, and can afford, they will buy it. We also wanted to show that our business model — serving the bottom of the pyramid — was scalable. Being a social sustainable enterprise is tough, especially if you serve women and children. But relying on non-profit donations especially for women’s health comes with a price. And that price is often an endless cycle of fundraising that makes it hard to create jobs and economically lift up the very communities being served. We are proud that every woman in our facilities in Chennai receives healthcare in addition to her salary.

TED: Tell us about a woman who inspires you.
ZB: My mother. She worked very hard under social constraints in India that were not favorable towards women. She was always working side jobs and creating small enterprises to help keep our family going, and I learned a lot from her. She also pushed me and believed in me and always created opportunities for me that she was denied and didn’t have access to.

TED: If you could go back in time, what would you tell your 18-year-old self?
ZB: To believe in your true potential. To follow your dreams without fear, as success is believing in your dreams and having the courage to pursue them — not the end result.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

TED: You are here for a reason: 4 questions with Halla Tómasdóttir

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with financier, entrepreneur and onetime candidate for president of Iceland, Halla Tómasdóttir, about what influences, inspires and drives her to be bold.

TED: Tell us who you are.
Halla Tómasdóttir: I think of myself first and foremost as a change catalyst who is passionate about good leadership and a gender-balanced world. My leadership career started in corporate America with Mars and Pepsi Cola, but since then I have served as an entrepreneur, educator, investor, board director, business leader and presidential candidate. I am married, a proud mother of two teenagers and a dog and am perhaps best described by the title given to me by the New Yorker: “A Living Emoji of Sincerity.”

TED: What’s a bold move you’ve made in your career?
HT: I left a high-profile position as the first female CEO of the Iceland Chamber of Commerce to become an entrepreneur with the vision to incorporate feminine values into finance. I felt the urge to show a different way in a sector that felt unsustainable to me, and I longed to work in line with my own values.

TED: Tell us about a woman who inspires you.
HT: The women of Iceland inspired me at an early age, when they showed incredible courage, solidarity and sisterhood and “took the day off” (went on strike) and literally brought the country to its knees — as nothing worked when women didn’t do any work. Five years later, Iceland was the first country in the world to democratically elect a woman as president. I was 11 years old at the time, and her leadership has inspired me ever since. Her clarity on what she cares about and her humble way of serving those causes is truly remarkable.

TED: If you could go back in time, what would you tell your 18-year-old self?
HT: I would say: Halla, just be you and know that you are enough. People will frequently tell you things like: “This is the way we do things around here.” Don’t ever take that as a valid answer if it doesn’t feel right to you. We are not here to continue to do more of the same if it doesn’t work or feel right anymore. We are here to grow, ourselves and our society. You are here for a reason: make your life and leadership matter.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

Cryptogram: Cellebrite Unlocks iPhones for the US Government

Forbes reports that the Israeli company Cellebrite can probably unlock all iPhone models:

Cellebrite, a Petah Tikva, Israel-based vendor that's become the U.S. government's company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department for Homeland Security back in November 2017, most likely with Cellebrite technology.


It also appears the feds have already tried out Cellebrite tech on the most recent Apple handset, the iPhone X. That's according to a warrant unearthed by Forbes in Michigan, marking the first known government inspection of the bleeding edge smartphone in a criminal investigation. The warrant detailed a probe into Abdulmajid Saidi, a suspect in an arms trafficking case, whose iPhone X was taken from him as he was about to leave America for Beirut, Lebanon, on November 20. The device was sent to a Cellebrite specialist at the DHS Homeland Security Investigations Grand Rapids labs and the data extracted on December 5.

This story is based on some excellent reporting, but leaves a lot of questions unanswered. We don't know exactly what was extracted from any of the phones. Was it metadata or data, and what kind of metadata or data was it?

The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents. There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.

Worse Than Failure: -0//

In software development, there are three kinds of problems: small, big and subtle. The small ones are usually fairly simple to track down; a misspelled label, a math error, etc. The large ones usually take longer to find; a race condition that you just can't reproduce, an external system randomly feeding you garbage, and so forth.


The subtle problems are an entirely different beast. It can be as simple as somebody entering 4321 instead of 432l (note the lowercase L), or similar mix-ups among 'i', 'l', '1', '0' and 'O'. It can be an interchanged comma and period. Or it can be something more complex, such as an unsupported third party library that throws back errors for undefined conditions, but randomly provides so little information that it's useful to neither user nor developer.

Brujo B encountered such a beast back in 2003 in a sub-equatorial bank that had been especially fond of VB6. This bank had tried to implement standards. In particular, they wanted all of their error messages to appear consistently for their users. To this end, they put a great deal of time and effort into building a library to display error messages in a consistent format. Specifically:

  Error.Description - Error.Number / Error.Message / Error.Source
An example error message might be:

  File Not Found - 127 / File 'your.file' could not be found / FileImporter

Unfortunately, the designers of this routine could not compensate for all of the third party tools and libraries that did NOT set some/most/all of those variables. This led to interesting presentations of errors to both users and developers:

  - 34 / Network Connection Lost /
  Unauthorized - 401 //

Crystal Reports was particularly unhelpful, in that it refused to populate any field from which error details could be obtained, leading to the completely unhelpful:

  -0//
...which could only be interpreted as Something really bad happened, but we don't know what that is and you have no way to figure it out. It didn't matter what Brujo and his peers did; everything they tried to cajole Crystal Reports into giving context information failed to varying degrees. They could only patch specific instances of errors, but the Ever-Useless™ -0// error kept popping up to bite them in the arse.

After way too much time trying to slay the beast, they gave up, accepted it as one of their own and tried their best to find alternate ways of figuring out what the problems were.

Several years after moving on to saner pastures, Brujo returned to visit old friends. On the wall they had added a cool painting with many words that "describe the company culture". Layered in were management approved words, like "Trust" and "Loyalty". Some were more specific in-jokes, names of former employees, or references to big achievements the organization had made.

One of them was -0//


Don Marti: What I don't get about Marketing

I want to try to figure out something I still don't understand about Marketing.

First, read this story by Sarah Vizard at Marketing Week: Why Google and Facebook should heed Unilever’s warnings.

All good points, right?

With the rise of fake news and revelations about how the Russians used social platforms to influence both the US election and EU referendum, the need for change is pressing, both for the platforms and for the advertisers that support them.

We know there's a brand equity crisis going on. Brand-unsafe placements are making mainstream brands increasingly indistinguishable from scams. So the story makes sense so far. But here's what I don't get.

For the call to action to work, Unilever really needs other brands to rally round but these have so far been few and far between.

Other brands? Why?

If brands are worth anything, they can at least help people tell one product apart from another.

Think Small VW ad

Saying that other brands need to participate in saving Unilever's brands from the three-ring shitshow of brand-unsafe advertising is like saying that Volkswagen really needs other brands to get into simple layouts and natural-sounding copy just because Volkswagen's agency did.

Not everybody has to make the same stuff and sell it the same way. Brands being different from each other is a good thing. (Right?)

generic food

Sometimes a problem on the Internet isn't a "let's all work together" kind of problem. Sometimes it's an opportunity for one brand to get out ahead of another.

What if every brand in a category kept on playing in the trash fire except one?

Planet Linux Australia: Lev Lafayette: Drupal "Access denied" Message

It happens rarely enough, but on occasion (such as after an upgrade to a database system (e.g., MySQL, MariaDB) or to the system version of a web-scripting language (e.g., PHP)), you can end up with your Drupal site failing to load, displaying only an error message similar to:

PDOException: SQLSTATE[HY000] [1044] Access denied for user 'username'@'localhost' to database 'database' in lock_may_be_available() (line 167 of /website/includes/
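One plausible first check (a hedged sketch with placeholder names, assuming the upgrade simply dropped the grants for the Drupal database user configured in settings.php) is to restore the user's privileges in MySQL/MariaDB:

-- re-grant the Drupal user access to its database
GRANT ALL PRIVILEGES ON database.* TO 'username'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;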


Planet Debian: Renata D'Avila: Working with git branches was the best decision I made

This is a short story about how choosing to use git branches saved me from some trouble.

How did I decide to use a new branch?

Up until a certain point, I was just committing all the code (and notes) I wrote into the master branch, which is the default for Git. No big deal: if I broke something, I could just go back and revert one or more commits and it would be okay.

It got to the point, though, where I would have to send the code to people other than the mentors who had been watching me break things: I would have to submit it for other developers to test. While broken code was fine for my mentors to see, so they could help me figure out how to fix it, it wouldn't be useful for people running it in production and giving me feedback and suggestions for improvement. I had to send them good code that worked, and I had to do that while I worked on the last functionality needed, which is the recurrence rule for events.

After working on the recurrence rule for a few hours, I realized that, since it wasn't really functional yet, I couldn't simply commit it on top of the rest of the code that had already been committed/published. Sure, I could have commented out the function and committed the code that way, but it would be just too much trouble to comment and uncomment it every time I worked on that part.

That is how I chose to create a new git branch instead and start committing my changes there.

I created a new branch called "devel" and asked git to change to it:

git checkout -b devel

Then, I did a "git status" to check everything was as expected and the branch had been changed to devel:

git status
renata@debian:~/foss_events$ git status
On branch devel

Next: staging the files and creating the commit:

git add --all .
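The commit itself then follows (the message here is only illustrative):

git commit -m "Add work-in-progress recurrence rule"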

Because the branch was created on my local machine, if I straight out try to push the code upstream, it will give an error, because there is no "devel" branch on GitHub yet. So let's give some arguments to the git push command, asking it to set an upstream branch in origin, which will receive the code:

git push --set-upstream origin devel

How did this save me from some trouble?

Simply because, before I sent the code to the moin-devel list, I decided to clean the old and repeated code out of the repository... by deleting those files. I wanted to do that so anyone who came to try it out would be able to spot the macro easily and not have to worry about whether the other files had to be installed anywhere for the code to work.

Once I had deleted those files using rm -r on the command line, and right before I committed, I did a "git status" to check that the delete action had been recorded... that was when I noticed that I was still on the devel branch, and not on the master branch, where I wanted this change to take place! My development branch should stay the mess it was, because it was stuff I used to try things out.

I had used "rm -r", though, so how did I get those files back to the devel branch? I mean, the easy way, not the downloading-the-repo-again way.

Simple! I would have to discard the changes (deletes) I had made on the devel branch and change to the master branch to do it again:


git checkout -f

This will throw away the changes. Then, it's possible to move back to the master branch:

git checkout master

And, yup, I'm totally writing this article for future reference to myself.



Planet Debian: Norbert Preining: Debian updates: CafeOBJ, Calibre, Html5-parser, LaTeXML, TeX Live

The last few days have seen a somewhat unusual frenzy of uploads to Debian from my side, mostly due to the fact that while doing my tax declaration (btw, a huge pain here in Japan) I had some spare time and dedicated it to long-overdue package maintenance work as well as some new requests.

So what has happened (in alphabetic order):

  • CafeOBJ has been out of Debian/testing due to build errors on some platforms for quite some time. We (upstream) have fixed these incompatibilities, which arose in minor version changes of SBCL; very disappointing. Anyway, we hope that the latest release of CafeOBJ will soon enter Debian/testing again.
  • Calibre gets the usual 2-3 weekly updates following upstream. Not much to report here, and in fact not much work either. There are a few items I still want to fix, in particular Rar support, but the maintainer of unrar, Martin Meredith, is completely unresponsive, although I have submitted patches to reinstate the shared library. That means that CBR support etc. is still missing from Debian's Calibre.
  • Html5-parser is a support library for Calibre, which saw an update which I have finally packaged. I haven’t had any complications with the previous version, though.
  • LaTeXML hasn’t been updated in nearly 3 years in Debian, despite the fact that a new upstream has been available for quite some time. I got contacted by upstream about this, and realized I had been in contact with the maintainer of LaTeXML back in 2015. He isn’t using or maintaining LaTeXML anymore, and kindly agreed that I take it over under the Debian TeX Maintainers umbrella. So I have updated the packaging for the new release.
  • TeX Live got the usual monthly update I reported about the other day.

I thought with all that done I can rest a bit and concentrate on my bread-job (software R&D engineer) or my sweets-job (Researcher in Mathematical Logic), but out of the blue a RC bug of the TeX Live packages just flew in. That will be another evening.


Cryptogram: E-Mail Leaves an Evidence Trail

If you're going to commit an illegal act, it's best not to discuss it in e-mail. It's also best to Google tech instructions rather than asking someone else to do it:

One new detail from the indictment, however, points to just how unsophisticated Manafort seems to have been. Here's the relevant passage from the indictment. I've bolded the most important bits:

Manafort and Gates made numerous false and fraudulent representations to secure the loans. For example, Manafort provided the bank with doctored [profit and loss statements] for [Davis Manafort Inc.] for both 2015 and 2016, overstating its income by millions of dollars. The doctored 2015 DMI P&L submitted to Lender D was the same false statement previously submitted to Lender C, which overstated DMI's income by more than $4 million. The doctored 2016 DMI P&L was inflated by Manafort by more than $3.5 million. To create the false 2016 P&L, on or about October 21, 2016, Manafort emailed Gates a .pdf version of the real 2016 DMI P&L, which showed a loss of more than $600,000. Gates converted that .pdf into a "Word" document so that it could be edited, which Gates sent back to Manafort. Manafort altered that "Word" document by adding more than $3.5 million in income. He then sent this falsified P&L to Gates and asked that the "Word" document be converted back to a .pdf, which Gates did and returned to Manafort. Manafort then sent the falsified 2016 DMI P&L .pdf to Lender D.

So here's the essence of what went wrong for Manafort and Gates, according to Mueller's investigation: Manafort allegedly wanted to falsify his company's income, but he couldn't figure out how to edit the PDF. He therefore had Gates turn it into a Microsoft Word document for him, which led the two to bounce the documents back-and-forth over email. As attorney and blogger Susan Simpson notes on Twitter, Manafort's inability to complete a basic task on his own seems to have effectively "created an incriminating paper trail."

If there's a lesson here, it's that the Internet constantly generates data about what people are doing on it, and that data is all potential evidence. The FBI is 100% wrong that they're going dark; it's really the golden age of surveillance, and the FBI's panic is really just its own lack of technical sophistication.

Krebs on Security: USPS Finally Starts Notifying You by Mail If Someone is Scanning Your Snail Mail Online

In October 2017, KrebsOnSecurity warned that ne’er-do-wells could take advantage of a relatively new service offered by the U.S. Postal Service that provides scanned images of all incoming mail before it is slated to arrive at its destination address. We advised that stalkers or scammers could abuse this service by signing up as anyone in the household, because the USPS wasn’t at that point set up to use its own unique communication system — the U.S. mail — to alert residents when someone had signed up to receive these scanned images.

Image: USPS

The USPS recently told this publication that beginning Feb. 16 it started alerting all households by mail whenever anyone signs up to receive these scanned notifications of mail delivered to that address. The notification program, dubbed “Informed Delivery,” includes a scan of the front of each envelope destined for a specific address each day.

The Postal Service says consumer feedback on its Informed Delivery service has been overwhelmingly positive, particularly among residents who travel regularly and wish to keep close tabs on any bills or other mail being delivered while they’re on the road. It has been available to select addresses in several states since 2014 under a targeted USPS pilot program, but it has since expanded to include many ZIP codes nationwide. U.S. residents can find out if their address is eligible by visiting

According to the USPS, some 8.1 million accounts have been created via the service so far (on Oct. 7, 2017, the last time I wrote about Informed Delivery, there were 6.3 million subscribers, so the program has grown more than 28 percent in five months).

Roy Betts, a spokesperson for the USPS’s communications team, says post offices handled 50,000 Informed Delivery notifications the week of Feb. 16, and are delivering an additional 100,000 letters to existing Informed Delivery addresses this coming week.

Currently, the USPS allows address changes via the USPS Web site or in-person at any one of more than 35,000 USPS retail locations nationwide. When a request is processed, the USPS sends a confirmation letter to both the old address and the new address.

If someone already signed up for Informed Delivery later posts a change of address request, the USPS does not automatically transfer the Informed Delivery service to the new address: Rather, it sends a mailer with a special code tied to the new address and to the username that requested the change. To resume Informed Delivery at the new address, that code needs to be entered online using the account that requested the address change.

A review of the methods used by the USPS to validate new account signups last fall suggested the service was wide open to abuse by a range of parties, mainly because of weak authentication and because it is not easy to opt out of the service.

Signing up requires an eligible resident to create a free user account at, which asks for the resident’s name, address and an email address. The final step in validating residents involves answering four so-called “knowledge-based authentication” or KBA questions.

The USPS told me it uses two ID proofing vendors — Lexis Nexis and, naturally, the recently breached big three credit bureau Equifax — to ask the magic KBA questions, rotating between them randomly.

KrebsOnSecurity has assailed KBA as an unreliable authentication method because so many answers to the multiple-guess questions are available on sites like Spokeo and Zillow, or via social networking profiles.

It’s also nice when Equifax gives away a metric truckload of information about where you’ve worked, how much you made at each job, and what addresses you frequented when. See: How to Opt Out of Equifax Revealing Your Salary History for how much leaks from this lucrative division of Equifax.

All of the data points in an employee history profile from Equifax will come in handy for answering the KBA questions, or at least whittling away those that don’t match salary ranges or dates and locations of the target identity’s previous addresses.

Once signed up, a resident can view scanned images of the front of each piece of incoming mail in advance of its arrival. Unfortunately, anyone able to defeat those automated KBA questions from Equifax and Lexis Nexis — be they stalkers, jilted ex-partners or private investigators — can see who you’re communicating with via the Postal mail.

Maybe this is much ado about nothing: Maybe it’s just a reminder that people in the United States shouldn’t expect more than a post card’s privacy guarantee (which can leak the “who” and “when” of any correspondence, and sometimes the “what” and “why” of the communication). We’d certainly all be better off if more people kept that guarantee in mind for email in addition to snail mail. At least now the USPS will deliver to your address a piece of paper letting you know when someone signs up to look at those W’s in your snail mail online.

Planet Debian: Jo Shields: EOL notification – Debian 7, Ubuntu 12.04

Mono packages will no longer be built for these ancient distribution releases, starting from when we add Ubuntu 18.04 to the build matrix (likely early to mid April 2018).

Unless someone with a fat wallet screams, and throws a bunch of money at Azure, anyway.

Planet Debian: Andrea Veri: Adding reCAPTCHA v2 support to Mailman

As a follow-up to the reCAPTCHA v1 post published back in 2014, here comes an updated version for migrating your Mailman instance off version 1 (which is being decommissioned on the 31st of March 2018) and onto version 2. The original python-recaptcha library was forked and made compatible with reCAPTCHA version 2.

The relevant changes against the original library can be summarized as follows:

  1. Added ‘version=2’ to the displayhtml and load_script functions
  2. Introduced the v2submit function (alongside submit, which is kept for backwards compatibility) to support reCAPTCHA v2
  3. The updated library remains backwards compatible with version 1, to avoid unexpected code breakage for instances still running version 1

The required changes are located on the following files:


--- listinfo.py	2018-02-26 14:56:48.000000000 +0000
+++ /usr/lib/mailman/Mailman/Cgi/listinfo.py	2018-02-26 14:08:34.000000000 +0000
@@ -31,6 +31,7 @@
 from Mailman import i18n
 from Mailman.htmlformat import *
 from Mailman.Logging.Syslog import syslog
+from recaptcha.client import captcha
 # Set up i18n
 _ = i18n._
@@ -244,6 +245,10 @@
     replacements['<mm-subscribe-form-start>'] = mlist.FormatFormStart('listinfo')
     replacements['<mm-fullname-box>'] = mlist.FormatBox('fullname', size=30)
+    # Captcha
+    replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=True, version=2)
+    replacements['<mm-recaptcha-script>'] = captcha.load_script(version=2)
     # Do the expansion.
     doc.AddItem(mlist.ParseTags('listinfo.html', replacements, lang))
     print doc.Format()


--- subscribe.py	2018-02-26 14:56:38.000000000 +0000
+++ /usr/lib/mailman/Mailman/Cgi/subscribe.py	2018-02-26 14:08:18.000000000 +0000
@@ -32,6 +32,7 @@
 from Mailman.UserDesc import UserDesc
 from Mailman.htmlformat import *
 from Mailman.Logging.Syslog import syslog
+from recaptcha.client import captcha
 SLASH = '/'
 ERRORSEP = '\n\n<p>'
@@ -165,6 +166,17 @@
     _('There was no hidden token in your submission or it was corrupted.'))
             results.append(_('You must GET the form before submitting it.'))
+    # recaptcha
+    captcha_response = captcha.v2submit(
+        cgidata.getvalue('g-recaptcha-response', ""),
+        mm_cfg.RECAPTCHA_PRIVATE_KEY,
+        remote,
+    )
+    if not captcha_response.is_valid:
+        results.append(_('Invalid captcha: %s' % captcha_response.error_code))
     # Was an attempt made to subscribe the list to itself?
     if email == mlist.GetListEmail():
         syslog('mischief', 'Attempt to self subscribe %s: %s', email, remote)


--- listinfo.html	2018-02-26 15:02:34.000000000 +0000
+++ /usr/lib/mailman/templates/en/listinfo.html	2018-02-26 14:18:52.000000000 +0000
@@ -3,7 +3,7 @@
     <TITLE><MM-List-Name> Info Page</TITLE>
+    <MM-Recaptcha-Script> 
   <BODY BGCOLOR="#ffffff">
@@ -116,6 +116,11 @@
+      <tr>
+        <td>Please fill out the following captcha</td>
+        <td><mm-recaptcha-javascript></TD>
+      </tr>
+      <tr>
 	<td colspan="3">

The updated RPMs are being rolled out to Fedora, EPEL 6 and EPEL 7. In the meantime you can find them here.

If Mailman complains about not being able to load recaptcha.client, follow these steps:

cd /usr/lib/mailman/pythonlib
ln -s /usr/lib/python2.6/site-packages/recaptcha/client recaptcha

And then on {subscribe,listinfo}.py:

import recaptcha

Planet Debian: Martín Ferrari: Report from SnowCamp #2

Snow! After a lovely car journey through the Alps yesterday, I had a good sleep and I am now in the airport waiting to fly back to Dublin.

I think most attendees will agree that the SnowCamp was a success; I was certainly sad to leave... It always feels too short!

After my first report, I spent a few hours on fixing a long-standing bug in the KGB bot, which caused it to take several minutes to sync channels and start emitting notifications. I also used the salsa merge requests feature for the first time! The next release of the bot will include this patch and take just a few seconds to be up and running.

I also worked on another RC bug, opened on a package I had fixed only the day before due to another test failure (#891356: golang-google-cloud FTBFS: FAIL); I finished that fix and uploaded it a few minutes ago.

Finally, I had some more talks which can't be reported upon, and then the Camp was over :-(


Cory DoctorowPodcast: The Man Who Sold the Moon, Part 05

Here’s part five of my reading (MP3) (part four, part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.


Worse Than FailureCodeSOD: Waiting for the Future

One of the more interesting things about human psychology is how bad we are at thinking about the negative consequences of our actions if those consequences are in the future. This is why the death penalty doesn’t deter crime, why we dump massive quantities of greenhouse gases into the atmosphere, why the Y2K bug happened in the first place, and why we’re going to do it again when every 32-bit Unix system explodes in 2038. If the negative consequence happens well after the action which caused it, humans ignore the obvious cause and effect and go on making problems that have to be fixed later.

Fran inherited a bit of technical debt. Specifically, there’s an auto-numbered field in the database. Due to their business requirements, when the field hits 999,999, it needs to wrap back around to 000,001. Many many years ago, the original developer “solved” that problem thus:

function getstan($callingMethod = null)
{
    $sequence = 1;

    // get insert id back
    $rs = db()->insert("sequence", array(
        'processor' => 'widgetsinc',
        'RID'       => $this->data->RID,
        'method'    => $callingMethod,
        'card'      => $this->data->cardNumber
    ), false, false);
    if ($rs) { // if query succeeded...
        $sequence = $rs;
        if ($sequence > 999999) {
            db()->q("delete from sequence where processor='widgetsinc'");
            db()->insert("sequence",
                array('processor' => 'widgetsinc', 'RID' => $this->data->RID, 'card' => $this->data->cardNumber), false,
                false);
            $sequence = 1;
        }
    }

    return (substr(str_pad($sequence, 6, "0", STR_PAD_LEFT), -6));
}

The sequence table uses an auto-numbered column. They insert a row into the table, which returns the generated ID used. If that ID is greater than 999,999, they… delete the old rows. They then insert a new row. Then they return “000001”.

Unfortunately, sequences don’t work this way in MySQL, or honestly any other database. They keep counting up unless you alter or otherwise reset the sequence. So, the counter keeps ticking up, and this method keeps deleting the old rows and returning “000001”. The original developer almost certainly never tested what this code does when the counter breaks 999,999, because that day was so far out into the future that they could put off the problem.
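
For the record, deleting rows never touches the counter; a real wrap-around would have to reset it explicitly. Here is a minimal sketch of what that needs in MySQL, reusing the table and processor name from the code above (the database name is an assumption):

# deleting rows does NOT reset AUTO_INCREMENT; it must be reset explicitly
mysql widgets -e "DELETE FROM sequence WHERE processor = 'widgetsinc';
                  ALTER TABLE sequence AUTO_INCREMENT = 1;"

Even that is only approximate: with other processors' rows still in the table, MySQL will not move AUTO_INCREMENT below the current maximum, which hints at why a shared auto-increment table is the wrong tool for this job in the first place.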

Speaking of putting off solving problems, Fran also adds:

For the past 2 years this function has been returning 000001 and is starting to screw up reports.

Broken for at least two years, but only now is it screwing up reports badly enough that anyone wants to do anything to fix it.


Planet DebianNorbert Preining: Disappointing visitors

I recently realized that one of my blog posts, on Writing Japanese in LaTeX, gets a disproportionate number of visitors. It turned out that most of them are not really interested in LaTeX, but more in Latex …

This is a screenshot of one of the search engines that point to my blog. I still cannot grasp how I made it to the top of the list, though 😉 Maybe I should open a different business, the pay would definitely be better than what I get now.

Planet Linux AustraliaOpenSTEM: At Mercy of the Weather

It is the time of year when Australia often experiences extreme weather events. February is renowned as the hottest month and, in some parts of the country, also the wettest month. It often brings cyclones to our coasts and storms, which conversely enough, may trigger fires as lightning strikes the hot, dry bush. Aboriginal people […]


Planet DebianNorbert Preining: Debian/TeX Live 2017.20180225-1

To my big surprise, the big rework didn’t create any havoc at all, not one bug report regarding the change. That is good. OTOH, I took some time off due to various surprising (and sometimes disturbing) things that have happened in the last month, so the next release took a bit longer than expected.

I am giving here the list of new and updated packages extracted from my local tlmgr.log file, but I am having some doubts, the list seems in both cases a bit too long for me 😉 Anyway, it was a very busy month for TeX packages.

We are also moving at high speed to TeX Live 2018. There will be probably one more release of Debian packages before we switch to the 2018 branch, which aligns nicely with the release planning of TeX Live.


New packages

abnt, adigraph, algobox, algolrevived, aligned-overset, alkalami, amscls-doc, authorarchive, axodraw2, babel-azerbaijani, beamertheme-saintpetersburg, beilstein, bib2gls, biblatex-enc, biblatex-oxref, biochemistry-colors, blowup, bredzenie, bxcalc, bxjaprnind, bxorigcapt, bxtexlogo, cesenaexam, cheatsheet, childdoc, cje, cm-mf-extra-bold, cmsrb, coelacanth, collection-plaingeneric, combofont, context-handlecsv, crossreftools, ctan-o-mat, currency, dejavu-otf, dijkstra, draftfigure, ducksay, dviinfox, dynkin-diagrams, endofproofwd, eqnnumwarn, fancyhandout, fetchcls, fixjfm, fontawesome5, fontloader-luaotfload, forms16be, glossaries-finnish, gotoh, graphicxpsd, gridslides, hackthefootline, hagenberg-thesis, hecthese, hithesis, hlist, hyphen-belarusian, ifptex, ifxptex, invoice2, isopt, istgame, jfmutil, knowledge, komacv-rg, ku-template, labelschanged, ladder, latex-mr, latex-refsheet, lccaps, limecv, llncsconf, luapackageloader, lyluatex, maker, marginfit, mathfam256, mathfixs, mcexam, mensa-tex, modernposter, modular, mptrees, multilang, musicography, na-box, na-position, niceframe-type1, nicematrix, notestex, numnameru, octave, outlining, pdfprivacy, pdfreview, pixelart, plex, plex-otf, pm-isomath, poetry, polexpr, pst-antiprism, pst-calculate, pst-dart, pst-geometrictools, pst-poker, pst-rputover, pst-spinner, pst-vehicle, pxufont, rutitlepage, scientific-thesis-cover, scratch, scratchx, sectionbreak, sesstime, sexam, shobhika, short-math-guide, simpleinvoice, simplekv, spark-otf, spark-otf-fonts, spectralsequences, stealcaps, termcal-de, textualicomma, thaienum, thaispec, theatre, thesis-gwu, tikz-feynhand, tikz-karnaugh, tikz-ladder, tikz-layers, tikz-relay, tikz-sfc, tikzcodeblocks, tikzducks, timbreicmc, translator, typewriter, typoaid, uhhassignment, unitn-bimrep, univie-ling, upzhkinsoku, wallcalendar, witharrows, xechangebar, xii-lat, xltabular, xsim, xurl, zebra-goodies, zhlipsum.

Updated packages

ESIEEcv, FAQ-en, GS1, HA-prosper, IEEEconf, IEEEtran, MemoirChapStyles, academicons, achemso, acro, actuarialsymbol, adobemapping, afm2pl, aleph, algorithm2e, amiri, amscls, amsldoc-it, amsmath, amstex, amsthdoc-it, animate, aomart, apa6, appendixnumberbeamer, apxproof, arabi, arabluatex, arara, archaeologie, arsclassica, autosp, awesomebox, babel, babel-english, babel-french, babel-georgian, babel-hungarian, babel-latvian, babel-russian, babel-ukrainian, bangorcsthesis, bangorexam, baskervillef, bchart, beamer, beamerswitch, beebe, besjournals, beuron, bgteubner, biber, biblatex, biblatex-abnt, biblatex-anonymous, biblatex-apa, biblatex-archaeology, biblatex-arthistory-bonn, biblatex-bookinother, biblatex-cheatsheet, biblatex-chem, biblatex-chicago, biblatex-fiwi, biblatex-gb7714-2015, biblatex-gost, biblatex-iso690, biblatex-manuscripts-philology, biblatex-philosophy, biblatex-publist, biblatex-realauthor, biblatex-sbl, biblatex-shortfields, biblatex-source-division, biblatex-trad, biblatex-true-citepages-omit, bibleref, bibletext, bibtex, bibtexperllibs, bibtexu, bidi, bnumexpr, bookcover, bookhands, bpchem, br-lex, bxbase, bxjscls, bxnewfont, bxpapersize, bytefield, c90, callouts, calxxxx-yyyy, catechis, cbfonts-fd, ccicons, cdpbundl, cellspace, changebar, checkcites, chemfig, chemmacros, chemschemex, chet, chickenize, chktex, circuitikz, citeall, cjk-gs-integrate, cjkutils, classicthesis, cleveref, cm, cmexb, cmpj, cns, cochineal, collref, complexity, comprehensive, computational-complexity, context, context-filter, context-fullpage, context-letter, context-title, context-vim, contracard, cooking-units, correctmathalign, covington, cquthesis, crossrefware, cslatex, csplain, csquotes, css-colors, ctan-o-mat, ctanify, ctex, ctie, curves, cweb, cyrillic-bin, cyrplain, datatool, datetime2-bahasai, datetime2-german, datetime2-spanish, datetime2-ukrainian, dccpaper, dejavu-otf, delimset, detex, dnp, doclicense, docsurvey, dox, dozenal, drawmatrix, dtk, dtl, dvi2tty, dviasm, dvicopy, dvidvi, dviljk, dvipdfmx, dvipng, dvipos, dvips, e-french, easyformat, ebproof, eledmac, elements, elpres, elzcards, embrac, emisa, enotez, eplain, epstopdf, eqparbox, esami, etoc, etoolbox, europasscv, euxm, exam, expex, factura, fancyhdr, fancylabel, fbb, fei, feyn, fibeamer, fira, fithesis, fmtcount, fnspe, fontinst, fontname, fontools, fonts-tlwg, fontspec, fonttable, fontware, footnotehyper, forest, fvextra, genealogytree, genmisc, gfsdidot, glossaries, glossaries-extra, glyphlist, gost, graphbox, graphics, graphics-def, graphics-pln, gregoriotex, gsftopk, gtl, guide-to-latex, gustlib, gustprog, halloweenmath, handout, hepthesis, hobby, hvfloat, hvindex, hyperref, hyperxmp, hyph-utf8, hyphen-base, hyphen-churchslavonic, hyphen-german, hyphen-latin, ifluatex, ifplatform, ijsra, impatient-cn, inconsolata, ipaex, iscram, jadetex, japanese-otf, japanese-otf-uptex, jlreq, jmlr, jmn, jsclasses, kantlipsum, karnaugh-map, ketcindy, keyfloat, kluwer, koma-script, komacv, kpathsea, l3build, l3experimental, l3kernel, l3packages, lacheck, lambda, langsci, latex-bin, latex2e-help-texinfo, latex2e-help-texinfo-fr, latex2e-help-texinfo-spanish, latex2man, latex2nemeth, latexbug, latexconfig, latexdiff, latexindent, latexmk, lato, lcdftypetools, leadsheets, leipzig, lettre, libertine, libertinegc, libertinus, libertinust1math, limap, lion-msc, listofitems, lithuanian, lni, lollipop, lt3graph, lua-check-hyphen, lualatex-math, luamplib, luatex, luatexja, luatexko, luatodonotes, luaxml, lwarp, m-tx, macros2e, make4ht, 
makedtx, makeindex, mandi, manfnt-font, marginnote, markdown, math-into-latex-4, mathpunctspace, mathtools, mcf2graph, media9, metafont, metapost, mex, mfirstuc, mflua, mfnfss, mfware, mhchem, microtype, minted, mltex, morewrites, mpostinl, mptopdf, msu-thesis, multiexpand, musixtex, mwcls, mwe, nddiss, ndsu-thesis, newpx, newtx, newtxtt, nlctdoc, noto, novel, numspell, nwejm, oberdiek, ocgx2, omegaware, oplotsymbl, optidef, ot-tableau, otibet, overlays, overpic, pagecolor, patgen, pdflatexpicscale, pdfpages, pdftex, pdftools, pdfwin, pdfx, perfectcut, pgf, pgfgantt, pgfplots, phfqit, philokalia, phonenumbers, phonrule, pkgloader, placeat, platex, platex-tools, platexcheat, poemscol, polski, polynom, powerdot, prerex, presentations, preview, probsoln, program, ps2pk, pst-barcode, pst-cie, pst-circ, pst-dart, pst-eucl, pst-exa, pst-fit, pst-fractal, pst-func, pst-geo, pst-ghsb, pst-node, pst-ode, pst-ovl, pst-pdf, pst-pdgr, pst-plot, pst-pulley, pst-solarsystem, pst-solides3d, pst-tools, pst2pdf, pstool, pstools, pstricks, pstricks-add, psutils, ptex, ptex-base, ptex-fontmaps, ptex-fonts, ptex2pdf, pxbase, pxchfon, pxjahyper, pxrubrica, pythontex, qpxqtx, quran, ran_toks, randomlist, randomwalk, rec-thy, refenums, reledmac, repere, resphilosophica, revtex4, robustindex, roex, rubik, sasnrdisplay, screenplay-pkg, scsnowman, seetexk, siunitx, skak, skrapport, songs, spreadtab, srbook-mem, stage, struktex, svg, sympytexpackage, synctex, systeme, t1utils, tcolorbox, testidx, tetex, tex, tex-refs, tex4ebook, tex4ht, texconfig, texcount, texdef, texdirflatten, texdoc, texfot, texosquery, texshade, texsis, textgreek, texware, texworks, thalie, thesis-ekf, thuthesis, tie, tikz-kalender, tikz-timing, tikzmark, tikzpeople, tikzsymbols, tlcockpit, tlshell, tocloft, toptesi, tpic2pdftex, tqft, tracklang, translation-biblatex-de, translations, ttfutils, tudscr, tugboat, turabian-formatting, ucharclasses, udesoftec, ulthese, unfonts-core, unfonts-extra, unicode-data, unicode-math, uowthesistitlepage, updmap-map, uplatex, upmethodology, uptex-base, uptex-fonts, variablelm, varsfromjobname, velthuis, visualtikz, vlna, web, widetable, wordcount, xassoccnt, xcharter, xcjk2uni, xcntperchap, xdvi, xecjk, xepersian, xetex, xetexconfig, xetexko, xetexref, xgreek, xii, xindy, xint, xmltex, xmltexconfig, xpinyin, xsavebox, ycbook, yhmath, zhnumber, zxjatype.

Planet Linux AustraliaChris Samuel: Vale Dad

[I’ve been very quiet here for over a year for reasons that will become apparent in the next few days when I finish and publish a long post I’ve been working on for a while – difficult to write, hence the delay]

It’s 10 years ago today that my Dad died, and Alan and I lost the father who had meant so much to both of us. It’s odd realising that it’s over 1/5th of my life since he died; it doesn’t seem that long.

Vale dad, love you…


Planet DebianDaniel Lange: Debian Gitlab ( tricks

Debian is moving its git hosting from Alioth, an instance of FusionForge, to Salsa, which is a Gitlab instance.

There is some background reading available, with pointers to an import script to ease migration for people that move repositories. It's definitely worth hanging out in #alioth on oftc, too, to learn more about salsa / gitlab in case you have a persistent irc connection.

As of now() salsa has 15,320 projects, 2,655 users in 298 groups.
Alioth has 29,590 git repositories (which is roughly equivalent to a project in Gitlab), 30,498 users in 1,154 projects (which is roughly equivalent to a group in Gitlab).

So we currently have 50% of the git repositories migrated. One month after leaving beta. This is very impressive.
As Alioth has naturally accumulated some cruft, Alexander Wirt (formorer) estimates that 80% of the repositories in use have already been migrated.

So it's time to update your local .git/config URLs!

Mehdi Dogguy has written nice scripts to ease handling salsa / gitlab via the (extensive and very well documented) API. Among them is list_projects, which gets you a nice overview of the projects in a specific group. This is especially true for the "Debian" group that contains the former collab-maint repositories, so source code that can and shall be maintained by Debian Developers collectively.

Finding migrated repositories

Salsa can search quite quickly via the Web UI: https://salsa.debian.org/search?utf8=✓&search=htop

Salsa search screenshot

but finding the URL to clone the repository from is more clicks and ~4MB of data each time (yeah, the modern web), so

$ curl --silent 'https://salsa.debian.org/api/v4/projects?search=htop' | jq .
[
  {
    "id": 9546,
    "description": "interactive processes viewer",
    "name": "htop",
    "name_with_namespace": "Debian / htop",
    "path": "htop",
    "path_with_namespace": "debian/htop",
    "created_at": "2018-02-05T12:44:35.017Z",
    "default_branch": "master",
    "tag_list": [],
    "ssh_url_to_repo": "git@salsa.debian.org:debian/htop.git",
    "http_url_to_repo": "https://salsa.debian.org/debian/htop.git",
    "web_url": "https://salsa.debian.org/debian/htop",
    "avatar_url": null,
    "star_count": 0,
    "forks_count": 0,
    "last_activity_at": "2018-02-17T18:23:05.550Z"
  }
]
is a bit nicer.

Please notice the git url format is a bit odd, it's either git@salsa.debian.org:debian/htop.git or https://salsa.debian.org/debian/htop.git

Notice the ":" -> "/" after the hostname. Bit me once.

Finding repositories to update

At this time I found it useful to check which of the repositories I have cloned had not yet been updated in the local .git/config:

find ~/debconf ~/my_sources ~/shared -ipath '*.git/config' -exec grep -H 'url.*git\.debian' '{}' \;

Thanks to Jörg Jaspert (Ganneff) the Debconf repositories have all been moved to Salsa now.
Hint: Bug him for his scripts if you need to do complex moves.

Updating the URLs has been an hour's work on my side and there is little you can do to speed that up if - as in the Debconf case - teams have used the opportunity to clean up and things are not as easy as using sed -i.
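
Where a repository did move 1:1, a mechanical rewrite is enough. A sketch along those lines, building on the find command above (the old and new path prefixes here are illustrative, not a universal mapping):

find ~/my_sources -ipath '*.git/config' -exec \
  sed -i 's|git+ssh://git.debian.org/git/collab-maint/|git@salsa.debian.org:debian/|g' '{}' \;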

But there is no reason to do this more than once, so for the laptops...

Speeding up migration on multiple devices

rsync -armuvz --existing --include="*/" --include=".git/config" --exclude="*" ~/debconf/ laptop:debconf/

will rsync the .git/config files that you changed to other systems where you keep partial copies.

On these a simple git pull to get up to remote HEAD or a git_pull_all one-liner will suffice.

Git short URL

Stefano Rivera (tumbleweed) shared this clever trick:

git config --global url."ssh://git@salsa.debian.org/".insteadOf salsa:

This way you can git clone salsa:debian/htop.
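
A related stock-git trick (my addition, not part of Stefano's tip): if you clone anonymously over https but want pushes to go over ssh, pushInsteadOf does exactly that:

git config --global url."ssh://git@salsa.debian.org/".pushInsteadOf https://salsa.debian.org/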

Planet DebianEnrico Zini: Automatic deploy from gitlab/salsa CI

At SnowCamp I migrated Front Desk-related repositories to Salsa gitlab and worked on setting up Continuous Integration for the web applications I maintain in Debian.

The result is a reusable Django app that integrates with gitlab's webhooks.

It is currently working for one of the sites I maintain, and I'll soon reuse it for the others.

The only setup needed on DSA side is to enable systemd linger on the deploy user.

The CI/deploy workflow is this:

  • gitlab runs tests in the CI
  • gitlab notifies pipeline status changes via a webhook
  • when a selected pipeline changes status to success, the application queues a deploy for that shasum by creating a shasum.deploy file in a queue directory
  • a systemd .path unit running as the deploy user triggers when the new file is created and runs deploy as the deploy user
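
That .path trigger can be tiny; a minimal sketch (unit names and the queue directory are assumptions, not taken from the app):

# ~/.config/systemd/user/deploy.path -- activates deploy.service when a queue file appears
[Path]
PathExistsGlob=%h/deploy-queue/*.deploy

[Install]
WantedBy=default.target

# ~/.config/systemd/user/deploy.service -- runs the deploy script once per trigger
[Service]
Type=oneshot
ExecStart=%h/bin/deploy

With linger enabled for the deploy user, systemctl --user enable --now deploy.path keeps the watch armed across reboots.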

And deploy does this:

  • git fetch
  • abort if the shasum of the head of the deploy branch does not match one of the .deploy files in the queue directory
  • abort if the head of the deploy branch is not signed by a gpg key present in a deploy keyring
  • abort if the head of the deploy branch is not a successor of the currently deployed commit
  • update the working copy
  • run a deploy script
  • remove all .deploy files seen when the script was called
  • send an email to the site admins with a log of the whole deploy process, whether it succeeded or it was aborted

For more details, see the app's README.

I find it wonderful that we got to a stage where we can have this in Debian, and I am very grateful to all the work that has been done and is being done in setting up and maintaining Salsa.

Planet DebianJunichi Uekawa: My chrome extension became useful for me.

My chrome extension became useful for me. It's nice. The chrome.tabs API is a little weird: I want to open and manage tabs and then auto-process some things, but the executeScript interface is strange. Also, I don't know how I would detect page transitions without polling.

Planet DebianNicolas Dandrimont: Report from Debian SnowCamp: day 3

[Previously: day 1, day 2]

Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

As a starter, and on request from Valhalla, please enjoy an attempt at a group picture (unfortunately, missing a few people). Yes, the sun even showed itself for a few moments today!

One of the numerous SnowCamp group pictures

As for today’s activities… I’ve cheated a bit by doing stuff after sending yesterday’s report and before sleep: I reviewed some of Stefano’s dc18 pull requests; I also papered over (rather than properly fixed) the debexpo uscan bug.

After keeping eyes closed for a few hours, the day was then spent tickling the python-gitlab module, packaged by Federico, in an attempt to resolve team repository configuration in a generic way.

The features I intend to implement are mostly inspired from jcowgill’s multimedia-cli:

  • per-team yaml configuration of “expected project state” (access level, hooks and other integrations, enablement of issues, merge requests, CI, …)
  • new repository creation (according to a team config or a personal config, e.g. for collab-maint, the Debian group)
  • audit of project configurations
  • mass-configuration changes for projects
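
To give an idea of what the audit and mass-configuration steps mean in practice, here is a sketch against GitLab's plain HTTP API rather than python-gitlab (the project path and the attribute set are examples only, not the planned tool):

# read the current state of one project
curl --silent --header "PRIVATE-TOKEN: $TOKEN" \
  "https://salsa.debian.org/api/v4/projects/debian%2Fhtop" |
  jq '{issues_enabled, merge_requests_enabled}'

# push the expected state back
curl --silent --request PUT --header "PRIVATE-TOKEN: $TOKEN" \
  --data "issues_enabled=true&merge_requests_enabled=true" \
  "https://salsa.debian.org/api/v4/projects/debian%2Fhtop"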

There could also be some use for bits of group management, e.g. to handle the access control of the DebConf group and its subgroups, although I hear Ganneff prefers shell scripts.

My personal end goal is to (finally) do the 3D printer team repository migration, but e.g. the Python team would like to update configuration of all repos to use the new KGB hook instead of irker, so some generic interest in the tool exists.

As the tool has a few dependencies (because I really have better things to do than reimplement another wrapper over the GitLab API) I’m not convinced devscripts is the right place for it to live… We’ll see when I have something that does more than print a list of projects to show!

In the meantime, I have the feeling Stefano has lined up a new batch of DebConf website pull requests for me, so I guess that’s what I’m eating for breakfast “tomorrow”… Stay tuned!

My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.


Rondam RamblingsDevin Nunes doesn't realize that he's part of the government

I was reading about the long anticipated release of the Democratic rebuttal to the famous Republican dossier memo.  I've been avoiding writing about this, or any aspect of the Russia investigation, because there is just so much insanity going on there and I didn't want to get sucked into that tar pit.  But I could not let this slide: [O]n Saturday, committee chairman Devin Nunes (R-Calif.)

Planet DebianJohn Goerzen: Remembering Tom Wallis, The System Administrator That Made The World Better

I never asked Tom why he hired me.

I was barely 17 at the time – already a Debian developer, full of more enthusiasm than experience, and Tom offered me a job. It was my first real sysadmin job, and to my delight, I got to work with Unix. For two years, I was the part-time assistant systems administrator for the Computer Science department at Wichita State University. And Tom was my boss, mentor, and old friend. Tom was walking proof that a system administrator can make the world a better place.

That amazing time was two decades ago now. And in the time since, every so often Tom and I would exchange emails. I enjoyed occasionally dropping by his office at work and surprising him.

So it was a shock to get an email this week that Tom had married for the first time at age 54, and passed away four days later due to a boating accident while on his honeymoon.

Tom was a man with a big laugh and an even bigger heart. When I started a Linux Users Group (LUG) on campus, there was Tom – helping to arrange a place to meet, providing Internet access when we needed it, and giving his evenings to simply be present and a supporter.

I had (and still have) a passion for Free/Open Source software. Linux was just new at the time, and was barely present in the department when I started. I was fortunate that CS was the “little dept. that could” back then, with wonderful people but not a lot of money, so a free operating system helped with a lot of problems. Tom supported me in my enthusiasm to introduce Debian all over the place. His trust meant much, and brought out the best in me.

I learned a lot from Tom, and more than just technology. A state university can be a heavily bureaucratic place at times. Tom was friends with every “can-do” person on campus, it seemed, and they all managed to pull through and get things done – sometimes working around policies that were a challenge.

I have sometimes wondered if I am doing enough, making a big enough difference in the world. Does a programmer really make a difference in people’s lives?

Tom Wallis is proof that the answer is yes. From the stories I heard at his funeral today, I can only guess how many other lives he touched.

This week, Tom gave me one final gift: a powerful reminder that sysadmins and developers can make the world a better place, can touch people’s lives. I hope Tom knew how much I appreciated him. If I find a way to make a difference in someone’s life — maybe an intern I’ve hired, or someone I take flying — then I will have found a way to pass on Tom’s gift to another, and I hope I can.


(This penguin was sitting out on the table of memorabilia from Tom today. I remember it from a shelf in his office.)

Planet DebianVincent Bernat: OPL2LPT: an AdLib sound card for the parallel port

The AdLib sound card was the first popular sound card for IBM PC—prior to that, we were pampered by the sound of the PC speaker. Connected to an 8-bit ISA slot, it is powered by a Yamaha YM3812 chip, also known as OPL2. This chip can drive 9 sound channels whose characteristics can be fine tuned through 244 write-only registers.

AdLib sound card

I had one but I am unable to locate it anymore. Models on eBay are quite rare and expensive. It is possible to build one yourself (either Sergey’s one or this faithful reproduction). However, you still need an ISA port. The limitless imagination of some hackers can still help here. For example, you can combine Sergey’s Xi 8088 processor board, with his ISA 8-bit backplane, his Super VGA card and his XT-CF-Lite card to get your very own modernish IBM PC compatible. Alternatively, you can look at the AdLib sound card on a parallel port from Raphaël Assénat.

The OPL2LPT sound card🔗

Recently, the 8-Bit Guy released a video about an AdLib sound card for the parallel port, the OPL2LPT. While current motherboards don’t have a parallel port anymore, it’s easy to add a PCI-Express one. So, I bought a pre-soldered OPL2LPT and a few days later, it was at my doorstep:

OPL2LPT sound card

The expected mode of operation for such a device is to plug it to an ISA parallel port (accessible through I/O port 0x378), load a DOS driver to intercept calls to AdLib’s address and run some AdLib-compatible game. While correctly supported by Linux, the PCI-Express parallel port doesn’t operate like an ISA one. QEMU comes with a parallel port emulation but, due to timing issues, cannot correctly drive the OPL2LPT. However, VirtualBox emulation is good enough.1

On Linux, the OPL2LPT can be programmed almost like an actual AdLib. The following code writes a value to a register:

static void lpt_write(uint8_t data, uint8_t ctrl) {
  ieee1284_write_data(port, data);
  ieee1284_write_control(port, (ctrl | C1284_NINIT) ^ C1284_INVERTED);
  ieee1284_write_control(port,  ctrl                ^ C1284_INVERTED);
  ieee1284_write_control(port, (ctrl | C1284_NINIT) ^ C1284_INVERTED);
}

void opl_write(uint8_t reg, uint8_t value) {
  lpt_write(reg, C1284_NSELECTIN | C1284_NSTROBE);
  usleep(4); // 3.3 microseconds

  lpt_write(value, C1284_NSELECTIN);
}

To “natively” use the OPL2LPT, I have modified the following applications:

  • ScummVM, an emulator for classic point-and-click adventure games, including many LucasArts games—patch
  • QEMU, a quick generic emulator—patch with a minimal emulation for timers and hard-coded sleep delays 🙄
  • DOSBox, an x86 emulator bundled with DOS—patch with a complete emulation for timers and a dedicated working thread2

You can compare the results in the following video, with the introduction of Indiana Jones and the Last Crusade, released in 1989:3

  • 0:00, DOSBox with an emulated PC speaker
  • 0:58, DOSBox with an emulated AdLib
  • 1:51, VirtualBox with the OPL2LPT (on an emulated parallel port)
  • 2:42, patched ScummVM with the OPL2LPT (native)
  • 3:33, patched QEMU with the OPL2LPT (native)
  • 4:24, patched DOSBox with the OPL2LPT (native)
  • 5:17, patched DOSBox with an improved OPL3 emulator (Nuked OPL3)
  • 6:10, ScummVM with the CD track (FM Towns version)

I'll let you judge how good each option is! There are two ways to buy an OPL2LPT: in Europe, from Serdashop, or in North America, from the 8-Bit Guy.


Indiana Jones and the Fate of Atlantis🔗

Here is another video featuring Indiana Jones and the Fate of Atlantis, released in 1992, running in DOSBox with the OPL2LPT. It’s the second game using the iMUSE sound system: music is synchronized with actions and transitions are done seamlessly. Quite a feat at the time!

Monkey Island 2🔗

The first game featuring iMuse is Monkey Island 2, released in 1991. The video below displays the first minutes of the game, running in DOSBox with the OPL2LPT.

Notably, at 5:33, when Guybrush is in Woodtick, a small town on Scabb Island, the music plays around a variation of a basic theme with a different instrument for each building without any interruption.

How the videos were recorded🔗

With a VGA adapter, many games use Mode 13h, a 256-color mode with a 320×200 resolution. On a 4:3 display, this mode doesn’t feature square pixels: they are stretched vertically by a factor of 1.2.

The above videos were recorded with FFmpeg (and edited with Blender). It packs a lot of useful filters making it easy to automate video capture. Here is an example:

FONT="font=Monkey Island 1991 refined:
ffmpeg -y \
 -thread_queue_size 64 \
 -f x11grab -draw_mouse 0 -r 30 -s 640x400 -i :0+844,102 \
 -thread_queue_size 64 \
 -f pulse -ac 1 -i default \
 -filter_complex "[0:v]pad=854:400:0:0,
      drawtext=${FONT}:y= 10:text=Indiana Jones 3,
      drawtext=${FONT}:y= 34:text=Intro,
      drawtext=${FONT}:y=148:text=PC speaker,
      [game][vis]overlay=x=640:y=280" \
 -pix_fmt yuv420p -c:v libx264 -qp 0 -preset ultrafast \

The interesting part is the filter_complex argument. The input video is padded from 640×400 to 854×400 as a first step to a 16:9 aspect ratio.4 Using The Secret Font of Monkey Island, some text is added to the right of the video. The result is then scaled to 854×480 to get the final aspect ratio while stretching pixels to the expected 1.2 factor. The video up to this point is sent to a stream named game. As a second step, from the input audio, we build two visualisations: a waveform and a spectrum. They are stacked vertically and the result is a stream named vis. The last step is to overlay the visualisation stream over the gameplay stream.

  1. There is no dialog to configure a parallel port. This needs to be done from the command-line after the instance creation:

    $ VBoxManage modifyvm "FreeDOS (games)" --lptmode1 /dev/parport0
    $ VBoxManage modifyvm "FreeDOS (games)" --lpt1 0x378 7


  2. With QEMU or DOSBox, it should be the responsibility of the executing game to respect the required delays for the OPL2 to process the received bytes. However, QEMU doesn’t seem to try to emulate I/O delays, while DOSBox seems to not be precise enough. For the latter, to overcome this shortcoming, the OPL2LPT is managed from a dedicated thread receiving the writes and ensuring the required delays are met. ↩︎

  3. Indiana Jones and the Last Crusade was the first game I tried after plugging in the brand new AdLib sound card I compelled my parents to buy on a trip to Canada in 1992. At the time, no brick and mortar retailer sold this card in my French city and online purchases (through the Minitel) were limited to consumer goods (like a VHS player). Hard times. 😏 ↩︎

  4. A common method to extend a 4:3 video to a 16:9 aspect ratio without adding black bars is to add a blurred background using the same video as a source. I didn’t do this here but it is also possible with FFmpeg↩︎

Planet DebianDima Kogan: Vnlog integration with feedgnuplot

This is mostly a continuation of the last post, but it's so nice!

As feedgnuplot reads data, it interprets it into separate datasets with IDs that can be used to refer to these datasets. For instance you can pass feedgnuplot --autolegend to create a legend for each dataset, labelling each with its ID. Or you can set specific directives for one dataset but not another: feedgnuplot --style position 'with lines' --y2 temperature would plot the position data with lines, and the temperature data on the second y axis.

Let's say we were plotting a data stream

1 1
2 4
3 9
4 16
5 25

Without --domain this data would be interpreted like this:

  • without --dataid. This stream would be interpreted as two data sets: IDs 0 and 1. There're 5 points in each one
  • with --dataid. This stream would be interpreted as 5 different datasets with IDs 1, 2, 3, 4 and 5. Each of these datasets would contain one point.

This is a silly example for --dataid, obviously. You'd instead have a dataset like

temperature 34 position 4
temperature 35 position 5
temperature 36 position 6
temperature 37 position 7

and this would mean two datasets: temperature and position. This is nicely flexible because it can be as sparse as you want: each row doesn't need to have one temperature and one position, although in many datasets you would have exactly this. Real datasets are often more complex:

1 temperature1 34 temperature2 35 position 4
2 temperature1 35 temperature2 36
3 temperature1 36 temperature2 33
4 temperature1 37 temperature2 32 position 7

Here the first column could be a domain of some sort, time for instance. And we have two different temperature sensors. And we don't always get a position report for whatever reason. This works fine, but is verbose, and usually the data is never stored in this way; I'd use awk to convert the data from its original form into this form for plotting. Now that vnlog is a thing, feedgnuplot has direct support for it, and this works like a 3rd way to get dataset IDs: vnlog headers. I'd represent the above like this:

# time temperature1 temperature2 position
1 34 35 4
2 35 36 -
3 36 33 -
4 37 32 7

This would be the working representation; I'd log directly to this format, and work with this data even before plotting it. But I can now plot this directly:

$ < data.vnl \
  feedgnuplot --domain --vnlog --autolegend --with lines \
              --style position 'with points pt 7' --y2 position

I think the command above makes it clear what was intended. It looks like this:


The input data is now much more concise, I don't need a different format for plotting, and the amount of typing has been greatly reduced. And I can do the normal vnlog things. What if I want to plot only temperatures:

$ < data.vnl \
  vnl-filter -p time,temp |
  feedgnuplot --domain --vnlog --autolegend --with lines


Planet DebianMartín Ferrari: Report from SnowCamp #1

As Nicolas already reported, a bunch of Debian folk gathered in the North of Italy for a long weekend of work and socialisation.

Valhalla had the idea of taking the SunCamp concept and doing it in another location, and along with people from LIFO they made it happen. Thanks to all who worked on this!

I arrived late on Wednesday, after a very relaxed car journey from Lyon. Sadly, on Thursday I had to attend to some unexpected personal issues, and it was not a very productive day for Debian work. Luckily, between Friday and today, I managed to get back on track.

I uploaded new versions of Prometheus-related packages to stretch-backports, so they are in line with current versions in testing:

  • prometheus-alertmanager, which also provides a fix for #891202: False owner/group for /var/lib/prometheus.
  • python-prometheus-client, carrying some useful updates for users.

I fixed two RC bugs in important Go packages, both caused by the recent upload of Golang 1.10:

  • #890927: golang-golang-x-tools: FTBFS and Debci failure with golang-1.10-go
  • #890938: golang-google-cloud FTBFS: FAIL: TestAgainstJSONEncodingNoTags

I also had useful chats about continuous testing of Go packages, and improvements to git-buildpackage to better support our workflow. I plan to try and write some code for it.

Finally, I had some long discussions about joining an important team in Debian, but I still can't report on that :-)


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main March 2018 Meeting: Unions - Hacking society's operating system

Tuesday, March 6, 2018, 6:30 PM to 8:30 PM
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

Planet DebianNicolas Dandrimont: Report from Debian SnowCamp: day 2

[Previously: day 1]

Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

Today’s pièce de résistance was the long overdue upgrade of the machine hosting mentors.debian.net to (jessie, then) stretch. We’ve spent most of the afternoon doing the upgrades with Mattia.

The first upgrade to jessie was a bit tricky because we had to clean up a lot of cruft that accumulated over the years. I even managed to force an unexpected database restore test 😇. After a few code fixes, and getting annoyed at apache2.4 for ignoring VirtualHost configs that don’t end with .conf (and losing an hour of debugging time in the process…), we managed to restore the functionality of the website.

We then did the stretch upgrade, which was somewhat smooth sailing in comparison… We had to remove some functionality which depended on packages that didn’t make it to stretch: fedmsg, and the SOAP interface. We also noticed that the gpg2 transition completely broke the… “interesting” GPG handling of mentors… An install of gnupg1 later everything should be working as it was before.

We’ve also tried to tackle our current need for a patched FTP daemon. To do so, we’re switching the default upload queue directory from / to /pub/UploadQueue/. Mattia has submitted bugs for dput and dupload, and will upload an updated dput-ng to switch the default. Hopefully we can do the full transition by the next time we need to upgrade the machine.

Known bugs: the uscan plugin now fails to parse the uscan output… But at least it “supports” version=4 now 🙃

Of course, we’re still sorely lacking volunteers who would really care about mentors.debian.net; the codebase is a pile of hacks upon hacks upon hacks, all relying on an old version of a deprecated Python web framework. A few attempts have been made at a smooth transition to a more recent framework, without really panning out, mostly for lack of time on the part of the people running the service. I’m still convinced things should restart from scratch, but I don’t currently have the energy or time to drive it… Ugh.

More stuff will happen tomorrow, but probably not on mentors.debian.net. See you then!

My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.


Sam VargheseJoyce affair: incestuous relationship between pollies and journos needs some exposure

Barnaby Joyce has come (no pun intended) and Barnaby Joyce has gone, but one issue that is intimately connected with the circus that surrounded him for the last three weeks has yet to be subjected to any scrutiny.

And that is the highly incestuous relationship that exists between Australian journalists and politicians and often results in news being concealed from the public.

The Australian media examined the scandal around Deputy Prime Minister Joyce from many angles, ever since a picture of his pregnant mistress, Vikki Campion, appeared on the front page of the The Daily Telegraph.

Various high-profile journalists tried to offer mea culpas to justify their non-reporting of the affair.

This is not the first time that journalists in Canberra have known about newsworthy stories connected to politicians and kept quiet.

In 2005, journalists Michael Brissenden, Tony Wright and Paul Daley were at a dinner with former treasurer Peter Costello at which he told them he had set next April (2006) as the absolute deadline “that is, mid-term,” for John Howard to stand aside; if not, he would challenge him.

Costello was said by Brissenden to have declared that a challenge “will happen then” if “Howard is still there”. “I’ll do it,” he said. He said he was “prepared to go to the backbench”. He said he’d “carp” at Howard’s leadership “from the backbench” and “destroy it” until he “won” the leadership.

But the three journalists kept mum about what would have been a big scoop, because Costello’s press secretary asked them not to write the yarn.

There was a great deal of speculation in the run-up to the 2007 election as to whether Howard would step down; one story in July 2006 said there had been an unspoken 1994 agreement between him and Costello to vacate the PM’s seat and make way for Costello to get the top job.

Had the three journalists at that 2005 dinner gone ahead and reported the story — as journalists are supposed to do — it is unlikely that Howard would have been able to carry on as he did. It would have forced Costello to challenge for the leadership or quit. In short, it would have changed the course of politics.

But Brissenden, Daley and Wright kept mum.

In the case of Joyce, it has been openly known since at least April 2017 that he was schtupping Campion. Indeed, the picture of Campion on the front page of the Telegraph indicates she was at least seven months pregnant — later it became known that the baby is due in April — which means Joyce must have been sleeping with her at least from June onwards.

The story was in the public interest, because Joyce and Campion are both paid from the public purse. When their affair became an issue, Joyce had her moved around to the offices of his National Party mates, Matt Canavan and Damian Drum, at salaries that went as high as $190,000. Joyce is also no ordinary politician – he is the deputy prime minister and thus acts in the prime minister’s stead whenever the latter is out of the country. Thus anything that affects his functioning is of interest to the public as he can make decisions that affect them.

But journalists like Katharine Murphy of the Guardian and Jacqueline Maley of the Sydney Morning Herald kept mum. A female journalist who is not part of this clique, Sharri Markson, broke the story. She was roundly criticised by many who belong to the Murphy-Maley school of thinking.

Chris Uhlmann kept mum. So did Malcolm Farr and a host of others like Fran Bailey.

Both Murphy and Maley cited what they called “ethics” to justify keeping mum. But after the story broke, they leapt on it with claws extended. Another journalist, Julia Baird, tried to spin the story as one that showed how a woman in Joyce’s position would have been treated – much worse, was her opinion. She chose former prime minister Julia Gillard as her case study but did not offer the fact that Gillard was also a highly incompetent prime minister and that the flak she earned was also due to this aspect of her character.

Baird once was a columnist for Fairfax’s Weekend magazine and her profile pic in the publication at the time showed her in Sass & Bide jeans – the very business in which her husband was involved. Given that, when she moralises, one needs to take it with a kilo of salt.

But the central point is that, though she has a number of platforms to break a story, Baird never wrote a word about Joyce’s philandering. He promoted himself as a man who espoused family values by being photographed with his wife and four daughters repeatedly. He moralised more times than any other about the sanctity of marriage. Thus, he was fair game. Or so commonsense would dictate.

Why do these journalists and many others keep quiet and try to stay in the good books of politicians? The answer is simple: though the jobs of journalists and public relations people are diametric opposites, journalists have no qualms about crossing the divide because the money in PR is much more.

Salaries are much higher if a journalist gets onto the PR team of a senior politician. And with jobs in journalism disappearing at a rate of knots year after year, journalists like Murphy, Maley and Baird hedge their bets in order to stay in politicians’ good books. Remember Mark Simkin, a competent news reporter at the ABC? He joined the staff of — hold your breath — Tony Abbott when the man was prime minister. Simkin is rarely seen in public these days.

Nobody calls journalists on this deception and fraud. It emboldens them to continue to pose as people who act in the public interest when in reality they are no different from the average worker. Yet they climb on pulpits week after week and pontificate to the masses.

It has been said that journalists are like prostitutes: first, they do it for the fun of it, then they do it for a few friends, and finally they end up doing it for money. You won’t find too many arguments from me about that characterisation.

CryptogramFriday Squid Blogging: The Symbiotic Relationship Between the Bobtail Squid and a Particular Microbe

This is the story of the Hawaiian bobtail squid and Vibrio fischeri.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesDigital Drag?

Screenshot used with permission

As I was scrolling through Facebook a few weeks ago, I noticed a new trend: Several friends posted pictures (via an app) of what they would look like as “the opposite sex.” Some of them were quite funny—my female-identified friends sported mustaches, while my male-identified friends revealed long flowing locks. But my sociologist-brain was curious: What makes this app so appealing? How does it decide what the “opposite sex” looks like? Assuming it grabs the users’ gender from their profiles, what would it do with users who listed their genders as non-binary, trans, or genderqueer? Would it assign them male or female? Would it crash? And, on a basic level, why are my friends partaking in this “game?”

Gender is deeply meaningful for our social world and for our identities—knowing someone’s gender gives us “cues” about how to categorize and connect with that person. Further, gender is an important way our social world is organized, for better or worse. Those who use the app engage with a part of their own identities and the world around them that is extremely significant and meaningful.

Gender is also performative. We “do” gender through the way we dress, talk, and take up space. In the same way, we read gender on people’s bodies and in how they interact with us. The app “changes people’s gender” by changing their gender performance; it alters their hair, face shape, eyes, and eyebrows. The app is thus an outlet to “play” with gender performance. In other words, it’s a way of doing digital drag. Drag is a term that is often used to refer to male-bodied people dressing in a feminine way (“drag queens”) or female-bodied people dressing in a masculine way (“drag kings”), but not all people who do drag fit this definition. Drag is ultimately about assuming and performing a gender. Drag is increasingly coming into the mainstream, as the popular reality TV series RuPaul’s Drag Race has been running for almost a decade now. As more people are exposed to the idea of playing with gender, we might see more of them trying it out in semi-public spaces like Facebook.

While playing with gender may be more common, it’s not all fun and games. The Facebook app in particular assumes a gender binary with clear distinctions between men and women, and this leaves many people out. While data on individuals outside of the gender binary is limited, a 2016 report from The Williams Institute estimated that 0.6% of the U.S. adult population — 1.4 million people — identify as transgender. Further, a Minnesota study of high schoolers found about 3% of the student population identify as transgender or gender nonconforming, and researchers in California estimate that 6% of adolescents are highly gender nonconforming and 20% are androgynous (equally masculine and feminine) in their gender performances.

The problem is that the stakes for challenging the gender binary are still quite high. Research shows people who do not fit neatly into the gender binary can face serious negative consequences, like discrimination and violence (including at least 28 killings of transgender individuals in 2017 and 4 already in 2018).  And transgender individuals who are perceived as gender nonconforming by others tend to face more discrimination and negative health outcomes.

So, let’s all play with gender. Gender is messy and weird and mucking it up can be super fun. Let’s make a digital drag app that lets us play with gender in whatever way we please. But if we stick within the binary of male/female or man/woman, there are real consequences for those who live outside of the gender binary.


Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.


Planet DebianBenjamin Mako Hill: “Stop Mang Fun of Me”

Somebody recently asked me if I am the star of quote #75514 (a snippet of online chat from a large collaboratively built collection):

<mako> my letter "eye" stopped worng
<luca> k, too?
<mako> yeah
<luca> sounds like a mountain dew spill
<mako> and comma
<mako> those three
<mako> ths s horrble
<luca> tme for a new eyboard
<luca> 've successfully taen my eyboard apart
       and fxed t by cleanng t wth alcohol
<mako> stop mang fun of me
<mako> ths s a laptop!!

It was me. A circuit on my laptop had just blown out my I, K, ,, and 8 keys. At the time I didn’t think it was very funny.

I had no idea anyone had saved a log, and I had forgotten about the experience until I saw the quote. I appreciate it now, so I’m glad somebody did!

This was unrelated to the time that I poured water into two computers in front of 1,500 people and the time that I carefully placed my laptop into a full bucket of water.

Planet DebianGunnar Wolf: Material for my UNL course, «Security in application development», available on GitLab

I have left this blog to linger without much activity... My life has become quite busy. So, I'll try to put some life back here ☺

During the last trimester of last year, I was invited as a distance professor to teach «Security in application development» in the «TUSL (Technical University degree on Free Software)» short career taught by the online studies branch of Universidad Nacional del Litoral, based in Santa Fé, Argentina. The career is a three-year program that provides a facilitating, professional, terminal degree according to current Argentinian regulations (which demand that people providing professional services in informatics be "matriculated"). It is not a full Bachelors degree, as it does not allow graduated students to continue with a postgraduate; I have sometimes seen such programs offered as Associate degrees in some USA circles.

Anyway - I am most proud to say that, while I already had a bit of experience giving traditional university courses, this is my first time actually designing a course that is taken completely in writing; I have distance-taught once before, but it was completely video-based, with forums used mostly for student participation.

So, I wrote quite a bit of material for my course. And, not to brag, but I think I did it nicely. The material is completely in Spanish, but some of you might be interested in it. And the most natural venue to share it from is, of course, the TUSL group in GitLab.

The TUSL group is quite interesting; when I made my yearly pilgrimage to Argentina in December, we met and chatted, even had a small conference for students and interested people in the region. I hope to continue to be involved in their efforts.

Anyway, as for my material — Strange as it might seem, I wrote mostly using the Moodle editor. I have been translating my writings to a more flexible Markdown, but you will find parts of it are still just HTML dumps taken with wget (taken as I don't want the course to be cleaned and forgotten!) The repository is split between the reading materials I gave the students (links to external material and to material written by myself) and the activities, where I basically just mirrored/statified the interactions through the forums.

I hope this material is interesting to some of you. And, of course, feel free to fix my errors and send merge requests ☺

Planet Linux AustraliaTim Serong: Strange Bedfellows

The Tasmanian state election is coming up in a week’s time, and I’ve managed to do a reasonable job of ignoring the whole horrible thing, modulo the promoted tweets, the signs on the highway, the junk the major (and semi-major) political parties pay to dump in my letterbox, and occasional discussions with friends and neighbours.

Promoted tweets can be blocked. The signs on the highway can (possibly) be re-purposed for a subsequent election, or can be pulled down and used for minor windbreak/shelter works for animal enclosures. Discussions with friends and neighbours are always interesting, even if one doesn’t necessarily agree. I think the most irritating thing is the letterbox junk; at best it’ll eventually be recycled, at worst it becomes landfill or firestarters (and some of those things do make very satisfying firestarters).

Anyway, as I live somewhere in the wilds division of Franklin, I thought I’d better check to see who’s up for election here. There’s no independents running this time, so I’ve essentially got the choice of four parties; Shooters, Fishers and Farmers Tasmania, Tasmanian Greens, Tasmanian Labor and Tasmanian Liberals (the order here is the same as on the TEC web site; please don’t infer any preference based on the order in which I list parties in this blog post).

I feel like I should be setting party affiliations aside and voting for individuals, but of the sixteen candidates listed, to the best of my knowledge I’ve only actually met and spoken with two of them. Another I noticed at random in a cafe, and I was ignored by a fourth who was milling around with some cronies at a promotional stand out the front of Woolworths in Huonville a few weeks ago. So, party affiliations it is, which leads to an interesting thought experiment.

When you read those four party names above, what things came most immediately to mind? For me, it was something like this:

  • Shooters, Fishers & Farmers: Don’t take our guns. Fuck those bastard Greenies.
  • Tasmanian Greens: Protect the natural environment. Renewable energy. Try not to kill anything. Might collaborate with Labor. Liberals are big money and bad news.
  • Tasmanian Labor: Mellifluous babble concerning health, education, housing, jobs, pokies and something about workers rights. Might collaborate with the Greens. Vehemently opposed to the Liberals.
  • Tasmanian Liberals: Mellifluous babble concerning jobs, health, infrastructure, safety and the Tasmanian way of life, peppered with something about small business and family values. Vehemently opposed to Labor and the Greens.

And because everyone usually automatically thinks in terms of binaries (e.g. good vs. evil, wrong vs. right, one vs. zero), we tend to end up imagining something like this:

  • Shooters, Fishers & Farmers vs. Greens
  • Labor vs. Liberal
  • …um. Maybe Labor and the Greens might work together…
  • …but really, it’s going to be Labor or Liberal in power (possibly with some sort of crossbench or coalition support from minor parties, despite claims from both that it’ll be majority government all the way).

It turns out that thinking in binaries is remarkably unhelpful, unless you’re programming a computer (it’s zeroes and ones all the way down), or are lost in the wilderness (is this plant food or poison? is this animal predator or prey?) The rest of the time, things tend to be rather more colourful (or grey, depending on your perspective), which leads back to my thought experiment: what do these “naturally opposed” parties have in common?

According to their respective web sites, the Shooters, Fishers & Farmers and the Greens have many interests in common, including agriculture, biosecurity, environmental protection, tourism, sustainable land management, health, education, telecommunications and addressing homelessness. There are differences in the policy details of course (some really are diametrically opposed), but in broad strokes these two groups seem to care strongly about – and even agree on – many of the same things.

Similarly, Labor and Liberal are both keen to tell a story about putting the people of Tasmania first, about health, education, housing, jobs and infrastructure. Honestly, for me, they just kind of blend into one another; sure there’s differences in various policy details, but really if someone renamed them Labal and Liberor I wouldn’t notice. These two are the status quo, and despite fighting it out with each other repeatedly, are, essentially, resting on their laurels.

Here’s what I’d like to see: a minority Tasmanian state government formed from a coalition of the Tasmanian Greens plus the Shooters, Fishers & Farmers party, with the Labor and Liberal parties together in opposition. It’ll still be stuck in that irritating Westminster binary mode, but at least the damn thing will have been mixed up sufficiently that people might actually talk to each other rather than just fighting.

Planet DebianAndrew Shadura: How to stop gnome-settings-daemon messing with keyboard layouts

In case you, just like me, want to have a heavily customised keyboard layout configuration, possibly with different layouts on different input devices (I recommend inputplug to make that work), you probably don’t want your desktop environment to mess with your settings or, worse, re-set them to some default from time to time. Unfortunately, that’s exactly what gnome-settings-daemon does by default in GNOME and Unity. While I could modify inputplug to detect that and undo the changes immediately, it turned out this behaviour can be disabled with an underdocumented option:

gsettings set org.gnome.settings-daemon.plugins.keyboard active false

Thanks to Sebastien Bacher for helping me with this two years ago.

Planet DebianJo Shields: Update on MonoDevelop Linux releases

Once upon a time, we had two package repositories – one for RPM files, one for Deb files. This, as it turned out, was untenable – just building on an old distribution was insufficient to offer “works on everything” packages, due to dependent library APIs not necessarily being forward-compatible. For example, openSUSE users could not install MonoDevelop, because the versions of libgcrypt, libssl, and libcurl on their systems were simply incompatible with those on CentOS 7. MonoDevelop packages were essentially abandoned as unmaintainable.

Then, nearly 2 years ago, a reprieve – a trend towards development of cross-distribution packaging systems made it viable to offer MonoDevelop in a form which did not care about openSUSE or CentOS or Ubuntu or Debian having incompatible libraries. A release was made using Flatpak (born xdg-app). And whilst this solved a host of distribution problems, it introduced new usability problems. Flatpak means sandboxing, and without explicit support for sandbox escape at the appropriate moment, users would be faced with a different experience than the one they expected (e.g. not being able to P/Invoke libraries in /usr/lib, as the sandbox’s /usr/lib is different).

In 2 years of on-off development (mostly off – I have a lot of responsibilities and this was low priority), I wasn’t able to add enough sandbox awareness to the core of MonoDevelop to make the experience inside the sandbox feel as natural as the experience outside it. The only community contribution to make the process easier was this pull request against DBus#, which helped me make a series of improvements, but not at a sufficient rate to make a “fully Sandbox-capable” version any time soon.

In the interim between giving up on MonoDevelop packages and now, I built infrastructure within our CI system for building and publishing packages targeting multiple distributions (not the multi-distribution packages of yesteryear). And so to today, when recent MonoDevelop .debs and .rpms are or will imminently be available in our Preview repositories. Yes it’s fully installed in /usr, no sandboxing. You can run it as root if that’s your deal.

MonoDevelop on CentOS 6

Where are the ARM builds?

Where are the ARM64 builds?

Why aren’t you offering builds for $DISTRIBUTION?

It’s an inordinate amount of work to support the 10(!) distributions I already do. Especially when, due to an SSL state engine bug in all versions of Mono prior to 5.12, nuget restore in the MonoDevelop project fails about 40% of the time. With 12 builds currently running concurrently, the likelihood of a successful publication of a known-good release is about 0.2% (a 60% per-build success rate compounded across 12 builds: 0.6^12 ≈ 0.002). I’m on build attempt 34 since my last packaging fix, at time of writing.

Can this go into my distribution now?

Oh God no. make dist should generate tarballs which at least work now, but they’re very much not distribution-quality. See here.

What about Xamarin Studio/Visual Studio for Mac for Linux?

Probably dead, for now. Not that it ever existed, of course. *cough*. But if it did exist, a major point of concern for making something capital-S-Supportable (VS Enterprise is about six thousand dollars) is being able to offer a trustworthy, integration-tested product. There are hundreds of lines of patches applied to “the stack” in Mac releases of Visual Studio for Mac, Xamarin.Whatever, and Mono. Hundreds to Gtk+2 alone. How can we charge people money for a product which might glitch randomly because the version of Gtk+2 in the user’s distribution behaves weirdly in some circumstances? If we can’t control the stack, we can’t integration test, and if we can’t integration test, we can’t make a capital-P Product. The frustrating part of it all is that the usability issues of MonoDevelop in a sandbox don’t apply to the project types used by Xamarin Studio/VSfM developers. Android development end-to-end works fine. Better than Mac/Windows in some cases, in fact (e.g. virtualization on AMD processors). But because making Gtk#2 apps sucks in MonoDevelop, users aren’t interested. And without community buy-in on MonoDevelop, there’s just no scope for making MonoDevelop-plus-proprietary-bits.

Why does the web stuff not work?

WebkitGtk dropped support for Gtk+2 years ago. It worked in Flatpak MonoDevelop because we built an old WebkitGtk, for use by widgets.

Aren’t distributions talking about getting rid of Gtk+2?

Yes 😬

CryptogramElection Security

I joined a letter supporting the Secure Elections Act (S. 2261):

The Secure Elections Act strikes a careful balance between state and federal action to secure American voting systems. The measure authorizes appropriation of grants to the states to take important and time-sensitive actions, including:

  • Replacing insecure paperless voting systems with new equipment that will process a paper ballot;

  • Implementing post-election audits of paper ballots or records to verify electronic tallies;

  • Conducting "cyber hygiene" scans and "risk and vulnerability" assessments and supporting state efforts to remediate identified vulnerabilities.

The legislation would also create needed transparency and accountability in elections systems by establishing clear protocols for state and federal officials to communicate regarding security breaches and emerging threats.

Worse Than FailureError'd: Everybody's Invited!

"According to Outlook, it seems that I accidentally invited all of the EU and US citizens combined," writes Wouter.


"Just an array a month sounds like a pretty good deal to me! And I do happen to have some arrays to spare..." writes Rutger W.


Lucas wrote, "VMWare is on the cutting edge! They can support TWICE as much Windows 10 as their competitors!"


"I just wish it was CurrentMonthName so that I could take advantage of the savings!" Ken wrote.


Mark B. "I had no idea that Redboxes were so cultured."


"I'm a little uncomfortable about being connected to an undefined undefined," writes Joel B.



Krebs on SecurityChase ‘Glitch’ Exposed Customer Accounts

Multiple customers have reported logging in to their bank accounts, only to be presented with another customer’s bank account details. Chase has acknowledged the incident, saying it was caused by an internal “glitch” Wednesday evening that did not involve any kind of hacking attempt or cyber attack.

Trish Wexler, director of communications for the retail side of JP Morgan Chase, said the incident happened Wednesday evening, for “a pretty limited number of customers” between 6:30 pm and 9 pm ET who “sporadically during that time while logged in could see someone else’s account details.”

“We know for sure the glitch was on our end, not from a malicious actor,” Wexler said, noting that Chase is still trying to determine how many customers may have been affected. “We’re going through Tweets from customers and making sure that if anyone is calling us with issues we’re working one on one with customers. If you see suspicious activity you should give us a call.”

Wexler urged customers to “practice good security hygiene” by regularly reviewing their account statements, and promptly reporting any discrepancies. She said Chase is still working to determine the precise cause of the mix-up, and that there have been no reports of JPMC commercial customers seeing the account information of other customers.

“This was all on our side,” Wexler said. “I don’t know what did happen yet but I know what didn’t happen. What happened last night was 100 percent not the result of anything malicious.”

The account mix-up was documented on Wednesday by Fly & Dine, an online publication that chronicles the airline food industry. Fly & Dine included screenshots of one of their writer’s spouses logged into the account of a fellow Chase customer with an Amazon and Chase card and a balance of more than $16,000.

Kenneth White, a security researcher and director of the Open Crypto Audit Project, said the reports he’s seen on Twitter and elsewhere suggested the screwup was somehow related to the bank’s mobile apps. He also said the Chase retail banking app offered an update first thing Thursday morning.

Chase says the oddity occurred for both Web browser users and users of the Chase mobile app.

“We don’t have any evidence it was related to any update,” Wexler said.

“There’s only so many kind of logic errors where Kenn logs in and sees Brian’s account,” White said.  “It can be a devil to track down because every single time someone logs in it’s a roll of the dice — maybe they get something in the warmed up cache or they get a new hit. It’s tricky to debug, but this is like as bad as it gets in terms of screwup of the app.”

White said the incident is reminiscent of a similar glitch at online game giant Steam, which caused many customers to see account information for other Steam users for a few hours. He said he suspects the problem was a configuration error someplace within “caching servers,” which are designed to ease the load on a Web application by periodically storing some common graphical elements on the page — such as images, videos and GIFs.

“The images, the site banner, all that’s fine to be cached, but you never want to cache active content or raw data coming back,” White said. “If you’re CNN, you’re probably caching all the content on the homepage. But for a banking app that has access to live data, you never want that to be cached.”

“It’s fairly easy to fix once you identify the problem,” he added. “I can imagine just getting the basics of the core issue [for Chase] would be kind of tricky and might mean a lot of non techies calling your Tier 1 support people.”
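
To make White’s point concrete: the split between cacheable and non-cacheable content comes down to the Cache-Control headers an application attaches to its responses. The sketch below is purely illustrative (it is not Chase’s code; the routes and payload are invented for the example), using Python’s Flask framework:

from flask import Flask, jsonify, send_from_directory

app = Flask(__name__)

@app.route("/assets/<path:filename>")
def static_asset(filename):
    # Images, banners, scripts: safe for intermediate caches to store.
    response = send_from_directory("static", filename)
    response.headers["Cache-Control"] = "public, max-age=31536000"
    return response

@app.route("/api/account")
def account_details():
    # Live per-user data: an intermediate cache that stores this response
    # could replay one customer's details to another customer.
    response = jsonify({"balance": "<live data>"})  # placeholder payload
    response.headers["Cache-Control"] = "no-store, private"
    return response

A misconfigured caching layer that ignores or strips these directives on live endpoints produces exactly the symptom described above: whichever response happens to be in the cache gets served to the next visitor.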

Update, 8:10 p.m. ET: Added comment from Chase about the incident affecting both mobile device and Web browser users.


Planet DebianNicolas Dandrimont: Report from Debian SnowCamp: day 1

Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

This morning, I arrived in Milan at “omfg way too early” (5:30AM, thanks to a 30 minute early (!) night train), and used the opportunity to walk the empty streets around the Duomo while the Milanese .oO(mapreri) were waking up. This gave me the opportunity to take very nice pictures of monuments without people, which is always appreciated!


After a short train ride to Laveno, we arrived at the Hostel at around 10:30. Some people had already arrived the day before, so there already was a hacking kind of mood in the air.  I’d post a panorama but apparently my phone generated a corrupt JPEG 🙄

After rearranging the tables in the common spaces to handle power distribution correctly (♥ Gaffer Tape), we could start hacking!

Today’s efforts were focused on the DebConf website: I reviewed and merged a bunch of pull requests made by Stefano.

I’ve also written a modicum of code.

Finally, I have created the Debian 3D printing team on salsa in preparation for migrating our packages to git. But now it’s time to do the sleep thing. See you tomorrow?

My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.

Planet DebianJonathan Dowland: A Nice looking Blog

I stumbled across this rather nicely-formatted blog by Alex Beal and thought I'd share it. It's a particular kind of minimalist style that I like, because it puts the content first. It reminds me of Mark Pilgrim's old blog.

I can't remember which post in particular I came across first, but the one that I thought I would share was this remarkably detailed personal research project on tracking mood.

That would have been the end of it, but I then stumbled across this great review of "Type Driven Development with Idris", a book by Edwin Brady. I bought this book during the Christmas break but I haven't had much of a chance to deep dive into it yet.

Google AdsenseIntroducing AdSense Auto ads

Finding the time to create great content for your users is an essential part of growing your publishing business. Today we are introducing AdSense Auto ads, a powerful new way to place ads on your site. Auto ads use machine learning to make smart placement and monetization decisions on your behalf, saving you time. Place one piece of code just once to all of your pages, and let Google take care of the rest.
Some of the benefits of Auto ads include:
  • Optimization: Using machine learning, Auto ads show ads only when they are likely to perform well and provide a good user experience.
  • Revenue opportunities: Auto ads will identify any available ad space and place new ads there, potentially increasing your revenue.
  • Easy to use: With Auto ads you only need to place the ad code on your pages once. When you’re ready to use new features and ad formats, simply turn them on and off with the flick of a switch -- there’s no need to change the code again.

How do Auto ads work?

  1. Select the ad formats you want to show on your pages by switching them on with a simple toggle.
  2. Place the Auto ads code on your pages.

Auto ads will now start working for you by analyzing your pages, finding potential ad placements, and showing new ads when they’re likely to perform well and provide a good user experience.
And if you want to have different formats on different pages you can use the new Advanced URL settings feature (e.g. you can choose to place In-feed ads on some pages but not on others).

Getting started with AdSense Auto ads

Auto ads can work equally well on new sites and on those already showing ads.
Have you manually placed ads on your page?
There’s no need to remove them if you don’t want to. Auto ads will take into account all existing Google ads on your pages.

Already using Anchor or Vignette ads?
Auto ads include Anchor and Vignette ads and many more additional formats such as Text and display, In-feed, and Matched content. Note that all users that used Page-level ads are automatically migrated over to Auto ads without any need to add code to their pages again.

To get started with AdSense Auto ads:
  1. Sign in to your AdSense account.
  2. In the left navigation panel, visit My ads and select Get Started.
  3. On the "Choose your global settings" page, select the ad formats that you'd like to show and click Save.
  4. On the next page, click Copy code.
  5. Paste the ad code between the <head> and </head> tags of each page where you want to show Auto ads.
  6. Auto ads will start to appear on your pages in about 10-20 minutes.

We'd love to hear what you think about Auto ads in the comments section below this post.

Posted by:
Tom Long, AdSense Engineering Manager
Violetta Kalathaki, AdSense Product Manager

Planet DebianRussell Coker: Dell PowerEdge T30

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment; it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM; that’s unusually small, but it wouldn’t be the first time a vendor crippled a low-end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM. It apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only 4*SATA connectors on the motherboard, one of which is needed for the DVD drive. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature, it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002 so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low end tower servers from Dell didn’t have built in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225 which is slower than some systems people are throwing out nowadays) the system would be a decent gaming system.

It has lots of USB ports which is handy for a file server, I can attach lots of backup devices. Also most of the ports support “super speed”, I haven’t yet tested out USB devices that support such speeds but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, this is really great for a system on or under your desk, any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room. That increases the effective cost of the system by 20%. It has a PC serial port on the motherboard which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration has an option displayed for enabling charging devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for a non-laptop or have the BIOS detect at run-time that it’s not on laptop hardware and hide that option.


The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors, this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system not run in any sort of server room. However it is cheaper than any rack mounted systems from Dell so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms, all it takes is a small business moving to a serviced office that has a proper server room and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door and you just can’t have the noise of a rack mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then instead of starting with a tower server and ending up with tower systems in racks a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

CryptogramHarassment By Package Delivery

People harassing women by delivering anonymous packages purchased from Amazon.

On the one hand, there is nothing new here. This could have happened decades ago, pre-Internet. But the Internet makes this easier, and the article points out that using prepaid gift cards makes this anonymous. I am curious how much these differences add up to a difference in kind, and what can be done about it.

Worse Than FailureCodeSOD: Functional IsFunction

Julio S recently had to attempt to graft a third-party document viewer onto an internal web app. The document viewer was from a company which specialized in enterprise “document solutions”, which can be purchased for enterprise-sized licensing fees.

Gluing the document viewer onto their internal app didn’t go terribly well. While debugging, and browsing through the vendor’s javascript, he saw a lot of calls to a function called IsFunction. It was loaded from a “utilities.js”-type do-everything library file. Curious, Julio pulled up the implementation.

function IsFunction ( func ) {
    var bChk=false;
    if (func != "undefined") bChk=true;
    else bChk=false;
    return bChk;
}

I cannot emphasize enough how beautiful this block of code is, by the standards of bad code. There’s so much there. One variable, bChk, uses Hungarian notation. Nothing else seems to. It’s a totally superfluous variable, as we could just do return func != "undefined".

Then again, why would we even do that? The real beauty, though, is how the name of the function and its implementation have no relationship to each other, and the implementation is utterly useless. For example:

IsFunction("Hello World"); //true
IsFunction({spam: "eggs"}); //true
IsFunction(function() {}); //true, but it was probably an accident
IsFunction(undefined); //true
IsFunction("undefined"); //false

Yes, the only time this function returns false is the specific case where you pass it the string “undefined”. Everything else IsFunction, apparently. The useless function sounds important. Someone wrote it, probably as a quick attempt at vaguely defensive programming. “I should make sure my inputs are valid”. They didn’t test it. They certainly didn’t think about it. But they wrote it. And then someone else saw the function in use, and said, “Oh… I should probably use that, too.” Somewhere, there’s probably a “Style Guide”, which mandates that, before attempting to invoke a variable that should contain a function, you use IsFunction to confirm it does. It comes up in code reviews, and code has been held from going into production because someone didn't use IsFunction.

And Julio probably is the first person to actually check the implementation since it was first written.



Planet DebianRenata D'Avila: How to use the EventCalendar ical


If you follow this blog, you should probably know by now that I have been working with my mentors to contribute to the MoinMoin EventCalendar macro, adding the possibility to export the events' data to an icalendar file.

A screenshot of the code, with the function definition for creating the ical file from events from the macro

The code (which can be found on this GitHub repository) isn't quite ready yet, because I'm still working to convert the recurrence rule to the icalendar format, but other than that, it should be working. Hopefully.

This guide assumes that you have the EventCalendar macro installed on the wiki and that the macro is called on a given wikipage.

The icalendar file is now generated as an attachment the moment the macro is loaded. I created an "ical" link at the bottom of the calendar. When activated, this link prompts the download of the ical attachment of the page. Being an attachment, there is still the possibility to just view the ical file using the "attachment" menu if the user wishes to do so.
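
For the curious, the core of the export can be sketched with the icalendar Python library. This is a simplified illustration rather than the macro's actual code (the event field names are assumptions, and recurrence rules are left out since that conversion is still in progress):

from icalendar import Calendar, Event

def events_to_ical(events):
    # Build an iCalendar document from a list of event dicts, each
    # assumed to carry 'title', 'startdate' and 'enddate' entries.
    cal = Calendar()
    cal.add('prodid', '-//MoinMoin EventCalendar//EN')  # illustrative identifier
    cal.add('version', '2.0')
    for ev in events:
        ical_event = Event()
        ical_event.add('summary', ev['title'])
        ical_event.add('dtstart', ev['startdate'])
        ical_event.add('dtend', ev['enddate'])
        cal.add_component(ical_event)
    return cal.to_ical()  # bytes, ready to be stored as a page attachment

The resulting bytes are what gets saved as the page attachment that the "ical" link points to.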

Wiki page showing the calendar, with the 'ical' link at the bottom

There are two ways of importing this calendar into Thunderbird. The first one is to download the file by clicking on the link and then importing it manually into Thunderbird.

Thunderbird screenshot, with the menus "Events and Tasks" and "Import" selected

The second option is to "Create a new calendar / On the network" and to use the URL address from the ical link as the "location", as it is shown below:

Thunderbird screenshot, showing the new calendar dialog and the ical URL pasted into the "location" textbox

As usual, it's possible to customize the name for the calendar, the color for the events and such...

Thunderbird screenshot, showing the new calendar with its events

I noticed a few Wikis that use the EventCalendar, such as the Debian wiki itself and the FSFE wiki. The Python wiki also seems to be using MoinMoin and EventCalendar, but they appear to use a Google service to export the event data to iCal.

If you read this and are willing to try the code in your wiki and give me feedback, I would really appreciate it. You can find the ways to contact me in my Debian Wiki profile.

Planet DebianJonathan McDowell: Getting Debian booting on a Lenovo Yoga 720

I recently got a new work laptop, a 13” Yoga 720. It proved difficult to install Debian on; pressing F12 would get a boot menu allowing me to select a USB stick I have EFI GRUB on, but after GRUB loaded the kernel and the initrd it would just sit there, never outputting anything to indicate the kernel was even starting. I found instructions about Ubuntu 17.10 which helped but weren’t the complete picture. What seems to be the situation is that the kernel won’t happily boot if “Legacy Support” is not enabled - enabling this (and still booting as EFI) results in a happier experience. However in order to be able to enable legacy boot you have to switch the SATA controller from RAID to AHCI, which can cause Windows to get unhappy about its boot device going away unless you warn it first.

  • Fire up an admin shell in Windows (right click on the start menu)
  • bcdedit /set safeboot minimal
  • Reboot into the BIOS
  • Change the SATA Controller mode from RAID to AHCI (dire warnings about “All data will be erased”. It’s not true, but you’ve backed up first, right?). Set “Boot Mode” to “Legacy Support”.
  • Save changes and let Windows boot to Safe Mode
  • Fire up an admin shell in Windows (right click on the start menu again)
  • bcdedit /deletevalue safeboot
  • Reboot again and Windows will load in normal mode with the AHCI drivers

Additionally I had problems getting the GRUB entry added to the BIOS; efibootmgr shows it fine but it never appears in the BIOS boot list. I ended up using Windows to add it as the primary boot option using the following (<guid> gets replaced with whatever the new “Debian” section guid is):

bcdedit /enum firmware
bcdedit /copy "{bootmgr}" /d "Debian"
bcdedit /set "{<guid>}" path \EFI\Debian\grubx64.efi
bcdedit /set "{fwbootmgr}" displayorder "{<guid>}" /addfirst

Even with that at one point the BIOS managed to “forget” about the GRUB entry and require me to re-do the final “displayorder” command.

Once you actually have the thing installed and booting it seems fine - I’m running Buster due to the fact it’s a Skylake machine with lots of bits that seem to want a newer kernel, but claimed battery life is impressive, the screen is very shiny (though sometimes a little too shiny and reflective) and the NVMe SSD seems pretty nippy as you’d expect.

TEDRemembering pastor Billy Graham, and more news in brief

Behold, your recap of TED-related news:

Remembering Billy Graham. For more than 60 years, pastor Billy Graham inspired countless people around the world with his sermons. On Wednesday, February 21, he passed away at his home in North Carolina after struggling with numerous illnesses over the past few years. He was 99 years old. Raised on a dairy farm in N.C., Graham used the power of new technologies, like radio and television, to spread his message of personal salvation to an estimated 215 million people globally, while simultaneously reflecting on technology’s limitations. Reciting the story of King David to audiences at TED1998, “David found that there were many problems that technology could not solve. There were many problems still left. And they’re still with us, and you haven’t solved them, and I haven’t heard anybody here speak to that,” he said, referring to human evil, suffering, and death. To Graham, the answer to these problems was to be found in God. Even after his death, through the work of the Billy Graham Evangelistic Association, led by his son Franklin, his message of personal salvation will live on. (Watch Graham’s TED Talk)

Fashion inspired by Black Panther. TED Fellow and fashion designer Walé Oyéjidé draws on aesthetics from around the globe to create one-of-a-kind pieces that dismantle bias and celebrate often-marginalized groups. For New York Fashion Week, Oyéjidé designed a suit with a coat and scarf for a Black Panther-inspired showcase, sponsored by Marvel Studios. One of Oyéjidé’s scarves is also worn in the movie by its protagonist, King T’Challa. “The film is very much about the joy of seeing cultures represented in roles that they are generally not seen in. There’s villainy and heroes, tech genius and romance,” Oyéjidé told the New York Times, “People of color are generally presented as a monolithic image. I’m hoping it smashes the door open to show that people can occupy all these spaces.” (Watch Oyéjidé’s TED Talk)

Nuclear energy advocate runs for governor. Environmentalist and nuclear energy advocate Michael Shellenberger has launched his campaign for governor of California as an independent candidate. “I think both parties are corrupt and broken. We need to start fresh with a fresh agenda,” he says. Shellenberger intends to run on an energy and environmental platform, and he hopes to involve student environmental activists in his campaign. California’s gubernatorial election will be held in November 2018. (Watch Shellenberger’s TED Talk)

Can UV light help us fight the flu? Radiation scientist David Brenner and his research team at Columbia University’s Irving Medical Center are exploring whether a type of ultraviolet light known as far-UVC could be used to kill the flu virus. To test their theory, they released a strain of the flu virus called H1N1 in an enclosed chamber and exposed it to low doses of UVC. In a paper published in Nature’s Scientific Reports, they report that far-UVC successfully deactivated the virus. Previous research has shown that far-UVC doesn’t penetrate the outer layer of human skin or eyes, unlike conventional UV rays, which means that it appears to be safe to use on humans. Brenner suggests that far-UVC could be used in public spaces to fight the flu. “Think about doctors’ waiting rooms, schools, airports and airplanes—any place where there’s a likelihood for airborne viruses,” Brenner told Time. (Watch Brenner’s TED Talk.)

A beautiful sculpture for Madrid. For the 400th anniversary of Madrid’s Plaza Mayor, artist Janet Echelman created a colorful, fibrous sculpture, which she suspended above the historic space. The sculpture, titled “1.78 Madrid,” aims to provoke contemplation of the interconnectedness of time and our spatial reality. The title refers to the number of microseconds by which a day on Earth was shortened as a result of the 2011 earthquake in Japan, which was so strong it caused the planet’s rotation to accelerate. At night, colorful lights are projected onto the sculpture, which makes it an even more dynamic, mesmerizing sight for the city’s residents. (Watch Echelman’s TED Talk)

A graduate program that doesn’t require a high school degree. Economist Esther Duflo’s new master’s program at MIT is upending how we think about graduate school admissions. Rather than requiring the usual test scores and recommendation letters, the program allows anyone to take five rigorous, online courses for free. Students only pay to take the final exam, the cost of which ranges from $100 to $1,000 depending on income. If they do well on the final exam, they can apply to MIT’s master’s program in data, economics and development policy. “Anybody could do that. At this point, you don’t need to have gone to college. For that matter, you don’t need to have gone to high school,” Duflo told WBUR. Already, more than 8,000 students have enrolled online. The program intends to raise significant aid to cover the cost of the master’s program and living in Cambridge, with the first class arriving in 2020. (Watch Duflo’s TED Talk)

Have a news item to share? Write us and you may see it included in this weekly round-up.

Planet DebianMJ Ray: How hard can typing æ, ø and å be?

Petter Reinholdtsen: How hard can æ, ø and å be? comments on the rubbish state of till printers and their mishandling of foreign characters.

Last week, I was trying to type an email, on a tablet, in Dutch. The tablet was running something close to Android and I was using a Bluetooth keyboard, which seemed to be configured correctly for my location in England.

Dutch doesn’t even have many accents. I wanted an e acute (é). If you use the on screen keyboard, this is actually pretty easy, just press and hold e and slide to choose the accented one… but holding e on a Bluetooth keyboard? eeeeeeeeeee!

Some guides suggest Alt and e, then e. Apparently that works, but not on keyboards set to Great British… because, I guess, we don’t want any of that foreign muck since the Brexit vote, or something(!)

Even once you figure out that madness and switch the keyboard back to international, which also enables alt i, u, n and so on to do other accents, I still can’t find grave, check, breve or several other accents. I managed to send the emails in Dutch but I’d struggle with various other languages.

Have I missed a trick or what are the Android developers thinking? Why isn’t there a Compose key by default? Is there any way to get one?

CryptogramNew Spectre/Meltdown Variants

Researchers have discovered new variants of Spectre and Meltdown. The software mitigations for Spectre and Meltdown seem to block these variants, although the eventual CPU fixes will have to be expanded to account for these new attacks.

Worse Than FailureShiny Side Up


It feels as though disc-based media have always been with us, but the 1990s were when researchers first began harvesting these iridescent creatures from the wild in earnest, pressing data upon them to create the beast known as CD-ROM. Click-and-point adventure games, encyclopedias, choppy full-motion video ... in some cases, ambition far outweighed capability. Advances in technology made the media cheaper and more accessible, often for the worse. There are some US households that still burn America Online 7.0 CDs for fuel.

But we’re not here to delve into the late-90s CD marketing glut. We’re nestling comfortably into the mid-90s, when the Internet was too slow and unreliable for anyone to upload installers onto a customer portal and call it a day. Software had to go out on physical media, and it had to be as bug-free as possible before shipping.

Chris, a developer fresh out of college, worked on product catalog database applications that were mailed to customers on CDs. It was a small shop with no Tech Support department, so he and the other developers had to take turns fielding calls from customers having issues with the admittedly awful VB4 installer. It was supposed to launch automatically, but if the auto-play feature was disabled in Windows 95, or the customer canceled the installer pop-up without bothering to read it, Chris or one of his colleagues was likely to hear about it.

And then came the caller who had no clue what Chris meant when he suggested, "Why don't we open up the CD through the file system and launch the installer manually?"

These were the days before remote desktop tools, and the caller wasn't the savviest computer user. Talking him through minimizing his open programs, double-clicking on My Computer, and browsing into the CD drive took Chris over half an hour.

"There's nothing here," the caller said.

So close to the finish line, and yet so far. Chris stifled his exasperation. "What do you mean?"

"I opened the CD like you said, and it's completely empty."

This was new. Chris frowned. "You're definitely looking at the right drive? The one with the shiny little disc icon?"

"Yes, that's the one. It's empty."

Chris' frown deepened. "Then I guess you got a bad copy of the CD. I'm sorry about that! Let me copy down your name and address, and I'll get a new one sent out to you."

The customer provided his mailing address accordingly. Chris finished scribbling it onto a Post-it square. "OK, lemme read that back to—"

"The shiny side is supposed to be turned upwards, right?" the customer blurted. "Like a gramophone record?"

Chris froze, then slapped the mute button before his laughter spilled out over the line. After composing himself, he returned to the call as the model of professionalism. "Actually, it should be shiny-side down."

"Really? Huh. The little icon's lying, then."

"Yeah, I guess it is," Chris replied. "Unfortunately, that's on Microsoft to fix. Let's turn the disc over and try again."


Planet Linux AustraliaColin Charles: MariaDB Developer’s unconference & M|18

Been a while since I wrote anything MySQL/MariaDB related here, but there’s my column on the Percona blog, which has weekly updates.

Anyway, I’ll be at the developer’s unconference this weekend in NYC. I even managed to snag a session on the schedule, MySQL features missing in MariaDB Server (Sunday, 12.15–13.00). Sign up on meetup?

Due to the prevalence of “VIP tickets”, I too signed up for M|18. If you need a discount code, I’ll happily offer them up to you to see if they still work (though I’m sure a quick Google will solve this problem for you). I’ll publish notes, probably in my weekly column.

If you’re in New York and want to say hi, talk shop, etc. don’t hesitate to drop me a line.

Planet DebianSam Hartman: Tools of Love

From my spiritual blog

I have been quiet lately. My life has been filled with gentle happiness, work, and less gentle wedding planning. How do you write about quiet happiness without sounding like the least contemplative aspects of Facebook? How do I share this part of the journey in a way that others can learn from? I was offering thanks the other day and was reminded of one of my early experiences at Fires of Venus. Someone was talking about how they were there working to do the spiritual work they needed in order to achieve their dream of opening a restaurant. I'll admit that when I thought of going to a multi-day retreat focused on spiritual connection to love, opening a restaurant had not been at the forefront of my mind. And yet, this was their dream, and surely dreams are the stuff of love. As they continued, they talked about finding self love deep enough to have the confidence to believe in dreams.

As I recalled this experience, I offered thanks for all the tools I've found to use as a lover. Every time I approach something with joy and awe, I gain new insight into the beauty of the world around us. In my work within the IETF I saw the beauty of the digital world we're working to create. Standing on sacred land, I can find the joy and love of nature and the moment.

I can share the joy I find and offer it to others. I’ve been mentoring someone at work. They’re at a point where they’re appreciating some of the great mysteries of computing like “Reflections on Trusting Trust” or two’s complement arithmetic. I’ve had the pleasure of watching their moments of discovery and also helping them understand the complex history behind how we’ve built the digital world we have. Each moment of delight reinforces the idea that we live in a world where we expect to find this beauty and connect with it. Each experience reinforces the idea that we live in a world filled with things to love.

And so, I’ve turned even my experiences as a programmer into tools for teaching love and joy. I’ve been learning another new tool lately. I’ve been putting together the dance mix for my wedding. Between that and a project last year, I’ve learned a lot about music. I will never be a professional DJ or song producer. However, I have always found joy in music and dance, and I absolutely can be good enough to share that with my friends. I can be good enough to let music and rhythm be tools I use to tell stories and share joy. In learning skills and improving my ability with music, I better appreciate the music I hear.

The same is true with writing: both my work here and my fiction. I’m busy enough with other things that I am unlikely to even attempt writing as my livelihood. Even so, I have more tools for sharing the love I find and helping people find the love and joy in their world.

These are all just tools. Words and song won’t suddenly bring us all together any more than physical affection and our bodies. However, words, song, and the joy we find in each other and in the world we build can help us find connection and empathy. We can learn to see the love that is there between us. All these tools can help us be vulnerable and open together. And that—the changes we work within ourselves using these tools—can bring us to a path of love. And so how do I write about happiness? I give thanks for the things it allows me to explore. I find value in growing and trying new things. In my best moments, each seems a lens through which I can grow as a lover as I walk Venus’s path.


Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #147

Here's what happened in the Reproducible Builds effort between Sunday February 11 and Saturday February 17 2018:

Media coverage

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Various previous patches were merged upstream:

Reviews of unreproducible packages

38 package reviews have been added, 27 have been updated and 13 have been removed this week, adding to our knowledge about identified issues.

4 issue types have been added:

One issue type has been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (24)
  • Boyuan Yang (1)
  • Cédric Boutillier (1)
  • Jeremy Bicha (1)
  • Matthias Klose (1)

diffoscope development

  • Chris Lamb:
    • Add support for comparing Berkeley DB files (#890528). This is currently incomplete because the Berkeley DB libraries do not return the same uid/hash reliably (they return "random" memory contents), so we must strip those from the human-readable output.

Website development


This week's edition was written by Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianBenjamin Mako Hill: Lookalikes

Hippy/mako lookalikes

Did I forget a period of my life when I grew a horseshoe mustache and dreadlocks, walked around topless, and illustrated this 2009 article in the Economist on the economic boon that hippy festivals represent to rural American communities?

Previous lookalikes are here.

Planet DebianRaphaël Hertzog: Time to Join Extended Long Term Support for Debian 7 Wheezy

The Debian 7 Wheezy LTS period ends on May 31st, and some companies have asked Freexian if they could get security support past this date. Since about half of the current team of paid LTS contributors is willing to continue to provide security updates for Wheezy, I have started to work on making this possible.

I just initiated a discussion on debian-devel with multiple Debian teams to see whether it is possible to continue to use Debian infrastructure to host the wheezy security updates that would be prepared in this extended LTS period.

From the sponsor side, this extended LTS will not work like the regular LTS. It is unrealistic to continue to support all packages and all architectures, so only the packages/architectures requested by sponsors will be supported. The amount invoiced to each sponsor will be directly related to the package list that they ask us to support. We made an estimate (based on history) of how much it costs to support each package and we split that cost between all the sponsors that are requesting support for this package. That cost is re-evaluated quarterly and will likely increase over time as sponsors stop their support (when they have finished migrating all their machines, for example).

This extended LTS will also have some restrictions in terms of packages that we can support. For instance, we will no longer support the Linux kernel from wheezy; you will have to switch to the kernel used in jessie (or maybe we will maintain a backport ourselves in wheezy). It is also not yet clear whether we can support OpenJDK, since upstream support of version 7 stops at the end of June. And switching to OpenJDK 8 is likely non-trivial. There are likely other unsupportable packages too.

Anyway, if your company needs wheezy security support past the end of May, now is the time to worry about it. Please send us a mail with the list of source packages that you would like to see supported. The more companies get involved, the less it will cost each of them. Our plan is to gather the required data from interested companies in the next few weeks and to make a first estimate, by mid-March, of the price they will have to pay for the first quarter. They then confirm that they are OK with the offer and we will send invoices in April so that they can be paid before the end of May.

Note however that we decided that it would not be possible to sponsor extended wheezy support (and thus influence which packages are supported) if you are not among the regular LTS sponsors (at bronze level at least). Extended LTS would not be possible without the regular LTS so if you need the former, you have to support the latter too.


Planet DebianMichal Čihař: Weblate 2.19.1

Weblate 2.19.1 has been released today. This is a bugfix-only release, mostly to fix the problematic migration from 2.18 that some users have observed.

Full list of changes:

  • Fixed migration issue on upgrade from 2.18.
  • Improved file upload API validation.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on its website; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also used as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Planet DebianNicolas Dandrimont: Listing and loading of Debian repositories: now live on Software Heritage

Software Heritage is the project I have been working on for the past two and a half years. The grand vision of the project is to build the universal software archive, which will collect, preserve and share the Software Commons.

Today, we’ve announced that Software Heritage is archiving the contents of Debian daily. I’m reposting this article on my blog as it will probably be of interest to readers of Planet Debian.

TL;DR: Software Heritage now archives all source packages of Debian as well as its security archive daily. Everything is ready for archival of other Debian derivatives as well. Keep on reading to get details of the work that made this possible.


When we first announced Software Heritage, back in 2016, we had archived the historical contents of Debian as present on the snapshot service, as a one-shot proof of concept import.

This code was then left in a drawer and never touched again, until last summer when Sushant came to do an internship with us. We’ve had the opportunity to rework the code that was originally written, and to make it more generic: instead of being tied to the specifics of one archive, the code can now work with any Debian repository. Which means that we could now archive any of the numerous Debian derivatives that are available out there.

This has been live for a few months, and you can find Debian package origins in the Software Heritage archive now.

Mapping a Debian repository to Software Heritage

The main challenge in listing and saving Debian source packages in Software Heritage is mapping the content of the repository to the generic source history data model we use for our archive.

Organization of a Debian repository

Before we start looking at a bunch of unpacked Debian source packages, we need to know how a Debian repository is actually organized.

At the top level of a Debian repository lies a set of suites, representing versions of the distribution, that is to say a set of packages that have been tested and are known to work together. For instance, Debian currently has 6 active suites, from wheezy (the “old old stable” version), all the way up to experimental; Ubuntu has 8, from precise (12.04 LTS), up to bionic (the future 18.04 release), as well as a devel suite. Each of those suites also has a bunch of “overlay” suites, such as backports, which are made available in the archive alongside full suites.

Under the suites, there’s another level of subdivision, which Debian calls components, and Ubuntu calls areas. Debian uses its components to segregate packages along licensing terms (main, contrib and non-free), while Ubuntu uses its areas to denote the level of support of the packages (main, universe, multiverse, …).

Finally, components contain source packages, which merge upstream sources with distribution-specific patches, as well as machine-readable instructions on how to build the package.

Organization of the Software Heritage archive

The Software Heritage archive is project-centric rather than version-centric. What this means is that we are interested in keeping the history of what was available in software origins, which can be thought of as a URL of a repository containing software artifacts, tagged with a type representing the means of access to the repository.

For instance, the origin for the GitHub mirror of the Linux kernel repository is of type git, with the URL of the GitHub repository.

For each visit of an origin, we take a snapshot of all the branches (and tagged versions) of the project that were visible during that visit, complete with their full history. See for instance one of the latest visits of the Linux kernel. For the specific case of GitHub, pull requests are also visible as virtual branches, so we fetch those as well (as branches named refs/pull/<pull request number>/head).

Bringing them together

As we’ve seen, Debian archives (just as well as archives for other “traditional” Linux distributions) are release-centric rather than package-centric. Mapping distributions to the Software Heritage archive therefore takes a little bit of gymnastics, to transpose the list of source packages available in each suite to a list of available versions per source package. We do this step by step:

  1. Download the Sources indices for all the suites and components known in the Debian repository
  2. Parse the Sources indices, listing all source packages inside
  3. For each source package, tell the Debian loader to load all the available versions (grouped by name), generating a complete snapshot of the state of the source package across the Debian repository

The source packages are mapped to origins using the following format:

  • type: deb
  • url: deb://<repository name>/packages/<source package name> (e.g. deb://Debian/packages/linux)

We use a repository name rather than the actual URL to a repository so that links can persist even if a given mirror disappears.
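
As a rough illustration of the listing steps, here is how a Sources index can be parsed and mapped to such origins using the deb822 module of the python-debian library. This is a simplification of the real lister (no compressed index handling, no scheduling of loader tasks):

from debian import deb822

def list_origins(sources_file, repository_name):
    # Map every source package found in an (uncompressed) Sources index
    # to its origin URL, collecting all available versions per origin.
    origins = {}
    for paragraph in deb822.Sources.iter_paragraphs(sources_file):
        url = 'deb://%s/packages/%s' % (repository_name, paragraph['Package'])
        origins.setdefault(url, set()).add(paragraph['Version'])
    return origins

# e.g. list_origins(open('Sources'), 'Debian') would contain an entry
# such as 'deb://Debian/packages/linux' mapped to its known versions.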

Loading Debian source packages

To load Debian source packages into the Software Heritage archive, we have to convert them: Debian-based distributions distribute source packages as a set of files, a dsc (Debian Source Control) and a set of tarballs (usually, an upstream tarball and a Debian-specific overlay). On the other hand, Software Heritage only stores version-control information such as revisions, directories, files.

Unpacking the source packages

Our philosophy at Software Heritage is to store the source code of software in the precise form that allows a developer to start working on it. For Debian source packages, this is the unpacked source code tree, with all patches applied. After checking that the files we have downloaded match the checksums published in the index files, we simply use dpkg-source -x to extract the source package, with patches applied, ready to build. This also means that we currently fail to import packages that don’t extract with the version of dpkg-source available in Debian Stretch.
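
In code, this step boils down to a checksum verification followed by a dpkg-source invocation. A minimal sketch (the real loader does more bookkeeping and error handling than this):

import hashlib
import subprocess

def extract_package(dsc_path, downloaded_files, destination):
    # downloaded_files maps each local file path to the sha256 checksum
    # published in the repository's index files.
    for path, expected_sha256 in downloaded_files.items():
        with open(path, 'rb') as f:
            if hashlib.sha256( != expected_sha256:
                raise ValueError('checksum mismatch for %s' % path)
    # dpkg-source applies the distribution patches while extracting,
    # producing the ready-to-work-on source tree that we archive.
    subprocess.check_call(['dpkg-source', '-x', dsc_path, destination])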

Generating a synthetic revision

After walking the extracted source package tree, computing identifiers for all its contents, we get the identifier of the top-level tree, which we will reference in the synthetic revision.

The synthetic revision contains the “reproducible” metadata that is completely intrinsic to the Debian source package. With the current implementation, this means:

  • the author of the package, and the date of modification, as referenced in the last entry of the source package changelog (referenced as author and committer)
  • the original artifact (i.e. the information about the original source package)
  • basic information about the history of the package (using the parsed changelog)

However, we never set parent revisions in the synthetic commits, for two reasons:

  • there is no guarantee that packages referenced in the changelog have been uploaded to the distribution, or imported by Software Heritage (our update frequency is lower than that of the Debian archive)
  • even if this guarantee existed, and all versions of all packages were available in Software Heritage, there would be no guarantee that the version referenced in the changelog is indeed the version we imported in the first place

This makes the information stored in the synthetic revision fully intrinsic to the source package, and reproducible. In turn, this allows us to keep a cache mapping original artifacts to synthetic revision ids, so we never load the same package twice.
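
Since identical artifacts always yield the identical synthetic revision, the cache can be as simple as this sketch (illustrative only, not the real implementation):

    revision_cache = {}  # {artifact_checksum: synthetic_revision_id}

    def revision_for(artifact_checksum, build_revision):
        """build_revision: thunk that unpacks the package and builds the
        synthetic revision; only called on a cache miss."""
        if artifact_checksum not in revision_cache:
            revision_cache[artifact_checksum] = build_revision()
        return revision_cache[artifact_checksum]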

Storing the snapshot

Finally, we can generate the top-level object in the Software Heritage archive, the snapshot. For instance, you can see the snapshot for the latest visit of the glibc package.

To do so, we generate a list of branches by concatenating the suite, the component, and the version number of each detected source package (e.g. stretch/main/2.24-10 for version 2.24-10 of the glibc package available in stretch/main). We then point each branch to the synthetic revision that was generated when loading the package version.

In case a version of a package fails to load (for instance, if the package version disappeared from the mirror between the moment we listed the distribution, and the moment we could load the package), we still register the branch name, but we make it a “null” pointer.
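
Putting the two preceding paragraphs together, a hypothetical sketch of the branch mapping (again, not the actual implementation):

    def snapshot_branches(loaded):
        """loaded: [(suite, component, version, revision_id_or_None), ...];
        a None revision records a "null" branch for a failed load."""
        return {"%s/%s/%s" % (suite, component, version): revision
                for suite, component, version, revision in loaded}

    # {'stretch/main/2.24-10': 'rev-id...', 'sid/main/2.27-1': None}
    print(snapshot_branches([("stretch", "main", "2.24-10", "rev-id..."),
                             ("sid", "main", "2.27-1", None)]))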

There are still improvements to be made to the Debian-specific lister: it currently hardcodes the list of components/areas in the distribution, as the repository format provides no programmatic way of discovering them. For now, only Debian and its security repository are listed.

Looking forward

We believe that the model we developed for the Debian use case is generic enough to capture not only Debian-based distributions, but also RPM-based ones such as Fedora, Mageia, etc. With some extra work, it should also be possible to adapt it for language-centric package repositories such as CPAN, PyPI or Crates.

Software Heritage is now well on the way to providing the foundations for a generic and unified source browser for the history of traditional package-based distributions.

We’ll be delighted to welcome contributors that want to lend a hand to get there.

CryptogramFacebook Will Verify the Physical Location of Ad Buyers with Paper Postcards

It's not a great solution, but it's something:

The process of using postcards containing a specific code will be required for advertising that mentions a specific candidate running for a federal office, Katie Harbath, Facebook's global director of policy programs, said. The requirement will not apply to issue-based political ads, she said.

"If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States," Harbath said at a weekend conference of the National Association of Secretaries of State, where executives from Twitter Inc and Alphabet Inc's Google also spoke.

"It won't solve everything," Harbath said in a brief interview with Reuters following her remarks.

But sending codes through old-fashioned mail was the most effective method the tech company could come up with to prevent Russians and other bad actors from purchasing ads while posing as someone else, Harbath said.

It does mean a delay of several days between purchasing an ad and seeing it run.

Krebs on SecurityMoney Laundering Via Author Impersonation on Amazon?

Patrick Reames had no idea why Amazon sent him a 1099 form saying he’d made almost $24,000 selling books via Createspace, the company’s on-demand publishing arm. That is, until he searched the site for his name and discovered someone had been using it to peddle a $555 book that’s full of nothing but gibberish.

The phony $555 book sold more than 60 times on Amazon using Patrick Reames’ name and Social Security number.

Reames is a credited author on Amazon by way of several commodity industry books, although none of them made anywhere near the amount Amazon is reporting to the Internal Revenue Service. Nor does he have a personal account with Createspace.

But that didn’t stop someone from publishing a “novel” under his name. That word is in quotations because the publication appears to be little more than computer-generated text, almost like the gibberish one might find in a spam email.

“Based on what I could see from the ‘sneak peak’ function, the book was nothing more than a computer generated ‘story’ with no structure, chapters or paragraphs — only lines of text with a carriage return after each sentence,” Reames said in an interview with KrebsOnSecurity.

The impersonator priced the book at $555 and it was posted to multiple Amazon sites in different countries. The book — which has been removed from most Amazon country pages as of a few days ago — is titled “Lower Days Ahead,” and was published on Oct 7, 2017.

Reames said he suspects someone has been buying the book using stolen credit and/or debit cards, and pocketing the 60 percent that Amazon gives to authors. At $555 a pop, it would only take approximately 70 sales over three months to rack up the earnings that Amazon said he made.

“This book is very unlikely to ever sell on its own, much less sell enough copies in 12 weeks to generate that level of revenue,” Reames said. “As such, I assume it was used for money laundering, in addition to tax fraud/evasion by using my Social Security number. Amazon refuses to issue a corrected 1099 or provide me with any information I can use to determine where or how they were remitting the royalties.”

Reames said the books he has sold on Amazon under his name were done through his publisher, not directly via a personal account (the royalties for those books accrue to his former employer) so he’d never given Amazon his Social Security number. But the fraudster evidently had, and that was apparently enough to convince Amazon that the imposter was him.

Reames said after learning of the impersonation, he got curious enough to start looking for other examples of author oddities on Amazon’s Createspace platform.

“I have reviewed numerous Createspace titles and its clear to me that there may be hundreds if not thousands of similar fraudulent books on their site,” Reames said. “These books contain no real content, only dozens of pages of gibberish or computer generated text.”

For example, searching Amazon for the name Vyacheslav Grzhibovskiy turns up dozens of Kindle “books” that appear to be similar gibberish works — most of which have the words “quadrillion,” “trillion” or a similar word in their titles. Some retail for just one or two dollars, while others are inexplicably priced between $220 and $320.

Some of the “books” for sale on Amazon attributed to a Vyacheslav Grzhibovskiy.

“Its not hard to imagine how these books could be used to launder money using stolen credit cards or facilitating transactions for illicit materials or funding of illegal activities,” Reames said. “I can not believe Amazon is unaware of this and is unwilling to intercede to stop it. I also believe they are not properly vetting their new accounts to limit tax fraud via stolen identities.”

Reames said Amazon refuses to send him a corrected 1099, or to discuss anything about the identity thief.

“They say all they can do at this point is send me a letter acknowledging that I’m disputing ever having received the funds, because they said they couldn’t prove I didn’t receive the funds. So I told them, ‘If you’re saying you can’t say whether I did receive the funds, tell me where they went?’ And they said, ‘Oh, no, we can’t do that.’ So I can’t clear myself and they won’t clear me.”

Amazon said in a statement that the security of customer accounts is one of its highest priorities.

“We have policies and security measures in place to help protect them. Whenever we become aware of actions like the ones you describe, we take steps to stop them. If you’re concerned about your account, please contact Amazon customer service immediately using the help section on our website.”

Beware, however, if you plan to contact Amazon customer support via phone. Performing a simple online search for Amazon customer support phone numbers can turn up some dubious and outright fraudulent results.

Earlier this month, KrebsOnSecurity heard from a fraud investigator for a mid-sized bank who’d recently had several customers who got suckered into scams after searching for the customer support line for Amazon. She said most of these customers were seeking to cancel an Amazon Prime membership after the trial period ended and they were charged a $99 fee.

The fraud investigator said her customers ended up calling fake Amazon support numbers, which were answered by people with a foreign accent who proceeded to request all manner of personal data, including bank account and credit card information. In short order, the customers’ accounts were used to set up new Amazon accounts as well as accounts at a service that facilitates the purchase of virtual currencies like Bitcoin.

This Web site does a good job documenting the dozens of phony Amazon customer support numbers that are hoodwinking unsuspecting customers. Amazingly, many of these numbers seem to be heavily promoted using Amazon’s own online customer support discussion forums, in addition to third-party sites.

Interestingly, clicking on the Customer Help Forum link from the Amazon Support Options and Contact Us page currently sends visitors to the page pictured below, which displays a “Sorry, We Couldn’t Find That Page” error. Perhaps the company is simply cleaning things up after being notified last week by KrebsOnSecurity about the bogus phone numbers being promoted on the forum.

In any case, it appears some of these fake Amazon support numbers are being pimped by a number of dubious-looking e-books for sale on Amazon that are all about — you guessed it — how to contact Amazon customer support.

If you wish to contact Amazon by phone, the only numbers you should use are:

U.S. and Canada: 1-866-216-1072

International: 1-206-266-2992

Amazon’s main customer help page is here.

Update, 11:44 a.m. ET: Not sure when it happened exactly, but this notice says Amazon has closed its discussion boards.

Update, 4:02 p.m. ET: Amazon just shared the following statement, in addition to their statement released earlier urging people to visit a help page that didn’t exist (see above):

“Anyone who believes they’ve received an incorrect 1099 form or a 1099 form in error can contact and we will investigate.”

“This is the general Amazon help page:”

Update, 4:01 p.m. ET: Reader zboot has some good stuff. What makes Amazon a great cashout method for cybercrooks as opposed to, say, bitcoin cashouts, is that funds can be deposited directly into a bank account. He writes:

“It’s not that the darkweb is too slow, it’s that you still need to cash out at the end. Amazon lets you go from stolen funds directly to a bank account. If you’ve set it up with stolen credentials, that process may be faster than getting money out of a bitcoin exchange which tend to limit fiat withdraws to accounts created with the amount of information they managed to steal.”

Planet DebianDaniel Pocock: Hacking at EPFL Toastmasters, Lausanne, tonight

As mentioned in my earlier blog, I give a talk about Hacking at the Toastmasters club at EPFL tonight. Please feel free to join us and remember to turn off your mobile device or leave it at home, you never know when it might ring or become part of a demonstration.

Worse Than FailureCodeSOD: The Telltale Snippet

True! nervous, very, very dreadfully nervous I had been and am; but why will you say that I am mad? The disease had sharpened my senses, not destroyed, not dulled them. Above all was the sense of hearing acute. I heard all things in the heaven and in the earth. I heard many things in hell. How then am I mad? Hearken! and observe how healthily, how calmly I can tell you the whole story. - “The Telltale Heart” by Edgar Allan Poe

Today’s submitter credits themselves as Too Afraid To Say (TATS) who they are. Why? Because like a steady “thump thump” from beneath the floorboards, they are haunted by their crimes. The haunting continues to this very day.

It is impossible to say how the idea entered TATS’s brain, but as a fresh-faced junior developer, they set out to write a flexible web-control in JavaScript. What they wanted was to dynamically add items to the control. Each item was a set of fields: an ID, a tooltip, a description, etc.

Think about how you might pass a list of objects to a method.

    ObjectLookupField.prototype._AddItems = function _AddItems(objItems) {
        if (objItems && objItems.length > 0) {
            var objItemIDs = [];
            var objTooltips = [];
            var objImages = [];
            var objTypes = [];
            var objDeleted = [];
            var objDescriptions = [];
            var objParentTreeCodes = [];
            var objHasChilderen = [];
            var objPath = [];
            var objMarked = [];
            var objLocked = [];

            var blnSkip;

            for (var intI = 0; intI < objItems.length; intI++) {
                objImages.push((objItems[intI].TypeIconURL ? objItems[intI].TypeIconURL : objItems[intI].IconURL));
                objTooltips.push(objItems[intI].Tooltip ? objItems[intI].Tooltip : '');
                objMarked.push(objItems[intI].Marked ? 'Marked' : '');

                // SNIP, not really related
            }
            //TATS also implemented `addItems` which requires all these arrays
            window[this._strControlID].addItems([objItemIDs, objImages, objPath, objTooltips, objLocked, objMarked, objParentTreeCodes, objHasChilderen]);
        }
    };

TATS used the infamous “Arrject” pattern. Instead of having a list of objects, where each object has all of the fields it needs, the Arrject pattern has one array per field, and then we’ll hope that each index holds all the related data for a given item. For example:

    var arrNames = ["Joebob", "Sallybob", "Suebob"];
    var arrAddresses = ["123 Street St", "234 Road Rd", "345 Lane Ln"];
    var arrPhones = ["555-1234", "555-2345", "555-3456"];

The 0th index of every array contains everything you want to know about Joebob.

Most uses of the Arrject pattern end up in code that doesn’t use objects at all, but TATS adds their own little twist. They explode an object into a set of arrays, and then pass those arrays to their own method which creates the necessary DOM elements.

TATS smiled, for what did they have to fear? They bade the senior developers welcome: use my code. And they did.

Before long, this little bit of code propagated throughout their entire codebase; copied, pasted, dropped in, loaded as a JS dependency, hosted on a private CDN. It was everywhere. Time passed, and careers changed. TATS got promoted up to senior. Other seniors left and handed their code off to TATS. And that’s when the thumping beneath the floorboards became intolerable. That is why they are “Too Afraid to Say”. This little ghost, this reminder of their mistakes as a junior dev is always there, waiting beneath their feet, and it keeps. getting. louder.

“Villains!” I shrieked, “dissemble no more! I admit the deed!—tear up the planks!—here, here!—it is the beating of his hideous heart!”

[Advertisement] Onsite, remote, bare-metal or cloud – create, configure and orchestrate 1,000s of servers, all from the same dashboard while continually monitoring for drift and allowing for instantaneous remediation. Download Otter today!


CryptogramOn the Security of Walls

Interesting history of the security of walls:

Dún Aonghasa presents early evidence of the same principles of redundant security measures at work in 13th century castles, 17th century star-shaped artillery fortifications, and even "defense in depth" security architecture promoted today by the National Institute of Standards and Technology, the Nuclear Regulatory Commission, and countless other security organizations world-wide.

Security advances throughout the centuries have been mostly technical adjustments in response to evolving weaponry. Fortification -- the art and science of protecting a place by imposing a barrier between you and an enemy -- is as ancient as humanity. From the standpoint of theory, however, there is very little about modern network or airport security that could not be learned from a 17th century artillery manual. That should trouble us more than it does.

Fortification depends on walls as a demarcation between attacker and defender. The very first priority action listed in the 2017 National Security Strategy states: "We will secure our borders through the construction of a border wall, the use of multilayered defenses and advanced technology, the employment of additional personnel, and other measures." The National Security Strategy, as well as the executive order just preceding it, are just formal language to describe the recurrent and popular idea of a grand border wall as a central tool of strategic security. There's been a lot said about the costs of the wall. But, as the American finger hovers over the Hadrian's Wall 2.0 button, whether or not a wall will actually improve national security depends a lot on how walls work, but moreso, how they fail.

Lots more at the link.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 160 work hours have been dispatched among 11 paid contributors. Their reports are available.

Evolution of the situation

The number of sponsored hours increased slightly to 187 hours per month. It would be nice if the slow growth could continue, as the amount of work seems to be slowly growing too.

The security tracker currently lists 23 packages with a known CVE, and the dla-needed.txt file lists 23 as well. The number of open issues seems to be stable compared to last month, which is a good sign.

Thanks to our sponsors

New sponsors are in bold.


Krebs on SecurityIRS Scam Leverages Hacked Tax Preparers, Client Bank Accounts

Identity thieves who specialize in tax refund fraud have been busy of late hacking online accounts at multiple tax preparation firms, using them to file phony refund requests. Once the Internal Revenue Service processes the return and deposits money into bank accounts of the hacked firms’ clients, the crooks contact those clients posing as a collection agency and demand that the money be “returned.”

In one version of the scam, criminals are pretending to be debt collection agency officials acting on behalf of the IRS. They’ll call taxpayers who’ve had fraudulent tax refunds deposited into their bank accounts, claim the refund was deposited in error, and threaten recipients with criminal charges if they fail to forward the money to the collection agency.

This is exactly what happened to a number of customers at a half dozen banks in Oklahoma earlier this month. Elaine Dodd, executive vice president of the fraud division at the Oklahoma Bankers Association, said many financial institutions in the Oklahoma City area had “a good number of customers” who had large sums deposited into their bank accounts at the same time.

Dodd said the bank customers received hefty deposits into their accounts from the U.S. Treasury, and shortly thereafter were contacted by phone by someone claiming to be a collections agent for a firm calling itself DebtCredit and using the Web site name debtcredit[dot]us.

“We’re having customers getting refunds they have not applied for,” Dodd said, noting that the transfers were traced back to a local tax preparer who’d apparently gotten phished or hacked. Those banks are now working with affected customers to close the accounts and open new ones, Dodd said. “If the crooks have breached a tax preparer and can send money to the client, they can sure enough pull money out of those accounts, too.”

Several of the Oklahoma bank’s clients received customized notices from a phony company claiming to be a collections agency hired by the IRS.

The domain debtcredit[dot]us hasn’t been active for some time, but an exact copy of the site to which the bank’s clients were referred by the phony collection agency can be found at jcdebt[dot]com — a domain that was registered less than a month ago. The site purports to be associated with a company in New Jersey called Debt & Credit Consulting Services, but according to a record (PDF) retrieved from the New Jersey Secretary of State’s office, that company’s business license was revoked in 2010.

“You may be puzzled by an erroneous payment from the Internal Revenue Service but in fact it is quite an ordinary situation,” reads the HTML page shared with people who received the fraudulent IRS refunds. It includes a video explaining the matter, and references a case number, the amount and date of the transaction, and provides a list of personal “data reported by the IRS,” including the recipient’s name, Social Security Number (SSN), address, bank name, bank routing number and account number.

All of these details no doubt are included to make the scheme look official; most recipients will never suspect that they received the bank transfer because their accounting firm got hacked.

The scammers even supposedly assign the recipients an individual “appointed debt collector,” complete with a picture of the employee, her name, telephone number and email address. However, the emails to the domain used in the email address from the screenshot above (debtcredit[dot]com) bounced, and no one answers at the provided telephone number.

Along with the Web page listing the recipient’s personal and bank account information, each recipient is given a “transaction error correction letter” with IRS letterhead (see image below) that includes many of the same personal and financial details on the HTML page. It also gives the recipient instructions on the account number, ACH routing and wire number to which the wayward funds are to be wired.

A phony letter from the IRS instructing recipients on how and where to wire the money that was deposited into their bank account as a result of a fraudulent tax refund request filed in their name.

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

On Feb. 2, 2018, the IRS issued a warning to tax preparers, urging them to step up their security in light of increased attacks. On Feb. 13, the IRS warned that phony refunds through hacked tax preparation accounts are a “quickly growing scam.”

“Thieves know it is more difficult to identify and halt fraudulent tax returns when they are using real client data such as income, dependents, credits and deductions,” the agency noted in the Feb. 2 alert. “Generally, criminals find alternative ways to get the fraudulent refunds delivered to themselves rather than the real taxpayers.”

The IRS says taxpayers who receive fraudulent transfers from the IRS should contact their financial institution, as the account may need to be closed (because the account details are clearly in the hands of cybercriminals). Taxpayers receiving erroneous refunds also should consider contacting their tax preparers immediately.

If you go to file your taxes electronically this year and the return is rejected, it may mean fraudsters have beat you to it. The IRS advises taxpayers in this situation to follow the steps outlined in the Taxpayer Guide to Identity Theft. Those unable to file electronically should mail a paper tax return along with Form 14039 (PDF) — the Identity Theft Affidavit — stating they were victims of a tax preparer data breach.

Worse Than FailureCousin of ITAPPMONROBOT

Logitech Quickcam Pro 4000

Every year, Initrode Global was faced with further and further budget shortages in their IT department. This wasn't because the company was doing poorly—on the contrary, the company overall was doing quite well, hitting record sales every quarter. The only way to spin that into a smaller budget was to dream bigger. Thus, every quarter, the budget demanded greater and greater increases in sales, and the exceptional growth was measured against the desired phenomenal growth and found wanting.

IT, being a cost center, was always hit by budget cuts the hardest. What did they need money for? The lights were still on, the mainframes still churning; any additional funds would only encourage them to take wild risks and break things.

One of the things people were worried about breaking were the thin clients. These had been purchased some years ago from Smyrt, who had been acquired the previous year by Hell Computers. There would be no tech support or patching, not from Hell. The IT department was on their own to ensure the clients kept running.

Unfortunately, the things seemed to have a will of their own—and that will did not include remaining up for weeks on end. Every once in a while, when booting Linux on the thin clients, the Thin Film Transistor screen would turn dark as soon as the X server started. They would remain dark after that; however, when the helpdesk SSH'd into the system, the screen would of course render perfectly on their end. So there was nothing to do to troubleshoot except lug a thin client to their work area and test workarounds from there.

The worst part of this kind of troubleshooting is when the problem is an intermittent one. The only way they could think to reproduce the problem was to spend hours in front of the client, turning it off and back on again. In the face of budget cuts, the already understaffed desk had no manpower to do something so trivial and dull.

Tedium is the mother of invention. Many of the most ingenious pieces of automation were put in place when an enterprising programmer was faced with performing a mind-numbing task over and over for the foreseeable future. Such is the case in this instance. Lacking the support staff to power cycle the machine over and over, the staff instead built a robot.

A webcam was found in the back room, dusty and abandoned, the last vestige of a proposed work-from-home solution that never quite came to fruition years before. A sticker of transparent rubber someone found in their desk was placed over the metal rim of the camera so it wouldn't leave any scratches on the glass of the TFT screen. The webcam was placed up close against one strategically chosen corner of the screen, and attached to a Raspberry Pi someone brought from home.

The Pi was programmed to run a bash script, which in turn called a CLI image-grabbing tool and then applied some ImageMagick filters to determine the brightness value of the patch of screen it could see. This brightness value was compared against a known list of brightnesses to determine which state the machine was in: the boot menu, the Linux kernel messages scrolling past, the colorful login screen, or the solid black screen representing the problem. When the Pi detected a login screen, it would run a scripted reboot on the thin client using SSH and a keypair. If, instead, the screen remained dark for a long period of time, it would send an IM through the company messaging solution to alert the staff that they could begin their testing, then exit.
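
The article describes the logic as a bash script; as a rough illustration of the loop it describes, something like the following Python rendition would do the job. Every command name, threshold, and hostname here is hypothetical:

    import subprocess, time

    def brightness():
        # Grab a frame from the webcam, then have ImageMagick reduce the
        # image to its mean gray level, scaled to 0-255.
        subprocess.check_call(["fswebcam", "--no-banner", "/tmp/corner.jpg"])
        out = subprocess.check_output(
            ["convert", "/tmp/corner.jpg", "-colorspace", "Gray",
             "-format", "%[fx:int(255*mean)]", "info:"])
        return int(out)

    LOGIN_SCREEN, BLACK = 180, 10   # calibrated against known screen states
    dark_since = None
    while True:
        b = brightness()
        if abs(b - LOGIN_SCREEN) < 10:
            # Login screen reached without the bug: power cycle and retry.
            subprocess.check_call(["ssh", "thinclient", "sudo", "reboot"])
            dark_since = None
        elif b <= BLACK:
            # Solid black: possibly the bug; alert the staff if it persists.
            dark_since = dark_since or time.time()
            if time.time() - dark_since > 300:
                # send-im: stand-in for the company messaging CLI
                subprocess.check_call(["send-im", "helpdesk",
                                       "thin client screen stayed dark"])
                break
        else:
            dark_since = None  # still booting; keep watching
        time.sleep(5)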

We've seen machines with the ability to manipulate physical servers. Now, we have machines seeing and evaluating the world in front of them. How long before we reach peak Skynet potential here at TDWTF? And what would the robot revolution look like, with founding members such as these?

[Advertisement] Incrementally adopt DevOps best practices with BuildMaster, ProGet and Otter, creating a robust, secure, scalable, and reliable DevOps toolchain.

Planet DebianSteve Kemp: How we care for our child

This post is a departure from the regular content, which is supposed to be "Debian and Free Software", but has accidentally turned into a hardware blog recently!

Anyway, we have a child who is now about 14 months old. The way that my wife and I care for him seems logical to us, but often amuses local people. So in the spirit of sharing this is what we do:

  • We divide the day into chunks of time.
  • At any given time one of us is solely responsible for him.
    • The other parent might be nearby, and might help a little.
    • But there is always a designated person who will be changing nappies, feeding, and playing at any given point in the day.
  • The end.

So our weekend routine, covering Saturday and Sunday, looks like this:

  • 07:00-08:00: Husband
  • 08:01-13:00: Wife
  • 13:01-17:00: Husband
  • 17:01-18:00: Wife
  • 18:01-19:30: Husband

Our child, Oiva, seems happy enough with this and he sometimes starts walking from one parent to the other at the appropriate time. But the real benefit is that each of us gets some time off - in my case I get "the morning" off, and my wife gets the afternoon off. We can hide in our bedroom, go shopping, eat cake, or do anything we like.

Week-days are similar, but with the caveat that we both have jobs. I take the morning, and the evenings, and in exchange if he wakes up overnight my wife helps him sleep and settle between 8PM-5AM, and if he wakes up later than 5AM I deal with him.

Most of the time our child sleeps through the night, but if he does wake up it tends to be in the 4:30AM/5AM timeframe. I'm "happy" to wake up at 5AM and stay up until I go to work because I'm a morning person and I tend to go to bed early these days.

Day-care is currently a complex process. There are three families with small children, and ourselves. Each day of the week one family hosts all the children, and the baby-sitter arrives there too (all the families live within a few blocks of each other).

All of the parents go to work, leaving one carer in charge of 4 babies for the day, from 08:15-16:15. On the days when we're hosting the children I greet the carer then go to work; on the days the children are at a different family's house I take him there in the morning, on my way to work, and then my wife collects him in the evening.

At the moment things are a bit terrible because most of the children have been a bit sick, and the carer too. When a single child is sick it's mostly OK, unless that child's home is the designated host venue. If that child is sick we have to panic and pick another house for that day.

Unfortunately if the child-carer is sick then everybody is screwed, and one parent has to stay home from each family. I guess this is the downside compared to sending the children to public-daycare.

This is private day-care, Finnish-style. The social-services (kela) will reimburse each family €700/month if you're in such a scheme, and carers are limited to a maximum of 4 children. The net result is that prices are stable, averaging €900-€1000 per-child, per month.

(The €700 is refunded after a month or two, so in real terms people like us pay €200-€300/month for Monday-Friday day-care, plus a bit of bureaucracy over deciding which family is hosting and which parents are providing food. With the size being capped, and the fees being pretty standard, the carers earn €3600-€4000/month, which is a good amount. To be a school-teacher you need to be very qualified, but to do this caring is much simpler. It turns out that being an English-speaker can be a bonus too, for some families ;)

Currently our carer has a sick-note for three days, so I'm staying home today, and will likely stay tomorrow too. Then my wife will skip work on Wednesday. (We usually take it in turns but sometimes that can't happen easily.)

But all of this is due to change in the near future, because we've had too many sick days, and both of us have missed too much work.

More news on that in the future, unless I forget.


Planet DebianDaniel Pocock: SwissPost putting another nail in the coffin of Swiss sovereignty

A few people have recently asked me about the SwissID, as SwissPost has just been sending spam emails out to people telling them "Link your Swiss Post user account to SwissID".

This coercive new application of technology demands users' email addresses and mobile phone numbers "for security". A web site coercing people to use text messages "for security" has quickly become a red flag for most people, and many blogs have already covered why it is only an illusion of security, putting your phone account at risk so companies can profit from another vector for snooping on you.

SwissID is not the only digital identity solution in Switzerland but as it is run by SwissPost and has a name similar to another service it is becoming very well known.

In 2010 they began offering a solution which they call SuisseID (notice the difference? They are pronounced the same way.) based on digital certificates and compliant with Swiss legislation. Public discussion focussed on the obscene cost with little comment about the privacy consequences and what this means for Switzerland as a nation.

Digital certificates often embed an email address in the certificate.

With SwissID, however, they have a web site that looks like little more than vaporware, giving no details at all about whether certificates are used. It appears they are basically promoting an app that is designed to harvest the email addresses and phone numbers of any Swiss people who install it, lulling them into that folly by using a name that looks like their original SuisseID. If it looks like phishing, if it feels like phishing, and if it smells like phishing when any expert takes a brief sniff of their FAQ, then what else is it?

The thing is, the original SuisseID runs on a standalone smartcard, so it doesn't need your mobile phone number, doesn't need permission to all the data in your phone, and isn't limited to working in areas with mobile phone signal.

The emails currently being sent by SwissPost tell people they must "Please use a private e-mail address for this purpose" but they don't give any information about the privacy consequences of creating such an account or what their app will do when it has access to read all the messages and contacts in your phone.

The actions you can take that they didn't tell you about

  • You can post a registered letter to SwissPost and tell them that for privacy reasons, you are immediately retracting the email addresses and mobile phone numbers they currently hold on file and that you are exercising your right not to give an email address or mobile phone number to them in future.
  • If you do decide you want a SwissID, create a unique email address for it and only use that email address with SwissPost so that it can't be cross-referenced with other companies. This email address is also like a canary in a coal mine: if you start receiving spam on that email address then you know SwissPost/SwissID may have been hacked or the data has been leaked or sold.
  • Don't install their app and if you did, remove it and you may want to change your mobile phone number.

Oddly enough, none of these privacy-protecting ideas were suggested in the email from SwissPost. Whose side are they on?

Why should people be concerned?

SwissPost, like every postal agency, has seen traditional revenues drop and so they seek to generate more revenue from direct marketing and they are constantly looking for ways to extract and profit from data about the public. They are also a huge company with many employees: when dealing with vast amounts of data in any computer system, it only takes one employee to compromise everything: just think of how Edward Snowden was able to act alone to extract many of the NSA's most valuable secrets.

SwissPost is going to great lengths to get accurate data on every citizen and resident in Switzerland, including deploying an app to get your mobile phone number and demanding an email address when you use their web site. That also allows them to cross-reference with your IP addresses.

  • Any person or organization who has your email address or mobile number may find it easier to get your home address.
  • Any person or organization who has your home address may be able to get your email address or mobile phone number.
  • When you call a company from your mobile phone and their system recognizes your phone number, it becomes easier for them to match it to your home address.
  • If SwissPost and the SBB successfully convince a lot of people to use a SwissID, some other large web sites may refuse to allow access without getting you to link them to your SwissID and all the data behind it too. Think of how many websites already try to coerce you to give them your mobile phone number and birthday to "secure" your account, but worse.

The Google factor

The creepiest thing is that over seventy percent of people in Switzerland are apparently using Gmail addresses, and these will be a dependency of their registration for SwissID.

Given that SwissID is being promoted as a solution compliant with ZertES legislation that can act as an interface between citizens and the state, the intersection with such a powerful foreign actor as Gmail is extraordinary. For example, if people are registering to vote in Switzerland's renowned referendums and their communication is under the surveillance of a foreign power like the US, that is a mockery of democracy and it makes the allegations of Russian election hacking look like child's play.

Switzerland's referendums, decentralized system of Government, part-time army and privacy regime are all features that maintain a balance between citizen and state: by centralizing power in the hands of SwissID and foreign IT companies, doesn't it appear that the very name SwissID is a mockery of the Swiss identity?

Yellow in motion

No canaries were harmed in the production of this blog.

Planet DebianDima Kogan: Vnlog!

In the last few jobs I've worked at I ended up writing a tool to store data in a nice format, and to be able to manipulate it easily. I'd rewrite this from scratch each time partly because I was never satisfied with the previous version. Each iteration was an improvement on the previous one, and this version is the good one. I wrote it at NASA/JPL, went through the release process (this thing was called asciilog then), added a few more features, and I'm now releasing it. The toolkit lives here and here's the initial README:


Vnlog (pronounced "vanillog") is a trivially-simple log format:

  • A whitespace-separated table of ASCII human-readable text
  • Lines beginning with # are comments
  • The first line that begins with a single # (not ## or #!) is a legend, naming each column


# a b c
1 2 3
## another comment
4 5 6

Such data works very nicely with normal UNIX tools (awk, sort, join), can be easily read by fancier tools (numpy, matlab (yuck), excel (yuck), etc), and can be plotted with feedgnuplot. This toolkit provides some tools to manipulate vnlog data and a few libraries to read/write it. The core philosophy is to keep everything as simple and light as possible, and to provide methods to enable existing (and familiar!) tools and workflows to be utilized in nicer ways.


In one terminal, sample the CPU temperature over time, and write the data to a file as it comes in, at 1Hz:

$ ( echo '# time temp1 temp2 temp3';
    while true; do echo -n "`date +%s` "; < /proc/acpi/ibm/thermal awk '{print $2,$3,$4; fflush()}';
    sleep 1; done ) > /tmp/temperature.vnl

In another terminal, I sample the consumption of CPU resources, and log that to a file:

$ (echo "# user system nice idle waiting hardware_interrupt software_interrupt stolen";
   top -b -d1 | awk '/%Cpu/ {print $2,$4,$6,$8,$10,$12,$14,$16; fflush()}')
   > /tmp/cpu.vnl

These logs are now accumulating, and I can do stuff with them. The legend and the last few measurements:

$ vnl-tail /tmp/temperature.vnl
# time temp1 temp2 temp3
1517986631 44 38 34
1517986632 44 38 34
1517986633 44 38 34
1517986634 44 38 35
1517986635 44 38 35
1517986636 44 38 35
1517986637 44 38 35
1517986638 44 38 35
1517986639 44 38 35
1517986640 44 38 34

I grab just the first temperature sensor, and align the columns

$ < /tmp/temperature.vnl vnl-tail |
    vnl-filter -p time,temp=temp1 |
    vnl-align
#  time    temp
1517986746 45
1517986747 45
1517986748 46
1517986749 46
1517986750 46
1517986751 46
1517986752 46
1517986753 45
1517986754 45
1517986755 45

I do the same, but read the log data in realtime, and feed it to a plotting tool to get a live reporting of the cpu temperature. This plot updates as data comes in. I then spin a CPU core (while true; do true; done), and see the temperature climb. Here I'm making an ASCII plot that's pasteable into the docs.

$ < /tmp/temperature.vnl vnl-tail -f           |
    vnl-filter --unbuffered -p time,temp=temp1 |
     feedgnuplot --stream --domain                                   \
       --lines --timefmt '%s' --set 'format x "%M:%S"' --ymin 40     \
       --unset grid --terminal 'dumb 80,40'

  70 +----------------------------------------------------------------------+
     |      +      +      +      +       +      +      +      +      +      |
     |                                                                      |
     |                                                                      |
     |                                                                      |
     |                      **                                              |
  65 |-+                   ***                                            +-|
     |                    ** *                                              |
     |                    *  *                                              |
     |                    *  *                                              |
     |                   *   *                                              |
     |                  **   **                                             |
  60 |-+                *     *                                           +-|
     |                 *      *                                             |
     |                 *      *                                             |
     |                 *      *                                             |
     |                **      *                                             |
     |                *       *                                             |
  55 |-+              *       *                                           +-|
     |                *       *                                             |
     |                *       **                                            |
     |                *        *                                            |
     |               **        *                                            |
     |               *         **                                           |
  50 |-+             *          **                                        +-|
     |               *           **                                         |
     |               *            ***                                       |
     |               *              *                                       |
     |               *              ****                                    |
     |               *                 *****                                |
  45 |-+             *                     ***********                    +-|
     |    ************                               ********************** |
     |          * **                                                        |
     |                                                                      |
     |                                                                      |
     |      +      +      +      +       +      +      +      +      +      |
  40 +----------------------------------------------------------------------+
   21:00  22:00  23:00  24:00  25:00   26:00  27:00  28:00  29:00  30:00  31:00

Cool. I can then join the logs, pull out simultaneous CPU consumption and temperature numbers, and plot the path in the temperature-cpu space:

$ vnl-join -j time /tmp/temperature.vnl /tmp/cpu.vnl |
  vnl-filter -p temp1,user                           |
  feedgnuplot --domain --lines               \
    --unset grid --terminal 'dumb 80,40'

  45 +----------------------------------------------------------------------+
     |           +           +           +          +           +           |
     |                                       *                              |
     |                                       *                              |
  40 |-+                                    **                            +-|
     |                                      **                              |
     |                                     * *                              |
     |                                     * *      *    *    *             |
  35 |-+               ****      *********** **** * **** ***  ******      +-|
     |        *********   ********       *   *  *****  *** * ** *  *        |
     |        *    *                            * * *  * * ** * *  *        |
     |        *    *                                   *   *  *    *        |
  30 |-+      *                                                    *      +-|
     |        *                                                    *        |
     |        *                                                    *        |
     |        *                                                    *        |
  25 |-+      *                                                    *      +-|
     |        *                                                    *        |
     |        *                                                    *        |
     |        *                                                    *        |
  20 |-+      *                                                    *      +-|
     |        *                                                    *        |
     |        *                                                    *        |
     |      * *                                                    *        |
  15 |-+    * *  *                                                 *      +-|
     |      * *  *                                                 *        |
     |      ***  *                                                 *        |
     |      ***  *                                                 *        |
  10 |-+    ***  *                                                 *      +-|
     |      ***  *                                                 *        |
     |      ***  *                                                 *        |
     |      ***  *                                                 *        |
   5 |-+    ***  *                                                 *      +-|
     |      ***  *                                                 *        |
     |      * *  * *                                               *        |
     |      **** * ** *****  *********** +       *******       *****        |
   0 +----------------------------------------------------------------------+
     40          45          50          55         60          65          70


As stated before, vnlog tools are designed to be very simple and light. There exist other, similar tools; these all provide facilities to run various analyses, and are neither simple nor light. Vnlog by contrast doesn't analyze anything, but makes it easy to write simple bits of awk or perl to process stuff to your heart's content. The main envisioned use case is one-liners, and the tools are geared for that purpose. The above-mentioned tools are much more powerful than vnlog, so they could be a better fit for some use cases.

In the spirit of doing as little as possible, the provided tools are wrappers around tools you already have and are familiar with. The provided tools are:

  • vnl-filter is a tool to select a subset of the rows/columns in a vnlog and/or to manipulate the contents. This is effectively an awk wrapper where the fields can be referenced by name instead of index. 20-second tutorial:
vnl-filter -p col1,col2,colx=col3+col4 'col5 > 10' --has col6

will read the input, and produce a vnlog with 3 columns: col1 and col2 from the input, and a column colx that's the sum of col3 and col4 in the input. Only those rows for which col5 > 10 is true will be output. Finally, only those rows that have a non-null value for col6 will be selected. A null entry is signified by a single - character.

vnl-filter --eval '{s += x} END {print s}'

will evaluate the given awk program on the input, but the column names work as you would hope they do: if the input has a column named x, this would produce the sum of all values in this column.

  • vnl-sort, vnl-join, vnl-tail are wrappers around the corresponding GNU Coreutils tools. These work exactly as you would expect also: the columns can be referenced by name, and the legend comment is handled properly. These are wrappers, so all the commandline options those tools have "just work" (except options that don't make sense in the context of vnlog). As an example, vnl-tail -f will follow a log: data will be read by vnl-tail as it is written into the log (just like tail -f, but handling the legend properly). And you already know how to use these tools without even reading the manpages! Note: these were written for and have been tested with the Linux kernel and GNU Coreutils sort, join and tail. Other kernels and tools probably don't (yet) work. Send me patches.
  • vnl-align aligns vnlog columns for easy interpretation by humans. The meaning is unaffected.
  • Vnlog::Parser is a simple perl library to read a vnlog
  • libvnlog is a C library to simplify writing a vnlog. Clearly all you really need is printf(), but this is useful if we have lots of columns, many containing null values in any given row, and/or if we have parallel threads writing to a log
  • vnl-make-matrix converts a one-point-per-line vnlog to a matrix of data. I.e.
$ cat dat.vnl
# i j x
0 0 1
0 1 2
0 2 3
1 0 4
1 1 5
1 2 6
2 0 7
2 1 8
2 2 9
3 0 10
3 1 11
3 2 12

$ < dat.vnl vnl-filter -p i,x | vnl-make-matrix --outdir /tmp
Writing to '/tmp/x.matrix'

$ cat /tmp/x.matrix
1 2 3
4 5 6
7 8 9
10 11 12

All the tools have manpages that contain more detail. And tools will probably be added with time.

C interface

For most uses, these logfiles are simple enough to be generated with plain prints. But then each print statement has to know which numeric column we're populating, which becomes effortful with many columns. In my usage it's common to have a large parallelized C program that's writing logs with hundreds of columns where any one record would contain only a subset of the columns. In such a case, it's helpful to have a library that can output the log files. This is available. Basic usage looks like this:

In a shell:

$ vnl-gen-header 'int w' 'uint8_t x' 'char* y' 'double z' 'void* binary' > vnlog_fields_generated.h

In a C program test.c:

#include "vnlog_fields_generated.h"

int main()
{
    vnlog_emit_legend();

    vnlog_set_field_value__w(-10);
    vnlog_set_field_value__x(40);
    vnlog_set_field_value__y("asdf");
    vnlog_emit_record();

    vnlog_set_field_value__w(-20);
    vnlog_set_field_value__x(50);
    vnlog_set_field_value__z(0.3);
    vnlog_set_field_value__binary("\x01\x02\x03", 3);
    vnlog_emit_record();

    vnlog_set_field_value__w(-30);
    vnlog_set_field_value__x(10);
    vnlog_set_field_value__y("whoa");
    vnlog_set_field_value__z(0.5);
    vnlog_emit_record();

    return 0;
}

Then we build and run, and we get

$ cc -o test test.c -lvnlog

$ ./test

# w x y z binary
-10 40 asdf - -
-20 50 - 0.2999999999999999889 AQID
-30 10 whoa 0.5 -

The binary field is base64-encoded. This is a rarely-used feature, but sometimes you really need to log binary data for later processing, and this makes it possible.

So you:

  1. Generate the header to define your columns
  2. Call vnlog_emit_legend()
  3. Call vnlog_set_field_value__...() for each field you want to set in that row.
  4. Call vnlog_emit_record() to write the row and to reset all fields for the next row. Any fields unset with a vnlog_set_field_value__...() call are written as null: -

This is enough for 99% of the use cases. Things get a bit more complex if we have threading or if we have multiple vnlog output streams in the same program. For both of these we use vnlog contexts.

To support reentrant writing into the same vnlog by multiple threads, each log-writer should create a context, and use it when talking to vnlog. The context functions will make sure that the fields in each context are independent and that the output records won't clobber each other:

void child_writer( // the parent context also writes to this vnlog. Pass NULL to
                   // use the global one
                   struct vnlog_context_t* ctx_parent )
{
    struct vnlog_context_t ctx;
    vnlog_init_child_ctx(&ctx, ctx_parent);

    vnlog_set_field_value_ctx__xxx(&ctx, ...);
    vnlog_set_field_value_ctx__yyy(&ctx, ...);
    vnlog_set_field_value_ctx__zzz(&ctx, ...);
    vnlog_emit_record_ctx(&ctx); // emit the row from this context
}

If we want to have multiple independent vnlog writers to different streams (with different columns and legends), we do this instead:


#include "vnlog_fields_generated1.h"

void f(void)
    // Write some data out to the default context and default output (STDOUT)


#include "vnlog_fields_generated2.h"

void g(void)
    // Make a new session context, send output to a different file, write
    // out legend, and send out the data
    struct vnlog_context_t ctx;
    FILE* fp = fopen(...);
    vnlog_set_output_FILE(&ctx, fp);

Note that it's the user's responsibility to make sure the new sessions go to a different FILE by invoking vnlog_set_output_FILE(). Furthermore, note that the included vnlog_fields_....h file defines the fields we're writing to; and if we have multiple different vnlog field definitions in the same program (as in this example), then the different writers must live in different source files. The compiler will barf if you try to #include two different vnlog_fields_....h files in the same source.

More APIs:

vnlog_printf(...) and vnlog_printf_ctx(ctx, ...) write to a pipe like printf() does. This exists for comments.

vnlog_clear_fields_ctx(ctx, do_free_binary): Clears out the data in a context and makes it ready to be used for the next record. It is rare for the user to have to call this manually. The most common case is handled automatically (clearing out a context after emitting a record). One area where this is useful is when making a copy of a context:

struct vnlog_context_t ctx1;
// .... do stuff with ctx1 ... add data to it ...

struct vnlog_context_t ctx2 = ctx1;
// ctx1 and ctx2 now both have the same data, and the same pointers to
// binary data. I need to get rid of the pointer references in ctx1

vnlog_clear_fields_ctx(&ctx1, false);


vnlog_free_ctx(ctx): Frees memory for a vnlog context. Do this before throwing the context away. Currently this is only needed for contexts that have binary fields, but it should be called for all contexts, just in case.

numpy interface

The built-in numpy.loadtxt and numpy.savetxt functions work well for reading and writing these files. For example, to write to standard output a vnlog with fields a, b and c:

numpy.savetxt(sys.stdout, array, fmt="%g", header="a b c")

Note that numpy automatically adds the # to the header. To read a vnlog from a file on disk, do something like

array = numpy.loadtxt('data.vnl')

These functions know that # lines are comments, but don't interpret anything as field headers. That's easy to do, so I'm not providing any helper libraries. I might do that at some point, but in the meantime, patches are welcome.
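
For what it's worth, here is a tiny sketch of such a helper (mine, not part of vnlog) that maps the legend names to numpy columns:

    import numpy as np

    def loadvnl(path):
        """Return {column_name: 1D array}, using the vnlog legend:
        the first line starting with a single # (not ## or #!)."""
        with open(path) as f:
            for line in f:
                if line.startswith('#') and not line.startswith(('##', '#!')):
                    names = line[1:].split()
                    break
        data = np.loadtxt(path, ndmin=2)
        return {name: data[:, i] for i, name in enumerate(names)}

    # cols = loadvnl('/tmp/temperature.vnl'); cols['temp1']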

Caveats and bugs

The tools that wrap GNU coreutils (vnl-sort, vnl-join, vnl-tail) are written specifically to work with the Linux kernel and the GNU coreutils. None of these have been tested with BSD tools or with non-Linux kernels, and I'm sure things don't just work. It's probably not too effortful to get that running, but somebody needs to at least bug me for that. Or better yet, send me nice patches :)

These tools are meant to be simple, so some things are hard requirements. A big one is that columns are whitespace-separated. There is no mechanism for escaping or quoting whitespace into a single field. I think supporting something like that is more trouble than it's worth.


Dima Kogan

License and copyright

This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

Copyright 2016-2017 California Institute of Technology

Copyright 2017-2018 Dima Kogan

b64_cencode.c comes from cencode.c in the libb64 project. It is written by Chris Venter, who placed it in the public domain. The full text of the license is in that file.

Planet DebianPetter Reinholdtsen: The SysVinit upstream project just migrated to git

Surprising as it might sound, there are still computers using the traditional Sys V init system, and there probably will be until systemd starts working on Hurd and FreeBSD. The upstream project still exists, though, and up until today, the upstream source was available from Savannah via subversion. I am happy to report that this just changed.

The upstream source is now in Git, and consists of three repositories:

I do not really spend much time on the project these days, and I have mostly retired, but found it best to migrate the source to a good version control system to help those willing to move it forward.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Don MartiThe tracker will always get through?

(I work for Mozilla. None of this is secret. None of this is Mozilla policy. Not speaking for Mozilla here.)

A big objection to tracking protection is the idea that the tracker will always get through. Some people suggest that as browsers give users more ability to control how their personal information gets leaked across sites, things won't get better for users, because third-party tracking will just keep up. On this view, today's easy-to-block third-party cookies will be replaced by techniques such as passive fingerprinting where it's hard to tell if the browser is succeeding at protecting the user or not, and users will be stuck in the same place they are now, or worse.

I doubt this is the case because we're playing a more complex game than just trackers vs. users. The game has at least five sides, and some of the fastest-moving players with the best understanding of the game are the adfraud hackers. Right now adfraud is losing in some areas where they had been winning, and the resulting shift in adfraud is likely to shift the risks and rewards of tracking techniques.

Data center adfraud

Fraudbots, running in data centers, visit legit sites (with third-party ads and trackers) to pick up a realistic set of third-party cookies to make them look like high-value users. Then the bots visit dedicated fraudulent "cash out" sites (whose operators have the same third-party ads and trackers) to generate valuable ad impressions for those sites. If you wonder why so many sites made a big deal out of "pivot to video" but can't remember watching a video ad, this is why. Fraudbots are patient enough to get profiled as, say, a car buyer, and watch those big-money ads. And the money is good enough to motivate fraud hackers to make good bots, usually based on real browser code. When a fraudbot network gets caught and blocked from high-value ads, it gets recycled for lower and lower value forms of advertising. By the time you see traffic for sale on fraud boards, those bots are probably only getting past just enough third-party anti-fraud services to be worth running.

This version of adfraud has minimal impact on real users. Real users don't go to fraud sites, and fraudbots do their thing in data centers (doesn't everyone do their Christmas shopping while chilling out in the cold aisle at an Amazon AWS data center? Seems legit to me) and don't touch users' systems. The companies that pay for it are legit publishers, who not only have to serve pages to fraudbots—remember, a bot needs to visit enough legit sites to look like a real user—but also end up competing with adfraud for ad revenue. Adfraud has only really been a problem for legit publishers. The adtech business is fine with it, since they make more money from fraud than the fraud hackers do, and the advertisers are fine with it because fraud is priced in, so they pay the fraud-adjusted price even for real impressions.

What's new for adfraud

So what's changing? More fraudbots in data centers are getting caught, just because the adtech firms have mostly been shamed into filtering out the embarrassingly obvious traffic from IP addresses that everyone can tell probably don't have a human user on them. So where is fraud going now? More fraud is likely to move to a place where a bot can look more realistic but probably not stay up as long—your computer or mobile device. Expect adfraud concealed within web pages, as a payload for malware, and of course in lots and lots of cheesy native mobile apps. (The Google Play Store has an ongoing problem with adfraud, which is content marketing gold for Check Point Software, if you like "shitty app did WHAT?" stories.) Adfraud makes way more money than cryptocurrency mining, using less CPU and battery.

So the bad news is that you're going to have to reformat your uncle's computer a lot this year, because more client-side fraud is coming. Data center IPs don't get by the ad networks as well as they once did, so adfraud is getting personal. The good news is: hey, you know all that big, scary passive fingerprinting that's supposed to become the harder-to-beat replacement for the third-party cookie? Client-side fraud has to beat it in order to get paid, so they'll beat it. As a bonus, client-side bots are way better at attribution fraud (where a fraudulent ad gets credit for a real sale) than data center bots.

Users don't have to be protected from every possible tracking technique in order to shift the web advertising game from a hacking contest to a reputation contest. It often helps simply to push the advertiser's ROI on negative-externality advertising below the ROI on positive-externality advertising.

Advertisers have two possible responses to adfraud: either try to out-hack it, or join the "flight to quality" and cut back on trying to follow big-money users to low-reputation sites in the first place. Hard-to-detect client-side bots, by making creepy fingerprinting techniques less trustworthy, tend to increase the uncertainty of the hacking option and make flight to quality relatively more attractive.


Planet DebianJoey Hess: futures of distributions

Seems Debian is talking about why they are unable to package whole categories of modern software, such as anything using npm. It's good they're having a conversation about that, and I want to give a broader perspective.

Lars Wirzenius's blog post about it explains the problem well from the Debian perspective. In short: The granularity at which software is built has fundamentally changed. It's now typical for hundreds of small libraries to be used by any application, often pegged to specific versions. Language-specific tools manage all the resulting complexity automatically, but distributions can't muster the manpower to package a fraction of this stuff.

Lars lists some ideas for incremental improvements, but the space within which a Linux distribution exists has changed, and that calls not for incremental changes, but for a fundamental rethink from the ground up. Whether Debian is capable of making such fundamental changes at this point in its lifecycle is up to its developers to decide.

Perhaps other distributions are dealing with the problem better? One way to evaluate this is to look at how a given programming language community feels about a distribution's handling of their libraries. Do they generally see the distribution as a road block that must be worked around, or is the distribution a useful part of their workflow? Do they want their stuff included in the distribution, or does that seem like a lot of pointless bother?

I can only speak about the Haskell community. While there are some exceptions, it generally is not interested in Debian containing Haskell packages, and indeed system-wide installations of Haskell packages can be an active problem for development. This is despite Debian having done a much better job at packaging a lot of Haskell libraries than it has at, say, npm libraries. Debian still only packages one version of anything, and there is lag and complex process involved, and so friction with the Haskell community.

On the other hand, there is a distribution that the Haskell community broadly does like, and that's Nix. A subset of the Haskell community uses Nix to manage and deploy Haskell software, and there's generally a good impression of it. Nix seems to be doing something right, that Debian is not doing.

It seems that Nix also has pretty good support for working with npm packages, including ingesting a whole dependency chain into the package manager with a single command, and thousands of npm libraries included in the distribution. I don't know how the npm community feels about Nix, but my guess is they like it better than Debian.

Nix is a radical rethink of the distribution model. And it's jettisoned a lot of things that Debian does, like manually packaging software, or extreme license vetting. It's interesting that Guix, which uses the same technologies as Nix, but seems in many ways more Debian-like with its care about licensing etc, has also been unable to manage npm packaging. This suggests to me that at least some of the things that Nix has jettisoned need to be jettisoned in order to succeed in the new distribution space.

But. Nix is not really exploding in popularity from what I can see. It seems to have settled into a niche of its own, and is perhaps expanding here and there, but not rapidly. It's insignificant compared with things like Docker, that also radically rethink the distribution model.

We could easily end up with some nightmare of lithification, as described by Robert "r0ml" Lefkowitz in his talk. Endlessly copied and compacted layers of code, contained or in the cloud. Programmer-archeologists right out of a Vinge SF novel.

r0ml suggests that we assume that's where things are going (or indeed where they already are outside little hermetic worlds like Debian), and focus on solving technical problems, like deployment of modifications of cloud apps, that prevent users from exercising software freedoms.

In a way, r0ml's ideas are what led me to thinking about extending Scuttlebutt with Annah, and indeed if you squint at that right, it's an idea for a radically different kind of distribution.

Well, that's all I have. No answers of course.

Planet DebianJohn Goerzen: The downfall of… Trump or Democracy?

The future of the United States as a democracy is at risk. That’s plenty scary. More scary is that many Americans know this, but don’t care. And even more astonishing is that this same thing happened 45 years ago.

I remember it clearly. January 30, just a couple weeks ago. On that day, we had the news that FBI deputy director McCabe — a frequent target of apparently-baseless Trump criticism — had been pushed out. The Trump administration refused to enforce the bipartisan set of additional sanctions on Russia. And the House Intelligence Committee voted on party lines to release what we all knew then, and since have seen confirmed, was a memo filled with errors designed to smear people investigating the president, but which nonetheless contained enough classified material to cause an almighty kerfuffle in Washington.

I told my wife that evening, “I think today will be remembered as a turning point. Either to the downfall of Trump, or the downfall of our democracy, but I don’t know which.”

I have not written much about this scandal, because so many quality words have already been written. But it is time to add something.

I was interested in Watergate years ago. Back in middle school, I read All the President’s Men. I wondered what it must have been like to live through those events — corruption at the highest level of government, dirty tricks, not knowing how it would play out. I wished I could have experienced it.

A couple of decades later, I have got my wish and I am not amused. After all:

“If these allegations prove to be true, what they were seeking to steal was not the jewels, money or other property of American citizens, but something much more valuable — their most precious heritage, the right to vote in a free election…

If the allegations… are substantiated, there has been a very serious subversion of the integrity of the electoral process, and the committee will be obliged to consider the manner in which such a subversion affects the continued existence of this nation as a representative democracy, and how, if we are to survive, such subversions may be prevented in the future.”

Sen. Sam Ervin Jr, May 17, 1973

That statement from 45 years ago captures accurately my contemporary fears. If foreign interference in our elections is not only tolerated but embraced, where does that leave us? Are we really a republic anymore?

I have been diving back into Watergate. In One Man Against The World: The Tragedy of Richard Nixon, written in 2015, Tim Weiner dives into the Nixon story in unprecedented detail, thanks to the release of many more files from that time. On his very first page, he writes:

[Nixon] made war in pursuit of peace. He committed crimes in the name of the law. He tore the country apart while trying to unite it. He sabotaged his presidency by violating the Constitution. He destroyed himself and damaged the nation through deliberate acts of folly…

He practiced geopolitics without subtlety; he preferred subterfuge and brutality. He dropped bombs and napalm without remorse; he believed they delivered a political message beyond flood and fire. He charted the course of the war without a strategy; he delivered victory to his adversaries.

His gravest decisions undermined his allies abroad. His grandest delusions armed his enemies at home…

The truth was not in him; secrecy and deception were his touchstones.

That these words describe another American president, one that I’m sure Weiner had not foreseen, is jarring. The parallels between Nixon and Trump in the pages of Weiner’s book are so strong that one sometimes wonders if Weiner has a more accurate story of Trump than Wolff got – and also if the pages of his book let us see what’s in store for us this year.

Today I started listening to the excellent podcast Slow Burn. If you have time for nothing else, listen to episode 5: True Believers. It discusses the politicization of the Senate Watergate committee, and more ominously, the efforts of reporters to understand the people that still supported Nixon — despite all the damning testimony already out there.

Gail Sheehy went to a bar where Nixon supporters gathered, wanting to get their reaction to the Watergate hearings. The supporters didn’t want to watch. They thought the hearings were just an attempt by liberals to take down Nixon. Sheehy found the president’s people to be “angry, demoralized, and disconcertingly comfortable with the idea of a police state run by Richard Nixon.”

These guys felt they were nobodies… except Richard Nixon gave them an identity. He was a tough guy who was “going to get rid of all those anti-war people, anarchists, terrorists… the people that were tearing down our country!”

Art Buchwald’s tongue-in-cheek handy excuses for Nixon backers seem to be copied almost verbatim by Fox News (substitute Hillary’s emails for Chappaquiddick).

And what happened to the scum of Richard Nixon’s era? Yes, some went to jail, but not all.

  • Steve King, one of Nixon’s henchmen that kidnapped Martha Mitchell (wife of Attorney General and Nixon henchman John Mitchell) for a week to keep her from spilling the beans on Watergate, beat her up, and had her drugged — well he was appointed by Trump to be ambassador to the Czech Republic and confirmed by the Senate.
  • The man that said that the Watergate burglars were “not criminal at heart” because “their only aim was to re-elect the president” later got elected president himself, and pardoned one of the burglars. (Ronald Reagan)
  • The man that said “just let the president do his job!” was also elected president (George H. W. Bush)
  • The man that finally carried out Nixon’s order to fire special prosecutor Archibald Cox was nominated to the Supreme Court, but his nomination was blocked in the Senate. (Robert Bork) He was, however, on the United States Court of Appeals for 6 years.
  • And in an odd conspiracy-laden introduction to a reprint of a youth’s history book on Watergate, none other than Roger Stone, wrapped up in Trump’s shenanigans, was trying to defend Nixon. Oh, and he was a business partner with Paul Manafort and lobbyist for Ferdinand Marcos.

One comfort from all of this is the knowledge that we had been there before. We had lived through an era of great progress in civil rights, and right after that elected a dictatorial crook president. We survived the president’s fervent supporters refusing to believe overwhelming evidence of his crookedness. We survived.

And yet, that is no guarantee. After all, as John Dean put it, Nixon “might have survived if there’d been a Fox News.”

Planet Linux AustraliaPia Waugh: An optimistic future

This is my personal vision for an event called “Optimistic Futures” to explore what we could be aiming for and figure out the possible roles for government in future.

Technology is both an enabler and a disruptor in our lives. It has ushered in an age of surplus, with decentralised systems enabled by highly empowered global citizens, all creating increasing complexity. It is imperative that we transition into a more open, collaborative, resilient and digitally enabled society that can respond exponentially to exponential change whilst empowering all our people to thrive. We have the means now by which to overcome our greatest challenges including poverty, hunger, inequity and shifting job markets but we must be bold in collectively designing a better future, otherwise we may unintentionally reinvent past paradigms and inequities with shiny new things.

Technology is only as useful as it affects actual people, so my vision starts, perhaps surprisingly for some, with people. After all, if people suffer, the system suffers, so the well being of people is the first and foremost priority for any sustainable vision. But we also need to look at what all sectors and communities across society need and what part they can play:

  • People: I dream of a future where the uniqueness of local communities, cultures and individuals is amplified, where diversity is embraced as a strength, and where all people are empowered with the skills, capacity and confidence to thrive locally and internationally. A future where everyone shares in the benefits and opportunities of a modern, digital and surplus society/economy with resilience, and where everyone can meaningfully contribute to the future of work, local communities and the national/global good.
  • Public sectors: I dream of strong, independent, bold and highly accountable public sectors that lead, inform, collaborate, engage meaningfully and are effective enablers for society and the economy. A future where we invest as much time and effort on transformational digital public infrastructure and skills as we do on other public infrastructure like roads, health and traditional education, so that we can all build on top of government as a platform. Where everyone can have confidence in government as a stabilising force of integrity that provides a minimum quality of life upon which everyone can thrive.
  • The media: I dream of a highly effective fourth estate which is motivated systemically with resilient business models that incentivise behaviours to both serve the public and hold power to account, especially as “news” is also arguably becoming exponential. Actionable accountability that doesn’t rely on the linearity and personal incentives of individuals to respond will be critical with the changing pace of news and with more decisions being made by machines.
  • Private, academic and non-profit sectors: I dream of a future where all sectors can more freely innovate, share, adapt and succeed whilst contributing meaningfully to the public good and being accountable to the communities affected by decisions and actions. I also see a role for academic institutions in particular, given their systemic motivation for high veracity outcomes without being attached to one side, as playing a role in how national/government actions are measured, planned, tested and monitored over time.
  • Finally, I dream of a world where countries are not celebrated for being just “digital nations” but rather are engaged in a race to the top in using technology to improve the lives of all people and to establish truly collaborative democracies where people can meaningfully participate in shaping optimistic and inclusive futures.

Technology is a means, not an ends, so we need to use technology to both proactively invent the future we need (thank you Alan Kay) and to be resilient to change including emerging tech and trends.

Let me share a few specific optimistic predictions for 2070:

  • Automation will help us redesign our work expectations. We will have a 10-20 hour work week supported by machines, freeing up time for family, education, civic duties and innovation. People will have less pressure to simply survive and will have more capacity to thrive (this is a common theme, but something I see as critical).
  • 3D printing of synthetic foods and nanotechnology to deconstruct and reconstruct molecular materials will address hunger, access to medicine, clothes and goods, and community hubs (like libraries) will become even more important as distribution, education and social hubs, with drones and other aerial travel employed for those who can’t travel. Exoskeletons will replace scooters :)
  • With rocket travel normalised, and only an hour to get anywhere on the planet, nations will see competitive citizenships where countries focus on the best quality of life to attract and retain people, rather than largely just trying to attract and retain companies as we do today. We will also likely see the emergence of more powerful transnational communities that have nationhood status to represent the aspects of people’s lives that are not geopolitically bound.
  • The public service has highly professional, empathetic and accountable multi-disciplinary experts on responsive collaborative policy, digital legislation, societal modeling, identifying necessary public digital infrastructure for investment, and well controlled but openly available data, rules and transactional functions of government to enable dynamic and third party services across myriad channels, provided to people based on their needs but under their control. We will also have a large number of citizens working 1 or 2 days a week in paid civic duties on areas where they have passion, skills or experience to contribute.
  • The paralympics will become the main game, as it were, with no limits on human augmentation. We will do the 100m sprint with rockets, judo with cyborgs, rock climbing with tentacles. We have access to medical capabilities to address any form of disease or discomfort but we don’t use the technologies to just conform to a normative view of a human. People are free to choose their form and we culturally value diversity and experimentation as critical attributes of a modern adaptable community.

I’ve only been living in New Zealand a short time but I’ve been delighted and inspired by what I’ve learned from kiwi and Māori cultures, so I’d like to share a locally inspired analogy.

Technology is on one hand, just a waka (canoe), a vehicle for change. We all have a part to play in the journey and in deciding where we want to go. On the other hand, technology is also the winds, the storms, the thunder, and we have to continually work to understand and respond to emerging technologies and trends so we stay safely on course. It will take collaboration and working towards common goals if we are to chart a better future for all.

Planet DebianLars Wirzenius: What is Debian all about, really? Or: friction, packaging complex applications

Another weekend, another big mailing list thread

This weekend, those interested in Debian development have been having a discussion on the debian-devel mailing list about "What can Debian do to provide complex applications to its users?". I'm commenting on that in my blog rather than the mailing list, since this got a bit too long to be usefully done in an email.

directhex's recent blog post "Packaging is hard. Packager-friendly is harder." is also relevant.

The problem

To start with, I don't think the email that started this discussion poses the right question. The problem is not really about complex applications; we already have those in Debian. See, for example, LibreOffice. The discussion is really about how Debian should deal with the way some types of applications are developed upstream these days. They're not all complex, and they're not all big, but as usual, things only get interesting when n is big.

A particularly clear example is the whole nodejs ecosystem, but it's not limited to that and it's not limited to web applications. This is also not the first time this topic arises, but we've never come to any good conclusion.

My understanding of the problem is as follows:

A current trend in software development is to use programming languages, often interpreted high level languages, combined with heavy use of third-party libraries, and a language-specific package manager for installing libraries for the developer to use, and sometimes also for the sysadmin installing the software for production to use. This bypasses the Linux distributions entirely. The benefit is that it has allowed ecosystems for specific programming languages where there is very little friction for using libraries written in that language to be used by developers, speeding up development cycles a lot.

When I was young(er) the world was horrible

In comparison, in the old days, which for me means the 1990s, and before Debian took over my computing life, the cycle was something like this:

I would be writing an application, and would need to use a library to make some part of my application easier to write. To use that library, I would download the source code archive of the latest release, and laboriously decipher and follow the build and installation instructions, fix any problems, rinse, repeat. After getting the library installed, I would get back to developing my application. Often the installation of the dependency would take hours, so not a thing to be undertaken lightly.

Debian made some things better

With Debian, and apt, and having access to hundreds upon hundreds of libraries packaged for Debian, this became a much easier process. But only for the things packaged for Debian.

For those developing and publishing libraries, Debian didn't make the process any easier. They would still have to publish a source code archive, but also hope that it would eventually be included in Debian. And updates to libraries in the Debian stable release would not get into the hands of users until the next Debian stable release. This is a lot of friction. For C libraries, that friction has traditionally been tolerable. The effort of making the library in the first place is considerable, so any friction added by Debian is small by comparison.

The world has changed around Debian

In the modern world, developing a new library is much easier, and so also the friction caused by Debian is much more of a hindrance. My understanding is that things now happen more like this:

I'm developing an application. I realise I could use a library. I run the language-specific package manager (pip, cpan, gem, npm, cargo, etc), it downloads the library, installs it in my home directory or my application source tree, and in less than the time it takes to have a sip of tea, I can get back to developing my application.

This has a lot less friction than the Debian route. The attraction to application programmers is clear. For library authors, the process is also much streamlined. Writing a library, especially in a high-level language, is fairly easy, and publishing it for others to use is quick and simple. This can lead to a virtuous cycle where I write a useful little library, you use and tell me about a bug or a missing feature, I add it, publish the new version, you use it, and we're both happy as can be. Where this might have taken weeks or months in the old days, it can now happen in minutes.

The big question: why Debian?

In this brave new world, why would anyone bother with Debian anymore? Or any traditional Linux distribution, since this isn't particularly specific to Debian. (But I mention Debian specifically, since it's what I know best.)

A number of things have been mentioned or alluded to in the discussion mentioned above, but I think it's good for the discussion to be explicit about them. As a computer user, software developer, system administrator, and software freedom enthusiast, I see the following reasons to continue to use Debian:

  • The freeness of software included in Debian has been vetted. I have a strong guarantee that software included in Debian is free software. This goes beyond the licence of that particular piece of software, but includes practical considerations like the software can actually be built using free tooling, and that I have access to that tooling, because the tooling, too, is included in Debian.

    • There was a time when Debian debated (with itself) whether it was OK to include a binary that needed to be built using a proprietary C compiler. We decided that it isn't, or not in the main package archive.

    • These days we have the question of whether "minimised Javascript" is OK to be included in Debian, if it can't be produced using tools packaged in Debian. My understanding is that we have already decided that it's not, but the discussion continues. To me, this seems equivalent to the above case.

  • I have a strong guarantee that software in a stable Debian release won't change underneath me in incompatible ways, except in special circumstances. This means that if I'm writing my application and targeting Debian stable, the library API won't change, at least not until the next Debian stable release. Likewise for every other bit of software I use. Having things continue to work without having to worry is a good thing.

    • Note that a side-effect of the low friction of library development in current ecosystems is that the library API sometimes changes. This would mean my application would need to change to adapt to the API change. That's friction for my work.
  • I have a strong guarantee that a dependency won't just disappear. Debian has a large mirror network of its package archive, and there are easy tools to run my own mirror, if I want to. While running my own mirror is possible for other package management systems, each one adds to the friction.

    • The nodejs NPM ecosystem seems to be especially vulnerable to this. More than once, packages have gone missing, causing other projects, which depend on the missing packages, to start failing.

    • The way the Debian project is organised, it is almost impossible for this to happen in Debian. Not only are package removals carefully co-ordinated, but packages that are depended on by other packages aren't removed.

  • I have a strong guarantee that a Debian package I get from a Debian mirror is the official package from Debian: either the actual package uploaded by a Debian developer or a binary package built by a trusted Debian build server. This is because Debian uses cryptographic signatures of the package lists and I have a trust path to the Debian signing key.

    • At least some of the language-specific package managers fail to have such a trust path. This means that I have no guarantee that the library package I download today is the same code uploaded by the library author.

    • Note that https does not help here. It protects the transfer from the package manager's web server to me, but makes absolutely no guarantees about the validity of the package. There have been enough cases of package repositories having been attacked that this matters to me. Debian's signatures protect against malicious changes on mirror hosts.

  • I have a reasonably strong guarantee that any problem I find can be fixed, by me or someone else. This is not a strong guarantee, because Debian can't do anything about insanely complicated code, for example, but at least I can rely on being able to rebuild the software. That's a basic requirement for fixing a bug.

  • I have a reasonably strong guarantee that, after upgrading to the next Debian stable release, my stuff continues to work. Upgrades may always break, but at least Debian tests them and treats it as a bug if an upgrade doesn't work, or loses user data.

These are the reasons why I think Debian and the way it packages and distributes software is still important and relevant. (You may disagree. I'm OK with that.)

What about non-Linux free operating systems

I don't have much personal experience with non-Linux systems, so I've only talked about Linux here. I don't think the BSD systems, for example, are actually all that different from Linux distributions. Feel free to substitute "free operating system" for "Linux" throughout.

What is it Debian tries to do, anyway?

The previous section is one level of abstraction too low. It's important, but it's beneficial to take a further step back and consider what it is Debian actually tries to achieve. Why does Debian exist?

The primary goal of Debian is to enable its users to use their computers using only free software. The freedom aspect is fundamentally important and a principle that Debian is not willing to compromise on.

The primary approach to achieving this goal is to produce a "distribution" of free software, making it feasible for our users to install a free software operating system and applications, and to maintain such a computer.

This leads to secondary goals, such as:

  • Making it easy to install Debian on a computer. (For values of easy that should be compared to toggling boot sector bytes manually.)

    We've achieved this, though of course things can always be improved.

  • Making it easy to install applications on a computer with Debian. (Again, compared to the olden days, when that meant configuring and compiling everything from scratch, with no guidance.)

    We've achieved this, too.

  • A system with Debian installed is reasonably secure, and easy to keep reasonably secure.

    This means Debian will provide security support for software it distributes, and has ways in which to install security fixes. We've achieved this, though this, too, can always be improved.

  • A system with Debian installed should keep working for extended periods of time. This is important to make using Debian feasible. If it takes too much effort to have a computer running Debian, it's not feasible for many people to do that, and then Debian fails its primary goal.

    This is why Debian has stable releases with years of security support. We've achieved this.

The disconnect

On the one hand, we have Debian, which pretty much has achieved what I declare to be its primary goal. On the other hand, a lot of developers now expect much less friction than what Debian offers. This disconnect is, I believe, the cause of the debian-devel discussion, and of variants of that discussion all over the open source landscape.

These discussions often go one of two ways, depending on which community is talking.

  • In the distribution and more old-school communities, the low-friction approach of language-specific package managers is often considered to be a horror, and an abandonment of all the good things that the Linux world has achieved. "Young saplings, who do they think they are, all agile and bendy and with no principles at all, get off our carefully cultivated lawn."

  • In the low-friction communities, Linux distributions are something only old, stodgy, boring people care about. "Distributions are dead, they only get in the way, nobody bothers with them anymore."

This disconnect will require effort by both sides to close the gap.

On the one hand, so much new software is being written by people using the low-friction approach, that Linux distributions may fail to attract new users and especially new developers, and this will hurt them and their users.

On the other hand, the low-friction people may be sawing off the tree branch they're sitting on. If distributions suffer, the base on which low-friction development relies will wither away, and we'll be left with running low-friction free software on proprietary platforms.

Things for low-friction proponents to improve

Here's a few things I've noticed that go wrong in the various communities oriented towards the low-friction approach.

  • Not enough care is given to copyright licences. This is a boring topic, but it's the legal basis that all of free software and open source is based on. If copyright licences are violated, or copyrights are not respected, or copyrights or licences are not expressed well enough, or incompatible licences are mixed, the result is very easily not actually either free software or open source.

    It's boring, but be sufficiently pedantic here. It's not even all that difficult.

  • Do provide actual source. It seems quite a number of Javascript projects only distribute "minimised" versions of code. That's not actually source code, any more than, say, Java byte code is, even if a de-compiler can make it kind of editable. If source isn't available, it's not free software or open source.

  • Please try to be careful with API changes. What used to work should still work with a new version of a library. If you need to make an API change that breaks compatibility, find a way to still support those who rely on the old API, using whatever mechanisms available to you. Ideally, support the old API for a long time, years. Two weeks is really not enough.

  • Do be careful with your dependencies. Locking down dependencies on a specific version makes things difficult for distributions, because they often can only provide one or a very small number of versions of any one package. Likewise, avoid embedding dependencies in your own source tree, because that explodes the amount of work distributions have to do to patch security holes. (No, distributions can't rely on tens of thousands of upstreams to each do the patching correctly and promptly.)

Things for Debian to improve

There are many sources of friction that come from Debian itself. Some of them are unavoidable: if upstream projects don't take care of copyright licence hygiene, for example, then Debian will impose that on them and that can't be helped. Other things are more avoidable, however. Here's a list off the top of my head:

  • A lot of stuff in Debian happens over email, which might happen using a web application, if it were not for historical reasons. For example, the Debian bug tracking system requires using email, and given delays caused by spam filtering, this can cause delays of more than fifteen minutes. This is a source of friction that could be avoided.

  • Likewise, Debian voting happens over email, which can cause friction from delays.

  • Debian lets its package maintainers use any version control system, any packaging helper tooling, and packaging workflow they want. This means that every package is, to some extent, a new territory for someone other than its primary maintainers. Even when the same tools are used, they can be used in variety of different ways. Consistency should reduce friction.

  • There's too little infrastructure to do things like collecting copyright information into debian/copyright. This really shouldn't be a manual task.

  • Debian packaging uses arcane file formats, loosely based on email headers. More standard formats might make things easier, and reduce friction.

  • There's not enough automated testing, or it's too hard to use, making it too hard to know if a new package will work, or a modified package doesn't break anything that used to work.

  • Overall, making a Debian package tends to require too much manual work. Packaging helpers like dh certainly help, but not enough. I don't have a concrete suggestion how to reduce it, but it seems like an area Debian should work on.

  • Maybe consider supporting installing multiple versions of a package, even if only for, say, Javascript libraries. Possibly with a caveat that only specific versions will be security supported, and a way to alert the sysadmin if vulnerable packages are installed. Dunno, this is a difficult one.

  • Maybe consider providing something where the source package gets automatically updated to every new upstream release (or commit), with binary packages built from that, and those automatically tested. This might be a separate section of the archive, and packages would be included into the normal part of the archive only by manual decision.

  • There's more, but mostly not relevant to this discussion, I think. For example, Debian is a big project, and the mere size is a cause of friction.


I don't allow comments on my blog, and I don't want to debate this in private. If you have comments on anything I've said above, please post to the debian-devel mailing list. Thanks.


To ensure I get some responses, I will leave these bait here:

Anyone who's been programming less than 12332 days is a young whipper-snapper and shouldn't be taken seriously.

Depending on the latest commit of a library is too slow. The proper thing to do for really fast development is to rely on the version in the unsaved editor buffer of the library developer.

You shouldn't have read any of this. I'm clearly a troll.

Planet DebianMartín Ferrari: OSM in IkiWiki

For about 15 years, I have been thinking of creating a geo-referenced wiki of pubs, with loads of structured data to help searching. I don't know if that would be useful for anybody else, but I know I would use it!

Sadly, the many times I started coding something towards that goal, I ended up blocked by something, and I kept postponing my dream project.

Independently of that, for the past two years I have been driving a regular social meeting in Dublin for CouchSurfers, called the Dublin Mingle. The idea is pretty simple: to go every week to a different pub, and make friends.

I wanted to make a map marking all the places visited. Completely useless, but pretty! So, I went back to looking into IkiWiki internals, as the current osm plugin would not fulfill all my needs, and has a few annoying bugs.

After a few days of work, I made it: a refurbished osm plugin that uses the modern and pretty Leaflet library. If the javascript is not lost on the way (because you are reading from an aggregator, for example), below you should see the result. Otherwise, you can see it in action on its own page: Mingle.


The code is still not ready for merging into Ikiwiki, as I need to write tests and documentation. But you can find the changes in my GitHub repo.

It is still a long way to go before I can create my pubs wiki, but it is the first building block! Now I need a way to easily import and sync data from OSM, and then to create a structured search function.


Don MartiThis is why we can't have nice brands.

What if I told you that there was an Internet ad technology that...

  • can reach the same user on mobile and desktop

  • uses open-standard persistent identifiers for users

  • can connect users to their purchase history

  • reaches the users that the advertiser chooses, at the time the advertiser chooses

  • and doesn't depend on the Google/Facebook duopoly?

Don't go looking for it on the Lumascape.

I'm describing email spam.

Every feature that adtech is bragging on, or working toward? Email spam had it in the 1990s.

So why didn't brand advertisers jump all over spam? Why did they mostly leave it to low-reputation brands and scammers?

To be honest, it probably wasn't a decision decision in most cases, just corporate sloth. But staying away from spam was the right answer. In the email inbox, spam from a high-reputation brand doesn't look any different from spam that any fly-by-night operation can send. All spammers can do the same stuff:

They can sell to people...for a fraction of what marketing used to cost. And they can collect data on these consumers, track what they buy, what they love and hate about the experience, and market to them directly much more effectively.

Oh, wait. That one isn't about spam in the 1990s. That's about targeted advertising on social media sites today. The CEO of digital advertising's biggest trade group says most big marketers are screwed unless they completely change their business models.

It's the direct consumer relationships, and the use of consumer data, that is completely game-changing for the marketing world. And most big marketers, such as Procter & Gamble and Unilever, are not ready for this new reality, the IAB says.

But of course they're ready. The difference is that those established brand advertisers aren't any more ready than some guy who watched a YouTube video series on "growth hacking" and is ready to start buying targeted ads and drop-shipping.

The "new reality," the targeted advertising business that the IAB wants brands to join them in, is a place where you win based not on how much the audience trusts you, but on how well you can out-hack the competition. And like any information space organized by hacking skill, it's a hellscape of deceptive crap. Read The Strange Brands in Your Instagram Feed by Alexis C. Madrigal.

Some Instagram retailers are legit brands with employees and products. Others are simply middlemen for Chinese goods, built in bedrooms, and launched with no capital or inventory. All of them have been pulled into existence by the power of Instagram and Facebook ads combined with a suite of e-commerce tools based around Shopify.

Of course, not every brand that buys a social media ad or other targeted ad is crap.

But a social media ad is useless for telling crap brands from non-crap ones. It doesn't carry economic signal. There's no such thing as a free watch. (PDF)

Rory Sutherland writes, in Reducing activities to their core misses the point,

Many billions of pounds of advertising expenditure have been shifted from conventional media, most notably newspapers, and moved into digital media in a quest for targeted efficiency. If advertising simply works by the conveyance of messages, this would be a sensible thing to do. However, it is beginning to become apparent that not all, perhaps not even most, advertising works this way. It seems that a large part of advertising creates trust and conviction in its audience precisely because it is perceived to be costly.

If anyone knows that any seller can watch a few YouTube videos and do a certain activity, does that activity really help the audience distinguish a high-reputation seller from a low-reputation one?

And how does it affect a legit brand when its ads show up on the same medium with all the crappy ones? (Twitter has a solution that keeps its ads saleable: just don't show any ads to important people. I'm surprised they can get away with this, but given the mix of rip-off and real brand ads I keep seeing there, it seems to be working.)

Extremists and state-sponsored misinformation campaigns aren't "abusing" targeted advertising. They're just taking advantage of a system optimized for deception and using it normally.

Now, I don't want to blame targeted advertising for all of the problems of brand equity. When you put high-fructose corn syrup in your product, brand equity suffers. When you outsource or de-skill the customer support function, brand equity suffers. All the half-ass "looks good this quarter" stuff that established brands are doing is bad for brand equity. It just turns out that the kinds of advertising that you can do on the Internet today are all half-ass "looks good this quarter" stuff. If you want to send a credible economic signal, buy TV time or put a flagship store on some expensive real estate. The Internet's got nothing for you.

Failure to create signal-carrying ad units should be more of a concern for people who want to earn ad money on the Internet than it is. See Bob Hoffman's "refrigerator test." All that work that went into building the most complicated ad medium ever? It went into building an ad medium optimized for low-reputation advertisers. And that kind of ad medium tends to see rates go down over time. It doesn't hold value.

And the medium can't gain value until the users trust it, which means they have to trust the browser. In-browser tracking protection is going to have to enable the legit web advertising industry the same way that spam filters enable the legit email newsletter industry.

Here’s why the epidemic of malicious ads grew so much worse last year

Facebook and Google could lose $2B in ad revenue over ‘toxic content’

How I Cracked Facebook’s New Algorithm And Tortured My Friends

Wanted: Console Text Editor for Windows

Where Did All the Advertising Jobs Go?

Facebook patents tech to determine social class

The Mozilla Blog: A Perspective: Firefox Quantum’s Tracking Protection Gives Users The Right To Be Curious

Breaking up with Facebook: users confess they're spending less time

Survey: Facebook is the big tech company that people trust least

The Perils of Paid Content


Unilever pledges to cut ties with ‘platforms that create division’

Content recommendation services Outbrain and Taboola are no longer a guaranteed source of revenue for digital publishers

The House That Spied on Me

Why Facebook's Disclosure to the City of Seattle Doesn't Add Up

Debunking common blockchain-saving-advertising myths

SF tourist industry struggles to explain street misery to horrified visitors

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

How Facebook Helped Ruin Cambodia's Democracy

Planet DebianLouis-Philippe Véronneau: Downloading all the Critical Role podcasts in one batch

I've been watching Critical Role [1] for a while now and since I've started my master's degree I haven't had much time to sit down and watch the show on YouTube as I used to do.

I thus started listening to the podcasts instead; that way, I can listen to the show while I'm doing other productive tasks. Pretty quickly, I grew tired of manually downloading every episode each time I finished the last one. To make things worse, the podcast is hosted on PodBean and they won't let you download episodes on a mobile device without their app. Grrr.

After the 10th time opening the terminal on my phone to download the podcast using some wget magic I decided enough was enough: I was going to write a dumb script to download them all in one batch.

I'm a little ashamed to say it took me more time than I had intended... The PodBean website uses semi-randomized URLs, so I could not figure out a way to guess the paths to the hosted audio files. I considered using youtube-dl to get the DASH version of the show on YouTube, but Google has been heavily throttling DASH streams recently. Not cool, Google.

I then had the idea to use iTunes' RSS feed to get the audio files. Surely they would somehow be included there? Of course Apple doesn't give you a simple RSS feed link on the iTunes podcast page, so I had to rummage around and eventually found out this is the link you have to use:

Surprise surprise: from the json file this link points to, I found out the main Critical Role podcast page has a proper RSS feed. In my defense, the RSS button on the main podcast page brings you to some PodBean crap page.
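
The same trick works for hunting down any show's feed: Apple's public lookup API returns a json document whose feedUrl field is the podcast's real RSS feed. A minimal sketch (the iTunes ID below is a placeholder, not Critical Role's actual ID):

import json
import urllib.request

# Query Apple's podcast lookup API; the first result's "feedUrl"
# field holds the real RSS feed. The ID here is a placeholder.
ITUNES_ID = "123456789"
with urllib.request.urlopen(
        "https://itunes.apple.com/lookup?id=" + ITUNES_ID) as response:
    results = json.load(response)["results"]
print(results[0]["feedUrl"])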

Anyway, once you have the RSS feed, it's only a matter of using grep and sed until you get what you want.

Around 20 minutes later, I had downloaded all the episodes, for a total of 22GB! Victory dance!


Here's the bash script I wrote. You will need recode to run it, as the RSS feed includes some HTML entities.

# Get the whole RSS feed ($FEED_URL holds the feed link discussed above)
wget -qO /tmp/criticalrole.rss "$FEED_URL"

# Extract the URLs and the episode titles
mp3s=( $(grep -o "http.\+mp3" /tmp/criticalrole.rss) )
titles=( $(tail -n +45 /tmp/criticalrole.rss | grep -o "<title>.\+</title>" \
           | sed -r 's@</?title>@@g; s@ @\\@g' | recode html..utf8) )

# Download all the episodes under their titles
for i in ${!titles[*]}; do
  wget -qO "$(sed -e "s@\\\@\\ @g" <<< "${titles[$i]}").mp3" "${mp3s[$i]}"
done

[1] - For those of you not familiar with Critical Role, it's a web series where a group of voice actresses and actors from LA play Dungeons & Dragons. It's so good even people like me who never played D&D can enjoy it.

Planet DebianSergio Durigan Junior: Hello, Planet Debian

Hey, there. This is long overdue: my entry in Planet Debian! I’m creating this post because, until now, I didn’t have a debian tag in my blog! Well, not anymore.

Stay tuned!

Planet Linux AustraliaDonna Benjamin: Site building with Drupal

What even is "Site Building"?

At DrupalDownunder some years back, the wonderful Erica Bramham named her talk "All node, no code". Nodes were the fundamental building blocks in Drupal, they were like single drops of content. These days though, it's all about entities.

But hang on a minute, I'm using lots of buzz words, and worse, I'm using words that mean different things in different contexts. Jargon is one of the first hurdles you need to jump to understand the diverse worlds of the web. People who grow up multi-lingual learn that the meanings of words are somewhat arbitrary. They learn the same thing has different names. This is true for the web too. So the first thing to know about Site Building is that it means different things to different people.

To me, it means being able to build a website without knowing how to code. I also believe it means I can build a website without having to set up my own development environment. I know people who vehemently disagree with me about this. But that's ok. This is my blog, and these are my rules.

So - this is a post about site building, using SimplyTest.Me and Drupal 8 out of the box.

1. Go to SimplyTest.Me.

2. Type Drupal Core in the search field, and select "Drupal core" from the list

3. Choose the latest development branch, right at the bottom of the list.


For me, right now, that's 8.6.x, and here's a screenshot of what that looks like.

SimplyTest Me Screenshot, showing drop down fields described in the text.


4. Click "Launch sandbox".

Now wait.

In a few moments, you should see a fresh shiny Drupal 8 site, ready for you to explore.

For me today, it looks like this.  

Drupal 8.6.x front page screenshot


In the top right of the window, you should see a "Log in" link.

Click that, and enter admin/admin to login. 

You're now ready to practice some site building!

First, you'll need to create some content to play with.  Here's a short screencast that shows you how to login, add an article, and change the title using Quick Edit.

A guide to what's next

Follow the Drupal User guide to start building your site!

If you want to start at the beginning, you'll get a great overview of Drupal, and some important info on how to plan your site. But if you want to roll up your sleeves and get building, you can skip the chapter on site installation and jump straight to chapter 4, and dive into basic site configuration.



You have 24 hours to experiment with the sandbox - after that it disappears.


Get in touch

If you want something more permanent, you might want to "try drupal" or contact us to discuss our Drupal services.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV February 2018 Workshop: Installing an Open Source OS on your tablet or phone

Feb 24 2018 12:30
Feb 24 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Installing an Open Source OS on your tablet or phone

Andrew Pam will demonstrate how to install LineageOS, previously known as CyanogenMod and based on the Android Open Source Project, on tablets and phones.  Feel free to bring your own tablets and phones and have a go, but please ensure you back them up if there is anything you still need stored on them!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.



CryptogramFriday Squid Blogging: Squid Pin

There's a squid pin on Kickstarter.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Rondam RamblingsYes, code is data, but that's not what makes Lisp cool

There has been some debate on Hacker News lately about what makes Lisp cool, in particular about whether the secret sauce is homo-iconicity, or the idea that "code is data", or something else.  I've read through a fair amount of the discussion, and there is a lot of misinformation and bad pedagogy floating around.  Because this is a topic that is near and dear to my heart, I thought I'd take a

CryptogramNew National Academies Report on Crypto Policy

The National Academies has just published "Decrypting the Encryption Debate: A Framework for Decision Makers." It looks really good, although I have not read it yet.

Not much news or analysis yet. Please post any links you find in the comments, and I will summarize them here.

Planet Linux AustraliaOpenSTEM: Australia at the Olympics

The modern Olympic games were started by Frenchman Pierre de Coubertin to promote international understanding. The first games of the modern era were held in 1896 in Athens, Greece. Australia has competed in all the Olympic games of the modern era, although our participation in the first one was almost by chance. Of course, the […]

Worse Than FailureError'd: Preparing for the Future

George B. wrote, "Wait, so is it done...or not done?"


George B. (a different George, but he's in good company) is seeing nearly the same thing with Crash Plan Pro, where the backup is done ...maybe.


"I swear, that's the last time that I'm flying with Icarus Airlines" Allison V. writes.


"The best I can figure, someone wanted to see what the simulation app would do if executed in some far flung future where months don't matter and nothing makes any sense," writes M.C.


Joel C. wrote "I can't help it - Next time my train is late, I'm going to immediately think that it's because someone didn't click to dismiss a popup."


"I'm not sure what this means, but I guess it's to point out that there are website buttons, and then there are buttons on the website," Brian R. wrote.




Cory DoctorowDo We Need a New Internet?

I was one of the interview subjects on an episode of BBC’s Tomorrow’s World called Do We Need a New Internet? (MP3); it’s a fascinating documentary, including some very thoughtful commentary from Edward Snowden.

Cory DoctorowThe 2018 Locus Poll is open: choose your favorite science fiction of 2017!

Following the publication of its editorial board’s long-list of the best science fiction of 2017, science fiction publishing trade-journal Locus now invites its readers to vote for their favorites in the annual Locus Award. I’m honored to have won this award in the past, and doubly honored to see my novel Walkaway on the short list, and in very excellent company indeed.

While you’re thinking about your Locus List picks, you might also use the list as an aide-memoire in picking your nominees for the Hugo Awards.

Krebs on SecurityNew EU Privacy Law May Weaken Security

Companies around the globe are scrambling to comply with new European privacy regulations that take effect a little more than three months from now. But many security experts are worried that the changes being ushered in by the rush to adhere to the law may make it more difficult to track down cybercriminals and less likely that organizations will be willing to share data about new online threats.

On May 25, 2018, the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires technology companies to get affirmative consent for any information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — is poised to propose changes to the rules governing how much personal information Web site name registrars can collect and who should have access to the data.

Specifically, ICANN has been seeking feedback on a range of proposals to redact information provided in WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. (Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free).
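For readers who want to see those data points for themselves, here is a minimal sketch in Python, assuming a Unix-like system with the standard whois command-line client installed; the exact field labels vary by registrar and registry:

    import subprocess

    def whois_lookup(domain):
        """Run the system whois client and return the raw record text."""
        result = subprocess.run(["whois", domain], capture_output=True, text=True, timeout=30)
        return result.stdout

    # Print just the registrant/admin/tech contact lines -- the data points
    # at the center of the redaction debate.
    for line in whois_lookup("example.com").splitlines():
        if line.strip().lower().startswith(("registrant", "admin", "tech")):
            print(line.strip())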

In a bid to help domain registrars comply with the GDPR regulations, ICANN has floated several proposals, all of which would redact some of the registrant data from WHOIS records. Its mildest proposal would remove the registrant’s name, email, and phone number, while allowing self-certified third parties to request access to said data with the approval of a higher authority — such as the registrar used to register the domain name.

The most restrictive proposal would remove all registrant data from public WHOIS records, and would require legal due process (such as a subpoena or court order) to reveal any information supplied by the domain registrant.

ICANN’s various proposed models for redacting information in WHOIS domain name records.

The full text of ICANN’s latest proposed models (from which the screenshot above was taken) can be found here (PDF). A diverse ICANN working group made up of privacy activists, technologists, lawyers, trademark holders and security experts has been arguing about these details since 2016. For the curious and/or intrepid, the entire archive of those debates up to the current day is available at this link.


To drastically simplify the discussions into two sides, those in the privacy camp say WHOIS records are being routinely plundered and abused by all manner of ne’er-do-wells, including spammers, scammers, phishers and stalkers. In short, their view seems to be that the availability of registrant data in the WHOIS records causes more problems than it is designed to solve.

Meanwhile, security experts are arguing that the data in WHOIS records has been indispensable in tracking down and bringing to justice those who seek to perpetrate said scams, spams, phishes and….er….stalks.

Many privacy advocates seem to take a dim view of any ICANN system by which third parties (and not just law enforcement officials) might be vetted or accredited to look at a domain registrant’s name, address, phone number, email address, etc. This sentiment is captured in public comments made by the Electronic Frontier Foundation‘s Jeremy Malcolm, who argued that — even if such information were only limited to anti-abuse professionals — this also wouldn’t work.

“There would be nothing to stop malicious actors from identifying as anti-abuse professionals – neither would want to have a system to ‘vet’ anti-abuse professionals, because that would be even more problematic,” Malcolm wrote in October 2017. “There is no added value in collecting personal information – after all, criminals are not going to provide correct information anyway, and if a domain has been compromised then the personal information of the original registrant isn’t going to help much, and its availability in the wild could cause significant harm to the registrant.”

Anti-abuse and security experts counter that there are endless examples of people involved in spam, phishing, malware attacks and other forms of cybercrime who include details in WHOIS records that are extremely useful for tracking down the perpetrators, disrupting their operations, or building reputation-based systems (such as anti-spam and anti-malware services) that seek to filter or block such activity.

Moreover, they point out that the overwhelming majority of phishing is performed with the help of compromised domains, and that the primary method for cleaning up those compromises is using WHOIS data to contact the victim and/or their hosting provider.

Many commentators observed that, in the end, ICANN is likely to proceed in a way that covers its own backside, and that of its primary constituency — domain registrars. Registrars pay a fee to ICANN for each domain a customer registers, although revenue from those fees has been falling of late, forcing ICANN to make significant budget cuts.

Some critics of the WHOIS privacy effort have voiced the opinion that the registrars generally view public WHOIS data as a nuisance issue for their domain registrant customers and an unwelcome cost center (they must field a constant stream of abuse complaints from security experts, researchers and others in the anti-abuse community, often while short-staffed).

“Much of the registrar market is a race to the bottom, and the ability of ICANN to police the contractual relationships in that market effectively has not been well-demonstrated over time,” commenter Andrew Sullivan observed.

In any case, sources close to the debate tell KrebsOnSecurity that ICANN is poised to recommend a WHOIS model loosely based on Model 1 in the chart above.

Specifically, the system that ICANN is planning to recommend, according to sources, would ask registrars and registries to display just the domain name, city, state/province and country of the registrant in each record; the public email addresses would be replaced by a form or message relay link that allows users to contact the registrant. The source also said ICANN plans to leave it up to the registries/registrars to apply these changes globally or only to natural persons living in the European Economic Area (EEA).

In addition, sources say non-public WHOIS data would be accessible via a credentialing system to identify law enforcement agencies and intellectual property rights holders. However, it’s unlikely that such a system would be built and approved before the May 25, 2018 effective date for the GDPR, so the rumor is that ICANN intends to propose a self-certification model in the meantime.

ICANN spokesman Brad White declined to confirm or deny any of the above, referring me instead to a blog post published Tuesday evening by ICANN CEO Göran Marby. That post does not, however, clarify which way ICANN may be leaning on the matter.

“Our conversations and work are on-going and not yet final,” White wrote in a statement shared with KrebsOnSecurity. “We are converging on a final interim model as we continue to engage, review and assess the input we receive from our stakeholders and Data Protection Authorities (DPAs).”

But with the GDPR compliance deadline looming, some registrars are moving forward with their own plans on WHOIS privacy. GoDaddy, one of the world’s largest domain registrars, recently began redacting most registrant data from WHOIS records for domains that are queried via third-party tools. And it seems likely that other registrars will follow GoDaddy’s lead.


For my part, I can say without hesitation that few resources are as critical to what I do here at KrebsOnSecurity as the data available in the public WHOIS records. WHOIS records are incredibly useful signposts for tracking cybercrime, and they frequently allow KrebsOnSecurity to break important stories about the connections between and identities behind various cybercriminal operations and the individuals/networks actively supporting or enabling those activities. I also very often rely on WHOIS records to locate contact information for potential sources or cybercrime victims who may not yet be aware of their victimization.

In a great many cases, I have found that clues about the identities of those who perpetrate cybercrime can be found by following a trail of information in WHOIS records that predates their cybercriminal careers. Also, even in cases where online abusers provide intentionally misleading or false information in WHOIS records, that information is still extremely useful in mapping the extent of their malware, phishing and scamming operations.

Anyone looking for copious examples of both need only search this Web site for the term “WHOIS,” which yields dozens of stories and investigations that simply would not have been possible without the data currently available in the global WHOIS records.

Many privacy activists involved in the WHOIS debate have argued that other data related to domain and Internet address registrations — such as name servers, Internet (IP) addresses and registration dates — should also be considered private information. My chief concern if this belief becomes more widely held is that security companies might stop sharing such information for fear of violating the GDPR, thus hampering the important work of anti-abuse and security professionals.

This is hardly a theoretical concern. Last month I heard from a security firm based in the European Union regarding a new Internet of Things (IoT) botnet they’d discovered that was unusually complex and advanced. Their outreach piqued my curiosity because I had already been working with a researcher here in the United States who was investigating a similar-sounding IoT botnet, and I wanted to know if my source and the security company were looking at the same thing.

But when I asked the security firm to share a list of Internet addresses related to their discovery, they told me they could not do so because IP addresses could be considered private data — even after I assured them I did not intend to publish the data.

“According to many forums, IPs should be considered personal data as it enters the scope of ‘online identifiers’,” the researcher wrote in an email to KrebsOnSecurity, declining to answer questions about whether their concern was related to provisions in the GDPR specifically.  “Either way, it’s IP addresses belonging to people with vulnerable/infected devices and sharing them may be perceived as bad practice on our end. We consider the list of IPs with infected victims to be private information at this point.”

Certainly as the Internet matures and big companies develop ever more intrusive ways to hoover up data on consumers, we also need to rein in the most egregious practices while giving Internet users more robust tools to protect and preserve their privacy. In the context of Internet security and the privacy principles envisioned in the GDPR, however, I’m worried that cybercriminals may end up being the biggest beneficiaries of this new law.

CryptogramElection Security

Good Washington Post op-ed on the need to use voter-verifiable paper ballots to secure elections, as well as risk-limiting audits.

Worse Than FailureIt's Called Abstraction, and It's a Good Thing

Steven worked for a company that sold “big iron” to big companies, for big bucks. These companies didn’t just want the machines, though, they wanted support. They wanted lots of support. With so many systems, processing so many transactions, installed at so many customer sites, Steven’s company needed a better way to analyze when things went squirrelly.

Thus was born a suite of applications called “DICS”- the Diagnostic Investigation Console System. It was, at its core, a processing pipeline. On one end, it would reach out to a customer’s site and download log files. The log files would pass through a series of analytic steps, and eventually reports would come out the other end. Steven mostly worked on the reporting side of things.

While working on reports, he’d sometimes hear about hiccups in the downloader portion of the pipeline, but as it was “not his circus, not his monkeys”, he didn’t pry too deeply. At least, he didn’t until one day, when his boss knocked on his cubicle divider.

“Hey, Steven. You know Perl, right?”

“Uh… sure.”

“And you’ve worked with XML files, right?”

“I… yes?”

“Great. Bob’s leaving. You’re going to need to take over the downloader portion of DICS. Talk to him ASAP. Great, thanks!”

Perl gets a reputation for being a “write only language”, which is at least partially undeserved. Bob was quite sensitive about that reputation, so he stressed, “I’ve worked really, really hard to keep the code as clean and clear as possible. Everything in the design is object oriented.”

Bob wasn’t kidding. Everything was wrapped up as a class. Everything. It was so class-happy it made the Spring framework jealous. JEE consultants would look at it and say, “Whoa, maybe slow down with the classes there.” A UML diagram of the architecture would drain ten printers worth of toner. The config file was stored in XML, and just for parsing out that file and storing the results, Bob had written 25 different classes, some as small as three lines. All in all, the whole downloader weighed in at about 5,000 lines of Perl code.

In the whirlwind tour, Steven asked Bob about the complexity. “It’s not complex. Each class is extremely simple. Well, aside from the config file wrapper, but it needs to have lots of methods because it has lots of data! There are so many fields in the XML file, and I needed to create getters and setters for them all! That way we can have Data Abstraction! That’s important! Data Abstraction is how we keep this project maintainable. What if the XML file format changes? It’s happened, you know. This will make it easy to keep our code in sync!”

Steven marveled at Bob’s ability to pronounce “data abstraction” as if it were in bold face, and resolved to touch the downloader script as little as possible. That resolution failed pretty much a week after Bob left, when the script fell down in production, leaving the DICS pipeline empty. Steven had to roll up his sleeves and get hands on with the code.

Now, one of Perl’s selling points is its rich library ecosystem. While CPAN may have its own issues as a package manager, if you want to do something like parse an XML file, there’s a library that does it. There are a dozen libraries that’ll do it. And they all follow a vaguely Perl-ish idiom: instead of classes, they favor associative arrays. That way, when you want to get something like the contents of the ip_addr tag from the config file, you can write code like this:

$ip_addr = $config->{hosts}[$n]{ip_addr}

This makes it easy to understand how the structure of the XML file relates to the Perl data structure, but that kind of mapping means that there isn’t any Data Abstraction, and thus was utterly the wrong approach. Instead, everything was done as a getter/setter method.

$ip_addr = $Config_object->host($n)->get_addr();

That doesn’t look too different, perhaps, but the devil is in the details. First, 90% of the getters were “thin”, so get_addr might look something like this:

sub get_addr { my $self = shift; return $self->{Addr}; }

That raises questions about the value of these getters/setters for fetching config values, but the bigger problem was this: there was nothing in the config file called “Addr”. Does this method return the IP address? Or a string in the form “$ip_addr:$port”? Or maybe even an array, like [$ip_addr, $port]?

Throughout the whole API, it was a bit of a crapshoot as to what any given method might return. And as for checking the documentation- they’d created a system that provided Data Abstraction, they didn’t need documentation, did they?

To track any given getter back to the actual field in the XML file it was getting, Steven had to trace through half a dozen different classes. It was frustrating and tedious, and Steven had half a mind to just throw the whole thing out and start over, consequences be damned. When he saw the “Translation” subsystem, he decided that it really did need to be thrown out, entirely.

You see, Bob’s goal with Data Abstraction was to make it so that, if the XML file changed, it would be easy to adapt the code. But the code was a mess. So when the XML file did change a few years back, Bob couldn’t update the config handling classes in any way that worked. So he did the next best thing- he wrote a “translation” module that would, using regular expressions, convert the new-style XML files back into the old-style XML files. Then his config-file classes could load and parse the old-style files.
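To make the shape of that hack concrete, here is a minimal Python sketch of such a regex-based translation module; the tag and attribute names are hypothetical, since the real schema isn’t shown, but the fragility is the point:

    import re

    # Hypothetical schema change: the new format renamed <host addr="...">
    # to <endpoint ip="...">. The shim rewrites new-style XML back into the
    # old style so the legacy config classes never have to change.
    NEW_TO_OLD = [
        (re.compile(r"<endpoint\b"), "<host"),
        (re.compile(r"</endpoint>"), "</host>"),
        (re.compile(r'\bip="'), 'addr="'),
    ]

    def translate(new_xml):
        old_xml = new_xml
        for pattern, replacement in NEW_TO_OLD:
            old_xml = pattern.sub(replacement, old_xml)
        return old_xml

    print(translate('<endpoint ip="10.0.0.1" port="8080"/>'))
    # -> '<host addr="10.0.0.1" port="8080"/>'
    # Works until an attribute value happens to contain ip=", or the schema
    # changes again -- regular expressions don't understand XML structure.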

Steven sums it up perfectly:

Bob’s classes weren’t data abstraction. It was just… data abstracturbation.

When Steven was done reimplementing Bob's work, he had about 500 lines of code, and the downloader stopped failing every few days.



Sociological ImagesWhat’s Trending? Feeling the Love

Valentine’s Day is upon us, but in a world of hookups and breakups many people are concerned about the state of romance. Where do Americans actually stand on sex and relationships? We took a look at some trends from the General Social Survey. They highlight an important point: while Americans are more accepting of things like divorce and premarital sex, that doesn’t necessarily mean that both are running rampant in society.

For example, since the mid 1970s, Americans have become much more accepting of sex before marriage. Today more than half of respondents say it isn’t wrong at all.

However, these attitudes don’t necessarily mean people are having more sex. Younger Americans today actually report having no sexual partners more frequently than people of the same age in earlier surveys.

And what about marriage? Americans are more accepting of divorce now, with more saying a divorce should be easier to obtain.

But again, this doesn’t necessarily mean everyone is flying the coop. While self-reported divorce rates had been on the rise since the mid 1970s, they have largely leveled off in recent years.

It is important to remember that for core social practices like love and marriage, we are extra susceptible to moral panics when faced with social change. These trends show how changes in attitudes don’t always line up with changes in behavior, and they remind us that sometimes we can save the drama for the rom-coms.

Inspired by demographic facts you should know cold, “What’s Trending?” is a post series at Sociological Images featuring quick looks at what’s up, what’s down, and what sociologists have to say about it.

Ryan Larson is a graduate student from the Department of Sociology, University of Minnesota – Twin Cities. He studies crime, punishment, and quantitative methodology. He is a member of the Graduate Editorial Board of The Society Pages, and his work has appeared in Poetics, Contexts, and Sociological Perspectives.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


CryptogramCan Consumers' Online Data Be Protected?

Everything online is hackable. This is true for Equifax's data and the federal Office of Personnel Management's data, which was hacked in 2015. If information is on a computer connected to the Internet, it is vulnerable.

But just because everything is hackable doesn't mean everything will be hacked. The difference between the two is complex, and filled with defensive technologies, security best practices, consumer awareness, the motivation and skill of the hacker and the desirability of the data. The risks will be different if an attacker is a criminal who just wants credit card details -- and doesn't care where he gets them from -- or the Chinese military looking for specific data from a specific place.

The proper question isn't whether it's possible to protect consumer data, but whether a particular site protects our data well enough for the benefits provided by that site. And here, again, there are complications.

In most cases, it's impossible for consumers to make informed decisions about whether their data is protected. We have no idea what sorts of security measures Google uses to protect our highly intimate Web search data or our personal e-mails. We have no idea what sorts of security measures Facebook uses to protect our posts and conversations.

We have a feeling that these big companies do better than smaller ones. But we're also surprised when a lone individual publishes personal data hacked from the infidelity site, or when the North Korean government does the same with personal information in Sony's network.

Think about all the companies collecting personal data about you -- the websites you visit, your smartphone and its apps, your Internet-connected car -- and how little you know about their security practices. Even worse, credit bureaus and data brokers like Equifax collect your personal information without your knowledge or consent.

So while it might be possible for companies to do a better job of protecting our data, you as a consumer are in no position to demand such protection.

Government policy is the missing ingredient. We need standards and a method for enforcement. We need liabilities and the ability to sue companies that poorly secure our data. The biggest reason companies don't protect our data online is that it's cheaper not to. Government policy is how we change that.

This essay appeared as half of a point/counterpoint with Priscilla Regan, in a CQ Researcher report titled "Privacy and the Internet."

Worse Than FailureCodeSOD: All the Rest Have Thirty One…

Aleksei received a bunch of notifications from their CI system, announcing a build failure. This was interesting, because no code had changed recently, so what was triggering the failure?

private BillingRun CreateTestBillingRun(int billingRunGroupId, DateTime? billingDate, int? statusId)
{
    return new BillingRun
    {
        BillingRunGroupId = billingRunGroupId,
        PeriodStart = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 1),
        BillingDate = billingDate ?? new DateTime(DateTime.Today.Year, DateTime.Today.Month, 15),
        CreatedDate = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 30),
        ItemsPreparedDate = new DateTime(2017, 4, 7),
        CompletedDate = new DateTime(2017, 4, 8),
        DueDate = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 13),
        StatusId = statusId ?? BillingRunStatusConsts.Completed,
        ErrorCode = "ERR_CODE",
        Error = "Full error description",
        ModifiedOn = new DateTime(2017, 1, 1)
    };
}

Take a look at the instantiation of CreatedDate. I imagine the developer’s internal monologue went something like this:

Okay, the Period Start is the beginning of the month, the Billing Date is the middle of the month, and Created Date is the end of the month. Um… okay, well, beginning is easy. That’s the 1st. Phew. Okay, but the middle of the month. That’s hard. Oh, wait, wait a second! It’s billing, so I bet the billing department has a day they always send out the bills. Let me send an email to Steve in billing… oh, look at that. It’s always the 15th. Great. Boy. This programming stuff is easy. Whew. Okay, so now the end of the month. This one’s tricky, because months have different lengths, sometimes 30 days, and sometimes 31. Let me ask Steve again, if they have any specific requirements there… oh, look at that. They don’t really care so long as it’s the last day or two of the month. Great. I’ll just use 30, then. Good thing there aren’t any months with a shorter length.
Y’know, I vaguely remember reading a thing that said tests should always use the same values, so that every run tests exactly the same combination of inputs. I think I saved a bookmark to read it later. Should I read it now? No! I should commit this code, let the CI build run, and then mark the requirement as complete.
Boy, this programming stuff is easy.
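The build, presumably, started failing the moment the calendar rolled into a month without a 30th day. The fix is to compute the month's last day instead of hard-coding it; in C#, DateTime.DaysInMonth(year, month) does exactly that. Here is a minimal sketch of the same idea in Python:

    import calendar
    from datetime import date

    def last_day_of_month(d):
        """Return the final day of d's month, correct in February and leap years."""
        _, days = calendar.monthrange(d.year, d.month)
        return d.replace(day=days)

    print(last_day_of_month(date(2018, 2, 14)))  # 2018-02-28
    print(last_day_of_month(date(2016, 2, 1)))   # 2016-02-29 (leap year)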


Sky CroeserMotherhood, hope, and 76% less snark

Oh, hi.

I had that baby I was growing in my last post. She’s an amazing little person. She’s learned to clap her hands in the last week, and I am full of wonder and delight. She’s been sick, and I fretted for hours about her rash. (Should I call the doctor? Should I not? Is it a purple rash? Is it getting worse?)

I’m back at work, sitting in my office, relieved to have time to read and write and teach, and missing her fiercely. I feel this all at once: the relief of time and space away, and the missing. I think about her all the time, but also get bored by the way motherhood enfolds me.

At home, we walk in endless circles around the house as she holds out a hand for mine, demands the other hand, then drags me off to open cupboards or visit each room in turn. (At the same time, I love to see her do this: so clearly show me what she wants, so clearly refuse if I put my right hand in her left, or give her only one hand.)


Motherhood has changed me, and I don’t know how I feel about that. (I don’t have much time to work out how I feel about anything.) It is almost physically painful to think of parents losing children to war or violence. Of wanting to feed a hungry child and not being able to. I have the luxury of being able to look away, to take a break from imagining these scenes.

For the last few months the change to my work has been in the time and energy available. Everything needs to be broken up into smaller, more digestible chunks, to manage in nap times and evenings and while so very tired most of the time.

As I finished my undergraduate degree, I decided to focus on researching movements that gave me hope. Imperfect, complex movements with many flaws, but nevertheless full of people trying to change things for the better. I wanted, and want, to believe that we have the potential to change this. That hungry children can be fed, that we can look after our neighbours, that we can resist and fight back against tides of hatred and fear.

Last year, I found myself writing a presentation and a book chapter that shifted to focusing on the flaws in these movements. I was tired, and I got snarky and impatient with the imperfection of activists (particularly white men) who didn’t listen and tried to define what counts as ‘radical’ and what doesn’t. I still feel that impatience, but that work was depressing. The snark of it was satisfying, but I’m not sure of the use of it and frankly I am subject to many of the same critiques.

As I try to find my way back into research and writing, I’m trying to recommit to finding threads of hope. Critique is important, especially the critiques I need to listen to from the margins of academia and activism: of white women’s role in feminism(s), of settler societies, of academic power structures. In my own writing I want to be finding materials to stitch into alternatives. I want to be finding spaces where my voice can be useful, rather than just adding more noise.

And it’s a terrible cliche, but the urgency of it comes through when I look at this tiny person and imagine other parents doing the same, hoping for safety and flourishing and care for these wonders we are trying to nourish.



Krebs on SecurityMicrosoft Patch Tuesday, February 2018 Edition

Microsoft today released a bevy of security updates to tackle more than 50 serious weaknesses in Windows, Internet Explorer/Edge, Microsoft Office and Adobe Flash Player, among other products. A good number of the patches issued today ship with Microsoft’s “critical” rating, meaning the problems they fix could be exploited remotely by miscreants or malware to seize complete control over vulnerable systems — with little or no help from users.

February’s Patch Tuesday batch includes fixes for at least 55 security holes. Some of the scarier bugs include vulnerabilities in Microsoft Outlook, Edge and Office that could let bad guys or bad code into your Windows system just by getting you to click on a booby trapped link, document or visit a compromised/hacked Web page.

As per usual, the SANS Internet Storm Center has a handy rundown on the individual flaws, neatly indexing them by severity rating, exploitability and whether the problems have been publicly disclosed or exploited.

One of the updates addresses a pair of serious vulnerabilities in Adobe Flash Player (which ships with the latest version of Internet Explorer/Edge). As KrebsOnSecurity warned last week, there are active attacks ongoing against these Flash vulnerabilities.

Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability. Chrome also bundles Flash, but blocks it from running on all but a handful of popular sites, and then only after user approval.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis. Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits.

The latest standalone version of Flash that addresses these bugs is available for Windows, Mac, Linux and Chrome OS. But most users probably would be better off manually hobbling or removing Flash altogether, since so few sites actually require it still. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

People running Adobe Reader or Acrobat also need to update, as Adobe has shipped new versions of these products that fix at least 39 security holes. Adobe Reader users should know there are alternative PDF readers that aren’t so bloated or full of security issues. Sumatra PDF is a good, lightweight alternative.

Experience any issues, glitches or problems installing these updates? Sound off about it in the comments below.

TEDNew podcast alert: WorkLife with Adam Grant, a TED original, premieres Feb. 28

Adam Grant to Explore the Psychology of Unconventional Workplaces as Host of Upcoming New TED Original Podcast “WorkLife”

Organizational psychologist, professor, bestselling author and TED speaker Adam Grant is set to host a new TED original podcast series titled WorkLife with Adam Grant, which will explore unorthodox work cultures in search of surprising and actionable lessons for improving listeners’ work lives.

Beginning Wednesday, February 28, each weekly episode of WorkLife will center around one extraordinary workplace—from an award-winning TV writing team racing against the clock, to a sports team whose culture of humility propelled it to unexpected heights. In immersive interviews that take place in both the field and the studio, Adam brings his observations to vivid life – and distills useful insights in his friendly, accessible style.

“We spend a quarter of our lives in our jobs. This show is about making all that time worth your time,” says Adam, the bestselling author of Originals, Give and Take, and Option B with Sheryl Sandberg. “In WorkLife, we’ll take listeners inside the minds of some fascinating people in some truly unusual workplaces, and mix in fresh social science to reveal how we can lead more creative, meaningful, and generous lives at work.”

Produced by TED in partnership with Pineapple Street Media and Transmitter Media, WorkLife is TED’s first original podcast created in partnership with a TED speaker. Its immersive, narrative format is designed to offer audiences a new way to explore TED speaker ideas in depth. Adam’s talks “Are you a giver or a taker?” and “The surprising habits of original thinkers” have together been viewed more than 11 million times in the past two years.

The show marks TED’s latest effort to test new content formats beyond the nonprofit’s signature first-person TED talk. Other recent TED original content experiments include Sincerely, X, an audio series featuring talks delivered anonymously;  Small Thing Big Idea, a Facebook Watch video series about everyday designs that changed the world; and the Indian prime-time live-audience television series TED Talks India: Nayi Soch, hosted by Bollywood star and TED speaker Shah Rukh Khan.

“We’re aggressively developing and testing a number of new audio and video programs that support TED’s mission of ‘Ideas Worth Spreading,’” said TED head of media and WorkLife co-executive producer Colin Helms. “In every case, our speakers and their ideas remain the focus, but with fresh formats, styles and lengths, we can reach and appeal to even more curious audiences, wherever they are.”

WorkLife debuts Wednesday, February 28 on Apple Podcasts, the TED Android app, or wherever you like to listen to podcasts. Season 1 features eight episodes, roughly 30 minutes each, plus two bonus episodes. It’s sponsored by Accenture, Bonobos, JPMorgan Chase & Co., and Warby Parker. New episodes will be made available every Wednesday.

CryptogramJumping Air Gaps

Nice profile of Mordechai Guri, who researches a variety of clever ways to steal data over air-gapped computers.

Guri and his fellow Ben-Gurion researchers have shown, for instance, that it's possible to trick a fully offline computer into leaking data to another nearby device via the noise its internal fan generates, by changing air temperatures in patterns that the receiving computer can detect with thermal sensors, or even by blinking out a stream of information from a computer hard drive LED to the camera on a quadcopter drone hovering outside a nearby window. In new research published today, the Ben-Gurion team has even shown that they can pull data off a computer protected by not only an air gap, but also a Faraday cage designed to block all radio signals.

Here's a page with all the research results.

BoingBoing post.

Worse Than FailureBudget Cuts

Xavier was the head of a 100+ person development team. Like many enterprise teams, they had to support a variety of vendor-specific platforms, each with their own vendor-specific development environment and its own licensing costs. All the licensing costs were budgeted for at year’s end, when Xavier would submit the costs to the CTO. The approval was a mere formality, ensuring his team would have everything they needed for another year.

Unfortunately, that CTO left to pursue another opportunity. Enter Greg, a new CTO who joined the company from the financial sector. Greg was a penny-pincher on a level that would make the novelty coin-smasher you find at zoos and highway rest-stops jealous. Greg started cutting costs left and right immediately. When the time came for budgeting development tool licensing, Greg threw down the gauntlet on Xavier’s “wild” spending.

Alan Rickman, in Galaxy Quest, delivering the line, 'By Grabthar's Hammer, what a savings' while looking like his soul is dying forever. "By Grabthar's Hammer, what a savings."

“Have a seat, X-man,” Greg offered, in a faux-friendly voice. “Let’s get to the point. I looked at your proposal for all of these tools your team supposedly ‘needs’. $40,000 is absurd! Do you think we print money? If your team were any good, they should be able to do everything they need without these expensive, gold-plated programs!”

Xavier was taken aback by Greg’s brashness, but he was prepared for a fight. “Greg, these tools are vital to our development efforts. There are maybe a few products we could do without, but most of them are absolutely required. Even the more ‘optional’ ones, like our refactoring and static analysis tools, they save us money and time and improve code quality. Not having them would be more expensive than the license.”

Greg scowled and tented his fingers. “There is no chance I’m approving this as it stands. Go back and figure out what you can do without. If you don’t cut this cost down, I’ll find an easier way to reduce expenses… like by cutting bonuses… or staff.”

Xavier spent the next few days having an extensive tool review with his lead developers. Many of the vendor-specific tools had no alternative, but there were a few third party tools they could do without, or use an open-source equivalent. Across the team of 100+ developers, the net cost savings would be $4,000, or 10%.

Xavier didn’t expect that to make Greg happy, but it was the best they could do. The following morning, Xavier presented his findings in Greg’s office, and it went smoother than expected. “Listen, X. I want this cost down even more, but we’re running out of time to approve this year’s budget. Since I did so much work cutting costs in other ways, I’ll submit this to finance. But enjoy your last year of all these fancy tools! Next year, things will be different!”

Xavier was relieved he didn’t have to fight further. Perhaps, over the next year, he could further demonstrate the necessity of their tooling. With the budget resolved, Xavier had some much-overdue vacation time. He had saved up enough PTO to spend a month in the Australian Outback. Development tools and budgets would be the furthest thing from his mind.

Three great weeks in the land down under were enhanced by being mostly cut off from communications from anyone in the company. During a trip through a town with cell phone reception, Xavier decided to check his voicemail, to make sure the sky wasn’t falling. Dave, his #2 in command, had left an urgent message two days prior.

“Xavier!” Dave shouted on the other end. “You need to get back here soon. Greg never paid the invoices for anything in our stack. We’re sitting here with a huge pile of unlicensed stuff. We’ve been racking up unlicensed usage and support costs, and Greg is going to flip when he sees our monthly statements.” With deep horror, Dave added, “One of the licenses he didn’t pay was for Oracle!”

Xavier reluctantly left the land of dingoes and wallabies to head back home. He arrived just about the same time the first vendor calls demanding payment did. The costs from just three weeks of unlicensed usage of enterprise software were astronomical. Certainly more than just buying the licenses would have been in the first place. Xavier scheduled a meeting with Greg to decide what to do next.

The following Monday, the dreaded meeting was on. “Sit,” Greg said. “I have some good news, and some bad news. The good news is that I’ve found a way to pay these ridiculous charges your team racked up.” Xavier leaned forward in his chair, eager to learn how Greg had pulled it off. “The bad news is that I’ve identified a redundant position- yours.”

Xavier slumped into his chair.

Greg continued. “While you were gone, I realized we were in quite capable hands with Dave, and his salary is quite a bit lower than yours. Coincidentally, the original costs and these ridiculous penalties add up to an amount just a little less than your annual salary. I guess you’re getting your wish: the development team can keep the tools you insist they need to do their jobs. It seems you were right about saving money in the long run, too.”

Xavier left Greg’s office, stunned. On his way out for the last time, he stopped by Dave to congratulate him on the new promotion.

“Oh,” Dave said, sourly, “it’s not a promotion. They’re just eliminating your position. What, you think Greg would give me a raise?”


Don MartiTwo visions of GDPR

As far as I can tell, there are two sets of ambitious predictions about GDPR.

One is the VRM vision. Doc Searls writes, on ProjectVRM:

I am sure Google, Facebook and lesser purveyors of advertising online will find less icky ways to stay in business; but it is becoming clear that next May 25, when the GDPR goes into full effect, will be an extinction-level event for tracking-based advertising (aka adtech) as a business model.

Big impact? Not so fast. There's also a "business as usual" story, and that one, you'll find at Digital Advertising Consent.

Our complex ecosystem of companies must cooperate more closely than ever before to meet the transparency and consent requirements of European data protection law.

According to the adtech firms, well, maybe there will be more Bürokratie, more pointless dialogs that users have to click through, and one more line item, "GDPR compliance", to come out of the publisher's share, of course, but the second vision of GDPR is essentially just adtech/adfraud as usual. Upgrade to the new version of OpenRTB, and move along, nothing to see here.

Personally, I'm not buying either one of these GDPR visions. Because, just for fun and also because reasons, I run my own mail server.

And every little decision I have to make about how to configure the damn thing is based on playing a game with email spammers. Regulation is a part of my complete breakfast, but it's not the whole story.

The government doesn't give you freedom from spam. You have to take it for yourself, one filtering rule at a time. Or, do what most people do, and find a company that does it for you, but it has to be a company that you trust with your information.

A mail sender's decision to comply, or not comply, with some regulation is a bit of information. That feeds into the software that makes the final decision: inbox, spam folder, or reject. When a spam message complies with the regulations of some country, my mail server doesn't say, "Oh, wow, compliant! I can skip all the other checks and send this one straight to the inbox!" It uses the regulation compliance along with other information to make that decision.

So whatever extra consent forms that surveillance marketers are required to send by GDPR? They're not the final decision on What The User Must See. They're just data, coming over the network.

Some of that data will be interpreted to mean that this request is an obvious mismatch with how the user chooses to share their info. The user might not even see those consent forms, or the browser might pop up a notification:

4 requests to do creepy shit, that's obviously against your preferences, already denied. Isn't this the best browser ever?

(No, I don't write copy for browser notifications. But you get the idea.)

Browsers that implement tracking protection might end up with a feature where they detect requests for permission to do things that the user has already said no to—by turning on tracking protection in the first place—and auto-deny them.
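A sketch of what that auto-deny might look like, with hypothetical purpose strings, since no browser exposes such an API today:

    # Hypothetical auto-deny logic for a tracking-protection feature: if the
    # user has already opted out of cross-site tracking, consent prompts that
    # ask for exactly that are answered "no" without ever being shown.
    BLOCKED_PURPOSES = {"cross-site-tracking", "ad-personalization"}

    def handle_consent_request(purpose, tracking_protection_on):
        if tracking_protection_on and purpose in BLOCKED_PURPOSES:
            return "auto-denied"   # the user never sees the dialog
        return "show-dialog"

    print(handle_consent_request("cross-site-tracking", True))   # auto-denied
    print(handle_consent_request("newsletter-signup", True))     # show-dialog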

Legit email senders had to learn "deliverability," the art and science of making legit mail look legit so that it can get past email spam filters. Legit advertisers will have to learn that users aren't identical and spherical, users choose tools to implement their data sharing preferences, and that regulatory compliance is only part of the job.

Should web browsers adopt Google’s new selective ad blocking tech?


Content recommendation services Outbrain and Taboola are no longer a guaranteed source of revenue for digital publishers

CryptogramCabinet of Secret Documents from Australia

This story of leaked Australian government secrets is unlike any other I've heard:

It begins at a second-hand shop in Canberra, where ex-government furniture is sold off cheaply.

The deals can be even cheaper when the items in question are two heavy filing cabinets to which no-one can find the keys.

They were purchased for small change and sat unopened for some months until the locks were attacked with a drill.

Inside was the trove of documents now known as The Cabinet Files.

The thousands of pages reveal the inner workings of five separate governments and span nearly a decade.

Nearly all the files are classified, some as "top secret" or "AUSTEO", which means they are to be seen by Australian eyes only.

Yes, that really happened. The person who bought and opened the file cabinets contacted the Australian Broadcasting Corp, who is now publishing a bunch of it.

There's lots of interesting (and embarrassing) stuff in the documents, although most of it is local politics. I am more interested in the government's reaction to the incident: they're pushing for a law making it illegal for the press to publish government secrets it received through unofficial channels.

"The one thing I would point out about the legislation that does concern me particularly is that classified information is an element of the offence," he said.

"That is to say, if you've got a filing cabinet that is full of classified information ... that means all the Crown has to prove if they're prosecuting you is that it is classified ­ nothing else.

"They don't have to prove that you knew it was classified, so knowledge is beside the point."


Many groups have raised concerns, including media organisations who say it would unfairly target journalists trying to do their job.

But really anyone could be prosecuted just for possessing classified information, regardless of whether they know about it.

That might include, for instance, if you stumbled across a folder of secret files in a regular skip bin while walking home and handed it over to a journalist.

This illustrates a fundamental misunderstanding of the threat. The Australian Broadcasting Corp gets their funding from the government, and was very restrained in what they published. They waited months before publishing as they coordinated with the Australian government. They allowed the government to secure the files, and then returned them. From the government's perspective, they were the best possible media outlet to receive this information. If the government makes it illegal for the Australian press to publish this sort of material, the next time it will be sent to the BBC, the Guardian, the New York Times, or Wikileaks. And since people no longer read their news from newspapers sold in stores but on the Internet, the result will be just as many people reading the stories with far fewer redactions.

The proposed law is older than this leak, but the leak is giving it new life. The Australian opposition party is being cagey on whether they will support the law. They don't want to appear weak on national security, so I'm not optimistic.

EDITED TO ADD (2/8): The Australian government backed down on that new security law.

EDITED TO ADD (2/13): Excellent political cartoon.

CryptogramPoor Security at the UK National Health Service

The Guardian is reporting that "every NHS trust assessed for cyber security vulnerabilities has failed to meet the standard required."

This is the same NHS that was debilitated by WannaCry.

EDITED TO ADD (2/13): More news.

And don't think that US hospitals are much better.

Cory DoctorowThe Man Who Sold the Moon, Part 04 [FIXED]

Here’s part four of my reading (MP3) (part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.



Sociological ImagesWhat’s That Fact? A Tricky Graph on Terror

The Star Tribune recently ran an article about a new study from George Washington University tracking cases of Americans who traveled to join jihadist groups in Syria and Iraq since 2011. The print version of the article was accompanied by a graph showing that Minnesota has the highest rate of cases in the study. TSP editor Chris Uggen tweeted the graph, noting that this rate represented a whopping seven cases in the last six years.

Here is the original data from the study next to the graph that the paper published:


Social scientists often focus on rates when reporting events, because it makes cases easier to compare. If one county has 300 cases of the flu, and another has 30,000, you wouldn't panic about an epidemic in the second county if it had a city with many more people. But relying on rates to describe extremely rare cases can be misleading.

For example, the data show that this graph misses some key information. California and Texas had more individual cases than Minnesota, but their large populations hide this difference in the rates. Sorting by rates here makes Minnesota look a lot worse than other states, while the number of cases is not dramatically different.
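To see how that happens, here is a small Python illustration. Minnesota's seven cases come from the article; the California and Texas counts are placeholders (the study says only that they were higher), and the populations are rough 2017 figures in millions:

    # Rates make rare events comparable, but with single-digit counts they
    # can mislead: the smallest raw count can still produce the largest rate.
    states = {
        # name: (cases, population in millions)
        "Minnesota":  (7, 5.6),
        "California": (9, 39.5),   # hypothetical count
        "Texas":      (8, 28.3),   # hypothetical count
    }

    for name, (cases, pop_m) in sorted(states.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
        print(f"{name}: {cases} cases, {cases / pop_m:.2f} per million residents")
    # Minnesota: 7 cases, 1.25 per million residents
    # Texas: 8 cases, 0.28 per million residents
    # California: 9 cases, 0.23 per million residents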

As far as I can tell, this chart only appeared in the print newspaper photographed above and not on the online story. If so, this chart only went to print audiences. Today we hear a lot of concern about the impact of “filter bubbles,” especially online, and the spread of misleading information. What concerns me most about this graph is how it shows the potential impact of offline filter bubbles in local communities, too.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Krebs on SecurityDomain Theft Strands Thousands of Web Sites

Newtek Business Services Corp. [NASDAQ:NEWT], a Web services conglomerate that operates more than 100,000 business Web sites and some 40,000 managed technology accounts, had several of its core domain names stolen over the weekend. The theft shut off email and stranded Web sites for many of Newtek’s customers.

An email blast Newtek sent to customers late Saturday evening made no mention of a breach or incident, saying only that the company was changing domains due to “increased” security. A copy of that message can be read here (PDF).

In reality, three of their core domains were hijacked by a Vietnamese hacker, who replaced the login page many Newtek customers used to remotely manage their Web sites (webcontrolcenter[dot]com) with a live Web chat service. As a result, Newtek customers seeking answers to why their Web sites no longer resolved correctly ended up chatting with the hijacker instead.

The PHP Web chat client that the intruder installed on Webcontrolcenter[dot]com, a domain that many Newtek customers used to manage their Web sites with the company. The perpetrator can be seen in this chat using the name “admin.”

In a follow-up email sent to customers 10 hours later (PDF), Newtek acknowledged the outage was the result of a “dispute” over three domains, webcontrolcenter[dot]com, thesba[dot]com, and crystaltech[dot]com.

“We strongly request that you eliminate these domain names from all your corporate or personal browsers, and avoid clicking on them,” the company warned its customers. “At this hour, it has become apparent that as a result over the dispute for these three domain names, we do not currently have control over the domains or email coming from them.”

The warning continued: “There is an unidentified third party that is attempting to chat and may engage with clients when visiting the three domains. It is imperative that you do not communicate or provide any sensitive data at these locations.”

Newtek did not respond to requests for comment.

Domain hijacking is not a new problem, but it can be potentially devastating to the victim organization. In control of a hijacked domain, a malicious attacker could seamlessly conduct phishing attacks to steal personal information, or use the domain to foist malicious software on visitors.

Newtek is not just a large Web hosting firm: It aims to be a one-stop shop for almost any online service a small business might need. As such, it’s a mix of very different business units rolled up into one since its founding in 1998, including lending solutions, HR, payroll, managed cloud solutions, group health insurance and disaster recovery solutions.

“NEWT’s tentacles go deep into their client’s businesses through providing data security, human resources, employee benefits, payments technology, web design and hosting, a multitude of insurance solutions, and a suite of IT services,” reads a Sept. 2017 profile of the company at SeekingAlpha, a crowdsourced market analysis publication.

Newtek’s various business lines. Source: Newtek.

Reached via the Web chat client he installed at webcontrolcenter[dot]com, the person who claimed responsibility for the hijack said he notified Newtek five days ago about a “bug” he found in the company’s online operations, but that he received no reply.

A Newtek customer who resells the company’s products to his clients said he had to spend much of the weekend helping clients regain access to email accounts and domains as a result of the incident. The customer, who asked to remain anonymous, said he was shocked that Newtek made little effort to convey the gravity of the hijack to its customers — noting that the company’s home page still makes no mention of the incident.

“They also fail to make it clear that any data sent to any host under the domain could be recorded (email passwords, web credentials, etc.) by the attacker,” he said. “I’m floored at how bad their communication was to their users. I’m not surprised, but concerned, that they didn’t publish the content in the emails directly on their website.”

The source said that at a minimum Newtek should have expired all passwords immediately and required resets through non-compromised hosts.

“And maybe put a notice about this on their home page instead of relying on email, because a lot of my customers can’t get email right now as a result of this,” the source said.

There are a few clues suggesting that the perpetrator of these domain hijacks is telling the truth both about his nationality and about having found a bug in Newtek's service. Two of the hijacked domains were moved to a Vietnamese domain registrar.

This individual gave me an email address to contact him at — — although he has so far not responded to questions beyond promising to reply in Vietnamese. The email is tied to two different Vietnamese-language social networking profiles.

A search at Domaintools indicates that this address is linked to the registration records for four domains, including one (giakiemnew[dot]com) that was recently hosted on a dedicated server operated by Newtek’s legacy business unit Crystaltek [full disclosure: Domaintools is an advertiser on this site]. Recall that Crystaltek[dot]com was among the three hijacked domains.

In addition, the domain giakiemnew[dot]com was registered through Newtek Technology Services, a domain registration service offered by Newtek. This suggests that the perpetrator was in fact a customer of Newtek, and perhaps did discover a vulnerability while using the service.

CryptogramInternet Security Threats at the Olympics

There are a lot:

The cybersecurity company McAfee recently uncovered a cyber operation, dubbed Operation GoldDragon, attacking South Korean organizations related to the Winter Olympics. McAfee believes the attack came from a nation state that speaks Korean, although it has no definitive proof that this is a North Korean operation. The victim organizations include ice hockey teams, ski suppliers, ski resorts, tourist organizations in Pyeongchang, and departments organizing the Pyeongchang Olympics.

Meanwhile, a Russia-linked cyber attack has already stolen and leaked documents from other Olympic organizations. The so-called Fancy Bear group, or APT28, began its operations in late 2017 -- according to Trend Micro and ThreatConnect, two private cybersecurity firms -- eventually publishing documents in 2018 outlining the political tensions between IOC officials and World Anti-Doping Agency (WADA) officials who are policing Olympic athletes. It also released documents specifying exceptions to anti-doping regulations granted to specific athletes (for instance, one athlete was given an exception because of his asthma medication). The most recent Fancy Bear leak exposed details about a Canadian pole vaulter's positive results for cocaine. This group has targeted WADA in the past, specifically during the 2016 Rio de Janeiro Olympics. Assuming the attribution is right, the action appears to be Russian retaliation for the punitive steps against Russia.

A senior analyst at McAfee warned that the Olympics may experience more cyber attacks before closing ceremonies. A researcher at ThreatConnect asserted that organizations like Fancy Bear have no reason to stop operations just because they've already stolen and released documents. Even the United States Department of Homeland Security has issued a notice to those traveling to South Korea to remind them to protect themselves against cyber risks.

One presumes the Olympics network is sufficiently protected against the more pedestrian DDoS attacks and the like, but who knows?

EDITED TO ADD: There was already one attack.

Worse Than FailureCoded Smorgasbord: If It's Stupid and It Works

On a certain level, if code works, it can only be so wrong. For today, we have a series of code blocks that work… mostly. Despite that, each one leaves you scratching your head, wondering how, exactly, this happened.

Lisa works at a web dev firm that just picked up a web app from a client. They didn’t have much knowledge about what it was or how it worked beyond, “It uses JQuery?”

Well, they’re technically correct:

if ($(document.getElementById("really_long_id_of_client_side_element")).checked) {
    $(document.getElementById("xxxx1")).css({ "background-color": "#FFFFFF", "color": "Black" });
    $(document.getElementById("xxxx2")).css({ "background-color": "#FFFFFF", "color": "Black" });
    $(document.getElementById("xxxx3")).css({ "background-color": "#FFFFFF", "color": "Black" });
    $(document.getElementById("xxxx4")).css({ "background-color": "#FFFFFF", "color": "Black" });
}

In this case, they’re ignoring the main reason people use jQuery: the ability to easily and clearly fetch DOM elements with CSS selectors. But they do use the css function as intended, giving them an object-oriented way to control styles. Then again, one probably shouldn’t set style properties directly from JS anyway; that’s what CSS classes are for. And why mix #FFFFFF and Black when you could stick to either white or #000000?
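For contrast, here is a minimal sketch of the idiomatic version, under the assumption that a hypothetical plain-row CSS class carries the white-background/black-text styling:

if ($("#really_long_id_of_client_side_element").prop("checked")) {
    // One selector grabs all four elements; the (hypothetical) .plain-row
    // class in the stylesheet carries the background and text colors.
    $("#xxxx1, #xxxx2, #xxxx3, #xxxx4").addClass("plain-row");
}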

Regardless, it does in fact use JQuery.

Dave A was recently trying to debug a test in Ruby, and found this unique construct:

if status == status = 1 || status = 2 || status = 3
  @msg.stubs(:is_reply?).returns true
else
  @msg.stubs(:is_reply?).returns false
end

This is an interesting case of syntactically correct nonsense that looks incorrect. status = 1 returns a 1, a “truthy” value, thus short circuiting the || operator. In this code, if status is undefined, it returns true and sets status equal to 1. The rest of the time it returns false and sets status equal to 1.

What the developer meant to do was check if status was 1, 2 or 3, e.g. if status == 1 || status == 2…, or, to use a more Ruby idiom: if [1, 2, 3].include? status. Still, given the setup for the test, the code actually worked until Dave changed the pre-conditions.
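For reference, a minimal sketch of what the test presumably intended, reusing the stubbing calls from the original snippet:

# Hypothetical reconstruction: compare against the values, don't assign them
if [1, 2, 3].include?(status)
  @msg.stubs(:is_reply?).returns true
else
  @msg.stubs(:is_reply?).returns false
end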

Meanwhile, Leonardo Scur came across this JavaScript reinvention of an array:

tags = {
  "tags": {
    "0": {"id": "asdf"},
    "1": {"id": "1234"},
    "2": {"id": "etc"}
  },
  "tagsCounter": 3,
  // … below this are reimplementations of common array methods built to work on `tags`
}

This was part of a trendy front-end framework he was using, and it’s obvious that arrays indexed by integers are simply too mainstream. Strings are where it’s at.

This library is in wide use, meant to add simple tagging widgets to an AngularJS application. It also demonstrates a strange way to reinvent the array.
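For comparison, a plain array already does everything the hand-rolled structure does, counter included; a minimal sketch:

// A real array tracks its own length and ships with map/filter/forEach for free
var tags = [
  { id: "asdf" },
  { id: "1234" },
  { id: "etc" }
];
console.log(tags.length); // 3, with no hand-maintained "tagsCounter"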

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Cory DoctorowHey, Australia and New Zealand, I’m coming to visit you!

I’m about to embark on a tour of Australia and New Zealand to support my novel Walkaway, with stops in Perth, Melbourne, Sydney, Adelaide, and Wellington! I really hope you’ll come out and say hello!

Perth: Feb 24-25, Perth Festival

Melbourne: Feb 27: An expansive conversation about the imperfect present and foreseeable future with CS Pacat, St Kilda Town Hall, 19h

Melbourne: Feb 28: How do writers get paid?, Wheeler Centre, 1815h

Sydney: Mar 1: What should we do about democracy?, City Recital Hall, 1930h

Adelaide: Mar 4-6: Adelaide Festival

Wellington: Mar 9-11: Writers and Readers Week

Wellington: Mar 12: NetHui one-day event on copyright


Don MartiTeam A vs. Team B

Let's run a technical challenge on the Internet. Team A vs. Team B.

Team A gets to work where they want, when they want. Team B has to work in an open-plan office, with people walking behind them, talking on the phone, doing all that annoying office stuff.

Members of Team A get paid for successful work within weeks or months. Members of Team B get a base salary that they have to spend on rent in an expensive location, but just might get paid extra for successful work in four years.

Team A will let anyone try to join, and those who aren't successful have to drop out quickly. Team B will only let members who are a "good cultural fit" join, and it takes a while to get rid of an unsuccessful member.

Team A can deploy unproven work for real-world testing, using infrastructure that they get for free on the Internet. Team B can only deploy their work when production-ready, on infrastructure they have to pay for.

If Team A breaks the rules, the penalty is that they have to spend a little money to register new domain names. If Team B breaks the rules, they risk lengthy regulatory and/or legal consequences.

Team A scores a win any time they can beat whoever is the weakest member of Team B at that time. Team B can only score a win when they can consistently defeat all of the most active members of Team A.

Team A is adfraud.

Why is so much marketing money being bet on Team B?


Rondam RamblingsA Multilogue on Free Will

[Inspired by this comment thread.] The Tortoise is standing next to a railroad track when Achilles, an ancient Greek warrior, happens by.  In the distance, a train whistle sounds. Tortoise: Greetings, friend Achilles.  You have impeccable timing.  I could use your assistance. Achilles: Hello, Mr. T.  Always happy to help.  What seems to be the trouble? Tortoise: Look there. Achilles: Why, it

Don MartiFOSDEM videos

Check it out. The videos from the Mozilla room at FOSDEM are up, and here's me, talking about bug futures.

All FOSDEM videos

And, yes, the video link Just Works. Bonus link to some background on that: The Fight For Patent-Unencumbered Media Codecs Is Nearly Won by Robert O'Callahan

Another bonus link: FOSDEM video project, including what those custom boxes do.


CryptogramCalling Squid "Calamari" Makes It More Appetizing

Research shows that what a food is called affects how we think about it.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Google AdsenseAdSense now supports Tamil

Continuing our commitment to support more languages and encourage content creation on the web, we’re excited to announce the addition of Tamil, a language spoken by millions of Indians, to the family of AdSense supported languages.

AdSense provides an easy way for publishers to monetize the content they create in Tamil, and helps advertisers connect with a Tamil-speaking audience through relevant ads.

To start monetizing your Tamil content website with Google AdSense:

  1. Check the AdSense program policies and make sure your website is compliant.
  2. Sign up for an AdSense account.
  3. Add the AdSense code to start displaying relevant ads to your users.

Welcome to AdSense! Sign Up now.

Posted by: The AdSense Internationalization Team

CryptogramLiving in a Smart Home

In "The House that Spied on Me," Kashmir Hill outfits her home to be as "smart" as possible and writes about the results.

Worse Than FailureError'd: Whatever Happened to January 2nd?

"Skype for Business is trying to tell me something...but I'm not sure exactly what," writes Jeremy W.


"I was looking for a tactile switch. And yes, I absolutely do want an operating switch," writes Michael B.


Chris D. wrote, "While booking a hair appointment online, I found that the calendar on the website was a little confused as to how calendars work."


"Don't be fooled by the image on the left," wrote Dan D., "If you get caught in the line of fire, you will assuredly get soaked!"


Jonathan G. writes, "My local bar's Facebook ad shows that, depending on how the viewer frames it, even an error message can look appealing."


"I'll have to check my calendar - I may or may not have plans on the Nanth," wrote Brian.


[Advertisement] Easily create complex server configurations and orchestrations using both the intuitive, drag-and-drop editor and the text/script editor.  Find out more and download today!

Planet Linux AustraliaOpenSTEM: Australia Day in the early 20th century

Australia Day and its commemoration on 26 January, has long been a controversial topic. This year has seen calls once again for the date to be changed. Similar calls have been made for a long time. As early as 1938, Aboriginal civil rights leaders declared a “Day of Mourning” to highlight issues in the Aboriginal […]


Krebs on SecurityU.S. Arrests 13, Charges 36 in ‘Infraud’ Cybercrime Forum Bust

The U.S. Justice Department announced charges on Wednesday against three dozen individuals thought to be key members of ‘Infraud,” a long-running cybercrime forum that federal prosecutors say cost consumers more than a half billion dollars. In conjunction with the forum takedown, 13 alleged Infraud members from the United States and six other countries were arrested.

A screenshot of the Infraud forum, circa Oct. 2014. Like most other crime forums, it had special sections dedicated to vendors of virtually every kind of cybercriminal goods or services imaginable. Click to enlarge.

Started in October 2010, Infraud was short for “In Fraud We Trust,” and collectively the forum referred to itself as the “Ministry of Fraudulently [sic] Affairs.” As a mostly English-language fraud forum, Infraud attracted nearly 11,000 members from around the globe who sold, traded and bought everything from stolen identities and credit card accounts to ATM skimmers, botnet hosting and malicious software.

“Today’s indictment and arrests mark one of the largest cyberfraud enterprise prosecutions ever undertaken by the Department of Justice,” said John P. Cronan, acting assistant attorney general of the Justice Department’s criminal division. “As alleged in the indictment, Infraud operated like a business to facilitate cyberfraud on a global scale.”

The complaint released by the DOJ lists 36 Infraud members — some only by their hacker nicknames, others by their alleged real names and handles, and still others just as “John Does.” Having been a fairly regular lurker on Infraud over the past seven years who has sought to independently identify many of these individuals, I can say that some of these names and nick associations sound accurate but several do not.

The government says the founder and top member of Infraud was Svyatoslav Bondarenko, a hacker from Ukraine who used the nicknames “Rector” and “Helkern.” The first nickname is well supported by copies of the forum obtained by this author several years back; indeed, Rector’s profile listed him as an administrator, and Rector can be seen on countless Infraud discussion threads vouching for sellers who had paid the monthly fee to advertise their services in “sticky” threads on the forum.

However, I’m not sure the Helkern association with Bondarenko is accurate. In December 2013, just days after breaking the story about the theft of some 40 million credit and debit cards from retail giant Target, KrebsOnSecurity posted a lengthy investigation into the identity of “Rescator” — the hacker whose cybercrime shop was identified as the primary vendor of cards stolen from Target.

That story showed that Rescator changed his nickname from Helkern after Helkern’s previous cybercrime forum (Darklife) got massively hacked, and it presented clues indicating that Rescator/Helkern was a different Ukrainian man named Andrey Hodirevski. For more on that connection, see Who’s Selling Cards from Target.

Also, Rescator was a separate vendor on Infraud, and there are no indications that I could find suggesting that Rector and Rescator were the same people. Here is Rescator’s most recent sales thread for his credit card shop on Infraud — dated almost a year after the Target breach. Notice the last comment on that thread alleges that Rescator had recently been arrested and that his shop was being run by law enforcement officials: 

Another top administrator of Infraud used the nickname “Stells.” According to the Justice Department, Stells’ real name is Sergey Medvedev. The government doesn’t describe his exact role, but it appears to have been administering the forum’s escrow service (see screenshot below).

Most large cybercrime forums have an escrow service, which holds the buyer’s virtual currency until forum administrators can confirm the seller has consummated the transaction acceptably to both parties. The escrow feature is designed to cut down on members ripping one another off — but it also can add considerably to the final price of the item(s) for sale.

In April 2016, Medvedev would take over as the “admin and owner” of Infraud, after he posted a note online saying that Bondarenko had gone missing, the Justice Department said.

One defendant in the case, a well-known vendor of stolen credit and debit cards who goes by the nickname “Zo0mer,” is listed as a John Doe. But according to a New York Times story from 2006, Zo0mer’s real name is Sergey Kozerev, and he hails from St. Petersburg, Russia.

The indictments also list two other major vendors of stolen credit and debit cards: hackers who went by the nicknames “Unicc” and “TonyMontana” (the latter a reference to the fictional gangster played by Al Pacino in the 1983 movie Scarface). Both hackers have long run their own carding shops, and those shops are still operating today:

Unicc shop, which sells stolen credit card data as well as Social Security numbers and other consumer information that can be used for identity theft.

The government says Unicc’s real name is Andrey Sergeevich Novak. TonyMontana is listed in the complaint as John Doe #1.

TonyMontana’s carding shop.

Perhaps the most successful vendor of skimming devices made to be affixed to ATMs and fuel pumps was a hacker known on Infraud and other crime forums as “Rafael101.” Several of my early stories about new skimming innovations came from discussions with Rafael in which this author posed as an interested buyer and asked for videos, pictures and technical descriptions of his skimming devices.

A confidential source who asked not to be named told me a few years back that Rafael had used the same password for his skimming sales accounts on multiple competing cybercrime forums. When one of those forums got hacked, it enabled this source to read Rafael’s emails (Rafael evidently used the same password for his email account as well).

The source said the emails showed Rafael was ordering the parts for his skimmers in bulk from Chinese e-commerce giant Alibaba, and that he charged a significant markup on the final product. The source said Rafael had the packages all shipped to a Jose Gamboa in Norwalk, Calif — a suburb of Los Angeles. Sure enough, the indictment unsealed this week says Rafael’s real name is Jose Gamboa and that he is from Los Angeles.

A private message from the skimmer vendor Rafael101, sent on a competing cybercrime forum in 2012.

The Justice Department says the arrests in this case took place in Australia, France, Italy, Kosovo, Serbia, the United Kingdom and the United States. The defendants face a variety of criminal charges, including identity theft, bank fraud, wire fraud and money laundering. A copy of the indictment is available here.

CryptogramWater Utility Infected by Cryptocurrency Mining Software

A water utility in Europe has been infected by cryptocurrency mining software. This is a relatively new attack: hackers compromise computers and force them to mine cryptocurrency for them. This is the first time I've seen it infect SCADA systems, though.

It seems that this mining software is benign, and doesn't affect the performance of the hacked computer. (A smart virus doesn't kill its host.) But that's not going to always be the case.

Worse Than FailureCodeSOD: I Take Exception

We've all seen code that ignores errors. We've all seen code that simply rethrows an exception. We've all seen code that wraps one exception for another. The submitter, Mr. O, took exception to this exceptionally exceptional exception handling code.

I was particularly amused by the OutOfMemoryException handler that allocates another exception object, and if that fails, another layer of exception trapping catches it and attempts to allocate yet another exception object. If that fails, it doesn't even try. So that makes this an exceptionally unexceptional exception handler?! (Ouch, my head hurts.)

The submission contains a modest amount of fairly straightforward code to read config files and write assorted XML documents. And it handles exceptions in all of the above ways.

You might note that the exception handling code was unformatted, unaligned and substantially larger than the code it is attempting to protect. To help you out, I've stripped out the fairly straightforward code being protected, and formatted the exception handling code to make it easier to see this exceptional piece of code (you may need to go full-screen to get the full impact).

After all, it's not like exceptions can contain explanatory text, or stack context information...

namespace HotfolderMerger {
  public class Merger : IDisposable {
    public Merger() {
      try {
          object section = ConfigurationManager.GetSection("HFMSettings/DataSettings");
          if (section == null) throw new MergerSetupException();
          _settings = (DataSettings)section;
      } catch (MergerSetupException) {
      } catch (ConfigurationErrorsException ex){
        throw new MergerSetupException("Error in configuration", ex);
      } catch (Exception ex) {
        throw new MergerSetupException("Unexpected error while loading configuration",ex);

    // A whole bunch of regex about as complex as this one...
    private readonly Regex _fileNameRegex = new Regex(@"^(?<System>[A-Za-z0-9]{1,10})_(?<DesignName>[A-Za-z0-9]{1,})_(?<DocumentID>\d{1,})_(?<FileTimeUTC>\d{1,})(_(?<BAMID>\d+))?\.(?<extension>\w{0,3})$");

    public void MergeFiles() {
      try {
          foreach (FileElement filElement in _settings.Filelist) {
            // Lots of declarations here...
            foreach (FileInfo fi in hotfolder.GetFiles()) {
              try {
                  // 35 lines of innocuous code..
              } catch (ArgumentException ex) {
                throw new BasisException(ex, int.Parse(ErrorCodes.MergePreRunArgumentException),     ErrorMessages.MergePreRunArgumentException);
              } catch (ConfigurationException ex) {
                throw new BasisException(ex, int.Parse(ErrorCodes.MergePreRunConfigurationException),ErrorMessages.MergePreRunConfigurationException);
              } catch (Exception ex) {
                throw new UnexpectedMergerException("Unexpected exception while setting up for merge!", ex);
              try {
                  // 23 lines of StreamReader code to load some XML from a file...
              } catch (OutOfMemoryException ex) {
                // OP: so if we're out of memory, how is this new exception going to be allocated? 
                //     Maybe in the wrapping "try/catch Exception" - which allocates a new UnexpectedMergerException object??? Oh, wait...
                throw new BasisException(  ex,int.Parse(ErrorCodes.MergeRunOutOfMemoryException),   ErrorMessages.MergeRunOutOfMemoryException);
              } catch (ConfigurationException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunConfigurationException),ErrorMessages.MergeRunConfigurationException);
              } catch (FormatException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunFormatException),       ErrorMessages.MergeRunFormatException);
              } catch (ArgumentException ex) { 
                throw new BasisException(    ex, int.Parse(ErrorCodes.MergeRunArgumentException),   ErrorMessages.MergeRunArgumentException);
              } catch (SecurityException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunSecurityException),     ErrorMessages.MergeRunSecurityException);
              } catch (IOException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunIOException),           ErrorMessages.MergeRunIOException);
              } catch (NotSupportedException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunNotSupportedException), ErrorMessages.MergeRunNotSupportedException);
              } catch (Exception ex) {
                throw new UnexpectedMergerException("Unexpected exception while merging!", ex);
            // ...
      } catch (UnexpectedMergerException) {
      } catch (BasisException ex) {
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected error while attempting to parse settings prior to merge", ex);

    private static void prepareNewMergeFile(ref XmlTextWriter xtw, string filename, int numfiles) {
      if (string.IsNullOrEmpty(filename))
         throw new BasisException(    int.Parse(ErrorCodes.MergeSetupNullReferenceException),       ErrorMessages.MergeSetupNullReferenceException, "filename parameter was null or empty");
      try {
          // Use XmlTextWriter to concatenate ~30 lines of canned XML...
      } catch (InvalidOperationException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupInvalidOperationException),     ErrorMessages.MergeSetupInvalidOperationException);
      } catch (ArgumentException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupArgumentException),             ErrorMessages.MergeSetupArgumentException);
      } catch (IOException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupIOException),                   ErrorMessages.MergeSetupIOException);
      } catch (UnauthorizedAccessException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupUnauthorizedAccessException),   ErrorMessages.MergeSetupUnauthorizedAccessException);
      } catch (SecurityException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupSecurityException),             ErrorMessages.MergeSetupSecurityException);
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected exception while setting up for merge!", ex);

    private void closeMergeFile(ref XmlTextWriter xtw, ref List<FileInfo> filesComplete, string filename, double i) {
      if (xtw == null)
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeCleanupNullReferenceException, "xtw ref parameter was null");
      if (filesComplete == null)
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeSetupNullReferenceException,   "filesComplete ref parameter was null");
      if (string.IsNullOrEmpty(filename))
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeSetupNullReferenceException,   "filename parameter was null or empty");

      try {
          // ~ 30 lines of XmlTextWriter, StreamWriter and File IO...
      } catch (ArgumentException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupArgumentException),           ErrorMessages.MergeCleanupArgumentException);
      } catch (InvalidOperationException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupInvalidOperationException),   ErrorMessages.MergeCleanupInvalidOperationException);
      } catch (UnauthorizedAccessException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupUnauthorizedAccessException), ErrorMessages.MergeCleanupUnauthorizedAccessException);
      } catch (IOException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupIOException),                 ErrorMessages.MergeCleanupIOException);
      } catch (NullReferenceException ex) {
        throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeCleanupNullReferenceException, "unknown exception details");
      } catch (NotSupportedException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupNotSupportedException),       ErrorMessages.MergeCleanupNotSupportedException);
      } catch (MergerException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupMergerException),             ErrorMessages.MergeCleanupMergerException);
      } catch (SecurityException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupSecurityException),           ErrorMessages.MergeCleanupSecurityException);
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected exception while merging!", ex);
[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet Linux AustraliaRussell Coker: Thinkpad X1 Carbon

I just bought a Thinkpad X1 Carbon to replace my Thinkpad X301 [1]. It cost me $289 with free shipping from an eBay merchant, which is a great deal; a new battery for the Thinkpad X301 alone would have cost about $100.

It seems that laptops aren’t depreciating in value as much as they used to. Grays Online used to reliably have refurbished Thinkpads with manufacturer’s warranty selling for about $300. Now they only have IdeaPads (a cheaper low-end line from Lenovo) at good prices, admittedly $100 to $200 for an IdeaPad is a very nice deal if you want a cheap laptop and don’t need something too powerful. But if you want something for doing software development on the go then you are looking at well in excess of $400. So I ended up buying a second-hand system from an eBay merchant.


I was quite excited to read in the specs that it has an i7 CPU, but now that I have it I've discovered that the i7-3667U CPU scores 3990 according to PassMark [2]. While that is much better than the U9400 in the Thinkpad X301, which scored 968, it's only slightly better than the i5-2520M in my Thinkpad T420, which scored 3582 [3]. I bought the Thinkpad T420 in August 2013 [4]; I had hoped that Moore's Law would result in me getting a system at least twice as fast as my last one, but buying second-hand meant I got a slower CPU. Also, the small form factor of the X series limits the heat dissipation and therefore limits the CPU performance.


Thinkpads have traditionally had the best keyboards, but they are losing that advantage. This system has a keyboard that feels like an Apple laptop keyboard not like a traditional Thinkpad. It still has the Trackpoint which is a major feature if you like it (I do). The biggest downside is that they rearranged the keys. The PgUp/PgDn keys are now by the arrow keys, this could end up being useful if you like the SHIFT-PgUp/SHIFT-PgDn combinations used in the Linux VC and some Xterms like Konsole. But I like to keep my keys by the home keys and I can’t do that unless I use the little finger of my right hand for PgUp/PgDn. They also moved the Home, End, and Delete keys which is really annoying. It’s not just that the positions are different to previous Thinkpads (including X series like the X301), they are different to desktop keyboards. So every time I move between my Thinkpad and a desktop system I need to change key usage.

Did Lenovo not consider that touch typists might use their products?

They also moved the PrtSc key, and the keyboard lacks ScrLk and Pause keys, but I hardly ever use the PrtSc key and never use the other two. The lack of those keys would only be of interest to people who have mapped them to useful functions and people who actually use PrtSc. It is, however, impractical to have a key as easy to accidentally press as PrtSc sitting between the Ctrl and Alt keys.

One significant benefit of the keyboard in this Thinkpad is that it has a backlight instead of having a light on the top of the screen that shines on the keyboard. It might work better than the light above the keyboard and looks much cooler! As an aside I discovered that my Thinkpad X301 has a light above the keyboard, but the key combination to activate it sometimes needs to be pressed several times.


X1 Carbon 1600*900
T420 1600*900
T61 1680*1050
X301 1440*900

Above are the screen resolutions for all my Thinkpads of the last 8 years. The X301 is an anomaly, as I got it from a rubbish pile and it was significantly older than Thinkpads usually are when I get them. It's a bit disappointing that laptop screen resolution isn't increasing much over the years. I know some people have laptops with resolutions as high as 2560*1600 (as high as a high-end phone), but it seems that most laptops are below phone resolution.

Kogan is currently selling the Agora 8+ phone new for $239; even including postage, that would still be cheaper than the $289 I paid for this Thinkpad. There's no reason why new phones should have lower prices and higher screen resolutions than second-hand laptops. The Thinkpad is designed to be a high-end brand; other brands like IdeaPad are for low-end devices. Really, 1600*900 is a low-end resolution by today's standards; 1920*1080 should be the minimum for high-end systems. Now I could have bought one of the X series models with a higher screen resolution, but most of them have the lower resolution, and hunting for a second-hand system with the rare high-resolution screen would mean missing the best prices.

I wonder if there’s an Android app to make a phone run as a second monitor for a Linux laptop, that way you could use a high resolution phone screen to display data from a laptop.

This display is unreasonably bright by default. So bright it hurt my eyes. The xbacklight program doesn't support my display but the command "xrandr --output LVDS-1 --brightness 0.4" sets the brightness to 40%. The Fn key combination to set brightness doesn't work. Below a brightness of about 70% the screen looks grainy.


This Thinkpad has a 180G SSD that supports contiguous reads at 500MB/s. It has 8G of RAM which is the minimum for a usable desktop system nowadays and while not really fast the CPU is fast enough. Generally this is a nice system.

It doesn't have an Ethernet port, which is really annoying; now I have to pack a USB Ethernet device whenever I go anywhere. It also has mini-DisplayPort as the only video connector; as that is almost never available at a conference venue (VGA and HDMI are the common ones), I'll have to pack an adaptor when I give a lecture. It also only has 2 USB ports, where the X301 has 3. I know that not having HDMI, VGA, and Ethernet ports allows designing a thinner laptop, but I would be happier with a slightly thicker laptop that has more connectivity options. The Thinkpad X301 has about the same mass, is only slightly thicker, and has all those ports. I blame Apple for starting this trend of laptops lacking IO options.

This might be the last laptop I own that doesn’t have USB-C. Currently not having USB-C is not a big deal, but devices other than phones supporting it will probably be released soon and fast phone charging from a laptop would be a good feature to have.

This laptop has no removable battery. I don't know if it will be practical to replace the battery when the old one wears out, but given that a replacement may cost more than the laptop is worth, this isn't a serious issue. One significant issue is that there's no option to buy a second battery if I need to run without mains power for a significant amount of time. When I was travelling between Australia and Europe often, I used to pack a second battery so I could spend twice as much time coding on the plane. I know it's an engineering trade-off, but they did it with the X301 and could have done it again with this model.


This isn’t a great laptop. The X1 Carbon is described as a flagship for the Thinkpad brand and the display is letting down the image of the brand. The CPU is a little disappointing, but it’s a trade-off that I can deal with.

The keyboard is really annoying and will continue to annoy me for as long as I own it. The X301 managed to fit a better keyboard layout into the same space, there’s no reason that they couldn’t have done the same with the X1 Carbon.

But it’s great value for money and works well.

Geek FeminismBringing the blog to a close

We’re bringing the Geek Feminism blog to a close.

First, some logistics; then some reasons and reminiscences; then, some thanks.


The site will still be up for at least several years, barring Internet catastrophe. We won’t post to it anymore and comments will be closed, but we intend to keep the archives up and available at their current URLs, or to have durable redirects from the current URLs to the archive.

This doesn’t affect the Geek Feminism wiki, which will keep going.

There’s a Twitter feed and a Facebook page; after our last blog post, we won’t post to those again.

We don’t have a definite date yet for when we’ll post for the last time. It’ll almost certainly be this year.

I might add to this, or post in the comments, to add stuff. And this isn’t the absolute last post on the blog; it’d be nice to re-run a few of our best-of posts, for instance, like the ones Tim Chevalier linked to here. We’re figuring that out.

Reasons and reminiscences

Alex Bayley and a bunch of their peers — myself included — started posting on this blog in 2009. We coalesced around feminist issues in scifi/fantasy fandom, open culture projects like Wikipedia, gaming, the sciences, the tech industry and open source software development, Internet culture, and so on. Alex gave a talk at Open Source Bridge 2014 about our history to that point, and our meta tag has some further background on what we were up to over those years.

You’ve probably seen a number of these kinds of volunteer group efforts end. People’s lives shift, our priorities change as we adapt to new challenges, and so on. And we’ve seen the birth or growth of other independent media; there are quite a lot of places to go for a feminist take on the issues I mentioned.

We did some interesting, useful, and cool stuff for several years; I try to keep myself from dwelling too much in the sad half of “bittersweet” by thinking of the many communities that have already been carrying on without waiting for us to pass any torches.


Thanks of course to all our contributors, past and present, and those who provided the theme, logo, and technical support and built or provided infrastructure, social and digital and financial, for this blog. Thanks to our readers and commenters. Thanks to everyone who did neat stuff for us to write about. And thanks to anyone who used things we said to go make the world happier.

More later; thanks.

Sociological ImagesBeyond Racial Binaries: How ‘White’ Latinos Can Experience Racism

Recent reports indicated that FEMA was cutting, and then not cutting, hurricane relief aid to Puerto Rico. When Donald Trump recently slandered Puerto Ricans as lazy and too dependent on aid after Hurricane Maria, Fox News host Tucker Carlson stated that Trump’s criticism could not be racist because “Puerto Rico is 75 percent white, according to the U.S. Census.”

Photo Credit: Coast Guard News, Flickr CC

This statement presents racism as a false choice between nonwhite people who experience racism and white people who don’t. It ignores the fact that someone can be classed as white by one organization but treated as non-white by another, due to the way ‘race’ is socially constructed across time, regions and social contexts.

Whiteness for Puerto Ricans is a contradiction. Racial labels that developed in Puerto Rico were much more fluid than on the U.S. mainland, with at least twenty categories. But the island came under U.S. rule at the height of American nativism and biological racism, which relied on a dichotomy between a privileged white race and a stigmatized black one that was designed to protect the privileges of slavery and segregation. So the U.S. portrayed the islanders with racist caricatures in cartoons like this one:

Clara Rodriguez has shown how Puerto Ricans who migrated to the mainland had to conform to this white-black duality that bore no relation to their self-identifications. The Census only gave two options, white or non-white, so respondents who would have identified themselves as “indio, moreno, mulato, prieto, jabao, and the most common term, trigueño (literally, ‘wheat-colored’)” chose white by default, simply to avoid the disadvantage and stigma of being seen as black bodied.

Choosing the white option did not protect Puerto Ricans from discrimination. Those who came to the mainland to work in agriculture found themselves cast as ‘alien labor’ despite their US citizenship. When the federal government gave loans to white home buyers after 1945, Puerto Ricans were usually excluded on zonal grounds, being subjected to ‘redlining’ alongside African Americans. Redlining was also found to be operating on Puerto Rico itself in the insurance market as late as 1998, suggesting it may have even contributed to the destitution faced by islanders after natural disasters.

The racist treatment of Puerto Ricans shows how it is possible to “be white” without white privilege. There have been historical advantages in being “not black” and “not Mexican”, but they have not included the freedom to seek employment, housing and insurance without fear of exclusion or disadvantage. When a hurricane strikes, Puerto Rico finds itself closer to New Orleans than to Florida.

An earlier version of this post appeared at History News Network

Jonathan Harrison, PhD, is an adjunct Professor in Sociology at Florida Gulf Coast University, Florida SouthWestern State College and Hodges University whose PhD was in the field of racism and antisemitism.



Worse Than FailureCodeSOD: How To Creat Socket?

JR earned a bit of a reputation as the developer who could solve anything. Like most reputations, this was worse than it sounded, and it meant he got the weird heisenbugs. The weirdest and the worst heisenbugs came from Gerry, a developer who had worked for the company for many, many years, and left behind many, many landmines.

Once upon a time, in those Bad Old Days, Gerry wrote a C++ socket-server. In those days, the socket-server would crash any time there was an issue with network connectivity. Crashing services were bad, so Gerry “fixed” it. Whatever Gerry did fell into the category of “good enough”, but it had one problem: after any sort of network hiccup, the server wouldn’t crash, but it would take a very long time to start servicing requests again. Long enough that other processes would sometime fail. It was infrequent enough that the bug had stuck around for years, but finally, someone wanted Something Done™.

JR got Something Done™, and he did it by looking at the CreatSocket method, buried deep in a "God" class of 15,000 lines.

void UglyClassThatDidEverything::CreatSocket() {
    while (true) {
        try {
            m_pSocket = new Socket((ip + ":55043").c_str());
            if (m_pSocket != null) {
                //"Creat socket");
                break;
            } else {
                //"Creat socket failed");
                // usleep(1000);
                // sleep(1);
                sleep(5);
            }
        } catch (...) {
            if (m_pSocket == null) {
                //"Creat socket failed");
                sleep(5);
                CreatSocket();
                sleep(5);
            }
        }
    }
}

The try portion of the code provides an… interesting take on handling socket creation. Create a socket, and grab a handle. If you don’t get a socket for some reason, sleep for 5 seconds, and then the infinite while loop means that it’ll try again. Eventually, this will hopefully get a socket. It might take until the heat death of the universe, or at least until the half-created-but-never-cleaned-up sockets consume all the filehandles on the OS, but eventually.

Unless of course, there’s an exception thrown. In that case, we drop down into the catch, where we sleep for 5 seconds, and then call CreatSocket recursively. If that succeeds, we still have that extra call to sleep which guarantees a little nap, presumably to congratulate ourselves for finally creating a socket.

JR had a simple fix for this code: burn it to the ground and replace it with a more normal approach to creating sockets. Unfortunately, management was a bit gun-shy about making any major changes to Gerry’s work. That recursive call might be more important than anyone imagined.

JR had a simpler, if stupider fix: remove the final call to sleep(5) after creating the socket in the exception handler. It wouldn’t make this code any less terrible, but it would mean that it wouldn’t spend all that time waiting to proceed even after it had created a socket, thus solving the initial problem: that it takes a long time to recover after failure.

Unfortunately, management balked at removing a line of code. “It wouldn’t be there if it weren’t important. Instead of removing it, can you just comment it out?”

JR commented it out, closed VIM, and hoped never to touch this service again.

[Advertisement] Easily create complex server configurations and orchestrations using both the intuitive, drag-and-drop editor and the text/script editor.  Find out more and download today!

CryptogramSensitive Super Bowl Security Documents Left on an Airplane

A CNN reporter found some sensitive -- but, technically, not classified -- documents about Super Bowl security in the front pocket of an airplane seat.


TEDThe Big Idea: How to find and hire the best employees

So, you want to hire the best employee for the job? Or perhaps you’re the employee looking to be hired. Here’s some counterintuitive and hyper-intuitive advice that could get the right foot in the door.

Expand your definition of the “right” resume

Here’s the hypothetical situation: a position opens up at your company, applications start rolling in and qualified candidates are identified. Who do you choose? Person A: Ivy League, flawless resume, great recommendations — or Person B: state school, fair amount of job hopping, with odd jobs like cashier and singing waitress thrown in the mix. Both are qualified — but have you already formed a decision?

Well, you might want to take a second look at Person B.

Human resources executive Regina Hartley describes these candidates as “The Silver Spoon” (Person A), the one who clearly had advantages and was set up for success, and “The Scrapper” (Person B), who had to fight tremendous odds to get to the same point.

“To be clear, I don’t hold anything against the Silver Spoon; getting into and graduating from an elite university takes a lot of hard work and sacrifice,” she says. But if it so happens that someone’s whole life has been engineered toward success, how will that person handle the low times? Do they seem like they’re driven by passion and purpose?


Take this resume. This guy never finishes college. He job-hops quite a bit, goes on a sojourn to India for a year, and to top it off, he has dyslexia. Would you hire this guy? His name is Steve Jobs.

That’s not to say every person who has a similar story will ultimately become Steve Jobs; rather, it’s about extending opportunity to those whose lives have resulted in transformation and growth. Companies that are committed to diversity and inclusive practices tend to support Scrappers and outperform their peers: according to DiversityInc, its top 50 companies for diversity outperformed the S&P 500 by 25 percent.

(Check out Regina’s TED Talk: Why the best hire might not have the perfect resume for more advice and a fantastic suggested reading list full of helpful resources.)

Shake up the face-to-face time

Once you choose candidates to meet in-person, scrap that old hand-me-down list of interview questions — or if you can’t simply toss them, think about adding a couple more.

TED Ideas interview questions

Generally, these conversations ping-pong between two basic kinds of questions: those of competency and those of character. To identify the candidates who have substance and not just smarts, business consultant Anthony Tjan recommends that interviewers ask these five questions to illuminate not just skills and abilities, but intrinsic values and personality traits too.

  1. What are the one or two traits from your parents that you most want to ensure you and your kids have for the rest of your life? A rehearsed answer is not what you want here. This question calls for a bit more thought on the applicant’s end and sheds light on the things they most value. After hearing the person’s initial response, Tjan says you should immediately follow up with “Can you tell me more?” This is essential if you want to elicit an answer with real depth and substance.
  2. What is 25 times 25? Yes, it sounds ridiculous, but trust us: the math adds up. How people react under real-time pressure matters, and their response can show you how they’ll approach challenging or awkward situations. “It’s about whether they can roll with the embarrassment and discomfort and work with me. When a person is in a job, they’re not always going to be in situations that are in their alley,” he says.
  3. Tell me about three people whose lives you positively changed. What would they say if I called them tomorrow? If a person can’t think of a single one, that may say a lot, depending on the role you’re trying to fill. Organizations need employees who can lift each other up. When a person is naturally inclined toward compassionate mentorship, it can have a domino effect in an institution.
  4. After an interview, ask yourself (and other team members, if relevant) “Can I imagine taking this person home with me for the holidays?” This may seem overly personal (because, yes it is), but you’ll most likely trigger a gut reaction.
  5. After an interview, ask security or the receptionist: “How was the candidate’s interaction with you?” How a person treats someone they don’t feel they need to impress is important and telling. It speaks to whether they act with compassion and openness and view others as equals.

(Maybe ask them if they played a lot of Dungeons & Dragons in their life?)

The New York Times’ Adam Bryant suggests getting away from the standard job interview entirely. Reject the played-out choreography — the conference room, the resume, the “Where do you want to be in five years?” — and feel free to shake it up. Instead, get up and move about to observe how they behave in (and out of) the workplace wild.

Take them on a tour of the office (if you can’t take them out for a meal), he proposes, and if you feel so inclined, introduce them to some colleagues. Shake off that stress, walk-and-talk (as TED speaker Nilofer Merchant also advises) and most important, pay attention!

Are they curious about how everything happens? Do they show interest in what your colleagues do? These markers could be the difference between someone you work with and someone you want to work with. Monster has a series of good questions to ask yourself after meeting potential candidates.

Ultimately, Tjan and Bryant seem to agree, the art of the interview is a tricky but not impossible balance to strike.

Hire for your company’s values, not its internal culture

Culture fit is important, of course, but it can also be used as a shield. The bottom line is hire for diversity — in all its forms.

There’s a chance you may be tired of reading about diversity and inclusion, that you get the point and we don’t need to keep addressing it. Well, tough. Suck it up. Because we do need to talk about it until there’s literally no need to talk about it, until this fundamental issue becomes an overarching non-issue (and preferably before we all sink into the sea). This is a concept that can’t just exist in science fictional universes.

Example A: a sci-fi universe featuring a group of people that could be seen working together in a non-fictional universe.


MIT Media Lab director Joi Ito and writer Jeff Howe explain that the best way to prepare for a future of unknown complexity is to build on the strength of our differences. Race, gender,  sexual orientation, socioeconomic background and disciplinary training are all important, as are life experiences that produce cognitive diversity (aka different ways of thinking).

Thanks to an increasing body of research, diversity is becoming a strategic imperative for schools, firms and other institutions. It may be good politics and good PR and, depending on an individual’s commitment to racial and gender equity, good for the soul, say Ito and Howe. But in an era in which your challenges are likely to feature maximum complexity as well, it’s simply good management — which marks a striking departure from an age when diversity was presumed to come at the expense of ability.

As TED speaker Mellody Hobson (TED Talk: Color blind or color brave?) says: “I’m actually asking you to do something really simple.  I’m asking you to look at the people around you purposefully and intentionally. Invite people into your life who don’t look like you, don’t think like you, don’t act like you, don’t come from where you come from, and you might find that they will challenge your assumptions.”

So, in conclusion, go out and hire someone and give them the opportunity to change the world. Or at least, give them the opportunity to prove that they have the wherewithal to change something for the better.



Krebs on SecurityWould You Have Spotted This Skimmer?

When you realize how easy it is for thieves to compromise an ATM or credit card terminal with skimming devices, it’s difficult not to inspect or even pull on these machines when you’re forced to use them personally — half expecting something will come detached. For those unfamiliar with the stealth of these skimming devices and the thieves who install them, read on.

Police in Lower Pottsgrove, PA are searching for a pair of men who’ve spent the last few months installing card and PIN skimmers at checkout lanes inside of Aldi supermarkets in the region. These are “overlay” skimmers, in that they’re designed to be installed in the blink of an eye just by placing them over top of the customer-facing card terminal.

The top of the overlay skimmer models removed from several Aldi grocery store locations in Pennsylvania over the past few months.

The underside of the skimmer hides the brains of this little beauty, which is configured to capture the personal identification number (PIN) of shoppers who pay for their purchases with a debit card. This likely describes a great number of loyal customers at Aldi; the discount grocery chain only started accepting credit cards in 2016, and previously took only cash, debit cards, SNAP, and EBT cards.

The underside of this skimmer found at Aldi is designed to record PINs.

The Lower Pottsgrove police have been asking local citizens for help in identifying the men spotted on surveillance cameras installing the skimming devices, noting that multiple victims have seen their checking accounts cleaned out after paying at compromised checkout lanes.

Local police released the following video footage showing one of the suspects installing an overlay skimmer exactly like the one pictured above. The man is clearly nervous and fidgety with his feet, but the cashier can’t see his little dance and certainly doesn’t notice the half second or so that it takes him to slip the skimming device over top of the payment terminal.

I realize a great many people use debit cards for everyday purchases, but I’ve never been interested in assuming the added risk and so pay for everything with cash or a credit card. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

The Lower Pottsgrove Police have been admonishing people for blaming Aldi for the incidents, saying the thieves are extremely stealthy and that this type of crime could hit virtually any grocery chain.

While Aldi payment terminals in the United States are capable of accepting more secure chip-based card transactions, the company has yet to enable chip payments (although it does accept mobile contactless payment methods such as Apple Pay and Google Pay). This is important because these overlay skimmers are designed to steal card data stored on the magnetic stripe when customers swipe their cards.

However, many stores that have chip-enabled terminals are still forcing customers to swipe the stripe instead of dip the chip.

Want to learn more about self-checkout skimmers? Check out these other posts:

How to Spot Ingenico Self-Checkout Skimmers

Self-Checkout Skimmers Go Bluetooth

More on Bluetooth Ingenico Overlay Skimmers

Safeway Self-Checkout Skimmers Up Close

Skimmers Found at Wal-Mart: A Closer Look

Worse Than FailureFor Want of a CR…

A few years ago I was hired as an architect to help design some massive changes to a melange of existing systems so a northern foreign bank could meet some new regulatory requirements. As a development team, they gave me one junior developer with almost a year of experience. There were very few requirements; filling in the blanks would be mostly guesswork. OK, typical Wall Street BS.

Horseshoe nails, because 'for want of a nail, the shoe was lost…'

The junior developer was, well, junior, but bright, and he remembered what you taught him, so there was a chance we could succeed.

The setup was that what little requirements there were would come from the Almighty Project Architect down to me and a few of my peers. We would design our respective pieces in as generic a way as possible, and then oversee and help with the coding.

One day, my boss+1 had my boss tell the junior guy to develop a web service; something the guy had never done before. Since I was busy, it was deemed unnecessary to tell me about it. The guy Googled a bit and put something together. However, he was unsure of how the response should be sent back to the browser (e.g.: what sort of line endings to use) and admitted he had questions. Our boss said not to worry about it and had him install it on the dev server so boss+1 could demo it to users.

Demo time came, and the resulting output lines needed an extra newline between them to make the output look nice.
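
For scale, here is the entire magnitude of the "defect," sketched as hypothetical Node.js (the story doesn't say what stack the service was actually built on):

// Hypothetical sketch only: the whole dispute amounts to which separator
// joins the output lines in a plain-text response.
const http = require("http");

http.createServer((req, res) => {
    const lines = ["first result", "second result", "third result"];
    res.writeHead(200, { "Content-Type": "text/plain" });
    // Demoed version: lines.join("\n") -- lines ran together visually.
    // Desired version: an extra blank line between entries.
    res.end(lines.join("\n\n"));
}).listen(8080);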

The boss+1 was incensed and started telling the users and other teams that our work was crap, inferior and not to be trusted.


When this got back to me, I went to have a chat with him about a) going behind my back and leaving me entirely out of the loop, b) having a junior developer do something in an unfamiliar technology and then deploying it without having someone more experienced even look at it, c) running his mouth with unjustified caustic comments ... to the world.

He was not amused and informed us that the work should be perfect every time! I pointed out that while everyone strives for exactly that, his was an unreasonable expectation, and that public tirades do little to foster team morale or cooperation.

This went back and forth for a while until I decided that this idiot simply wasn't worth my time.

A few days later, I heard one of my peers having the same conversation with our boss+1. A few days after that, it was someone else. Each time, the architect had been bypassed, some junior developer had missed something, and it was always some ridiculously trivial facet of the implementation.

I got together with my peers and discussed possibly instituting mandatory testing - by US - to prevent them from bypassing us to get junior developers to do stuff and then having it thrown into a user-visible environment. We agreed, and were promptly overruled by boss+1. Apparently, all programmers, even juniors, were expected to produce perfect code (even without requirements) every time, without exception, and anyone who couldn't cut it should be exposed as incompetent.

We just shot each other the expected Are you f'g kidding me? looks.

After a few weeks of this, we had all had enough of the abuse and went to boss+2, who was totally uninterested.

We all found other jobs, and made sure to bring the better junior devs with us.


Don MartiFun with numbers

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

Guess what? According to Emil Protalinski at VentureBeat, the browser wars are back on.

Google is doubling down on the user experience by focusing on ads and performance, an opportunity I’ve argued its competitors have completely missed.

Good point. Jonathan Mendez has some good background on that.

The IAB road blocked the W3C Do Not Track initiative in 2012 that was led by a cross functional group that most importantly included the browser makers. In hindsight this was the only real chance for the industry to solve consumer needs around data privacy and advertising technology. The IAB wanted self-regulation. In the end, DNT died as the IAB hoped.

As third-party tracking made the ad experience crappier and crappier, browser makers tried to play nice. Browser makers tried to work in the open and build consensus.

That didn't work, which shouldn't be a surprise. Imagine if email providers had decided to build consensus with spammers about spam filtering rules. The spammers would have been all like, "It replaces the principle of consumer choice with an arrogant 'Hotmail knows best' system." Any sensible email provider would ignore the spammers but listen to deliverability concerns from senders of legit opt-in newsletters. Spammers depend on sneaking around the user's intent to get their stuff through, so email providers that want to get and keep users should stay on the user's side. Fortunately for legit mail senders and recipients, that's what happened.

On the web, though, not so much.

But now Apple Safari has Intelligent Tracking Prevention. Industry consensus achieved? No way. Safari's developers put users first and, like the man said, if you're not first you're last.

And now Google is doing their own thing. Some positive parts about it, but by focusing on filtering annoying types of ad units they're closer to the Adblock Plus "Acceptable Ads" racket than to a real solution. So it's better to let Ben Williams at Adblock Plus explain that one. I still don't get how it is that so many otherwise capable people come up with "let's filter superficial annoyances and not fundamental issues" and "let's shake down legit publishers for cash" as solutions to the web advertising problem, though. Especially when $16 billion in adfraud is just sitting there. It's almost as if the Lumascape doesn't care about fraud because it's priced in so it comes out of the publisher's share anyway.

So with all the money going to fraud and the intermediaries that facilitate it, local digital news publishers are looking for money in other places and writing off ads. That's good news for the surviving web ad optimists (like me) because any time Management stops caring about something you get a big opportunity to do something transformative.

Small victories

The web advertising problem looks big, but I want to think positive about it.

  • billions of web users

  • visiting hundreds of web sites

  • with tens of third-party trackers per site.

That's trillions of opportunities for tiny victories against adfraud.

Right now most browsers and most fraudbots are hard to tell apart. Both maintain a single "cookie jar" across trusted and untrusted sites, and both are subject to fingerprinting.

For fraudbots, cross-site trackability is a feature. A fraudbot can only produce valuable ad impressions on a fraud site if it is somehow trackable from a legit site.

For browsers, cross-site trackability is a bug, for two reasons.

  • Leaking activity from one context to another violates widely held user norms.

  • Because users enjoy ad-supported content, it is in the interest of users to reduce the fraction of ad budgets that go to fraud and intermediaries.

Browsers don't have to solve the whole web advertising problem to make a meaningful difference. As soon as a trustworthy site's real users look different enough from fraudbots (because fraudbots make themselves more trackable than users running tracking-protected browsers do), low-reputation and fraud sites claiming to offer the same audience will have a harder and harder time selling impressions to agencies that can see it's not the same people.
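
Part of why this is plausible: the crudest bots announce themselves. As a toy illustration only (not anything a particular browser or agency ships; real measurement uses many more signals, and sophisticated bots spoof all of these), a page script can check a couple of well-known automation tells:

// Toy sketch: two well-known automation signals.
function looksAutomated() {
    // The WebDriver spec requires automation tools to set this flag.
    if (navigator.webdriver) return true;
    // Headless Chrome reports "HeadlessChrome" in its user-agent string.
    if (/HeadlessChrome/.test(navigator.userAgent)) return true;
    return false;
}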

Of course, the browser market share numbers will still over-represent any undetected fraudbots and under-represent the "conscious chooser" users who choose to turn on extra tracking protection options. But that's an opportunity for creative ad agencies that can buy underpriced post-creepy ad impressions and stay away from overvalued or worthless bot impressions. I expect that data on who has legit users—made more accurate by including tracking protection measurements—will be proprietary to certain agencies and brands that are going after customer segments with high tracking protection adoption, at least for a while.

Now even YouTube serves ads with CPU-draining cryptocurrency miners … by @dangoodin001

Remarks delivered at the World Economic Forum

Improving privacy without breaking the web

Greater control with new features in your Ads Settings

PageFair’s long letter to the Article 29 Working Party

‘Never get high on your own supply’ – why social media bosses don’t use social media

Can you detect WebDriver sessions from inside a web page? … via @wordpressdotcom

Making WebAssembly even faster: Firefox’s new streaming and tiering compiler

Newsonomics: Inside L.A.’s journalistic collapse

The State of Ad Fraud

The more Facebook examines itself, the more fault it finds

In-N-Out managers earn triple the industry average

Five loopholes in the GDPR

Why ads keep redirecting you to scammy sites and what we’re doing about it

Website operators are in the dark about privacy violations by third-party scripts

Mark Zuckerberg's former mentor says 'parasitic' Facebook threatens our health and democracy

Craft Beer Is the Strangest, Happiest Economic Story in America

The 29 Stages Of A Twitterstorm In 2018

How Facebook Helped Ruin Cambodia's Democracy

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Firefox 57 delays requests to tracking domains

Direct ad buys are back in fashion as programmatic declines

‘Data arbitrage is as big a problem as media arbitrage’: Confessions of a media exec

Why publishers don’t name and shame vendors over ad fraud

News UK finds high levels of domain spoofing to the tune of $1 million a month in lost revenue • Digiday

The Finish Line in the Race to the Bottom

Something doesn’t ad up about America’s advertising market

Fraud filters don't work

Ad retargeters scramble to get consumer consent


Krebs on SecurityAlleged Spam Kingpin ‘Severa’ Extradited to US

Peter Yuryevich Levashov, a 37-year-old Russian computer programmer thought to be one of the world’s most notorious spam kingpins, has been extradited to the United States to face federal hacking and spamming charges.

Levashov, in an undated photo.

Levashov, who allegedly went by the hacker names “Peter Severa,” and “Peter of the North,” hails from St. Petersburg in northern Russia, but he was arrested last year while in Barcelona, Spain with his family.

Authorities have long suspected he is the cybercriminal behind the once powerful spam botnet known as Waledac (a.k.a. “Kelihos”), a now-defunct malware strain responsible for sending more than 1.5 billion spam, phishing and malware attacks each day.

According to a statement released by the U.S. Justice Department, Levashov was arraigned last Friday in a federal court in New Haven, Ct. Levashov’s New York attorney Igor Litvak said he is eager to review the evidence against Mr. Levashov, and that while the indictment against his client is available, the complaint in the case remains sealed.

“We haven’t received any discovery, we have no idea what the government is relying on to bring these allegations,” Litvak said. “Mr. Levashov maintains his innocence and is looking forward to resolving this case, clearing his name, and returning home to his wife and 5-year-old son in Spain.”

In 2010, Microsoft — in tandem with a number of security researchers — launched a combined technical and legal sneak attack on the Waledac botnet, successfully dismantling it. The company would later do the same to the Kelihos botnet, a global spam machine which shared a great deal of computer code with Waledac.

Severa routinely rented out segments of his Waledac botnet to anyone seeking a vehicle for sending spam. For $200, vetted users could hire his botnet to blast one million pieces of spam. Junk email campaigns touting employment or “money mule” scams cost $300 per million, and phishing emails could be blasted out through Severa’s botnet for the bargain price of $500 per million.

Waledac first surfaced in April 2008, but many experts believe the spam-spewing machine was merely an update to the Storm worm, the engine behind another massive spam botnet that first surfaced in 2007. Both Waledac and Storm were major distributors of pharmaceutical and malware spam.

According to Microsoft, in one month alone approximately 651 million spam emails attributable to Waledac/Kelihos were directed to Hotmail accounts, including offers and scams related to online pharmacies, imitation goods, jobs, penny stocks, and more. The Storm worm botnet also sent billions of messages daily and infected an estimated one million computers worldwide.

Both Waledac/Kelihos and Storm were hugely innovative because they each included self-defense mechanisms designed specifically to stymie security researchers who might try to dismantle the crime machines.

Waledac and Storm sent updates and other instructions via a peer-to-peer communications system not unlike popular music and file-sharing services. Thus, even if security researchers or law-enforcement officials managed to seize the botnet’s back-end control servers and clean up huge numbers of infected PCs, the botnets could respawn themselves by relaying software updates from one infected PC to another.


According to a lengthy April 2017 story about Levashov’s arrest and the takedown of Waledac, Levashov got caught because he violated a basic security no-no: He used the same log-in credentials to both run his criminal enterprise and log into sites like iTunes.

After Levashov’s arrest, numerous media outlets quoted his wife saying he was being rounded up as part of a dragnet targeting Russian hackers thought to be involved in alleged interference in the 2016 U.S. election. Russian news media outlets made much hay over this claim. In contesting his extradition to the United States, Levashov even reportedly told the RIA Russian news agency that he worked for Russian President Vladimir Putin‘s United Russia party, and that he would die within a year of being extradited to the United States.

“If I go to the U.S., I will die in a year,” Levashov is quoted as saying. “They want to get information of a military nature and about the United Russia party. I will be tortured, within a year I will be killed, or I will kill myself.”

But there is so far zero evidence that anyone has accused Levashov of being involved in election meddling. However, the Waledac/Kelihos botnet does have a historic association with election meddling: It was used during the Russian election in 2012 to send political messages to email accounts on computers with Russian Internet addresses. Those emails linked to fake news stories saying that Mikhail D. Prokhorov, a businessman who was running for president against Putin, had come out as gay.


If Levashov were to plead guilty in the case being prosecuted by U.S. authorities, it could shed light on the real-life identities of other top spammers.

Severa worked very closely with two major purveyors of spam. One was Alan Ralsky, an American spammer who was convicted in 2009 of paying Severa and other spammers to promote pump-and-dump stock scams.

The other was a spammer who went by the nickname “Cosma,” the cybercriminal thought to be responsible for managing the Rustock botnet (so named because it was a Russian botnet frequently used to send pump-and-dump stock spam). In 2011, Microsoft offered a still-unclaimed $250,000 reward for information leading to the arrest and conviction of the Rustock author.

A forum post by the moderator Severa, listing prices to rent his Waledac spam botnet.

Microsoft believes Cosma’s real name may be Dmitri A. Sergeev, Artem Sergeev, or Sergey Vladomirovich Sergeev. In June 2011, KrebsOnSecurity published a brief profile of Cosma that included Sergeev’s resume and photo, both of which indicated he is a Belorussian programmer who once sought a job at Google. For more on Cosma, see “Flashy Car Got Spam Kingpin Mugged.”

Severa and Cosma had met one another several times in their years together in the stock spamming business, and they appear to have known each other intimately enough to be on a first-name basis. Both of these titans of junk email are featured prominently in “Meet the Spammers,” the 7th chapter of my book, Spam Nation: The Inside Story of Organized Cybercrime.

Much like his close associate — Cosma, the Rustock botmaster — Severa may also have a $250,000 bounty on his head, albeit indirectly. The Conficker worm, a global contagion launched in 2009 that quickly spread to an estimated 9 to 15 million computers worldwide, prompted an unprecedented international response from security experts. This group of experts, dubbed the “Conficker Cabal,” sought in vain to corral the spread of the worm.

But despite infecting huge numbers of Microsoft Windows systems, Conficker was never once used to send spam. In fact, the only thing that Conficker-infected systems ever did was download and spread a new version of the malware that powered the Waledac botnet. Later that year, Microsoft announced it was offering a $250,000 reward for information leading to the arrest and conviction of the Conficker author(s). Some security experts believe this proves a link between Severa and Conficker.

Both Cosma and Severa were quite active on Spamit[dot]com, a once closely-guarded forum for Russian spammers. In 2010, Spamit was hacked, and a copy of its database was shared with this author. In that database were all private messages between Spamit members, including many between Cosma and Severa. For more on those conversations, see “A Closer Look at Two Big Time Botmasters.”

In addition to renting out his spam botnet, Severa also managed multiple affiliate programs in which he paid other cybercriminals to distribute so-called fake antivirus products. Also known as “scareware,” fake antivirus was at one time a major scourge, using false and misleading pop-up alerts to trick and mousetrap unsuspecting computer users into purchasing worthless (and in many cases outright harmful) software disguised as antivirus software.

A screenshot of the eponymous scareware affiliate program run by “Severa,” allegedly the cybercriminal alias of Peter Levashov.

In 2011, KrebsOnSecurity published Spam & Fake AV: Like Ham & Eggs, which sought to illustrate the many ways in which the spam industry and fake antivirus overlapped. That analysis included data from Brett Stone-Gross, a cybercrime expert who later would assist Microsoft and other researchers in their successful efforts to dismantle the Waledac/Kelihos botnet.

Levashov faces federal criminal charges on eight counts, including aggravated identity theft, wire fraud, conspiracy, and intentional damage to protected computers. The indictment in his case is available here (PDF).

Further reading: Mr Waledac — The Peter North of Spamming

Cory DoctorowNominations for the Hugo Awards are now open

If you were a voting member of the World Science Fiction Convention in 2017, or are registered as a voting member for the upcoming conventions in 2018 or 2019, you are eligible to nominate for the Hugo Awards; the Locus List is a great way to jog your memory about your favorite works from last year — and may I humbly remind you that my novel Walkaway is eligible for your nomination?


Worse Than FailureCodeSOD

Adam recently tried to claim a rebate for a purchase. Rebate websites, of course, are awful. The vendor doesn’t really want you to claim the rebate, after all, so even if they don’t actively try to make the site user-hostile, they’re also not going to go out of their way to make the experience pleasant.

In Adam’s case, it just didn’t work. It attempted to use a custom-built auto-complete textbox, which errored out and in some cases popped up an alert which read: [object Object]. Determined to get his $9.99 rebate, Adam did what any of us would do: he started trying to debug the page.
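
That alert text, incidentally, is just JavaScript’s default object-to-string conversion at work, as a quick illustration shows:

// Why the alert said "[object Object]": alert() coerces its argument to a
// string, and a plain object's default toString() yields "[object Object]".
alert({ error: "lookup failed" });                  // shows: [object Object]
alert(JSON.stringify({ error: "lookup failed" })); // shows the actual data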

The HTML, of course, was a layout built from nested tables, complete with 1px transparent GIFs for spacing. But there were a few bits of JavaScript code which caught Adam’s eye.

function doTheme(myclass) {
        if ( document.getElementById ) {
                if(document.getElementById("divLog").className=="princess") {
                        document.getElementById("divLog").className="death";
                } else {
                        if(document.getElementById("divLog").className=="death") {
                                document.getElementById("divLog").className="clowns";
                        } else {
                                if(document.getElementById("divLog").className=="clowns") {
                                        document.getElementById("divLog").className="princess";
                                }
                        }
                }
        } else if ( document.all ) {
                if(document.all["divLog"].className=="princess") {
                        document.all["divLog"].className="death";
                } else {
                        if(document.all["divLog"].className=="death") {
                                document.all["divLog"].className="clowns";
                        } else {
                                if(document.all["divLog"].className=="clowns") {
                                        document.all["divLog"].className="princess";
                                }
                        }
                }
        }
}

This implements some sort of state machine. If the state is “princess”, become “death”. If the state is “death”, become “clowns”. If the state is “clowns”, go back to being a “princess”. Death before clowns is a pretty safe rule.

This code also will work gracefully if document.getElementById is unavailable, meaning it works all the way back to IE4. That’s backwards compatibility. Since it doesn’t work in Adam’s browser, it missed out on forward compatibility, but at least it’s got backwards compatibility.
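
For contrast, and purely as an illustration (this is not from the rebate site), the entire three-state cycle fits in a lookup table:

// Illustrative rewrite only, assuming the same "divLog" element and class
// names as the original. The whole state machine is one lookup table.
const NEXT_THEME = { princess: "death", death: "clowns", clowns: "princess" };

function doTheme() {
    const el = document.getElementById("divLog");
    el.className = NEXT_THEME[el.className] || "princess";
}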

To round out the meal, Adam also provides a little bit of dessert for this entry of awful code.


function over(myimage,str) {  


Adam used some google-fu and found an alternate site that allowed him to redeem his rebate.


Sam VargheseCricket Australia needs to get player availability policies sorted

Australian cricket authorities are short-changing fans of the national Twenty20 competition, the Big Bash League, through their policies on releasing players from national duty when they are needed by their BBL sides for crucial encounters.

The Adelaide Strikers and the Hobart Hurricanes, who contested Sunday’s final, were both affected by this policy.

Adelaide won, but had they failed to do so, attention would no doubt have been drawn to the fact that their main fast bowler, Billy Stanlake, did not play, as he was away on national duty in a tri-nation tournament involving New Zealand and England.

Even though Cricket Australia released some other players – Alex Carey of the Strikers and Darcy Short of the Hurricanes – for the BBL final, it was clear that the travel from Sydney (where a game in the tri-nation tournament was played on Saturday night) to Adelaide (where the BBL final took place on Sunday afternoon) had affected them.

Carey, whose batting has been one of the Strikers’ strengths, was out cheaply, while Short, normally an ebullient six-hitter who had rung up two 90s and a century in the BBL league stage, was totally off his game. He made a listless 68 and his strike rate was much lower than normal, something which made a big difference as his team was chasing 203 for a win. Both Carey and Short had played for the national team the previous night.

Strikers captain Travis Head was also in Sydney on Saturday but did not play in the game. He rushed back to Adelaide for the final and made a rather subdued 44, playing second fiddle to opener Jake Weatherald who made a quick century.

Given that the international cricket season clashes with the BBL, the good folk at Cricket Australia need to develop some consistent policies about player involvement in both forms of the game.

Weakening the BBL sides at crucial stages of the tournament will mean that the game becomes that much less competitive. And that will affect the crowds, who are already diminishing in numbers. Sunday’s final involved the home team and yet could not fill the Adelaide stadium.

With grandiose plans to expand the BBL next year so that each team plays each other both at home and away, and to also add an AFL-style finals process – where the teams that finish higher up get a second chance at qualifying for the final – Cricket Australia would do well to pay heed to player availability policies.

Else, what was once a golden goose may be found to have no more eggs to lay.


Planet Linux AustraliaJonathan Adamczewski: Watch as the OS rewrites my buggy program.

I didn’t know that SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT) was a thing, until I wrote a bad test that wouldn’t crash.

Digging into it, I found that a movaps instruction was being rewritten as movups, which was a thoroughly confusing thing to see.

The one clue I had was that a fault due to an unaligned load had been observed in non-test code, but did not reproduce when written as a test using the google-test framework. A short hunt later (including a failed attempt at writing a small repro case), I found an explanation: google-test suppresses this class of failure.

The code below will successfully demonstrate the behavior, printing out the SIMD load instruction before and after calling the function with an unaligned pointer.


View the code on Gist.


CryptogramFriday Squid Blogging: Kraken Pie

Pretty, but contains no actual squid ingredients.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityAttackers Exploiting Unpatched Flaw in Flash

Adobe warned on Thursday that attackers are exploiting a previously unknown security hole in its Flash Player software to break into Microsoft Windows computers. Adobe said it plans to issue a fix for the flaw in the next few days, but now might be a good time to check your exposure to this still-ubiquitous program and harden your defenses.

Adobe said a critical vulnerability (CVE-2018-4878) exists in Adobe Flash Player and earlier versions. Successful exploitation could allow an attacker to take control of the affected system.

The software company warns that an exploit for the flaw is being used in the wild, and that so far the attacks leverage Microsoft Office documents with embedded malicious Flash content. Adobe said it plans to address this vulnerability in a release planned for the week of February 5.

According to Adobe’s advisory, beginning with Flash Player 27, administrators have the ability to change Flash Player’s behavior when running on Internet Explorer on Windows 7 and below by prompting the user before playing Flash content. A guide on how to do that is here (PDF). Administrators may also consider implementing Protected View for Office. Protected View opens a file marked as potentially unsafe in Read-only mode.

Hopefully, most readers here have taken my longstanding advice to disable or at least hobble Flash, a buggy and insecure component that nonetheless ships by default with Google Chrome and Internet Explorer. More on that approach (as well as slightly less radical solutions) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

By default, Mozilla Firefox on Windows computers with Flash installed runs Flash in a “protected mode,” which prompts the user to decide if they want to enable the plugin before Flash content runs on a Web site.

Another, perhaps less elegant, alternative to wholesale kicking Flash to the curb is to keep it installed in a browser that you don’t normally use, and then to use that browser only on sites that require Flash.

CryptogramSigned Malware

Stuxnet famously used legitimate digital certificates to sign its malware. A research paper from last year found that the practice is much more common than previously thought.

Now, researchers have presented proof that digitally signed malware is much more common than previously believed. What's more, it predated Stuxnet, with the first known instance occurring in 2003. The researchers said they found 189 malware samples bearing valid digital signatures that were created using compromised certificates issued by recognized certificate authorities and used to sign legitimate software. In total, 109 of those abused certificates remain valid. The researchers, who presented their findings Wednesday at the ACM Conference on Computer and Communications Security, found another 136 malware samples signed by legitimate CA-issued certificates, although the signatures were malformed.

The results are significant because digitally signed software is often able to bypass User Account Control and other Windows measures designed to prevent malicious code from being installed. Forged signatures also represent a significant breach of trust because certificates provide what's supposed to be an unassailable assurance to end users that the software was developed by the company named in the certificate and hasn't been modified by anyone else. The forgeries also allow malware to evade antivirus protections. Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren't valid.

Worse Than FailureError'd: The Biggest Loser

"I don't know what's more surprising - losing $2,000,000 or that Yahoo! thought I had $2,000,000 to lose," writes Bruce W.


"Autodesk sent out an email about my account's password being changed recently. Now it's up to me to guess which $productName it is!" wrote Tom G.


Kurt C. writes, "I kept repeating my mantra: 'Must not click forbidden radio buttons...'"


"My son boarded a bus in Toronto and got a free ride when the driver showed him this crash message," Ari S. writes.


"For those who are in denial about global warming, may I please direct you to conditions in Wisconsin," wrote Chelsie S.


Billie J. wrote, "Sorry there, Walmart, but that's not how math works."



Planet Linux AustraliaOpenSTEM: Welcome Back!

Well, most of our schools are back, or about to start the new year. Did you know that there are schools using OpenSTEM materials in every state and territory of Australia? Our wide range of resources, especially those on Australian history, give detailed information about the history of all our states and territories. We pride […]


Cory DoctorowThe 2017 Locus List: a must-read list of the best science fiction and fantasy of the past year

Every year, Locus Magazine’s panel of editors reviews the entire field of science fiction and fantasy and produces its Recommended Reading List; the 2017 list is now out, and I’m proud to say that it features my novel Walkaway, in excellent company with dozens of other works I enjoyed in the past year.

2017 Locus Recommended Reading List
[Locus Magazine]