Planet Russell


Planet Linux Australia: Donna Benjamin: DrupalCon Nashville

I'm going to Nashville!!

That is all. Carry on. Or... better yet - you should come too!

Planet Debian: Louis-Philippe Véronneau: Minimal SQL privileges

Lately, I have been working pretty hard on a paper I have to hand in at the end of my university semester for the machine learning class I'm taking. I will probably write a long blog post about this paper in May if it turns out to be good, but for the time being I have some time to kill while my latest boosting model runs.

So let's talk about something I've started doing lately: creating issues on FOSS webapp project trackers when their documentation tells people to grant all privileges to the database user.

You know, something like:

GRANT ALL PRIVILEGES ON database.* TO 'username'@'localhost' IDENTIFIED BY 'password';

I'd like to say I've never done this and always took time to specify a restricted subset of privileges on my servers, but I'd be lying. To be honest, I woke up last Christmas when someone told me it was an insecure practice.

When you take a few seconds to think about it, there are quite a few database-level SQL privileges, and I don't see why I should grant them all to a webapp that only needs a few of them.
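For a typical CRUD-style webapp, a more restrictive grant might look something like this sketch (the database name, user name and privilege subset here are illustrative; check your application's documentation for the set it actually needs):

```sql
-- Illustrative: grant only data-manipulation and schema-migration
-- privileges instead of ALL PRIVILEGES (all names are placeholders).
CREATE USER 'webapp'@'localhost' IDENTIFIED BY 'password';
GRANT SELECT, INSERT, UPDATE, DELETE,
      CREATE, DROP, INDEX, ALTER
  ON webapp_db.* TO 'webapp'@'localhost';

-- Verify what was actually granted:
SHOW GRANTS FOR 'webapp'@'localhost';
```

Note that on MySQL 8.0 and later, `GRANT ... IDENTIFIED BY` is no longer accepted, which is why the user is created separately first.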

So I started asking projects to do something about this and update their documentation with a minimal set of SQL privileges needed to run correctly. The Drupal project does this quite well and tells you to:

    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES ON databasename.* TO 'username'@'localhost' IDENTIFIED BY 'password';

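If you already deployed a webapp with a blanket grant, one way to tighten things up afterwards is to revoke everything and re-grant only the minimal set (MySQL/MariaDB syntax assumed; the names and the exact privilege subset are placeholders that depend on the application):

```sql
-- Start from a clean slate for this user on this database (placeholders).
REVOKE ALL PRIVILEGES ON database.* FROM 'username'@'localhost';

-- Re-grant only what the webapp is known to need.
GRANT SELECT, INSERT, UPDATE, DELETE ON database.* TO 'username'@'localhost';

-- Confirm the result before restarting the webapp.
SHOW GRANTS FOR 'username'@'localhost';
```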
When I first reached out to the upstream devs of these projects, I was sure I'd be seen as some zealous nuisance. To my surprise, everyone thought it was a good idea and fixed it.

Shout out to Nextcloud, Mattermost and KanBoard for taking this seriously!

If you are using a webapp and the documentation states you should grant all privileges to the database user, here is a template you can use to create an issue and ask them to change it:


The installation documentation says that you should grant all SQL privileges to
the database user:

    GRANT ALL PRIVILEGES ON database.* TO 'username'@'localhost' IDENTIFIED BY 'password';

I was wondering what the true minimal SQL privileges are that WEBAPP needs to run.

I don't normally like to grant all privileges for security reasons and would
really appreciate it if you could publish a minimal set of SQL database privileges in the documentation.

I guess I'm expecting something like [Drupal][drupal] does.


At the database level, [MySQL/MariaDB][mariadb] supports:

* `SELECT`, `INSERT`, `UPDATE`, `DELETE`
* `CREATE`, `ALTER`, `DROP`, `INDEX`
* `CREATE TEMPORARY TABLES`, `CREATE VIEW`, `SHOW VIEW`
* `CREATE ROUTINE`, `ALTER ROUTINE`, `EXECUTE`
* `EVENT`, `TRIGGER`, `LOCK TABLES`, `REFERENCES`, `GRANT OPTION`

Does WEBAPP really need database level privileges like EVENT or CREATE ROUTINE?
If not, why should I grant them?

Thanks for your work on WEBAPP!



Planet Debian: Dirk Eddelbuettel: RcppClassicExamples 0.1.2

Per a CRAN email sent to 300+ maintainers, this package (just like many others) was asked to please register its S3 method. So we did, and also overhauled a few other packaging standards which have changed since the previous uploads in December of 2012 (!!).

No new code or features. Full details below. And as a reminder, don't use the old RcppClassic -- use Rcpp instead.

Changes in version 0.1.2 (2018-03-15)

  • Registered S3 print method [per CRAN request]

  • Added src/init.c with registration and updated all .Call usages taking advantage of it

  • Updated http references to https

  • Updated DESCRIPTION conventions

Thanks to CRANberries, you can also look at a diff to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Dirk Eddelbuettel: RDieHarder 0.1.4

Per a CRAN email sent to 300+ maintainers, this package (just like many others) was asked to please register its S3 method. So we did, and also overhauled a few other packaging standards which have changed since the last upload in 2014.

No NEWS.Rd file to take a summary from, but the top of the ChangeLog has details.

Thanks to CRANberries, you can also look at a diff to the previous release.


Cryptogram: Friday Squid Blogging: New Squid Species Discovered in Australia

A new species of pygmy squid was discovered in Western Australia. It's pretty cute.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Rondam Ramblings: Trader Joe's Thai yellow curry sauce contains no curry

Cory Doctorow: Podcast: The Man Who Sold the Moon, Part 06

Here’s part six of my reading (MP3) (part five, part four, part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.


Sociological Images: Gender, Bitcoin and Altcoins

Despite the fact that women played a key role in the development of modern technology, the digital domain is a disproportionately male space. Recent stories about the politics of GamerGate, “tech bros” in Silicon Valley, and resistance to diversity routinely surface despite efforts of companies such as Google to clean up their act by firing reactionary male employees.

The big tech story of the past year is unquestionably cryptocurrencies such as Bitcoin. So it’s a good time to look at how cryptos replicate the gender politics of digital spaces and where they might complicate them.

Women’s Representation

Crypto holders are not evenly divided between men and women. One recent survey shows that 71% of Bitcoin holders are male. The first challenge for women is simply their representation within the crypto space.

There are various efforts on the part of individual women to address the imbalance. For example, Stacy Herbert, co-host of The Keiser Report, has recently been discussing the possibility of a women’s crypto conference noting, “I know so so many really smart women in the space but you go to these events and it’s panels of all the same guys again and again.” Technology commentator Alexia Tsotsis recently tweeted, “Women, consider crypto. Otherwise the men are going to get all the wealth, again.”

Clearly, the macho nature of the crypto community can feel exclusionary to women. Recently Bloomberg reported on a Bitcoin conference in Miami that invited attendees to an after-hours networking event held in a strip club. As one female attendee noted, “There was a message being sent to women, that, ‘OK, this isn’t really your place … this is where the boys roll.’”

The image of women as presented by altcoins (cryptocurrencies other than Bitcoin) is also telling. One can buy into TittieCoin or BigBoobsCoin, which need no further explanation. There is also an altcoin designed to resist this tendency, Women Coin: “Women coin will become the ultimate business coin for women. We all know that this altcoin market is mainly operated by men, just like the entire world. We want to stop this.”


The male dominance of cryptos suggests it is a space that celebrates normative masculinity. Certain celebrity endorsements of crypto projects have added to this mood, such as heavyweight boxer Floyd Mayweather, actor Steven Seagal and rapper Ghostface Killah. Crypto evangelist John McAfee routinely posts comments and pictures concerning guns, hookers and drugs. Reactionary responses to feminism can also be found: for example, patriarchal revivalist website Return of Kings published an article claiming, “Bitcoin proves that that ‘glass ceiling’ keeping women down is a myth.” Homophobia also occurs: when leading Bitcoin advocate Andreas Antonopoulos announced he was making a donation to the LGBTQ-focused Lambda Legal he received an array of homophobic comments.

However, it would be wrong to assume the masculinity promoted in the crypto space is monolithic. In particular, it is possible to identify a division between Bitcoin and altcoin holders. Consider the following image:

This image was tweeted with the caption “Bitcoin and Ethereum community can’t be anymore different.” On the left we have a MAGA hat-wearing, gun-toting Bitcoin holder; on the right the supposedly effeminate Vitalik Buterin, co-founder of the blockchain platform Ethereum. The longer you spend reading user-generated content in the crypto space, the more you get the sense that Bitcoin is “for men” while altcoins are framed as for snowflakes and SJWs.

There is an exception to this Bitcoin/altcoin gendered distinction: privacy coins such as Monero and Zcash appear to be deemed acceptably manly. Perhaps it is a coincidence that such altcoins are favored by Julian Assange, who has his own checkered history with gender politics ranging from his famed “masculinity test” through to the recent quips about feminists reported by The Intercept.

In conclusion, it is not surprising that the crypto space appears to be predominantly male and even outright resistant to fair representations of women. Certainly, it is not too dramatic to state that Bitcoin has a hyper-masculine culture, but Bitcoin does not represent the whole crypto space, and as both altcoins and other blockchain-based services become more diverse it is likely that so too will its representations of gender.

Joseph Gelfer is a researcher of men and masculinities. His books include Numen, Old Men: Contemporary Masculine Spiritualities and The Problem of Patriarchy and Masculinities in a Global Era. He is currently developing a new model for understanding masculinity, The Five Stages of Masculinity.


Krebs on Security: Who Is Afraid of More Spams and Scams?

Security researchers who rely on data included in Web site domain name records to combat spammers and scammers will likely lose access to that information for at least six months starting at the end of May 2018, under a new proposal that seeks to bring the system in line with new European privacy laws. The result, some experts warn, will likely mean more spams and scams landing in your inbox.

On May 25, the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free.

But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups.

The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally.

ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse.

But at an ICANN meeting in San Juan, Puerto Rico on Thursday, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures.

Gregory Mounier, head of outreach at EUROPOL‘s European Cybercrime Center and member of ICANN’s Public Safety Working Group, said the new WHOIS plan could leave security researchers in the lurch — at least in the short run.

“If you don’t have an accreditation system by 25 May then there’s no means for cybersecurity folks to get access to this information,” Mounier told KrebsOnSecurity. “Let’s say you’re monitoring a botnet and have 10,000 domains connected to that and you want to find information about them in the WHOIS records, you won’t be able to do that anymore. It probably won’t be implemented before December 2018 or January 2019, and that may mean security gaps for many months.”

Rod Rasmussen, chair of ICANN’s Security and Stability Advisory Committee, said ICANN does not have a history of getting things done before or on set deadlines, meaning it may be well more than six months before researchers and others can get vetted to access personal information in WHOIS data.

Asked for his take on the chances that ICANN and the registrar community might still be designing the vetting system this time next year, Rasmussen said “100 percent.”

“A lot of people who are using this data won’t be able to get access to it, and it’s not going to be pretty,” Rasmussen said. “Once things start going dark it will have a cascading effect. Email deliverability is going to be one issue, and the amount of spam that shows up in peoples’ inboxes will be climbing rapidly because a lot of anti-spam technologies rely on WHOIS for their algorithms.”

As I noted in last month’s story on this topic, WHOIS is probably the single most useful tool we have right now for tracking down cybercrooks and/or for disrupting their operations. On any given day I probably perform 20-30 different WHOIS queries; on days I’ve set aside for deep-dive research, I may run hundreds of WHOIS searches.

WHOIS records are a key way that researchers reach out to Web site owners when their sites are hacked to host phishing pages or to foist malware on visitors. These records also are indispensable for tracking down cybercrime victims, sources and the cybercrooks themselves. I remain extremely concerned about the potential impact of WHOIS records going dark across the board.

There is one last possible “out” that could help registrars temporarily sidestep the new privacy regulations: ICANN board members told attendees at Thursday’s gathering in Puerto Rico that they had asked European regulators for a “forbearance” — basically, permission to be temporarily exempted from the new privacy regulations during the time it takes to draw up and implement a WHOIS accreditation system.

But so far there has been no reply, and several attendees at ICANN’s meeting Thursday observed that European regulators rarely grant such requests.

Some registrars are already moving forward with their own plans on WHOIS privacy. GoDaddy, one of the world’s largest domain registrars, recently began redacting most registrant data from WHOIS records for domains that are queried via third-party tools. And experts say it seems likely that other registrars will follow GoDaddy’s lead before the May 25 GDPR implementation date, if they haven’t already.

Planet Debian: Russell Coker: Racism in the Office

Today I was at an office party and the conversation turned to race, specifically the incidence of unarmed Afro-American men and boys who are shot by police. Apparently the idea that white people (even in other countries) might treat non-white people badly offends some people, so we had a man try to explain that Afro-Americans commit more crime and therefore are more likely to get shot. This part of the discussion isn’t even noteworthy, it’s the sort of thing that happens all the time.

I and another man pointed out that crime is correlated with poverty and racism causes non-white people to be disproportionately poor. We also pointed out that US police seem capable of arresting proven violent white criminals without shooting them (he cited arrests of Mafia members; I cited mass murderers like the one who shot up the cinema). This part of the discussion isn’t particularly noteworthy either. Usually when someone tries explaining some racist ideas and gets firm disagreement they back down. But not this time.

The next step was the issue of whether black people are inherently violent. He cited all of Africa as evidence. There’s a meme that you shouldn’t accuse someone of being racist, it’s apparently very offensive. I find racism very offensive and speak the truth about it. So all the following discussion was peppered with him complaining about how offended he was and me not caring (stop saying racist things if you don’t want me to call you racist).

Next was an appeal to “statistics” and “facts”. He said that he was only citing statistics and facts, clearly not understanding that saying “Africans are violent” is not a statistic. I told him to get his phone and Google for some statistics as he hadn’t cited any. I thought that might make him just go away, it was clear that we were long past the possibility of agreeing on these issues. I don’t go to parties seeking out such arguments, in fact I’d rather avoid such people altogether if possible.

So he found an article about recent immigrants from Somalia in Melbourne (not about the US or Africa, the previous topics of discussion). We are having ongoing discussions in Australia about violent crime, mainly due to conservatives who want to break international agreements regarding the treatment of refugees. For the record I support stronger jail sentences for violent crime, but this is an idea that is not well accepted by conservatives presumably because the vast majority of violent criminals are white (due to the vast majority of the Australian population being white).

His next claim was that Africans are genetically violent due to DNA changes from violence in the past. He specifically said that if someone was a witness to violence it would change their DNA to make them and their children more violent. He also specifically said that this was due to thousands of years of violence in Africa (he mentioned two thousand and three thousand years on different occasions). I pointed out that European history has plenty of violence that is well documented and also that DNA just doesn’t work the way he thinks it does.

Of course he tried to shout me down about the issue of DNA, telling me that he studied Psychology at a university in London and knows how DNA works, demanding to know my qualifications, and asserting that any scientist would support him. I don’t have a medical degree, but I have spent quite a lot of time attending lectures on medical research including from researchers who deliberately change DNA to study how this changes the biological processes of the organism in question.

I offered him the opportunity to star in a Youtube video about this, I’d record everything he wants to say about DNA. But he regarded that offer as an attempt to “shame” him because of his “controversial” views. It was a strange and sudden change from “any scientist will support me” to “it’s controversial”. Unfortunately he didn’t give up on his attempts to convince me that he wasn’t racist and that black people are lesser.

The next odd thing was when he asked me “what do you call them” (black people), “do you call them Afro-Americans when they are here”. I explained that if an American of African ancestry visits Australia then you would call them Afro-American, otherwise not. It’s strange that someone goes from being so certain of so many things to not knowing the basics. In retrospect I should have asked whether he was aware that there are black people who aren’t African.

Then I sought opinions from other people at the party regarding DNA modifications. While I didn’t expect to immediately convince him of the error of his ways it should at least demonstrate that I’m not the one who’s in a minority regarding this issue. As expected there was no support for the ideas of DNA modifying. During that discussion I mentioned radiation as a cause of DNA changes. He then came up with the idea that radiation from someone’s mouth when they shout at you could change your DNA. This was the subject of some jokes, one man said something like “my parents shouted at me a lot but didn’t make me a mutant”.

The other people had some sensible things to say, pointing out that psychological trauma changes the way people raise children and can have multi-generational effects. But the idea of events 3000 years ago having such effects was ridiculed.

By this time people were starting to leave. A heated discussion of racism tends to kill the party atmosphere. There might be some people who think I should have just avoided the discussion to keep the party going (really I didn’t want it and tried to end it). But I’m not going to allow a racist to think that I agree with them, and if having a party requires any form of agreement to racism then it’s not a party I care about.

As I was getting ready to leave the man said that he thought he didn’t explain things well because he was tipsy. I disagree, I think he explained some things very well. When someone goes to such extraordinary lengths to criticise all black people after a discussion of white cops killing unarmed black people I think it shows their character. But I did offer some friendly advice, “don’t drink with people you work with or for or any other people you want to impress”, I suggested that maybe quitting alcohol altogether is the right thing to do if this is what it causes. But he still thought it was wrong of me to call him racist, and I still don’t care. Alcohol doesn’t make anyone suddenly think that black people are inherently dangerous (even when unarmed) and therefore deserving of being shot by police (disregarding the fact that police can take members of the Mafia alive). But it does make people less inhibited about sharing such views even when it’s clear that they don’t have an accepting audience.

Some Final Notes

I was not looking for an argument or trying to entrap him in any way. I refrained from asking him about other races who have experienced violence in the past, maybe he would have made similar claims about other non-white races and maybe he wouldn’t, I didn’t try to broaden the scope of the dispute.

I am not going to do anything that might be taken as agreement or support of racism unless faced with the threat of violence. He did not threaten me so I wasn’t going to back down from the debate.

I gave him multiple opportunities to leave the debate. When I insisted that he find statistics to support his cause I hoped and expected that he would depart. Instead he came back with a page about the latest racist dog-whistle in Australian politics which had no correlation with anything we had previously discussed.

I think the fact that this debate happened says something about Australian and British culture. This man apparently hadn’t had people push back on such ideas before.

Cryptogram: Interesting Article on Marcus Hutchins

This is a good article on the complicated story of hacker Marcus Hutchins.

Worse Than Failure: Error'd: Drunken Parsing

"Hi, $(lookup(BOOZE_SHOP_OF_LEAST_MISTRUST))$ Have you been drinking while parsing your variables?" Tom G. writes.


"Alright, so, I can access this website at more than an hour...Yeah. Okay," wrote Robin.


Mark W. writes, "Of course, Apple, I downloaded @@itemName@@. I mean, how could I not? It got @@starCount@@ stars in the app store!"


"One would hope that IEEE knows how to do this engineering thing," Chris S. wrote.


Mike H. writes, "I don't know what language this is in, but if I had to guess, it appears to be in Yak."


"Sexy ladies AND inline variables? YES! I WANT TO LEARN MORE!" wrote Chris.



Planet Debian: Daniel Pocock: OSCAL'18, call for speakers, radio hams, hackers & sponsors reminder

The OSCAL organizers have given a reminder about their call for papers, booths and sponsors (ask questions here). The deadline is imminent but you may not be too late.

OSCAL is the Open Source Conference of Albania. As the biggest Free Software conference in the Balkans, OSCAL attracts visitors from far beyond Albania (OpenStreetmap); people come from many neighboring countries, including Kosovo, Montenegro, Macedonia, Greece and Italy. OSCAL has a unique character unlike any other event I've visited in Europe, and many international guests keep returning every year.

A bigger ham radio presence in 2018?

My ham radio / SDR demo worked there in 2017 and was very popular. This year I submitted a fresh proposal for a ham radio / SDR booth and sought out local radio hams in the region with an aim of producing an even more elaborate demo for OSCAL'18.

If you are a ham and would like to participate please get in touch using this forum topic or email me personally.

Why go?

There are many reasons to go to OSCAL:

  • We can all learn from their success with diversity. One of the finalists for Red Hat's Women in Open Source Award, Jona Azizaj, is a key part of their team: if she is announced the winner at Red Hat Summit the week before OSCAL, wouldn't you want to be in Tirana when she arrives back home for the party?
  • Warm weather to help people from northern Europe to thaw out.
  • For many young people in the region, their only opportunity to learn from people in the free software community is when we visit them. Many people from the region can't travel to major events like FOSDEM due to the ongoing outbreak of immigration bureaucracy and the travel costs. Many Balkan countries are not EU members and incomes are comparatively low.
  • Due to the low living costs in the region and the proximity to larger European countries, many companies are finding compelling opportunities to work with local developers there and OSCAL is a great place to make contacts informally.

Sponsors sought

Like many free software communities, Open Labs is a registered non-profit organization.

Anybody interested in helping can contact the team and ask them for whatever details you need. The Open Labs Manifesto expresses a strong commitment to transparency which hopefully makes it easy for other organizations to contribute and understand their impact.

Due to the low costs in Albania, even a small sponsorship or donation makes a big impact there.

If you can't make a direct payment to Open Labs, you could also potentially help them with benefits in kind or by contributing money to one of the larger organizations supporting OSCAL.

Getting there without direct service from Ryanair or Easyjet

These notes about budget airline routes might help you plan your journey. It is particularly easy to get there from major airports in Italy. If you will also have a vacation at another location in the region it may be easier and cheaper to fly to that location and then use a bus to Tirana.

Making it a vacation

For people who like to combine conferences with their vacations, the Balkans (WikiTravel) offer many opportunities, including beaches, mountains, cities and even a pyramid (in Tirana itself).

It is very easy to reach neighboring countries like Montenegro and Kosovo by coach in just 3-4 hours. For example, there is the historic city of Prizren in Kosovo and many beach resorts in Montenegro.

If you go to Kosovo, don't miss the Prishtina hackerspace.

Tirana Pyramid: a future hackerspace?

Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, February 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, about 196 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change but a new platinum sponsor is about to join our project.

The security tracker currently lists 60 packages with a known CVE and the dla-needed.txt file 33. The number of open issues increased significantly and we seem to be behind in terms of CVE triaging.

Thanks to our sponsors

New sponsors are in bold.


Planet Debian: Norbert Preining: TeX Live 2018 (pretest) hits Debian/experimental

TeX Live 2017 has been frozen and we have entered the preparation phase for the release of TeX Live 2018. Time to bring the Debian packages up to date as well.

The other day I have uploaded the following set of packages to Debian/experimental:

  • texlive-bin 2018.20180313.46939-1
  • texlive-base, texlive-lang, texlive-extra 2018.20180313-1
  • biber 2.11-1

This brings Debian/experimental on par with the current status of TeX Live’s tlpretest. After a bit of testing, and once the sources have stabilized a bit more, I will upload everything to unstable for broader testing.

This year hasn’t seen any big changes; see the above-linked post for details. Testing and feedback would be greatly appreciated.


Planet Debian: Louis-Philippe Véronneau: Roundcube fr_FEM locale 1.3.5

Roundcube 1.3.5 was released today and with it, I've released version 1.3.5 of my fr_FEM (French gender-neutral) locale.

This latest version is actually the first one that can be used with a production version of Roundcube: the first versions I released were based on the latest commit in the master branch at the time instead of an actual release. Not sure why I did that.

I've also changed the versioning scheme to follow Roundcube's. Version 1.3.5 of my localisation is thus compatible with Roundcube 1.3.5. Again, I should have done that from the start.

The fine folks at Riseup actually started using fr_FEM as the default French locale on their instance and I'm happy to say the UI integration seems to be working pretty well.

Sandro Knauß (hefee), who is working on the Debian Roundcube package, also told me he'd like to replace the default Roundcube French locale by fr_FEM in Debian. Nice to see people think a gender-neutral locale is a good idea!

Finally, since this was the first time I had to compare two different releases of Roundcube to see if the 20 files I care about had changed, I decided to write a simple script that leverages git to do this automatically. Running it with -p git_repo -i 1.3.4 -f 1.3.5 -l fr_FR -o roundcube_diff.txt outputs a nice file that tells you whether new localisation files have been added and displays what changed in the old ones.

You can find the locale here.

Planet Linux Australia: OpenSTEM: Vale Stephen Hawking

Stephen Hawking was born on the 300th anniversary of Galileo Galilei‘s death (8 January 1942), and died on the anniversary of Albert Einstein‘s birth (14 March). Both reached the age of 76, though Hawking actually lived a few months longer than Einstein, in spite of his health problems. By the way, what do you call it when […]


Cory Doctorow: Hugo nominations close tomorrow!

If you attended either of the past two World Science Fiction Conventions or are registered for the next one in San Jose, California, you’re eligible to nominate for the Hugo Awards, which you can do here — you’ve only got until midnight tomorrow!

The 2017 Locus Recommended Reading List is a great place to start if you’re looking to refresh your memory about the sf/f you enjoyed last year.

May I humbly remind you that my novel Walkaway is eligible in the Best Novel category?

(via Scalzi)

Planet DebianClint Adams: Don't feed them after midnight

“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

“Hold on while I fellate this 魔鬼 (demon),” announced Adrian.

Spaniard doing his thing

Posted on 2018-03-15
Tags: bgs

TEDChase your dreams now: 4 questions with Matilda Ho

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with entrepreneur, investor and TED Fellow Matilda Ho about what inspires her work to bring local, organically grown food to families that need it.

TED: Tell us who you are.
Matilda Ho: I am a serial food entrepreneur and investor in China, driven to create more sustainable food systems by combining profit and purpose. I founded Bits x Bites, China’s first food-tech startup acceleration platform and venture capital fund, which invests in entrepreneurs tackling global food system challenges. Before then, I founded Yimishiji, one of China’s first online farmers markets, which has engineered food education and transparency into the entire supply chain and customer experience.

TED: What’s a bold move you’ve made in your career?
MH: In the early stages of Yimishiji, the online farmers market I founded, we had a hard time finding qualified Chinese produce that would pass our pesticide-free and chemical-free standards. We were at risk of not having enough products to list on our platform. When some team members lobbied to reduce the required standards, I decided we should keep looking until we had met every farmer in China. Articulating your mission is one thing. How do you translate a vision into day-to-day operations? How do you hold fast to your values and energize your team when the going gets tough? These are all challenges I frequently discuss with other founders we work with.

TED: Tell us about a woman who inspires you.
MH: I have been fortunate to be surrounded by courageous and inspiring female role models throughout my career as a business consultant and later as an entrepreneur in the sustainable food movement. As these women rise through the ranks and quietly break ceilings in their own ways, they are not only setting great examples for others to follow but also promoting an inclusive work culture that rewards passionate hard work without gender as a barrier. Having seen their success, I feel empowered to take the same responsibility in my roles.

TED: If you could go back in time, what would you tell your 18-year-old self?
MH: Become an entrepreneur earlier! Nothing you can do will fully prepare you to become a great founder and CEO. Fail early. Learn from mistakes. Most success stories take years of commitment to materialize. Develop your mental toughness to be emotionally resilient.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

CryptogramArtificial Intelligence and the Attack/Defense Balance

Artificial intelligence technologies have the potential to upend the longstanding advantage that attack has over defense on the Internet. This has to do with the relative strengths and weaknesses of people and computers, how those all interplay in Internet security, and where AI technologies might change things.

You can divide Internet security tasks into two sets: what humans do well and what computers do well. Traditionally, computers excel at speed, scale, and scope. They can launch attacks in milliseconds and infect millions of computers. They can scan computer code to look for particular kinds of vulnerabilities, and data packets to identify particular kinds of attacks.

Humans, conversely, excel at thinking and reasoning. They can look at the data and distinguish a real attack from a false alarm, understand the attack as it's happening, and respond to it. They can find new sorts of vulnerabilities in systems. Humans are creative and adaptive, and can understand context.

Computers -- so far, at least -- are bad at what humans do well. They're not creative or adaptive. They don't understand context. They can behave irrationally because of those things.

Humans are slow, and get bored at repetitive tasks. They're terrible at big data analysis. They use cognitive shortcuts, and can only keep a few data points in their head at a time. They can also behave irrationally because of those things.

AI will allow computers to take over Internet security tasks from humans, and then do them faster and at scale. Here are possible AI capabilities:

  • Discovering new vulnerabilities -- and, more importantly, new types of vulnerabilities -- in systems, both by the offense to exploit and by the defense to patch, and then automatically exploiting or patching them.
  • Reacting and adapting to an adversary's actions, again both on the offense and defense sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment.
  • Abstracting lessons from individual incidents, generalizing them across systems and networks, and applying those lessons to increase attack and defense effectiveness elsewhere.
  • Identifying strategic and tactical trends from large datasets and using those trends to adapt attack and defense tactics.

That's an incomplete list. I don't think anyone can predict what AI technologies will be capable of. But it's not unreasonable to look at what humans do today and imagine a future where AIs are doing the same things, only at computer speeds, scale, and scope.

Both attack and defense will benefit from AI technologies, but I believe that AI has the capability to tip the scales more toward defense. There will be better offensive and defensive AI techniques. But here's the thing: defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.

Roy Amara famously said that we overestimate the short-term effects of new technologies, but underestimate their long-term effects. AI is notoriously hard to predict, so many of the details I speculate about are likely to be wrong -- and AI is likely to introduce new asymmetries that we can't foresee. But AI is the most promising technology I've seen for bringing defense up to par with offense. For Internet security, that will change everything.

This essay previously appeared in the March/April 2018 issue of IEEE Security & Privacy.

Planet DebianDaniel Powell: Mentorship within software development teams

In response to: This email I wrote a short blog post with some insight about the subject of mentorship.


In my journey to find an internship opportunity through Google Summer of Code, I wanted to give input about the relationship between a mentor and an intern/apprentice. My time as a service manager in the automotive repair industry gave me insight into the design of these relationships.

My recommendation for mentoring programs within a software development team is a dual group and private messaging environment for teams of 3 mentors guiding 2 or 3 interns, depending on their comfort and experience in a group setting. My rationale for this is as follows:

Not all personalities engage well with each other. While it's important to learn to work with people you disagree with, I have found that when given the opportunity to float between mentors for different issues, apprentices learn the most from those they get along with best. If the end goal is for the pupil to learn as much as possible during this experience, and hence also to increase their productivity on a project, then having the dual ability to ask in a group setting or to PM a specific mentor is ideal. This also gives a mentor the opportunity to redirect a question to another mentor with a stronger specialty in the topic area, which in turn can help assuage a conflict of personality simply through the shared introduction. (Just think about when someone you like or respect recommends you work with someone you thought you didn't get along with - it's a much more comfortable situation when you are introduced in this circumstance, in a transparent and positive light.)

Our most successful ratio of mentors to apprentices was 3:2 for technicians who were short on shop experience, but in the scope of this project a 3:3 ratio could be appropriate. I would, however, avoid assigning one mentor as the lead for a given student in this format; it makes the barrier for reaching out to the other two mentors too high (especially for those who are relatively new to a team dynamic). You may also change the ratio based on the experience of the students you accept. For example, if you have two students who have never worked in a team environment, it may be prudent to move to a 3:2 ratio so as not to overwhelm the mentors. It's nice to have that flexibility, so it may be good to avoid too rigid a structuring of teams.

Worse Than FailureRepresentative Line: Flushed Down the Pipe

No matter how much I personally like functional programming, I know that it is not a one-size-fits-all solution for every problem.

Vald M knows this too. Which is why they sent us an email that simply said: “We have a functional programmer on the team”, with this representative line attached.

function groupByBreakdown (breakdowns) {
        return R.pipe(
      , [R.prop, R.identity])))

The use of R.pipe is a good argument for why the proposed pipeline operator is a terrible idea whose time isn’t going to come. Ever. Yes, there’s value in being able to easily compose functions into a pipeline. Shells support pipelining for a reason, after all. But it's an awkward fit in imperative languages, and unfortunately, the tool is too dangerous to be used in real-world code. Even functional languages, like Elixir, warn you against abusing the pipeline. Mixing opaque functional styles with JavaScript is just flirting with disaster. I mean, more disaster than JavaScript already is.
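For readers who have not met Ramda: R.pipe simply composes functions left to right into one function. A minimal Python sketch of the same combinator (not Ramda's actual implementation) shows why the idea itself is appealing, whatever one thinks of its use in JavaScript:

```python
# A minimal left-to-right function composition combinator,
# analogous in spirit to Ramda's R.pipe. Sketch only.
from functools import reduce

def pipe(*funcs):
    """pipe(f, g)(x) == g(f(x)): feed each function the previous result."""
    return lambda value: reduce(lambda acc, f: f(acc), funcs, value)

double = lambda n: n * 2
increment = lambda n: n + 1

result = pipe(double, increment)(10)  # double first, then increment
```

The danger the article points at is not the combinator itself but burying business logic in long, opaque chains of point-free steps.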

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!


Planet DebianSven Hoexter: aput - simple upload script for a flat artifactory Debian repository

At work we're using Jfrog Artifactory to provide a Debian repository (among other kinds of repository). Using the WebUI sucks, uploading by cut&pasting a curl command is annoying too, so I just wrote down a few lines of shell to upload a single Debian binary package.

The expectation is a flat repository, and that you edit the variables at the top to provide the repository URL, the repository name and your API key. So no magic involved.
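The script itself is not reproduced here, but uploading a single .deb into a flat Artifactory repository boils down to one authenticated HTTP PUT, i.e. one curl call. A hedged Python sketch of how such a command could be assembled (the URL, repository name and key below are placeholders, not values from the actual script):

```python
# Sketch: build the curl command an "aput"-style helper would run to PUT
# one Debian package into a flat Artifactory repository.
# REPO_URL, REPO_NAME and API_KEY are placeholders you would edit.
import os

REPO_URL = "https://example.com/artifactory"  # placeholder
REPO_NAME = "debian-local"                    # placeholder flat repo
API_KEY = "changeme"                          # placeholder API key

def build_upload_command(deb_path):
    """Return the curl argv that uploads one .deb into the flat repository."""
    target = "{}/{}/{}".format(REPO_URL, REPO_NAME, os.path.basename(deb_path))
    return ["curl", "-H", "X-JFrog-Art-Api: " + API_KEY, "-T", deb_path, target]

cmd = build_upload_command("dist/hello_1.0-1_amd64.deb")
```

X-JFrog-Art-Api is Artifactory's API-key header; adjust the authentication to whatever your instance uses.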

Rondam RamblingsWell, that didn't take long

I was contemplating whether or not to write about how incredibly stupid it is to try to solve the school shooting problem by arming teachers.  I was waffling because I don't really like to belabor the obvious.  And then this happened: A teacher accidentally fired a pistol inside a California classroom while lecturing about public safety and injured three students, according to police.  Dennis

Sociological ImagesWhat’s Trending? The Popularity of Gun Control

Today students across the country are walking out of school to protest violence and demand gun control reform. Where do Americans stand on this issue, and have their views changed over time? Government policy makes it difficult to research gun violence in the United States, but we do have some trend data from the General Social Survey that offers important context about how Americans view this issue.

For over forty years, the GSS has been asking its respondents whether they “favor or oppose a law which would require a person to obtain a police permit before he or she could buy a gun”—a simple measure to take the temperature on basic support for gun control. Compared to other controversial social policies, there is actually widespread and consistent support for this kind of gun control.

(Click to Enlarge)

In light of the Second Amendment, however, the U.S. has a reputation for having a strong pro-gun culture. Is this true? It turns out there has been a dramatic shift in the proportion of respondents who report even having a gun in their homes. Despite this trend, gun sales are still high, suggesting that those sales are concentrated among people who already own a gun.

(Click to Enlarge)

Recent controversies over gun control can make it seem like the nation is deeply and evenly divided. These data provide an important reminder that gun control is actually pretty popular, even though views on the issue have become more politically polarized over time.

Inspired by demographic facts you should know cold, “What’s Trending?” is a post series at Sociological Images featuring quick looks at what’s up, what’s down, and what sociologists have to say about it.

Ryan Larson is a graduate student from the Department of Sociology, University of Minnesota – Twin Cities. He studies crime, punishment, and quantitative methodology. He is a member of the Graduate Editorial Board of The Society Pages, and his work has appeared in Poetics, Contexts, and Sociological Perspectives.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Planet DebianAbhijith PA: Going to FOSSASIA 2018

I will be attending the FOSSASIA summit 2018 happening in Singapore. Thanks to Daniel Pocock, we have a Debian booth there. If you are attending, please add your name to this wiki page or contact me personally. We can hang out at the booth.

CryptogramThe 600+ Companies PayPal Shares Your Data With

One of the effects of GDPR -- the new EU General Data Protection Regulation -- is that we're all going to be learning a lot more about who collects our data and what they do with it. Consider PayPal, which just released a list of over 600 companies it shares customer data with. Here's a good visualization of that data.

Is 600 companies unusual? Is it more than average? Less? We'll soon know.

Worse Than FailureCodeSOD: Lightweight Date Handling

Darlene has a co-worker who discovered a problem: they didn’t know or understand any of the C++ libraries for manipulating dates and times. Checking the documentation or googling it is way too much to ask, so instead they opted to use the tools they already understood- a database. We’ve seen that before.

There was just one other problem: this application wasn’t data-driven, and thus didn’t have a database to query.

Darlene’s co-worker had the solution to that: create an in-memory Sqlite database!

std::string foo::getTimeStamp()
{
    static const char *sqlStmt =
            "SELECT strftime( '%Y-%m-%dT%H:%M:%fZ', CURRENT_TIMESTAMP );";

    sqlite3 *db = 0;
    int sqliteRC = SQLITE_OK;
    char *sqlErr = 0;

    // Well we should return something that can be used, so picked an
    // arbitrary date, which I believe is the time of the first armistice
    // for the First World War
    std::string rval = "1918-11-11T11:11:00.000Z";

    sqliteRC = sqlite3_open( ":memory:", &db );
    if( sqliteRC != SQLITE_OK )
    {
        LOG( Log::Warn ) << "Failed to open sqlite memory DB, with error ["
                         << sqlite3_errmsg( db ) << "]";
        return rval;
    }

    sqliteRC = sqlite3_exec( db, sqlStmt, &::populate, (void*) &rval, &sqlErr );
    if( sqliteRC != SQLITE_OK )
    {
        LOG( Log::Warn )
            << "Failed to gather current time stamp"
            << " from sqlite memory DB with error [" << sqlErr << "]";
        sqlite3_free( sqlErr );
    }

    sqliteRC = sqlite3_close( db );
    if( sqliteRC != SQLITE_OK )
    {
        // We may leak some memory if this happens
        LOG( Log::Warn )
            << "Failed to close sqlite memory DB with error ["
            << sqlite3_errmsg( db ) << "]";
    }
    db = 0;

    return rval;
}

This is very lightweight - it's Sqlite, after all. There's nothing light about strftime or its ilk. Just look at the names.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Planet DebianLaura Arjona Reina: WordPress for Android and short blog posts

I use for my social network interactions and from time to time I post short thoughts there.

I usually reserve my blog for longer posts including links etc.

That means that it’s harder for me to publish in my blog.

OTOH my daily commute time may be enough to craft short posts. I bring my laptop with me, but it’s common that I open kate, begin to write, and arrive at my destination with my post almost finished but unpublished. Or, second variant, I cannot sit, so I cannot type in the metro and instead pass the time reading or thinking.

I’ve just installed WordPress for Android and hopefully that helps me to write short posts in my commute time and publish quicker. Let’s try and see what happens 🙂


Comment about this post in this thread.

Planet DebianNorbert Preining: Replacing a lost Yubikey

Some weeks ago I lost my purse with everything in it, from residency card, driving license, credit cards and cash cards to all kinds of ID cards, and last but not least my Yubikey NEO. This being Japan, I did expect the purse to show up in a few days, most probably with the money gone but all the cards intact. Unfortunately not this time. So after having finally reissued most of the cards, I also took the necessary steps concerning the Yubikey, which contained my GnuPG subkeys and was used as a second factor for several services (see here and here).

Although the GnuPG keys on the Yubikey are considered safe from extraction, I still decided to revoke them and create new subkeys – one of the big advantages of subkeys: one does not start from zero, but simply creates new subkeys instead of running around trying to get signatures again.

Another thing that has to be done is removing the old Yubikey from all the services where it was used as a second factor. In my case that was quite a lot (Google, Github, Dropbox, NextCloud, WordPress, …). BTW, you do have a set of backup keys saved somewhere for all the services you are using, right? It helps a lot when getting back into the system.

GnuPG keys renewal

To remind myself of what is necessary, here are the steps:

  • Get your master key from the backup USB stick
  • revoke the three subkeys that are on the Yubikey
  • create new subkeys
  • install the new subkeys onto a new Yubikey, update keyservers

All of that is quite straightforward: use gpg --expert --edit-key YOUR_KEY_ID, then select a subkey with key N and revoke it with revkey. You can select all three subkeys and revoke them at the same time: just type key N for each of the subkeys (where N is the index of the key, starting from 0).

Next create new subkeys, here you can follow the steps laid out in the original blog. In the same way you can move them to a new Yubikey Neo (good that I bought three of them back then!).

Last but not least you have to update the key-servers with your new public key, which is normally done with gpg --send-keys (again see the original blog).

The most tricky part was setting up and distributing the keys on my various computers: The master key remains as usual on offline media only. On my main desktop at home I have the subkeys available, while on my laptop I only have stubs pointing at the Yubikey. This needs a bit of shuffling around, but should be obvious somehow when looking at the previous blogs.

Full disk encryption

I had my Yubikey also registered as unlock device for the LUKS based full disk encryption. The status before the update was as follows:

$ cryptsetup luksDump /dev/sdaN

Version:       	1
Cipher name:   	aes

Key Slot 0: ENABLED
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: ENABLED

I was pretty sure that the slot for the old Yubikey was slot 7, but not certain. So I first registered the new Yubikey in slot 6 with

yubikey-luks-enroll -s 6 -d /dev/sdaN

and checked that I can unlock during boot using the new Yubikey. Then I cleared the slot information in slot 7 with

cryptsetup luksKillSlot /dev/sdaN 7

and again made sure that I can boot using my passphrase (in slot 0) and the new Yubikey (in slot 6).
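If you are similarly unsure which slot an old key occupies, the luksDump output is easy to inspect mechanically. A small Python sketch that parses the output format shown above (the parsing code is mine, not part of cryptsetup or yubikey-luks):

```python
# Sketch: turn `cryptsetup luksDump` "Key Slot N: ENABLED/DISABLED" lines
# into a map of slot number -> enabled?, then list the free slots.

def parse_key_slots(dump_text):
    """Map slot number to True (ENABLED) or False (DISABLED)."""
    slots = {}
    for line in dump_text.splitlines():
        line = line.strip()
        if line.startswith("Key Slot"):
            head, _, state = line.partition(":")
            slots[int(head.split()[2])] = (state.strip() == "ENABLED")
    return slots

dump = """Key Slot 0: ENABLED
Key Slot 1: DISABLED
Key Slot 6: DISABLED
Key Slot 7: ENABLED"""

slots = parse_key_slots(dump)
free_slots = [n for n, enabled in sorted(slots.items()) if not enabled]
```

In the real workflow the dump text would come from running cryptsetup as root; the parsing is the same.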

TOTP/U2F second factor authentication

The last step was re-registering the new Yubikey with all the favorite services as second factor, removing the old key on the way. In my case the list comprises several WordPress sites, GitHub, Google, NextCloud, Dropbox and what else I have forgotten.

Although this is the nearly worst case scenario (ok, the main key was not compromised!), everything went very smooth and easy, to my big surprise. Even my Debian upload ability was not interrupted considerably. All in all it shows that having subkeys on a Yubikey is a very useful and effective solution.

Planet DebianLouis-Philippe Véronneau: Playing with water

H2o Flow gradient boosting job

I'm currently taking a machine learning class and although it is an insane amount of work, I like it a lot. I initially had planned to use R to play around with the database I have, but the teacher recommended I use H2o, a FOSS machine learning framework.

I was a bit sceptical at first since I'm already pretty good with R, but then I found out you could simply import H2o as an R library. H2o replaces most R functions by its own parallelized ones to cut down on processing time (no more doParallel calls) and uses an "external" server you have to run on the side instead of running R calls directly.

H2o Flow gradient boosting model

I was pretty happy with this situation, that is until I actually started using H2o in R. With the huge database I'm playing with, the library felt clunky and I had a hard time doing anything useful. Most of the time, I just ended up with long Java traceback calls. Much love.

I'm sure in the right hands using H2o as a library could have been incredibly powerful, but sadly it seems I haven't earned my black belt in R-fu yet.

H2o Flow variable importance weights

I was pissed for at least a whole day - not being able to achieve what I wanted to do - until I realised H2o comes with a WebUI called Flow. I'm normally not very fond of using web thingies to do important work like writing code, but Flow is simply incredible.

Automated graphing functions, integrated ETA when running resource intensive models, descriptions for each and every model parameters (the parameters are even divided in sections based on your familiarly with the statistical models in question), Flow seemingly has it all. In no time I was able to run 3 basic machine learning models and get actual interpretable results.

So yeah, if you've been itching to analyse very large databases using state of the art machine learning models, I would recommend using H2o. Try Flow at first instead of the Python or R hooks to see what it's capable of doing.

The only downside to all of this is that H2o is written in Java and depends on Java 1.7 to run... That, and be warned: it requires a metric fuckton of processing power and RAM. My poor server struggled quite a bit even with 10 available cores and 10Gb of RAM...

Planet DebianDirk Eddelbuettel: Rcpp 0.12.16: A small update

The sixteenth update in the 0.12.* series of Rcpp landed on CRAN earlier this evening after a few days of gestation in incoming/ at CRAN.

Once again, this release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, the 0.12.14 release in November 2017, and the 0.12.15 release in January 2018, making it the seventeenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1316 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in Bioconductor.

Compared to other releases, this release contains a relatively small change set, but between Kirill, Kevin and myself a few things got cleaned up and solidified. Full details are below.

Changes in Rcpp version 0.12.16 (2018-03-08)

  • Changes in Rcpp API:

    • Rcpp now sets and puts the RNG state upon each entry to an Rcpp function, ensuring that nested invocations of Rcpp functions manage the RNG state as expected (Kevin in #825 addressing #823).

    • The R::pythag wrapper has been commented out; the underlying function has been gone from R since 2.14.0, and ::hypot() (part of C99) is now used unconditionally for complex numbers (Dirk in #826).

    • The long long type can now be used on 64-bit Windows (Kevin in #811 and again in #829 addressing #804).

  • Changes in Rcpp Attributes:

    • Code generated with cppFunction() now uses .Call() directly (Kirill Mueller in #813 addressing #795).
  • Changes in Rcpp Documentation:

    • The Rcpp FAQ vignette is now indexed as 'Rcpp-FAQ'; a stale Gmane reference was removed and an entry on getting compilers under Conda was added.

    • The top-level now has a Support section.

    • The Rcpp.bib reference file was refreshed to current versions.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #150

Here's what happened in the Reproducible Builds effort between Sunday March 4 and Saturday March 10 2018:

diffoscope development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

Mattia Rizzolo backported version 91 to the Debian backports repository.

In addition, Juliana — our Outreachy intern — continued her work on parallel processing.

Bugs filed

In addition, package reviews have been added, 44 have been updated and 26 have been removed in this week, adding to our knowledge about identified issues.

Lastly, two issue classification types have been added: development

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (49)
  • Antonio Terceiro (1)
  • James Cowgill (1)
  • Ole Streicher (1)


This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Krebs on SecurityFlash, Windows Users: It’s Time to Patch

Adobe and Microsoft each pushed critical security updates to their products today. Adobe’s got a new version of Flash Player available, and Microsoft released 14 updates covering more than 75 vulnerabilities, two of which were publicly disclosed prior to today’s patch release.

The Microsoft updates affect all supported Windows operating systems, as well as all supported versions of Internet Explorer/Edge, Office, Sharepoint and Exchange Server.

All of the critical vulnerabilities from Microsoft are in browsers and browser-related technologies, according to a post from security firm Qualys.

“It is recommended that these be prioritized for workstation-type devices,” wrote Jimmy Graham, director of product management at Qualys. “Any system that accesses the Internet via a browser should be patched.”

The Microsoft vulnerabilities that were publicly disclosed prior to today involve Microsoft Exchange Server 2010 through 2016 editions (CVE-2018-0940) and ASP.NET Core 2.0 (CVE-2018-0808), said Chris Goettl at Ivanti. Microsoft says it has no evidence that attackers have exploited either flaw in active attacks online.

But Goettl says public disclosure means enough information was released publicly for an attacker to get a jump start or potentially to have access to proof-of-concept code making an exploit more likely. “Both of the disclosed vulnerabilities are rated as Important, so not as severe, but the risk of exploit is higher due to the disclosure,” Goettl said.

Microsoft says by default, Windows 10 receives updates automatically, “and for customers running previous versions, we recommend they turn on automatic updates as a best practice.” Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Adobe’s Flash Player update fixes at least two critical bugs in the program. Adobe said it is not aware of any active exploits in the wild against either flaw, but if you’re not using Flash routinely for many sites, you probably want to disable or remove this awfully buggy program.

Just last month Adobe issued a Flash update to fix two vulnerabilities that were being used in active attacks in which merely tricking a victim into viewing a booby-trapped Web site or file could give attackers complete control over the vulnerable machine. It would be one thing if these zero-day flaws in Flash were rare, but this is hardly an isolated occurrence.

Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability. Chrome also bundles Flash, but blocks it from running on all but a handful of popular sites, and then only after user approval.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis. Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits.

The latest standalone version of Flash that addresses these bugs is  for Windows, Mac, Linux and Chrome OS. But most users probably would be better off manually hobbling or removing Flash altogether, since so few sites actually require it still. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

Planet DebianThomas Lange: build service now supports creation of VM disk images

A few days ago, I've added a new feature to the build service.

In addition to creating an installation image, the build service can now build bootable disk images. These disk images can be booted in a VM like KVM, VirtualBox or VMware, or in OpenStack.

You can define a disk image size, select a language, set user and root passwords, select a Debian distribution and enable backports with a single click. It's possible to add your public key for access to the root account without a password. This can also be done by just specifying your GitHub account. Several disk formats are supported, like raw (compressed with xz or zstd), qcow2, vdi, vhdx and vmdk. And you can add your own list of packages you want to have inside this OS. After a few minutes the disk image is created and you will get a download link, including a log of the creation process and a link to the FAI configuration that was used to create your customized image.

The new service is available at

If you have any comments, feature requests or feedback, do not hesitate to contact me.

Planet DebianPetter Reinholdtsen: First rough draft Norwegian and Spanish edition of the book Made with Creative Commons

I am working on publishing yet another book related to Creative Commons. This time it is a book filled with interviews and histories from those around the globe making a living using Creative Commons.

Yesterday, after many months of hard work by several volunteer translators, the first draft of a Norwegian Bokmål edition of the book Made with Creative Commons from 2017 was complete. The Spanish translation is also complete, while the Dutch, Polish, German and Ukrainian editions need a lot of work. Get in touch if you want to help make those happen, or would like to translate into your mother tongue.

The whole book project started when Gunnar Wolf announced that he was going to make a Spanish edition of the book. I noticed, and offered some input on how to make a book, based on my experience with translating the Free Culture and The Debian Administrator's Handbook books to Norwegian Bokmål. To make a long story short, we ended up working on a Bokmål edition, and now the first rough translation is complete, thanks to the hard work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first proof reading is almost done, and only the second and third proof reading remains. We will also need to translate the 14 figures and create a book cover. Once it is done we will publish the book on paper, as well as in PDF, ePub and possibly Mobi formats.

The book itself originates as a manuscript on Google Docs, is downloaded as ODT from there and converted to Markdown using pandoc. The Markdown is modified by a script before it is converted to DocBook using pandoc. The DocBook is modified again using a script before it is used to create a Gettext POT file for translators. The translated PO file is then combined with the earlier mentioned DocBook file to create a translated DocBook file, which finally is given to dblatex to create the final PDF. The end result is a set of editions of the manuscript, one English and one for each of the translations.

The translation is conducted using the Weblate web based translation system. Please have a look there and get in touch if you would like to help out with proof reading. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

CryptogramE-Mailing Private HTTPS Keys

I don't know what to make of this story:

The email was sent on Tuesday by the CEO of Trustico, a UK-based reseller of TLS certificates issued by the browser-trusted certificate authorities Comodo and, until recently, Symantec. It was sent to Jeremy Rowley, an executive vice president at DigiCert, a certificate authority that acquired Symantec's certificate issuance business after Symantec was caught flouting binding industry rules, prompting Google to distrust Symantec certificates in its Chrome browser. In communications earlier this month, Trustico notified DigiCert that 50,000 Symantec-issued certificates Trustico had resold should be mass revoked because of security concerns.

When Rowley asked for proof the certificates were compromised, the Trustico CEO emailed the private keys of 23,000 certificates, according to an account posted to a Mozilla security policy forum. The report produced a collective gasp among many security practitioners who said it demonstrated a shockingly cavalier treatment of the digital certificates that form one of the most basic foundations of website security.

Generally speaking, private keys for TLS certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. A CEO being able to attach the keys for 23,000 certificates to an email raises troubling concerns that those types of best practices weren't followed.

I am croggled by the multiple layers of insecurity here.

BoingBoing post.

Worse Than FailureCodeSOD: And Now You Have Two Problems

We all know the old saying: “Some people, when confronted with a problem, think ‘I know, I’ll use regular expressions.’ Now they have two problems.” The quote has a long and storied history, but Roger A’s co-worker decided to take it quite literally.

Specifically, they wanted to be able to build validation rules which could apply a regular expression to the input. Thus, they wrote the RegExpConstraint class:

public class RegExpConstraint
{
        private readonly Regex _pattern;

        private readonly string _unmatchedErrorMessage;
        protected string UnmatchedErrorMessage => _unmatchedErrorMessage;

        public RegExpConstraint(string pattern, string unmatchedErrorMessage)
        {
                _pattern = new Regex(pattern);
                _unmatchedErrorMessage = unmatchedErrorMessage;
        }

        /// <summary>
        /// Check if the given value match the RegExp. Return the unmatched error message if it doesn't, null otherwise.
        /// </summary>
        public virtual string CheckMatch(string value)
        {
                if (!_pattern.IsMatch(value))
                        return _unmatchedErrorMessage;
                return null;
        }
}
This “neatly” solved the problem of making sure that an input string matched a regex, if by “neatly” you mean, “returns a string instead of a boolean value”, but it introduced a new problem: what if you wanted to make certain that it absolutely didn’t match a certain subset of characters. For example, if you wanted “\:*<>|@” to be illegal characters, how could you do that with the RegExpConstraint? By writing a regex like this: [^\:*<>|@]? Don’t be absurd. You need a new class.

public class RegExpExcludeConstraint : RegExpConstraint
{
        private Regex _antiRegex;
        public Regex AntiRegex => _antiRegex;

        public RegExpExcludeConstraint()
                : base(null, null)
        {
        }

        /// <summary>
        /// Constructor
        /// </summary>
        /// <param name="pattern">Regex expression to validate</param>
        /// <param name="antiPattern">Regex expression to invalidate</param>
        /// <param name="unmatchedErrorMessage">Error message in case of invalidation</param>
        public RegExpExcludeConstraint(string pattern, string antiPattern, string unmatchedErrorMessage)
                : base(pattern, unmatchedErrorMessage)
        {
                _antiRegex = new Regex(antiPattern);
        }

        /// <summary>
        /// Check if the constraint match
        /// </summary>
        public override string CheckMatch(string value)
        {
                var baseMatch = base.CheckMatch(value);
                if (baseMatch != null || _antiRegex.IsMatch(value))
                        return UnmatchedErrorMessage;
                return null;
        }
}

Not only does this programmer not fully understand regular expressions, they also haven’t fully mastered inheritance. Or maybe they know that this code is bad, as they named one of their parameters antiPattern. The RegExpExcludeConstraint accepts two regexes, requires that the first one matches, and the second one doesn’t, helpfully continuing the pattern of returning null when there’s nothing wrong with the input.
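
For what it's worth, the entire two-class hierarchy collapses into a single negated character class in most regex engines. Here is a rough sketch in Python (the original code is C#; the forbidden character set and the error message below are just illustrative):

```python
import re

# One pattern does what RegExpExcludeConstraint needs: reject any string
# containing \ : * < > | or @ (the backslash is escaped inside the class).
FORBIDDEN = re.compile(r'[\\:*<>|@]')

def check_match(value, error_message="contains forbidden characters"):
    # Mirrors CheckMatch's convention: error message on failure, None otherwise.
    return error_message if FORBIDDEN.search(value) else None

print(check_match("filename.txt"))   # None
print(check_match("bad|name"))       # contains forbidden characters
```

The same idea works with .NET's Regex class directly, which is rather the point of the article.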

Perhaps the old saying is wrong. I don’t see two problems. I see one problem: the person who wrote this code.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Planet DebianAntoine Beaupré: The cost of hosting in the cloud

This is one part of my coverage of KubeCon Austin 2017. Other articles include:

Should we host in the cloud or on our own servers? This question was at the center of Dmytro Dyachuk's talk, given during KubeCon + CloudNativeCon last November. While many services simply launch in the cloud without the organizations behind them considering other options, large content-hosting services have actually moved back to their own data centers: Dropbox migrated in 2016 and Instagram in 2014. Because such transitions can be expensive and risky, understanding the economics of hosting is a critical part of launching a new service. Actual hosting costs are often misunderstood, or secret, so it is sometimes difficult to get the numbers right. In this article, we'll use Dyachuk's talk to try to answer the "million dollar question": "buy or rent?"

Computing the cost of compute

So how much does hosting cost these days? To answer that apparently trivial question, Dyachuk presented a detailed analysis made from a spreadsheet that compares the costs of "colocation" (running your own hardware in somebody else's data center) versus those of hosting in the cloud. For the latter, Dyachuk chose Amazon Web Services (AWS) as a standard, reminding the audience that "63% of Kubernetes deployments actually run off AWS". Dyachuk focused only on the cloud and colocation services, discarding the option of building your own data center as too complex and expensive. The question is whether it still makes sense to operate your own servers when, as Dyachuk explained, "CPU and memory have become a utility", a transition that Kubernetes is also helping push forward.

Another assumption of his talk is that server uptime isn't that critical anymore; there used to be a time when system administrators would proudly brandish multi-year uptime counters as a proof of server stability. As an example, Dyachuk performed a quick survey in the room and the record was an uptime of 5 years. In response, Dyachuk asked: "how many security patches were missed because of that uptime?" The answer was, of course "all of them". Kubernetes helps with security upgrades, in that it provides a self-healing mechanism to automatically re-provision failed services or rotate nodes when rebooting. This changes hardware designs; instead of building custom, application-specific machines, system administrators now deploy large, general-purpose servers that use virtualization technologies to host arbitrary applications in high-density clusters.

When presenting his calculations, Dyachuk explained that "pricing is complicated" and, indeed, his spreadsheet includes hundreds of parameters. However, after reviewing his numbers, I can say that the list is impressively exhaustive, covering server memory, disk, and bandwidth, but also backups, storage, staffing, and networking infrastructure.

For servers, he picked a Supermicro chassis with 224 cores and 512GB of memory from the first result of a Google search. Once amortized over an aggressive three-year rotation plan, the $25,000 machine ends up costing about $8,300 yearly. To compare with Amazon, he picked the m4.10xlarge instance as a commonly used standard, which currently offers 40 cores, 160GB of RAM, and 4Gbps of dedicated storage bandwidth. At the time he did his estimates, the going rate for such a server was $2 per hour or $17,000 per year. So, at first, the physical server looks like a much better deal: half the price and close to quadruple the capacity. But, of course, we also need to factor in networking, power usage, space rental, and staff costs. And this is where things get complicated.
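
The amortization arithmetic itself is trivial. A quick sketch using the talk's figures (prices are from the talk, not current ones):

```python
# Back-of-the-envelope comparison from Dyachuk's talk: a purchased server
# amortized over a three-year rotation plan versus an always-on cloud instance.
def colo_server_yearly(purchase_price, amortization_years=3):
    # hardware cost spread evenly over the rotation period
    return purchase_price / amortization_years

def cloud_instance_yearly(hourly_rate):
    # an instance running around the clock, all year
    return hourly_rate * 24 * 365

print(round(colo_server_yearly(25_000)))   # 8333  (the Supermicro chassis)
print(round(cloud_instance_yearly(2.0)))   # 17520 (the m4.10xlarge at $2/hour)
```

As the article goes on to explain, this raw hardware comparison is only the starting point; networking, power, space, and staff change the picture considerably.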

First, colocation rates will vary a lot depending on location. While bandwidth costs are often much lower in large urban centers because of proximity to fast network links, real estate and power prices are often much higher. Bandwidth costs are now the main driver in hosting costs.

For the purpose of his calculation, Dyachuk picked a real-estate figure of $500 per standard cabinet (42U). His calculations yielded a monthly power cost of $4,200 for a full rack, at $0.50/kWh. Those rates seem rather high for my local data center, where that rate is closer to $350 for the cabinet and $0.12/kWh for power. Dyachuk took into account that power is usually not "metered billing", when you pay for the actual power usage, but "stepped billing" where you pay for a circuit with a (say) 25-amp breaker regardless of how much power you use in said circuit. This accounts for some of the discrepancy, but the estimate still seems rather too high to be accurate according to my calculations.

Then there's networking: all those machines need to connect to each other and to an uplink. This means finding a bandwidth provider, which Dyachuk pinned at a reasonable average cost of $1/Mbps. But the most expensive part is not the bandwidth; the cost of managing network infrastructure includes not only installing switches and connecting them, but also tracing misplaced wires, dealing with denial-of-service attacks, and so on. Cabling, a seemingly innocuous task, is actually the majority of hardware expenses in data centers, as previously reported. From networking, Dyachuk went on to detail the remaining cost estimates, including storage and backups, where the physical world is again cheaper than the cloud. All this is, of course, assuming that crafty system administrators can figure out how to glue all the hardware together into a meaningful package.

Which brings us to the sensitive question of staff costs; Dyachuk described those as "substantial". These costs are for the system and network administrators who are needed to buy, order, test, configure, and deploy everything. Evaluating those costs is subjective: for example, salaries will vary between different countries. He fixed the yearly cost per person at $250,000 (counting overhead and an actual $150,000 salary) and accounted for three people on staff. Those costs may also vary with the colocation service; some will include remote hands and networking, but he assumed in his calculations that the costs would end up being roughly the same because providers will charge extra for those services.

Dyachuk also observed that staff costs are the majority of the expenses in a colocation environment: "hardware is cheaper, but requires a lot more people". In the cloud, it's the opposite; most of the costs consist of computation, storage, and bandwidth. Staff also introduce a human factor of instability in the equation: in a small team, there can be a lot of variability in ability levels. This means there is more uncertainty in colocation cost estimates.

In our discussions after the conference, Dyachuk pointed out a social aspect to consider: cloud providers are operating a virtual oligopoly. Dyachuk worries about the impact of Amazon's growing power over different markets:

A lot of businesses are in direct competition with Amazon. A fear of losing commercial secrets and being spied upon has not been confirmed by any incidents yet. But Walmart, for example, moved out of AWS and requested that its suppliers do the same.

Demand management

Once the extra costs described are factored in, colocation still would appear to be the cheaper option. But that doesn't take into account the question of capacity: a key feature of cloud providers is that they pool together large clusters of machines, which allow individual tenants to scale up their services quickly in response to demand spikes. Self-hosted servers need extra capacity to cover for future demand. That means paying for hardware that stays idle waiting for usage spikes, while cloud providers are free to re-provision those resources elsewhere.

Satisfying demand in the cloud is easy: allocate new instances automatically and pay the bill at the end of the month. In a colocation, provisioning is much slower and hardware must be systematically over-provisioned. Those extra resources might be used for preemptible batch jobs in certain cases, but workloads are often "transaction-oriented" or "realtime" which require extra resources to deal with spikes. So the "spike to average" ratio is an important metric to evaluate when making the decision between the cloud and colocation.

Cost reductions are possible by improving analytics to reduce over-provisioning. Kubernetes makes it easier to estimate demand; before containerized applications, estimates were per application, each with its margin of error. By pooling together all applications in a cluster, the problem is generalized and individual workloads balance out in aggregate, even if they fluctuate individually. Therefore Dyachuk recommends using the cloud when future growth cannot be forecast, to avoid the risk of under-provisioning. He also recommended "The Art of Capacity Planning" as a good forecasting resource; even though the book is old, the basic math hasn't changed so it is still useful.

The golden ratio

Colocation prices finally overshoot cloud prices after adding extra capacity and staff costs. In closing, Dyachuk identified the crossover point where colocation becomes cheaper at around $100,000 per month, or 150 Amazon m4.2xlarge instances, which can be seen in the graph below. Note that he picked a different instance type for the actual calculations: instead of the largest instance (m4.10xlarge), he chose the more commonly used m4.2xlarge instance. Because Amazon pricing scales linearly, the math works out to about the same once reserved instances, storage, load balancing, and other costs are taken into account.

He also added that the figure will change based on the workload; Amazon is more attractive with more CPU and less I/O. Inversely, I/O-heavy deployments can be a problem on Amazon; disk and network bandwidth are much more expensive in the cloud. For example, bandwidth can sometimes be more than triple what you can easily find in a data center.

Your mileage may vary; those numbers shouldn't be taken as an absolute. They are a baseline that needs to be tweaked according to your situation, workload and requirements. For some, Amazon will be cheaper, for others, colocation is still the best option.

He also emphasized that the graph stops at 500 instances; beyond that lies another "wall" of investment due to networking constraints. At around the equivalent of 2000-3000 Amazon instances, networking becomes a significant bottleneck and demands larger investments in networking equipment to upgrade internal bandwidth, which may make Amazon affordable again. It might also be that application design should shift to a multi-cluster setup, but that implies increases in staff costs.

Finally, we should note that some organizations simply cannot host in the cloud. In our discussions, Dyachuk specifically expressed concerns about Canada's government services moving to the cloud, for example: what is the impact on state sovereignty when confidential data about its citizen ends up in the hands of private contractors? So far, Canada's approach has been to only move "public data" to the cloud, but Dyachuk pointed out this already includes sensitive departments like correctional services.

In Dyachuk's model, the cloud offers significant cost reduction over traditional hosting in small clusters, at least until a deployment reaches a certain size. However, different workloads significantly change that model and can make colocation attractive again: I/O and bandwidth intensive services with well-planned growth rates are clear colocation candidates. His model is just a start; any project manager would be wise to make their own calculations to confirm the cloud really delivers the cost savings it promises. Furthermore, while Dyachuk wisely avoided political discussions surrounding the impact of hosting in the cloud, data ownership and sovereignty remain important considerations that shouldn't be overlooked.

A YouTube video and the slides [PDF] from Dyachuk's talk are available online.

This article first appeared in the Linux Weekly News, under the title "The true costs of hosting in the cloud".

Worse Than FailureDaylight Losing Time

The second Sunday of March has come to pass, which means if you're a North American reader, you're getting this an hour earlier than normal. What a bonus! That's right, we all got to experience the mandatory clock-changing event known as Daylight Saving Time. While the sun, farm animals, toddlers, etc. don't care about an arbitrary changing of the clock, computers definitely do.

Early in my QA career, I had the great (dis)pleasure of fully regression testing electronic punch clocks on every possible software version every time a DST change was looming. It was every bit as miserable as it sounds but was necessary because if punches were an hour off for thousands of employees, it would wreak havoc on our clients' payroll processing.

Submitter Iain would know this all too well after the financial services company he worked for experienced a DST-related disaster. As a network engineer, Iain was in charge of the monitoring systems. Since their financial transactions were very dependent on accurate time, he created a monitor that would send him an alert if any of the servers drifted three or more seconds from what the domain controllers said the time should be. It rarely ever went off since the magic of NTP was in use to keep all the server clocks correct.

Victory! Congress passes daylight saving bill, an early 20th century propaganda poster, featuring Uncle Sam telling you to get your hoe ready

One fateful early morning of the 2nd Sunday in March, Iain's phone exploded with alerts from the monitor. Two load-balanced web servers were alternately complaining about being an entire hour off from the actual time. The servers in question were added in recent months and had never caused an issue before.

He rolled out of bed and grabbed his laptop to begin troubleshooting. The servers were supposed to sync their time with their domain controller, which would NTP with an external stratum 1 time server. He figured one or more of the servers had network connectivity issues when the time change occurred and were now confused as to who had the right time.

Iain sent an NTP packet to each of the troubled servers expecting to see the domain controller as the reference server. Instead, he saw the IP addresses of TroublesomeServer1 and TroublesomeServer2. Thinking he did something wrong in an early morning fog, he ran it again only to get the same result. It seemed that the two servers were pointed to each other for NTP.

While that was a ridiculous setup, it wouldn't explain why they were off by an entire hour and kept switching their times. Iain noticed that the old-fashioned clock on his desk showed the time was a bit after 2 AM, while the time on his laptop was a bit after 3 AM. It dawned on him that the time issues had to be related to the Daylight Saving Time change. The settings for that were kept in the load balancer, which he had read-only access to.

In the load balancer console, he found that TroublesomeServer1 was correctly set to update its time for Daylight Saving, while TroublesomeServer2 was not. Since they were incorrectly set to each other for NTP, when TroublesomeServer1 jumped ahead an hour, TroublesomeServer2 would follow. But then TroublesomeServer2 would realize it wasn't supposed to adjust for DST, so it would jump back an hour, bringing TroublesomeServer1 with it. This kept repeating itself, which explained the volume of alerts Iain got.
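
The feedback loop is easy to reproduce in a toy model. This is a made-up simulation of the behavior described above, not Iain's actual configuration:

```python
# Clock offsets in hours from true (non-DST) time. Server1's load-balancer
# setting applies the DST jump; server2's does not; and each server takes
# its time from the other over NTP, so neither view can ever win.
def simulate(rounds=3):
    s1 = s2 = 0
    history = []
    for _ in range(rounds):
        s1 = 1           # server1 applies the DST jump: one hour ahead
        s2 = s1          # server2 syncs to server1 via NTP
        history.append((s1, s2))
        s2 = 0           # server2 re-asserts "no DST": jumps back an hour
        s1 = s2          # server1 syncs to server2, and the cycle repeats
        history.append((s1, s2))
    return history

print(simulate(2))  # [(1, 1), (0, 0), (1, 1), (0, 0)]
```

Both clocks swing a full hour on every round, which is exactly the alternating alert storm the monitor produced.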

Since he was powerless to correct the setting on the load balancer, he made a call to his manager, who escalated to another manager and so on until they tracked down who had access to make the setting change. Three hours later, the servers were on the correct time. But the mess of correcting all the overnight transactions that happened during this window was just beginning. The theoretical extra hour of daylight provided was negated by everyone spending hours in a windowless conference room adjusting financial data by hand.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

CryptogramTwo New Papers on the Encryption Debate

Seems like everyone is writing about encryption and backdoors this season.

I recently blogged about the new National Academies report on the same topic.

Here's a review of the National Academies report, and another of the East West Institute's report.

EDITED TO ADD (3/8): Commentary on the National Academies study by the EFF.

Planet DebianJunichi Uekawa: I've been writing js more for chrome extensions.

I've been writing js more for chrome extensions. I write python using pandas for plotting graphs now. I wonder if there's a good graphing solution for js. I don't remember how I crafted R graphs anymore.

Planet Linux AustraliaDavid Rowe: Measuring SDR Noise Figure in Real Time

I’m building a sensitive receiver for FreeDV 2400A signals. As a first step I tried a HackRF with an external Low Noise Amplifier (LNA), and attempted to measure the Noise Figure (NF) using the system Mark and I developed two years ago.

However I was getting results that didn’t make sense and were not repeatable. So over the course of a few early morning sessions I came up with a real time NF measurement system, and wrinkled several bugs out of it. I also purchased a few Airspy SDRs, and managed to measure NF on them as well as the HackRF.

It’s a GNU Octave script called nf_from_stdio.m that accepts a sample stream from stdio. It assumes the signal contains a sine wave test tone from a calibrated signal generator, and noise from the receiver under test. By sampling the test tone it can establish the gain of the receiver, and by sampling the noise spectrum an estimate of the noise power.

The script can be driven from command line utilities like hackrf_transfer or airspy_rx, or via software receivers like gqrx that can send SSB-demodulated samples over UDP. Instructions are at the top of the script.


I’m working from a home workbench, with rudimentary RF skills, a strong signal processing background and determination. I do have a good second hand signal generator (Marconi 2031), that cost AUD$1000 at a Hamfest, and a Rigol 815 Spec An (generously donated by Mel K0PFX, and Jim, N0OB) to support my FreeDV work. Both very useful and highly recommended. I cross-checked the sig-gen calibrated output using an oscilloscope and external attenuator (within 0.5dB). The Rigol is less accurate in amplitude (1.5dB on its specs), but useful for relative measurements, e.g. comparing cable attenuation.

The NF test method I have used requires a calibrated signal source. I performed my tests at 435MHz using a -100dBm carrier generated from the Marconi 2031 sig-gen.

Usage and Results

The script accepts real samples from a SSB demod, or complex samples from an IQ source. Tune your receiver so that the sinusoidal test tone is in the 2000 to 4000 Hz range as displayed on Fig 2 of the script. In general for minimum NF turn all SDR gains up to maximum. Check Fig 1 to ensure the signal is not clipping, reduce the baseband gain if necessary.

Noise is measured between 5000 and 10000 Hz, so ensure the receiver passband is flat in that region. When using gqrx, I drag the filter bandwidth out to 12000 Hz.

The noise estimates are less stable than the tone power estimate, leading to some sample-to-sample variation in the NF estimate. I take the median of the last five estimates.
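
For the curious, the core arithmetic can be sketched in a few lines. This is a simplified Python illustration of the method described above, not the actual nf_from_stdio.m, and the example numbers are invented:

```python
import math

def noise_figure_db(tone_dbfs, noise_dbfs, noise_bw_hz, tone_dbm=-100.0):
    """Estimate receiver noise figure from a calibrated test tone.

    tone_dbfs   -- measured power of the sig-gen tone in the SDR output
    noise_dbfs  -- measured noise power in a tone-free band (e.g. 5-10 kHz)
    noise_bw_hz -- width of that noise measurement band, in Hz
    tone_dbm    -- calibrated tone level at the receiver input
    """
    gain_db = tone_dbfs - tone_dbm                         # receiver gain
    noise_density_dbfs = noise_dbfs - 10 * math.log10(noise_bw_hz)
    input_noise_dbm_per_hz = noise_density_dbfs - gain_db  # refer to input
    return input_noise_dbm_per_hz + 174.0                  # vs thermal floor

# Made-up example: an 80 dB gain receiver whose output noise corresponds
# to about -170 dBm/Hz at the input works out to roughly a 4 dB NF.
print(round(noise_figure_db(-20.0, -53.0, 5000.0), 1))  # 4.0
```

The real script does the extra work of estimating the tone and noise powers from the spectrum of the sample stream before applying this arithmetic.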

I tried supplying samples to nf_from_stdio using two methods:

  1. Using gqrx in UDP mode to supply samples over UDP. This allows easy tuning and the ability to adjust the SDR gains in real time, but requires a few steps to set up
  2. Using a “single” command line approach that consists of a chain of processing steps concatenated together. Once your signal is tuned you can start the NF measurements with a single step.

Instructions on how to use both methods are at the top of nf_from_stdio.m

Here are some results using both gqrx and command line methods, with and without an external (20dB gain/1dB NF) LNA. They were consistent across two laptops.

Noise figure (dB):

SDR          Gqrx+LNA  Cmd Line+LNA  Cmd Line, no LNA
AirSpy Mini     2.0        2.2             7.9
AirSpy R2       1.7        1.7             7.0
HackRF One      2.6        3.4            11.1

The results with LNA are what we would expect for system noise figures with a good LNA at the front end.

The “no LNA” Airspy NF results are curious – the Airspy specs state a NF of just 3.5dB. So we contacted Airspy via Twitter and email to see how they measured their stated NF. We haven’t received a response to date. I posted to the Airspy mailing list and one gentleman (Dave – WØLEV) kindly replied and has measured noise figures of 4dB using calibrated noise sources and attenuators.

Looking into the data sheets for the Airspy, it appears the R820T tuner at the front end of the Airspy has a NF of 3.5dB. However a system NF will always be worse than the first device, as other devices (e.g. the ADC) also inject noise.

Other possibilities for my figures are measurement error, ambient noise sources at my site, frequency dependent NF, or variations in individual R820T samples.

In our past work we have used Bit Error Rate (BER) results as an independent method of confirming system noise figure. We found a close match between theoretical and measured BER when testing with and without a LNA. I’ll be repeating similar low level BER tests with FreeDV 2400A soon.

Real Time Noise Figure

It’s really nice to read the system noise figure in real time. For example you can start it running, then experiment with grounding, tightening connectors, or moving the SDR away from the laptop, or connect/disconnect a LNA in real time and watch the results. Really helps catch little issues in these difficult to perform tests. After all – we are measuring thermal noise, a very weak signal.

Some of the NF problems I could find and remove with a real time measurement:

  • The Airspy mini is nearly 1dB worse on the front left USB port than the rear left USB port on my X220 Thinkpad!
  • The Airspy mini really likes USB extension cables with ferrite clamps – without the ferrite I found the LNA was ineffective in reducing the NF – being swamped by conducted laptop noise I guess.
  • Loose connectors can make the noise figure a few dB worse. Wiggle and tighten them all.
  • Position of SDR/LNA near the radio and other bench equipment.
  • My magic touch can decrease noise figure! Grounding effect I guess?

Development Bugs

I had to work through several problems before I started getting sensible numbers. This was quite discouraging for a while as the numbers were jumping all over the place. However it's fair to say measuring NF is a tough problem. From what I can Google, it's an uncommon measurement for people in home workshops.

These bugs are worth mentioning as traps for anyone else attempting home NF measurements:

  1. Cable loss: I found a 1.5dB loss in some cable I was using between the sig gen and the SDR under test. I measured the loss by comparing a few cables connected between my sig gen and spec an. While the 815 is not accurate in terms of absolute calibration (rated at 1.5dB), it can still be used for comparative measurements. The cable loss can be added to the calculations, or you can just choose a low-loss cable.
  2. Filter shape: I had initially placed the test tone under 1000Hz. However I noticed that the gqrx signal had a few dB of high pass filtering in this region (Fig 2 below). Not an issue for regular USB demodulation, but a few dB really matters for NF! So I moved the test tone to the 2-4kHz region where the gqrx output was nice and flat.
  3. A noisy USB port, especially without a clamp, on the Airspy Mini (photo below). Found by trying different SDRs and USB ports, and finally a clamp. Oh Boy, never expected that one. I was connecting the LNA and the NF was stuck at 4dB – swamped by noise from the USB Port I guess.
  4. Compression: Worth checking the SDR output is not clipped or in compression. I adjusted the sig gen output up and down 3dB, and checked the power estimate from the script changed by 3dB. Also worth monitoring Fig 1 from the script, to make sure it’s not hitting the limits. The HackRF needed its baseband gain reduced, but the Airspys were OK.
  5. I used the latest Airspy tools built from source (rather than the Ubuntu 17 package) to get stdout piping working properly, and to not have other status information from printfs injected into the sample stream!
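For anyone wanting to try this at home, the core calculation behind a real-time NF readout is quite small. Below is a minimal Python sketch of the tone-injection method (an illustration of the technique, not the actual script used for these measurements): measure the SNR of a test tone of known power in the SDR's demodulated output, then reference it to thermal noise at -174 dBm/Hz.

```python
import numpy as np

def measure_snr_db(samples, fs, tone_hz, tone_width_hz=50.0):
    """Ratio of test-tone power to everything-else power, via a windowed FFT."""
    n = len(samples)
    spec = np.abs(np.fft.rfft(samples * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    tone_bins = np.abs(freqs - tone_hz) < tone_width_hz
    p_tone = spec[tone_bins].sum()
    p_noise = spec[~tone_bins].sum()
    return 10.0 * np.log10(p_tone / p_noise)

def noise_figure_db(tone_dbm, snr_db, noise_bw_hz):
    """Tone-injection noise figure:

        NF = P_tone(dBm) + 174 - 10*log10(B) - SNR(dB)

    where -174 dBm/Hz is the thermal noise density at 290 K and B is the
    noise bandwidth the SNR was measured in.  Any known cable loss between
    the sig gen and the SDR (bug 1 above) can simply be added to tone_dbm.
    """
    return tone_dbm + 174.0 - 10.0 * np.log10(noise_bw_hz) - snr_db
```

For example, a -100 dBm tone measured with 30 dB SNR in a 3 kHz bandwidth gives NF ≈ 9.2 dB; recomputing this a few times a second over fresh sample blocks is what makes the wiggle-the-connectors workflow possible.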


Thanks Mark, for the use of your RF hardware, and I’d also like to mention the awesome CSDR tools and fantastic gqrx software – both very handy for SDR work.

Valerie AuroraAdvice for women in tech who are tired of talking about women in tech

To be a woman in tech is to be asked to talk about being a woman in tech, regardless of the desires or knowledge of the individual, unique woman in tech in question (see The Unicorn Law). This is a frustrating part of being a member of a marginalized group in any field of endeavor: being expected to speak for, represent, and advocate for your group, regardless of your own personal inclinations. Even women in tech who actively embrace talking about women in tech want to choose if, when, and how they talk about women in tech, and not do so on command by others.

As a woman in tech activist, I’m here to tell women in tech: it’s 100% fine for you to not talk about women in tech if you don’t want to! It’s literally not your job! Your job is to do tech stuff. If someone really wants you to talk about women in tech, they can darn well offer to pay you for it, and you can still say, “Nope, don’t want to.”

Here are the reasons for you not to feel guilty about not wanting to be an activist, followed by some coping strategies for when you are asked to talk about women in tech. But first, some disclaimers.

This post presumes that you don’t want to harm women in tech as a whole; if you don’t feel solidarity with other women in tech or feel fine harming other women in tech to get ahead, this post isn’t for you. Likewise, if you are a woman in tech and want to talk about women in tech more than you are now, I fully support your decision, speaking as a programmer who became a full-time activist herself. Doing this work is difficult and often unrewarding; let me at least thank you and support you for doing it. If you want to point out that the ideas in this post apply to another marginalized group, or to fields other than tech: I agree, I just know the most about being a woman in tech and so that’s what I’m writing about.

Reasons not to feel guilty

Men should do more for women in tech. Many women in tech feel guilty for not helping other women in tech more, despite the fact that equivalent men often have more time, energy, power, and influence to support women in tech. I once felt guilty as a junior engineer when an older, more experienced woman in my group left, because she had previously asked me to mentor her (!!!) and I refused because I felt unqualified. At the same time, my group was filled with dozens of more knowledgeable and powerful men who felt no personal responsibility at all for her departure. Men aren’t putting in their fair share of work to support women in tech yet. Until they do, feel free to flip the question around and ask what men are doing to support women in tech.

Women are punished for advocating for women in tech. Women who do speak about women in tech are often accused of doing it for personal gain, which is hilarious. I can’t think of a single woman in tech whose lifetime earnings were improved by saying anything about women in tech that wasn’t “work harder and make more money for corporations.” In reality, the research shows that the careers of women and other members of marginalized groups are actually harmed if they appear to be advocating for members of their own group. Feel free to decline to do work that will harm your career. (And if you do it anyway: thank you!!!)

Women in tech already have to do more work. Women in tech already have to do more work in order to get the same credit as an equivalent man. In addition to having to do more of our technical work to be perceived as contributing equally, we are also expected to do emotional labor for free: listening to people’s problems, expressing empathy, doing “office housework” like arranging parties and birthday cards, smiling and being cheerful, taking care of visitors, and welcoming new employees. We are also expected to help and assist men with their jobs without getting credit, and punished when we stick to our own work. Add on to that the job of talking about women in tech, which is not only unrewarded but often punished. While you’ll get pushback for turning down any of this free labor, feel free to wiggle out of as much of it as possible.

Activism is a whole separate job. Activism is a different job from a job in tech. It needs different skills and requires different aptitudes from most tech jobs. Some people have both the skills and aptitude (and the free time) to work a tech job and also be an activist; don’t feel strange if you’re not one of those people.

You can support women in tech in other ways. If you do want to support women in tech, but don’t feel comfortable being an activist yourself, there are plenty of other ways to support women in tech. You can give money to organizations that support women in tech. You can hire more women in tech. You can invest in women in tech. You can be a supportive spouse to a woman in tech. You can mentor women in tech. Feel free to be creative about how you support women in tech and don’t let other people guilt you into their ideas for how you should be supporting women in tech.

You are being a role model for women in tech. Women in tech can help women in tech simply by existing and not actively harming other women in tech. You can speak or write about your tech job. You can agree to interviews with the condition of not being asked about women in tech. You can get promoted and raise your salary. In other words, keep doing your job, and avoid doing things that harm women in tech in the long-term. Avoiding harm is harder than it sounds and takes some expertise and learning to get right, but some rules of thumb are: don’t push other marginalized folks down to give yourself a leg up, do recognize there are many different ways to be a woman in tech, do default to listening over speaking when it comes to subjects you’re not an expert in (which may be activism).

Coping strategies

Here are a few coping strategies for when you are inevitably asked to talk about women in tech. You can use these strategies if you never want to talk about women in tech, or if you just don’t want to talk about women in tech in this particular situation. I personally find talking about women in tech fairly boring when the other person thinks they know more than they actually do about the topic, so I often use one of these techniques in that situation.

Make a list of other people to pass requests on to. Sure, you don’t want to give the one millionth talk on What It’s Like to Be a Woman in Programming Community X. But perhaps someone else has started a Women in Programming Community X group and would love to give a talk on the subject. You can also make a list of books or websites or other resources and tell people that while you don’t know much about career advice for women in tech, you’ve heard that “What Works for Women at Work” has some good tips.

Suggest that men do the work instead. When you suggest men do the work to support women in tech, you’ll get some predictable pushback. Lack of knowledge: Remind them that the research exists and can be learned by reading it. Feeling afraid/scared/out of place: Remind them that that is how women feel in male-dominated spaces. Don’t you feel guilty: No, but if I had the power men do, I’d feel guilty for not using it. After a few of these annoying discussions, many people will stop asking you to do women in tech stuff.

Point out your lack of expertise. There’s nothing about being a woman in tech that necessarily makes you an expert on how to support women in tech in general. People will often ask women in tech to do things or make statements in areas they don’t have expertise in; get used to saying “I don’t know about that,” or “I haven’t studied that.” Lots of requests to speak for all women in tech or to reassure people that they aren’t personally sexist can be shot down this way.

Change the subject. If people ask you about women in tech, you often have an easy subject change: your job! Tell them about your project, ask them about their project, ask about a controversial research topic in your area of tech – it’s hard to object to a woman in tech wanting to talk about tech.

Practice saying no. For many people, it’s hard to say no, and it’s even harder when you’re a member of a marginalized group and people expect you to do what they say. Practicing some go-to words and phrases can help with saying no in the moment. It can also help reduce the feelings of guilt if you imagine the situation in your head and then go over all the reasons not to feel guilty.

Some examples of putting these coping strategies into practice:

“Will you write a blog post for International Women’s Day?”
“Thanks for the invitation, but I’m focusing on other projects right now. Have you thought about writing something yourself?”

“We need a woman keynote speaker for my conference. Will you speak? We pay travel.”
“I appreciate the invitation, but I’m only taking paid speaking engagements right now.”

“What do you think about Susan Fowler’s blog post?”
“You know, I haven’t had time to think about it because I’ve been so busy. Can I bring you up to date on my project?”

“We’re doing great on gender equality at our company. Right?”
“I’m afraid I don’t have enough information to say either way. If you really wanted to know, I’d suggest paying an outside expert to do a rigorous study.”

“Will you join this panel on women in computing for Ada Lovelace Day?”
“Thanks for thinking of me, but I’m taking a break from non-technical speaking appearances.”

“I got approval for you to go to Grace Hopper Celebration! I assumed you wanted to go.”
“Wow, that was really kind of you, but I think other people on my team will get more out of it than I would.”

“Boy, that Ellen Pao really screwed things up for women in venture capital, don’t you agree?”
“That’s not really something I feel confident speaking about. I’ve got to get back to work, see you at lunch!”

“How does it feel to be the only woman at this conference?”
“That’s not something I’m comfortable talking about. What talk are you going to next?”

“We really want to hire more women, but they just aren’t applying to our job postings! What do you think we’re doing wrong?”
“I’m not a recruiting expert, sorry! That sounds like something you should hire a professional to figure out.”

“I’m putting together a book of essays on women in tech! Will you write a chapter for me for free?”

“Why are you so selfish? Why won’t you do more to help other women?”
“I’m doing what’s right for me.”

For more advice on shutting down unwelcome conversations, check out Captain Awkward’s “Broken Record” technique.

Whatever your decision about if, when, and how you want to talk about women in tech, we hope these techniques are useful to you!

Planet DebianBen Hutchings: Debian LTS work, February 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked 13 hours. I will carry over 2 hours to March.

I made another release on the Linux 3.2 longterm stable branch (3.2.99) and started the review cycle for the next update (3.2.100). I rebased the Debian package onto 3.2.99 but didn't upload an update to Debian this month.

I also discussed the possibilities for cooperation between Debian LTS and CIP, briefly reviewed leptonlib for additional security issues, and updated the wiki page about the status of Spectre and Meltdown in Debian.


Krebs on SecurityChecked Your Credit Since the Equifax Hack?

A recent consumer survey suggests that half of all Americans still haven’t checked their credit report since the Equifax breach last year exposed the Social Security numbers, dates of birth, addresses and other personal information on nearly 150 million people. If you’re in that fifty percent, please make an effort to remedy that soon.

Credit reports from the three major bureaus — Equifax, Experian and TransUnion — can be obtained online for free at — the only Web site mandated by Congress to serve each American a free credit report every year. is run by a Florida-based company, but its data is supplied by the major credit bureaus, which struggled mightily to meet consumer demand for free credit reports in the immediate aftermath of the Equifax breach. Personally, I was unable to order a credit report for either me or my wife even two weeks after the Equifax breach went public: The site just kept returning errors and telling us to request the reports in writing via the U.S. Mail.

Based on thousands of comments left here in the days following the Equifax breach disclosure, I suspect many readers experienced the same but forgot to come back and try again. If this describes you, please take a moment this week to order your report(s) (and perhaps your spouse’s) and see if anything looks amiss. If you spot an error or something suspicious, contact the bureau that produced the report to correct the record immediately.

Of course, keeping on top of your credit report requires discipline, and if you’re not taking advantage of all three free reports each year you need to get a plan. My strategy is to put a reminder on our calendar to order a new report every four months or so, each time from a different credit bureau.

Whenever stories about credit reports come up, so do the questions from readers about the efficacy and value of credit monitoring services. KrebsOnSecurity has not been particularly kind to the credit monitoring industry; many stories here have highlighted the reality that they are ineffective at preventing identity theft or existing account fraud, and that the most you can hope for from them is that they alert you when an ID thief tries to get new lines of credit in your name.

But there is one area where I think credit monitoring services can be useful: Helping you sort things out with the credit bureaus in the event that there are discrepancies or fraudulent entries on your credit report. I’ve personally worked with three different credit monitoring services, two of which were quite helpful in resolving fraudulent accounts opened in our names.

At $10-$15 a month, are credit monitoring services worth the cost? Probably not on an annual basis, but perhaps during periods when you actively need help. However, if you’re not already signed up for one of these monitoring services, don’t be too quick to whip out that credit card: There’s a good chance you have at least a year’s worth available to you at no cost.

If you’re willing to spend the time, check out a few of the state Web sites which publish lists of companies that have had a recent data breach. In most cases, those publications come with a sample consumer alert letter providing information about how to sign up for free credit monitoring. California publishes probably the most comprehensive such lists at this link. Washington state published their list here; and here’s Maryland’s list. There are more.

It’s important for everyone to remember that as bad as the Equifax breach was (and it was a dumpster fire all around), most of the consumer data exposed in the breach has been for sale in the cybercrime underground for many years on a majority of Americans. If anything, the Equifax breach may have simply refreshed some of those criminal data stores.

That’s why I’ve persisted over the years in urging my fellow Americans to consider freezing their credit files. A security freeze essentially blocks any potential creditors from being able to view or “pull” your credit file, unless you affirmatively unfreeze or thaw your file beforehand.

With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you (i.e., view your credit file).

Bear in mind that if you haven’t yet frozen your credit file and you’re interested in signing up for credit monitoring services, you’ll need to sign up first before freezing your file. That’s because credit monitoring services typically need to access your credit file to enroll you, and if you freeze it they can’t do that.

The previous two tips came from a primer I wrote a few days after the Equifax breach, which is an in-depth Q&A about some of the more confusing aspects of policing your credit, including freezes, credit monitoring, fraud alerts, credit locks and second-tier credit bureaus.

Planet DebianElena Gjevukaj: CoderGals Hackathon

CoderGals Hackathon was organized for the first time in my country. The event took place in the beautiful city of Prizren. The hackathon, which ran for 24 to 48 hours, started as the idea of two girls majoring in Computer Science, Qendresa and Albiona Hoti.

Thanks to them, we had the chance to work on exciting projects as well as be mentored by key tech people including: Mergim Cahani, Daniel Pocock, Taulant Mehmeti, Mergim Krasniqi, Kolos Pukaj, Bujar Dervishaj, Arta Shehu Zaimi and Edon Bajrami.

We brainstormed for about 3-4 hours to decide on a project. We discussed many ideas, ranging from the Doppler effect to GUI interfaces for phone calls. Finally, we ended up building a project that links your PC with your phone, so you don't have to juggle both devices when you need to add a contact, make a call or even send text messages. We called it the Phone Client project.

You can check our work online:

Phone Client

It was a challenge for us because we worked for the first time on Debian OS.

Projects that other girls worked on:

Planet DebianVasudev Kamath: Biboumi - A XMPP - IRC Gateway

IRC is a communication mode (technically a communication protocol) used by many Free Software projects for communication and collaboration. It is serving these projects well even 30 years after its inception. Though I'm pretty much okay with IRC, I had the problem of not being able to use it from my mobile phone. The main problem is an inconsistent network connection, since IRC needs to be always connected. This is where I came across Biboumi.

Biboumi by itself does not have anything to do with mobile phones; it's just a gateway that lets you connect to an IRC channel from any XMPP client as if it were an XMPP MUC room. The benefit of this is that you get to enjoy some XMPP features in your IRC channel (not all, but those which can be mapped).

I run Biboumi with my ejabberd instance, and thereby I can now connect to some of the Debian IRC channels directly from my phone using the Conversations XMPP client for Android.

Biboumi is packaged for Debian; though I'm a co-maintainer of the package, most of the hard work is done by Jonas Smedegaard in keeping the package in shape. It is also available in stretch-backports (though slightly outdated, as it's not packaged by us for backports). Once you install the package, copy the example configuration file from /usr/share/doc/biboumi/examples/example.conf to /etc/biboumi/biboumi.cfg and modify the values as needed. Below is my sample file with the password redacted.
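(The sample file itself has not survived in this copy of the post. As an illustration only, biboumi.cfg is a set of key=value lines along these lines; the hostname and password below are placeholders and must match what your XMPP server expects:)

```ini
# Placeholder values, not the author's actual configuration
hostname=biboumi.localhost
password=xxx
xmpp_server_ip=127.0.0.1
port=8888
```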


Explanation for all the key, values in the configuration file is available in the man page (man biboumi).

Biboumi is configured as external component of the XMPP server. In my case I'm using ejabberd to host my XMPP service. Below is the configuration needed for allowing biboumi to connect with ejabberd.

  port: 8888
  ip: ""
  module: ejabberd_service
  access: all
  password: xxx

password field in biboumi configuration should match password value in your XMPP server configuration.

After doing the above configuration, reload ejabberd (or your XMPP server) and start biboumi. The biboumi package provides a systemd service file, so you might need to enable it first. That's it: now you have an XMPP to IRC gateway ready.

You might notice that I'm using a local host name for the hostname key, as well as in the ip field of the ejabberd configuration. This is because TLS support was added to the biboumi Debian package only after the 7.2 release, as botan 2.x was not available in Debian until that point. Hence, using a proper domain name and making biboumi listen publicly is not safe, at least prior to Debian package version 7.2-2. Making the biboumi service public also means you will need to handle spam bots trying to connect through your service to IRC, which might get your VPS banned from IRC.

Connection Semantics

Once biboumi is configured and running, you can use an XMPP client of your choice (Gajim, Conversations, etc.) to connect to IRC. To connect to OFTC from your XMPP client, you need to use the following address in the Group Chat section

Replace the part after @ with what you have configured in the hostname field of the biboumi configuration. To join a specific channel on an IRC server, you need to join the group conversation with the following format

If your nickname is registered and you want to identify yourself to the IRC server, you can do that by joining a group conversation with NickServ using the following address

Once connected, you can send NickServ commands directly in this virtual channel, like identify password nick. It is also possible to configure XMPP clients like Gajim to send ad-hoc commands on connection to a particular IRC server for identifying yourself with IRC servers. But this part I did not get working in Gajim.
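(The literal addresses appear to have been lost from this copy of the post. Reconstructed from biboumi's documented addressing scheme, and using biboumi.localhost and #debian-in purely as stand-ins for your own hostname value and channel, they look roughly like this:)

```
%irc.oftc.net@biboumi.localhost            the server's virtual channel
#debian-in%irc.oftc.net@biboumi.localhost  a specific channel on that server
nickserv%irc.oftc.net@biboumi.localhost    a private "channel" with NickServ
```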

If you are running your own XMPP server then biboumi gives you best way to connect to IRC from your mobile phones. And with applications like Conversation running XMPP application won't be hard on your phone battery.



Planet DebianJeremy Bicha: webkitgtk in Debian Stretch: Report Card

webkitgtk is the GTK+ port of WebKit. webkitgtk provides web functionality for many things including GNOME Online Accounts’ login panels; Evolution’s HTML email editor and viewer; and the engine for the Epiphany web browser (also known as GNOME Web).

Last year, I announced here that Debian 9 “Stretch” included the latest version of webkitgtk (Debian’s package is named webkit2gtk). At the time, I hoped that Debian 9 would get periodic security and bugfix updates. Nine months later, let’s see how we’ve been doing.

Release History

Debian 9.0, released June 17, 2017, included webkit2gtk 2.16.3 (up to date).

Debian 9.1 was released July 22, 2017 with no webkit2gtk update (2.16.5 was the current release at the time).

Debian 9.2, released October 8, 2017, included 2.16.6 (There was a 2.18.0 release available then but for the first stable update, we kept it simple by not taking the brand new series.)

Debian 9.3 was released December 9, 2017 with no webkit2gtk update (2.18.3 was the current release at the time).

Debian 9.4 released March 10, 2018 (today!), includes 2.18.6 (up to date).

Release Schedule

webkitgtk development follows the GNOME release schedule and produces new major updates every March and September. Only the current stable series is supported (although sometimes there can be a short overlap; 2.14.6 was released at the same time as 2.16.1). Distros need to adopt the new series every six months.

Like GNOME, webkitgtk uses even numbers for stable releases (2.16 is a stable series, 2.16.3 is a point release in that series, but 2.17.3 is a development release leading up to 2.18, the next stable series).

There are webkitgtk bugfix releases, approximately monthly. Debian stable point releases happen approximately every two or three months (the first point release was quicker).

In a few days, webkitgtk 2.20 will be released. Debian 9.5 will need to include 2.20.1 (or 2.20.2) to keep users on a supported release.

Report Card

Across the five Debian 9 releases, we have been up to date in 2 or 3 of them (depending on how you count the 9.2 release).

Using a letter grade scale, I think I’d give Debian a B or B- so far. But this is significantly better than Debian 8 which offered no webkitgtk updates at all except through backports. In my grading, Debian could get a A- if we consistently updated webkitgtk in these point releases.

To get a full A, I think Debian would need to push the new webkitgtk updates (after a brief delay for regression testing) directly as security updates without waiting for point releases. Although that proposal has been rejected for Debian 9, I think it is reasonable for Debian 10 to use this model.

If you are a Debian Developer or Maintainer and would like to help with webkitgtk updates, please get in touch with Berto or me. I, um, actually don’t even run Debian (except briefly in virtual machines for testing), so I’d really like to turn over this responsibility to someone else in Debian.


I find the Repology webkitgtk tracker to be fascinating. For one thing, I find it humorous how the same package can have so many different names in different distros.

Planet DebianAndrew Shadura: Say no to Slack, say yes to Matrix

Of all proprietary chatting systems, Slack has always seemed one of the worst to me. Not only is it a closed proprietary system with no sane clients, open source or not; it is not just one walled garden, as Facebook or WhatsApp are, but a constellation of walled gardens, isolated from each other. To be able to participate in multiple Slack communities, the user has to create multiple accounts and keep multiple chat windows open all the time. Federation? Self-hosting? Owning your data? All of those are not a thing in Slack. Until recently, it was possible to at least keep the logs of all conversations locally by connecting to the chat using IRC or XMPP if the gateway was enabled.

Now, with Slack shutting down its gateways, not only can you not keep the logs on your computer, you also cannot use a client of your choice to connect to Slack. They also began changing the bots API, which was likely the reason the Matrix-to-Slack gateway didn’t work properly at times. The issue has since resolved itself, but Slack doesn’t give any guarantees the gateway will continue working, and obviously they aren’t really interested in keeping it working.

So, following Gunnar Wolf’s advice (consider also reading this article by Megan Squire), I recommend you stop using Slack. If you prefer an isolated chat system with features Slack provides, and you can self-host, consider MatterMost or Rocket.Chat. Both seem to provide more or less the same features as Slack, but don’t lock you in, and you can choose to either use their paid cloud offering, or run it on your own server. We’ve been using MatterMost at Collabora since July last year, and while it’s not perfect, it’s not a bad piece of software.

If you would prefer a system you can federate, you may be interested to have a look at Matrix. Matrix is an open decentralised protocol and ecosystem, which architecturally looks similar to XMPP, but uses different technologies and offers a richer and more modern baseline, including VoIP, end-to-end encryption, decentralised history and content storage, easy bot integration and more. The web client for Matrix, Riot is comparable to Slack, but unlike Slack, there are more clients you can use, including Weechat, libpurple, a bunch of Qt-based clients and, importantly, Riot for Android and iOS.

You don’t have to self-host a Matrix homeserver, since runs one you can use, but it’s quite easy to run one if you decide to, and you don’t even have to migrate your existing chats — you just join them from accounts on your own homeserver, and that’s it!

To help you with the decision to move from Slack to Matrix, you should know that since Matrix has a Slack gateway, you can gradually migrate your colleagues to the new infrastructure, by joining the Slack and Matrix chats together, and dropping the gateway only when everyone moves from Slack.

Repeating Gunnar, say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

Planet DebianMichael Stapelberg: dput usability changes

dput-ng ≥ 1.16 contains two usability changes which make uploading easier:

  1. When no arguments are specified, dput-ng auto-selects the most recent .changes file (with confirmation).
  2. Instead of erroring out when detecting an unsigned .changes file, debsign(1) is invoked to sign the .changes file before proceeding.

With these changes, after building a package, you just need to type dput (in the correct directory of course) to sign and upload it.
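The auto-selection in point 1 boils down to "the newest .changes file wins". A rough shell sketch of that behaviour (illustrative only, not dput-ng's actual implementation):

```shell
#!/bin/sh
# Pick the most recently modified .changes file in a directory,
# the way dput-ng's auto-selection behaves (sketch, not the real code).
newest_changes() {
    ls -1t "$1"/*.changes 2>/dev/null | head -n 1
}
```

If the file dput-ng selects turns out to be unsigned, it then runs debsign(1) on it before uploading, per point 2.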

Planet DebianGunnar Wolf: On the demise of Slack's IRC / XMPP gateways

I have grudgingly joined three Slack workspaces, due to me being part of projects that use it as a communications center for their participants. Why grudgingly? Because there is very little that it adds to the well-established communications standards that we have had for long years, even decades.

On this topic, I must refer you to the talk and article presented by Megan Squire, one of the clear highlights of my participation last year at the 13th International Conference on Open Source Systems (OSS2017): «Considering the Use of Walled Gardens for FLOSS Project Communication». Please do have a good read of this article.

Thing is, after several years of playing open with probably the best integration gateway I have seen, Slack is joining the Embrace, Extend and Extinguish-minded companies. Of course, I strongly doubt they will manage to extinguish XMPP or IRC, but they want to strengthen the walls around their walled garden...

So, once they have established their presence among companies and developer groups alike, Slack is shutting down their gateways to XMPP and IRC, arguing it's impossible to achieve feature-parity via the gateway.

Of course, I guess all of us recognize and understand there has long not been feature parity. But that's a feature, not a bug! I expressly dislike the abuse of emojis and images inside what's supposed to be a work-enabling medium. Of course, connecting to Slack via IRC, I just don't see the content not meant for me.

The real motivation is they want to control the full user experience.

Well, they have lost me as a user. The day my IRC client fails to connect to Slack, I will delete my user account. They already had record of all of my interactions using their system. Maybe I won't be able to move any of the groups I am part of away from Slack – But many of us can help create a flood.

Say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

Planet Linux AustraliaDonna Benjamin: I said, let me tell you now

Montage of Library Bookshelves

Ever since I heard this month’s #AusGlamBlog theme was “Happiness” I’ve had that Happy song stuck in my head.

“Clap along if you know what happiness is to you”

I’m new to the library world as a professional, but not new to libraries. A sequence of fuzzy memories swirl in my mind when I think of libraries.

First, was my local public library children’s cave filled with books that glittered with colour like jewels.

Next, I recall the mesmerising tone and timbre of the librarian’s voice at primary school. Each week she transported us into a different story as we sat, cross legged in front of her, in some form of rapture.

Coming into closer focus I recall opening drawers in the huge wooden catalogue in the library at high school. Breathing in the deeply lovely, dusty air wafting up whilst flipping through those tiny cards was a tactile delight. Some cards were handwritten, some typewritten, some plastered with laser printed stickers.

And finally, I remember relishing the peace and quiet afforded by booking one of 49 carrel study booths at La Trobe University.

I love libraries. Libraries make me happy.

The loss of libraries makes me sad. I think of Alexandria, and more recently in Timbuktu, and closer to home, I mourn the libraries lost to the dreaming by the ravages of destructive colonial force on this little continent so many of us now call home.

Preservation and digitisation, and open collections give me hope. There can only ever be one precious original of a thing, but facsimiles, and copies and 3D blueprints increasingly means physical things can now too be shared and studied without needing to handle, or risk damaging the original.

Sending precious things from collection to collection is fraught with danger. The revelations of what Australian customs did to priceless plant specimens from France & New Zealand still gives me goosebumps of horror.

Digital. Copies. Catalogues, Circulation, Fines, Holds, Reserves, and Serial patterns. I’m learning new things about the complexities under the surface as I start to work seriously with the Koha Community Integrated Library System. I first learned about the Koha ILS more than a decade ago, but I'm only now getting a chance to work with it. It brings my secret love of libraries and my publicly proclaimed love of open source together in a way I still can’t believe is possible.

So yeah.

OH HAI! I’m Donna, and I’m here to help.

“Clap along if you feel like that's what you wanna do”


CryptogramFriday Squid Blogging: Interesting Interview

Here's an hour-long audio interview with squid scientist Sarah McAnulty.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianAdnan Hodzic: Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

TEDTED gets a fresh new look on TV apps

TED fans with an Android TV or Amazon FireTV will see a newly reimagined app — one that offers far more than just a sleek new design — beginning today.

We’re giving you more relevant talk suggestions, provided daily on the homepage. With our new layout, the app’s playlists and talks are easier than ever to navigate.

TED fans can now use the TED TV app in 21 languages, and even take advantage of Google Assistant on compatible TVs for controls like play, pause and fast forward. Watching your favorite talks has never been easier.

The move is all part of TED’s ongoing effort to fulfill its mission of making the ideas that matter more accessible — regardless of where you are and how you like to tune in. With these changes to curation, design and internationalization, we want to make sure each fan has a more personalized and seamless experience while engaging with TED Talks.

To download the new TED Android TV app, visit the Google Play store. Apps are also available on iOS, Android, Roku, AppleTV and FireTV.

Planet DebianSven Hoexter: half-assed Oracle JRE/JDK 10 support for java-package

I spent an hour adding very basic support for the upcoming Java 10 to my fork of java-package. It still has some rough edges, and the list of binary executables managed via the alternatives system requires some major cleanup. I think once Java 8 is EOL in September it's a good point to consolidate and strip everything except for Java 11 support. If someone requires an older release they can still go back to an earlier version, but by then we won't see any new releases of Java 8, 9 or 10, not to mention even older stuff.

[sven@digital lib (master)]$ java -version
java version "10" 2018-03-20
Java(TM) SE Runtime Environment 18.3 (build 10+46)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10+46, mixed mode)

Planet DebianOlivier Berger: Adding a reminder notification in XFCE systray that I should launch a backup script

I’ve started using borg and borgmatic for backups of my machines. For a start, I won’t be using fully automated backups via a crontab. Instead, I’ve added a recurrent reminder system that appears on my XFCE desktop to tell me it may be time to do backups.

I’m using yad (zenity on steroids) to add notifications to the desktop via anacron.

The notification icon, when clicked, will start a shell script that performs the backups, starting borgmatic.

Here are some bits of my setup:

crontab -l excerpt:

@hourly /usr/sbin/anacron -s -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool

~/.anacron/etc/anacrontab excerpt:

7 15      borgmatic-home  /home/olivier/bin/

The idea of this anacrontab is to remind me weekly that I should do a backup, 15 minutes after I’ve booted the machine. Another reminding mechanism may be more handy… time will tell.
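For reference, each anacrontab entry has four fields: the period in days, the delay in minutes after anacron starts, a unique job identifier, and the command to run. Annotated, an entry like the one above looks like this (the script path is a hypothetical stand-in, since the full path isn't shown here):

```
# period(days)  delay(min)  job-identifier  command
7               15          borgmatic-home  /home/olivier/bin/remind-backup.sh
```

With anacron itself run hourly from cron, the job fires at most once every 7 days, roughly 15 minutes after anacron first runs that day.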

Then, the script :

notify-send 'Borg backups at home!' "It's time to do a backup." --icon=document-save

# borrowed from

# create a FIFO file, used to manage the I/O redirection from shell
PIPE=$(mktemp -u --tmpdir ${0##*/}.XXXXXXXX)
mkfifo $PIPE

# attach a file descriptor to the file
exec 3<> $PIPE

# add handler to manage process shutdown
function on_exit() {
 echo "quit" >&3
 rm -f $PIPE
}
trap on_exit EXIT

# add handler for tray icon left click
function on_click() {
 # echo "pid: $YAD_PID"
 echo "icon:document-save" >/proc/$YAD_PID/fd/3
 echo "visible:blink" >/proc/$YAD_PID/fd/3
 xterm -e bash -c "/home/olivier/bin/ --verbosity 1 -c /home/olivier/borgmatic/home-config.yaml; read -p 'Press any key ...'"
 echo "quit" >/proc/$YAD_PID/fd/3
 # kill -INT $YAD_PID
}
export -f on_click

# create the notification icon
yad --notification \
 --listen \
 --image="appointment-soon" \
 --text="Click icon to start borgmatic backup at home" \
 --command="bash -c on_click $YAD_PID" <&3

The script will start yad so that it displays an icon in the systray. When the icon is clicked, it will start borgmatic, after having changed the icon. Borgmatic will be started inside an xterm so as to get passphrase input, and display messages. Once borgmatic is done backing up, yad will be terminated.

There may be a more elegant way to pass commands to yad listening on file descriptor 3/pipe, but I couldn’t figure one out, hence the /proc hack. This works on Linux… but I’m not sure about other Unices.
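One candidate for a cleaner approach: since the FIFO still exists on disk while the script runs, a child process can write yad commands to its path instead of going through /proc. This sketch hasn't been tried against yad itself; it only demonstrates the mechanics, with a plain `read` standing in for yad as the fd-3 reader:

```shell
#!/bin/bash
# Keep a FIFO open read-write on fd 3, as the original script does
PIPE=$(mktemp -u)
mkfifo "$PIPE"
exec 3<> "$PIPE"

# Any process that knows the path can write a command to the FIFO...
echo "icon:document-save" > "$PIPE"

# ...and the fd-3 reader (yad, in the original script) receives it
read -r cmd <&3
echo "$cmd"

rm -f "$PIPE"
```

If exporting PIPE to the click handler works, on_click could then write to "$PIPE" directly rather than to /proc/$YAD_PID/fd/3.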

Hope this helps.

CryptogramOURSA Conference

Responding to the lack of diversity at the RSA Conference, a group of security experts have announced a competing one-day conference: OUR Security Advocates, or OURSA. It's in San Francisco, and it's during RSA, so you can attend both.

Worse Than FailureError'd: ICANN't Even...

Jeff W. writes, "You know, I don't think this one will pass."


"Wow! This Dell laptop is pretty sweet!...but I wonder what that other 999999913 GB of data I have can contain..." writes Nicolas A.


"XML is big news at our university!" Gordon S. wrote.


Mark B. wrote, "On Saturday afternoons this British institution lets its hair down and fully embraces the metric system."


"Apparently, my computer and I judge success by very different standards," Michael C. writes.


"I agree you can't disconnect something that doesn't exist, more so when it's named two random Unicode characters," wrote Jurjen.


[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianChristoph Berg: Cool Unix Features: paste

paste is one of those tools nobody uses [1]. It puts two files side by side, line by line.

One application for this came up today where some tool was called for several files at once and would spit out one line per file, but unfortunately not including the filename.

$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)
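A minimal, self-contained illustration (the file names are made up for the example):

```shell
# Two throwaway input files (hypothetical names)
printf 'a\nb\nc\n' > letters.txt
printf '1\n2\n3\n' > numbers.txt

# By default, paste joins corresponding lines with a tab
paste letters.txt numbers.txt

# -d chooses another delimiter; -s pastes one file's lines serially
paste -d, letters.txt numbers.txt   # a,1 / b,2 / c,3
paste -s -d' ' letters.txt          # a b c
```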

[1] See "J" in The ABCs of Unix

[PS: I meant to blog this in 2011, but apparently never committed the file...]

Planet DebianChristoph Berg: Stepping down as DAM

After quite some time (years actually) of inactivity as Debian Account Manager, I finally decided to give back that Debian hat. I'm stepping down as DAM. I will still be around for the occasional comment from the peanut gallery, or to provide input if anyone actually cares to ask me about the old times.

Thanks for the fish!

Planet DebianIustin Pop: Corydalis 0.3.0 release

Short notice: I've just released 0.3.0, with a large number of new features and improvements - see the changelog for details.

Without aiming for it, this release follows almost exactly a month after v0.2, so maybe a monthly release cycle while I still have lots of things to add (and some time to actually do it) would be an interesting goal.

One potentially interesting thing: since v0.2, I've added a demo site at using a few photos from my own collection, so if you're curious what this actually is, check that out.

Planet DebianSteve Kemp: A change of direction ..

In my previous post I talked about how our child-care works here in wintery Finland, and suggested there might be a change in the near future.

So here is the predictable update; I've resigned from my job and I'm going to be taking over childcare/daycare. Ideally this will last indefinitely, but it is definitely going to continue until November. (Which is the earliest any child could be moved into public day-care if there are problems.)

I've loved my job, twice, but even though it makes me happy (in a way that several other positions didn't) there is no comparison. Child-care makes me happier-still. Sure there are days when your child just wants to scream, refuse to eat, and nothing works. But on average everything is awesome.

It's a hard decision, a "brave" decision too apparently (which I read negatively!), but also an easy one to make.

It'll be hard. I'll have no free time from 7AM-5PM, except during nap-time (11AM-1PM, give or take). But it will be worth it.

And who knows, maybe I'll even get to rant at people who ask "Where's his mother?" I live for those moments. Truly.

Don MartiPeople's personal data: take it or ask for it?

We know that advertising on the web has reached a low point of fraud, security risks, and lack of brand safety. And it's not making much money for publishers anyway. So a lot of people are talking about how to fix it, by building a new user data sharing system, in which individuals are in control of which data they choose to reveal to which companies.

Unlike today's surveillance marketing, people wouldn't be targeted for advertising based on data that someone figures out about them and that they might not choose to share.

A big win here will be that the new system would tend to lower the ROI on creepy marketing investments that have harmful side effects such as identity theft and facilitation of state-sponsored misinformation, and increase the ROI for funding ad-supported sites that people trust and choose to share personal information with.

A user-permissioned data sharing system is an excellent goal with the potential to help clean up a lot of the Internet's problems. But I have to be realistic about it. Adam Smith once wrote,

The pride of man makes him love to domineer, and nothing mortifies him so much as to be obliged to condescend to persuade his inferiors.

So the big question is still:

Why would buyers of user data choose to deal with users (or publishers who hold data with the user's permission) when they can just take the data from users, using existing surveillance marketing firms?

Some possible answers.

  • GDPR? Unfortunately, regulatory capture is still a thing even in Europe. Sometimes I wish that American privacy nerds would quit pretending that Europe is ruled by Galadriel or something.

  • brand safety problems? Maybe a little around the edges when a particularly bad video gets super viral. But platforms and adtech can easily hide brand-unsafe "dark" material from marketers, who can even spend time on Youtube and Facebook without ever developing a clue about how brand-unsafe they are for regular people. Even as news-gatherers get better at finding the worst stuff, platforms will always make hiding brand-unsafe content a high priority.

  • fraud concerns? Now we're getting somewhere. Fraud hackers are good at making realistic user data. Even "people-based" platforms mysteriously have more users in desirable geography/demography combinations than are actually there according to the census data. So, where can user-permissioned data be a fraud solution?

  • signaling? The brand equity math must be out there somewhere, but it's nowhere near as widely known as the direct response math that backs up the creepy stuff. Maybe some researcher at one of the big brand advertisers developed the math internally in the 1980s but it got shredded when the person retired. Big possible future win for the right behavioral economist at the right agency, but not in the short term.

  • improvements in client-side privacy? Another good one. Email spam filtering went from obscure nerdery to mainstream checklist feature quickly—because email services competed on it. Right now the web browser is a generic product, and browser makers need to differentiate. One promising angle is for the browser to help build a feeling of safety in the user by reducing user-perceived creepiness, and the browser's need to compete on this is aligned with the interests of trustworthy sites and with user-permissioned data sharing.

(And what's all this "we" stuff, anyway? Post-creepy advertising is an opportunity for individual publishers and brands to get out ahead, not a collective action problem.)

Planet Linux AustraliaOpenSTEM: Amelia Earhart in the news

Recently Amelia Earhart has been in the news once more, with publication of a paper by an American forensic anthropologist, Richard Jantz. Jantz has done an analysis of the measurements made of bones found in 1940 on Nikumaroro Island in Kiribati. Unfortunately, the bones no longer survive, but they were analysed in […]

Planet DebianJoey Hess: prove you are not an Evil corporate person

In which Google be Google and I drop a hot AGPL tip.


Google Is Quietly Providing AI Technology for Drone Strike Targeting Project
Google Is Helping the Pentagon Build AI for Drones

to automate the identification and classification of images taken by drones — cars, buildings, people — providing analysts with increased ability to make informed decisions on the battlefield

These news reports don't mention reCaptcha explicitly, but it's been asking about a lot of cars lately. Whatever the source of the data that Google is using for this, it's disgusting that they're mining it from us without our knowledge or consent.

Google claims that "The technology flags images for human review, and is for non-offensive uses only". So, if a drone operator has a neural network that we all were tricked & coerced into training to identify cars and people helping to highlight them on their screen and center the crosshairs just right, and the neural network is not pressing the kill switch, is it being used for "non-offensive purposes only"?

Google is known to be deathly allergic to the AGPL license. Not only on servers; they don't even allow employees to use AGPL software on workstations. If you write free software, and you'd prefer that Google not use it, a good way to ensure that is to license it under the AGPL.

I normally try to respect the privacy of users of my software, and of personal conversations. But at this point, I feel that Google's behavior has mostly obviated those moral obligations. So...

Now seems like a good time to mention that I have been contacted by multiple people at Google about several of my AGPL licensed projects (git-annex and either keysafe or debug-me I can't remember which) trying to get me to switch them to the GPL, and had long conversations with them about it.

Google has some legal advice that the AGPL source provision triggers much more often than it's commonly understood to. I encouraged them to make that legal reasoning public, so the community could address/debunk it, but I don't think they have. I won't go into details about it here, other than it seemed pretty bonkers.

Mixing in some AGPL code with an otherwise GPL codebase also seems sufficient to trigger Google's allergy. In the case of git-annex, it's possible to build all releases (until next month's) with a flag that prevents linking with any AGPL code, which should mean the resulting binary is GPL licensed, but Google still didn't feel able to use it, since the git-annex source tree includes AGPL files.

I don't know if Google's allergy to the AGPL extends to software used for drone murder applications, but in any case I look forward to preventing Google from using more of my software in the future.

(Illustration by scatter//gather)

Planet DebianRuss Allbery: My friend Stirge

Eric Sturgeon, one of my oldest and dearest friends, died this week of complications from what I'm fairly certain was non-alcoholic fatty liver disease.

It was not entirely unexpected. He'd been getting progressively worse over the past six months. But at the same time there's no way to expect this sort of hole in my life.

I've known Stirge for twenty-five years, more than half of my life. We were both in college when we first met on Usenet in 1993 in the rec.arts.comics.* hierarchy, where Stirge was the one with the insane pull list and the canonical knowledge of the Marvel Universe. We have been friends ever since: part of on-line fiction groups, IRC channels, and free-form role-playing groups. He's been my friend through school and graduation, through every step of my career, through four generations of console systems, through two moves for me and maybe a dozen for him, through a difficult job change... through my entire adult life.

For more than fifteen years, he's been spending a day or a week or two, several times a year, sitting on my couch and playing video games. Usually he played and I navigated, researching FAQs and walkthroughs. Twitch was immediately obvious to me the moment I discovered it existed; it's the experience I'd had with Stirge for years before that. I don't know what video games are without his thoughts on them.

Stirge rarely was able to put his ideas into stories he could share with other people. He admired other people's art deeply, but wasn't an artist himself. But he loved fictional worlds, loved their depth and complexity and lore, and was deeply and passionately creative. He knew the stories he played and read and watched, and he knew the characters he played, particularly in World of Warcraft and Star Wars: The Old Republic. His characters had depth and emotions, histories, independent viewpoints, and stories that I got to hear. Stirge wrote stories the way that I do: in our heads, shared with a small number of people if anyone, not crafted for external consumption, not polished, not always coherent, but deeply important to our thoughts and our emotions and our lives. He's one of the very few people in the world I was able to share that with, who understood what that was like.

He was the friend who I could not see for six months, a year, and then pick up a conversation with as if we'd seen each other yesterday.

After my dad had a heart attack and emergency surgery to embed a pacemaker while we were on vacation in Oregon, I was worrying about how we would manage to get him back home. Stirge immediately volunteered to drive down from Seattle to drive us. He had a crappy job with no vacation, and if he'd done that he almost certainly would have gotten fired, and I knew with absolute certainty that he would have done it anyway.

I didn't take him up on the offer (probably to his vast relief). When I told him years later how much it meant to me, he didn't feel like it should have counted, since he didn't do anything. But he did. In one of the worst moments of my life, he said exactly the right thing to make me feel like I wasn't alone, that I wasn't bearing the burden of figuring everything out by myself, that I could call on help if I needed it. To this day I start crying every time I think about it. It's one of the best things that anyone has ever done for me.

Stirge confided in me, the last time he visited me, that he didn't think he was the sort of person anyone thought about when he wasn't around. That people might enjoy him well enough when he was there, but that he'd quickly fade from memory, with perhaps a vague wonder about what happened to that guy. But it wasn't true, not for me, not ever. I tried to convince him of that while he was alive, and I'm so very glad that I did.

The last time I talked to him, he explained the Marvel Cinematic Universe to me in detail, and gave me a rundown of the relative strength of every movie, the ones to watch and the ones that weren't as good, and then did the same thing for the DC movies. He got to see Star Wars before he died. He would have loved Black Panther.

There were so many games we never finished, and so many games we never started.

I will miss you, my friend. More than I think you would ever have believed.

Planet DebianDaniel Pocock: Bug Squashing and Diversity

Over the weekend, I was fortunate enough to visit Tirana again for their first Debian Bug Squashing Party.

Every time I go there, female developers (this is a hotspot of diversity) ask me if they can host the next Mini DebConf for Women. There have already been two of these very successful events, in Barcelona and Bucharest. It is not my decision to make though: anybody can host a MiniDebConf of any kind, anywhere, at any time. I've encouraged the women in Tirana to reach out to some of the previous speakers personally to scope potential dates and contact the DPL directly about funding for necessary expenses like travel.

The confession

If you have read Elena's blog post today, you might have seen my name and picture and assumed that I did a lot of the work. As it is International Women's Day, it seems like an opportune time to admit that isn't true and that as in many of the events in the Balkans, the bulk of the work was done by women. In fact, I only bought my ticket to go there at the last minute.

When I arrived, Izabela Bakollari and Anisa Kuci were already at the venue getting everything ready. They looked busy, so I asked them if they would like a bonus responsibility, presenting some slides about bug squashing that they had never seen before while translating them into Albanian in real-time. They delivered the presentation superbly, it was more entertaining than any TED talk I've ever seen.

The bugs that won't let you sleep

The event was boosted by a large contingent of Kosovans, including 15 more women. They had all pried themselves out of bed at 03:00 am to take the first bus to Tirana. It's rare to see such enthusiasm for bugs amongst developers anywhere but it was no surprise to me: most of them had been at the hackathon for girls in Prizren last year, where many of them encountered free software development processes for the first time, working long hours throughout the weekend in the summer heat.

and a celebrity guest

A major highlight of the event was the presence of Jona Azizaj, a Fedora contributor who is very proactive in supporting all the communities who engage with people in the Balkans, including all the recent Debian events there. Jona is one of the finalists for Red Hat's Women in Open Source Award. Jona was a virtual speaker at DebConf17 last year, helping me demonstrate a call from the Fedora community WebRTC service to the Debian equivalent, At Mini DebConf Prishtina, where fifty percent of talks were delivered by women, I invited Jona on stage and challenged her to contemplate being a speaker at Red Hat Summit. Giving a talk there seemed like little more than a pipe dream just a few months ago in Prishtina: as a finalist for this prestigious award, her odds have shortened dramatically. It is so inspiring that a collaboration between free software communities helps build such fantastic leaders.

With results like this in the Balkans, you may think the diversity problem has been solved there. In reality, while the ratio of female participants may be more natural, they still face problems that are familiar to women anywhere.

One of the greatest highlights of my own visits to the region has been listening to some of the challenges these women have faced, things that I never encountered or even imagined as the stereotypical privileged white male. Yet despite enormous social, cultural and economic differences, while I was sitting in the heat of the summer in Prizren last year, it was not unlike my own time as a student in Australia and the enthusiasm and motivation of these young women discovering new technologies was just as familiar to me as the climate.

Hopefully more people will be able to listen to what they have to say if Jona wins the Red Hat award or if a Mini DebConf for Women goes ahead in the Balkans (subscribe before posting).


Planet DebianMarkus Koschany: My Free Software Activities in February 2018

Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Last month I wrote about „The state of Debian Games“ and I was pleasantly surprised that someone apparently read my post and offered some help with saving endangered games. Well, I don’t know how it will turn out but at least it is encouraging to see that there are people who still care about some old fashioned games. As a matter of fact the GNOME maintainers would like to remove some obsolete GNOME 2 libraries which makes a few of our games RC-buggy. Ideally they should be ported to GNOME 3 but if they could be replaced with a similar game written in a different and awesome programming language (such as Java or Clojure?), for a different desktop environment, that would do as well. 😉 If you’re bored to death or just want a challenge, contact us at
  • I packaged a new release of mupen64plus-qt to fix a FTBFS bug (#887576)
  • I uploaded a new version of freeciv to stretch-backports.
  • Pygame-sdl2 and renpy got some love too. (new upstream releases)
  • I sponsored a new revision of redeclipse for Martin-Erik Werner to fix #887744.
  • Yangfl introduced ddnet to Debian which is a popular modification/standalone game similar to teeworlds. I reviewed and eventually sponsored a new upstream release for him. If you are into multiplayer games then ddnet is certainly something you should look forward to.
  • I gladly applied another patch by Peter Green to fix #889059 in warzone2100 and Aurelien Jarno’s fix for btanks (#890632).

Debian Java

  • The Eclipse problem: The Eclipse IDE is seriously threatened to be removed from Debian. Once upon a time we even had a dedicated team that cared about the package but nowadays there is nobody. We regularly get requests to update the IDE to the latest version but there is no one who wants to do the necessary work. The situation is best described in #681726. This alone is worrying enough but due to an interesting dependency chain (batik -> maven -> guice -> libspring-java -> aspectj -> eclipse-platform) Eclipse cannot be removed without breaking dozens of other Java packages. So long story short I started to work on it and packaged a standalone libequinox-osgi-java package, so that we can save at least all reverse-dependencies for this package. Next was tycho which is required to build newer Eclipse versions. Annoyingly it requires said newer version of Eclipse to build…which means we must bootstrap it. I’m still in the process to upgrade tycho to version 1.0 and hope to make some progress in March.
  • I prepared security updates for jackson-databind, lucene-solr and tomcat-native.
  • New upstream releases: jboss-xnio, commons-parent, jboss-logging, jboss-module, mongo-java-driver and libspring-java (#890001).
  • Bug fixes and triaging: wagon2 (#881815, #889427), byte-buddy, (#884207), commons-io, maven-archiver (#886875), jdeb (#889642), commons-math, jflex (#890345), commons-httpclient (#871142)
  • I introduced jboss-bridger which is a new build-dependency of jboss-modules.
  • I sponsored a freeplane update for Felix Natter.

Debian LTS

This was my twenty-fourth month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 05.02.2018 until 11.02.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in binutils, graphicsmagick, wayland, unzip, kde-runtime, libjboss-remoting-java, libvirt, exim4, libspring-java, puppet, audacity, leptonlib, librsvg, suricata, exiv2, polarssl and imagemagick.
  • I tested a security update for exim4 and uploaded a package for Abhijith.
  • DLA-1275-1. Issued a security update for uwsgi fixing 1 CVE.
  • DLA-1276-1. Issued a security update for tomcat-native fixing 1 CVE.
  • DLA-1280-1. Issued a security update for pound fixing 1 CVE.
  • DLA-1281-1. Issued a security update for advancecomp fixing 1 CVE.
  • DLA-1295-1. Issued a security update for drupal7 fixing 4 CVE.
  • DLA-1296-1. Issued a security update for xmltooling fixing 1 CVE.
  • DLA-1301-1. Issued a security update for tomcat7 fixing 2 CVE.


  • I NMUed vdk2 (#885760) to prevent the removal of langdrill.

Thanks for reading and see you next time.

Planet DebianSteinar H. Gunderson: Nageru 1.7.0 released

I've just released version 1.7.0 of Nageru, my free software video mixer. The poster child feature for this release is the direct integration of CEF, yielding high-performance HTML5 graphics directly in Nageru. This obsoletes the earlier CasparCG integration through playing a video from a socket (although video support is of course still very much present!), which was significantly slower and flimsier. (Also, when CEF gets around to integrating with clients on the GPU level, you'll see even higher performance, and also stuff like WebGL, which I've turned off for the time being.)

Unfortunately, Debian doesn't carry CEF, and I haven't received any answers to my probes of whether it would be possible to do so—it would certainly involve some coordination with the Chromium maintainers. Thus, it is an optional dependency, and the packages that are coming into unstable are built without CEF support.

As always, the changelog is below, and the documentation has been updated to reflect new features and developments. Happy mixing!

Nageru 1.7.0, March 8th, 2018

  - Support for HTML5 graphics directly in Nageru, through CEF
    (Chromium Embedded Framework). This performs better and is more
    flexible than integrating with CasparCG over a socket. Note that
    CEF is an optional component; see the documentation for more details.

  - Add an HTTP endpoint for enumerating channels and one for getting
    only their colors. Intended for remote tally applications;
    see the documentation.

  - Add a video grid display that removes the audio controls and shows
    the video channels only, potentially in multiple rows if that makes
    for a larger viewing area.

  - Themes can now present simple menus in the Nageru UI. See the
    documentation for more information.

  - Various bugfixes.

Cory DoctorowClassroom materials for Little Brother from Mary Kraus

Mary Kraus — who created a key to page-numbers in the Little Brother audiobook for students with reading disabilities — continues to create great classroom materials for Little Brother: Who’s Who in “Little Brother” is a Quizlet that teaches about the famous people mentioned in the book, from Alan Turing to Rosa Luxembourg; while the Acronym Challenge asks students to unpack acronyms like DHS, NPR, IM, DNS, and ACLU.

TEDMeet the 2018 class of TED Fellows and Senior Fellows

The TED Fellows program is excited to announce the new group of TED2018 Fellows and Senior Fellows.

Representing a wide range of disciplines and countries — including, for the first time in the program, Syria, Thailand and Ukraine — this year’s TED Fellows are rising stars in their fields, each with a bold, original approach to addressing today’s most complex challenges and capturing the truth of our humanity. Members of the new Fellows class include a journalist fighting fake news in her native Ukraine; a Thai landscape architect designing public spaces to protect vulnerable communities from climate change; an American attorney using legal assistance and policy advocacy to bring justice to survivors of campus sexual violence; a regenerative tissue engineer harnessing the body’s immune system to more quickly heal wounds; a multidisciplinary artist probing the legacy of slavery in the US; and many more.

The TED Fellows program supports extraordinary, iconoclastic individuals at work on world-changing projects, providing them with access to the global TED platform and community, as well as new tools and resources to amplify their remarkable vision. The TED Fellows program now includes 453 Fellows who work across 96 countries, forming a powerful, far-reaching network of artists, scientists, doctors, activists, entrepreneurs, inventors, journalists and beyond, each dedicated to making our world better and more equitable. Read more about their visionary work on the TED Fellows blog.

Below, meet the group of Fellows and Senior Fellows who will join us at TED2018, April 10–14, in Vancouver, BC, Canada.

Antionette Carroll
Antionette Carroll (USA)
Social entrepreneur + designer
Designer and founder of Creative Reaction Lab, a nonprofit using design to foster racially equitable communities through education and training programs, community engagement consulting and open-source tools and resources.

Psychiatrist Essam Daod comforts a Syrian refugee as she arrives ashore at the Greek island of Lesvos. His organization Humanity Crew provides psychological aid to refugees and recently displaced populations. (Photo: Laurence Geai)

Essam Daod
Essam Daod (Palestine | Israel)
Mental health specialist
Psychiatrist and co-founder of Humanity Crew, an NGO providing psychological aid and first-response mental health interventions to refugees and displaced populations.

Laura L. Dunn
Laura L. Dunn (USA)
Victims’ rights attorney
Attorney and Founder of SurvJustice, a national nonprofit increasing the prospect of justice for survivors of campus sexual violence through legal assistance, policy advocacy and institutional training.

Rola Hallam
Rola Hallam (Syria | UK)
Humanitarian aid entrepreneur 
Medical doctor and founder of CanDo, a social enterprise and crowdfunding platform that enables local humanitarians to provide healthcare to their own war-devastated communities.

Olga Iurkova
Olga Yurkova (Ukraine)
Journalist + editor
Journalist and co-founder of, an independent Ukrainian organization that trains an international cohort of fact-checkers in an effort to curb propaganda and misinformation in the media.

Glaciologist M Jackson studies glaciers like this one — the glacier Svínafellsjökull in southeastern Iceland. The high-water mark visible on the mountainside indicates how thick the glacier once was, before climate change caused its rapid recession. (Photo: M Jackson)

M Jackson
M Jackson (USA)
Geographer + glaciologist
Glaciologist researching the cultural and social impacts of climate change on communities across all eight circumpolar nations, and an advocate for more inclusive practices in the field of glaciology.

Romain Lacombe
Romain Lacombe (France)
Environmental entrepreneur
Founder of Plume Labs, a company dedicated to raising awareness about global air pollution by creating a personal electronic pollution tracker that forecasts air quality levels in real time.

Saran Kaba Jones
Saran Kaba Jones (Liberia | USA)
Clean water advocate
Founder and CEO of FACE Africa, an NGO that strengthens clean water and sanitation infrastructure in Sub-Saharan Africa through innovative community support services.

Yasin Kakande
Yasin Kakande (Uganda)
Investigative journalist + author
Journalist working undercover in the Middle East to expose the human rights abuses of migrant workers there.

In one of her long-term projects, “The Three: Senior Love Triangle,” documentary photographer Isadora Kosofsky shadowed a three-way relationship between aged individuals in Los Angeles, CA – Jeanie (81), Will (84), and Adina (90). Here, Jeanie and Will kiss one day after a fight.

Isadora Kosofsky
Isadora Kosofsky (USA)
Photojournalist + filmmaker
Photojournalist exploring underrepresented communities in America with an immersive approach, documenting senior citizen communities, developmentally disabled populations, incarcerated youth, and beyond.

Adam Kucharski
Adam Kucharski (UK)
Infectious disease scientist
Infectious disease scientist creating new mathematical and computational approaches to understand how epidemics like Zika and Ebola spread, and how they can be controlled.

Lucy Marcil
Lucy Marcil (USA)
Pediatrician + social entrepreneur
Pediatrician and co-founder of StreetCred, a nonprofit addressing the health impact of financial stress by providing fiscal services to low-income families in the doctor’s waiting room.

Burçin Mutlu-Pakdil
Burçin Mutlu-Pakdil (Turkey | USA)
Astrophysicist studying the structure and dynamics of galaxies — including a rare double-ringed elliptical galaxy she discovered — to help us understand how they form and evolve.

Faith Osier
Faith Osier (Kenya | Germany)
Infectious disease doctor
Scientist studying how humans acquire immunity to malaria, translating her research into new, highly effective malaria vaccines.

In “Birth of a Nation” (2015), artist Paul Rucker recast Ku Klux Klan robes in vibrant, contemporary fabrics like spandex, Kente cloth, camouflage and white satin – a reminder that the horrors of slavery and the Jim Crow South still define the contours of American life today. (Photo: Ryan Stevenson)

Paul Rucker
Paul Rucker (USA)
Visual artist + cellist
Multidisciplinary artist exploring issues related to mass incarceration, racially motivated violence, police brutality and the continuing impact of slavery in the US.

Kaitlyn Sadtler
Kaitlyn Sadtler (USA)
Regenerative tissue engineer
Tissue engineer harnessing the body’s natural immune system to create new regenerative medicines that mend muscle and more quickly heal wounds.

DeAndrea Salvador (USA)
Environmental justice advocate
Sustainability expert and founder of RETI, a nonprofit that advocates for inclusive clean-energy policies that help low-income families access cutting-edge technology to reduce their energy costs.

Harbor seal patient Bogey gets a checkup at the Marine Mammal Center in California. Veterinarian Claire Simeone studies marine mammals like harbor seals to understand how the health of animals, humans and our oceans are interrelated. (Photo: Ingrid Overgard / The Marine Mammal Center)

Claire Simeone
Claire Simeone (USA)
Marine mammal veterinarian
Veterinarian and conservationist studying how the health of marine mammals, such as sea lions and dolphins, informs and influences both human and ocean health.

Kotchakorn Voraakhom
Kotchakorn Voraakhom (Thailand)
Urban landscape architect
Landscape architect and founder of Landprocess, a Bangkok-based design firm building public green spaces and green infrastructure to increase urban resilience and protect vulnerable communities from climate change.

Mikhail Zygar
Mikhail Zygar (Russia)
Journalist + historian
Journalist covering contemporary and historical Russia and founder of Project1917, a digital documentary project that narrates the 1917 Russian Revolution in an effort to contextualize modern-day Russian issues.

TED2018 Senior Fellows

Senior Fellows embody the spirit of the TED Fellows program. They attend four additional TED events, mentor new Fellows and continue to share their remarkable work with the TED community.

Prosanta Chakrabarty
Prosanta Chakrabarty (USA)
Evolutionary biologist and natural historian researching and discovering fish around the world in an effort to understand fundamental aspects of biological diversity.

Aziza Chaouni
Aziza Chaouni (Morocco)
Civil engineer and architect creating sustainable built environments in the developing world, particularly in the deserts of the Middle East.

Shohini Ghose
Shohini Ghose (Canada)
Quantum physicist + educator
Theoretical physicist developing quantum computers and novel protocols like teleportation, and an advocate for equity, diversity and inclusion in science.

A pair of shrimpfish collected in Tanzanian mangroves by ichthyologist Prosanta Chakrabarty and his colleagues this past year. They may represent an unknown population or even a new species of these unusual fishes, which swim head down among aquatic plants.

Zena el Khalil
Zena el Khalil (Lebanon)
Artist + cultural activist
Artist and cultural activist using visual art, site-specific installation, performance and ritual to explore and heal the war-torn history of Lebanon and other global sites of trauma.

Bektour Iskender
Bektour Iskender (Kyrgyzstan)
Independent news publisher
Co-founder of Kloop, an NGO and leading news publication in Kyrgyzstan, committed to freedom of speech and training young journalists to cover politics and investigate corruption.

Mitchell Jackson
Mitchell Jackson (USA)
Writer + filmmaker
Writer exploring race, masculinity, the criminal justice system, and family relationships through fiction, essays and documentary film.

Jessica Ladd
Jessica Ladd (USA)
Sexual health technologist
Founder and CEO of Callisto, a nonprofit organization developing technology to combat sexual assault and harassment on campus and beyond.

Jorge Mañes Rubio
Jorge Mañes Rubio (Spain)
Artist investigating overlooked places on our planet and beyond, creating artworks that reimagine and revive these sites through photography, site-specific installation and sculpture.

An asteroid impact is the only natural disaster we have the technology to prevent, but since prevention takes time, we must search for near-Earth asteroids now. Astronomer Carrie Nugent does just that, discovering and studying asteroids like this one. (Illustration: Tim Pyle and Robert Hurt / NASA/JPL-Caltech)

Carrie Nugent (USA)
Asteroid hunter
Astronomer using machine learning to discover and study near-Earth asteroids, our smallest and most numerous cosmic neighbors.

David Sengeh
David Sengeh (Sierra Leone + South Africa)
Biomechatronics engineer
Research scientist designing and deploying new healthcare technologies, including artificial intelligence, to cure and fight disease in Africa.

CryptogramExtracting Secrets from Machine Learning Systems

This is fascinating research about how the underlying training data for a machine-learning system can be inadvertently exposed. Basically, if a machine-learning system trains on a dataset that contains secret information, in some cases an attacker can query the system to extract that secret information. My guess is that there is a lot more research to be done here.

EDITED TO ADD (3/9): Some interesting links on the subject.

CryptogramNew DDoS Reflection-Attack Variant

This is worrisome:

DDoS vandals have long intensified their attacks by sending a small number of specially designed data packets to publicly available services. The services then unwittingly respond by sending a much larger number of unwanted packets to a target. The best known vectors for these DDoS amplification attacks are poorly secured domain name system resolution servers, which magnify volumes by as much as 50 fold, and network time protocol, which increases volumes by about 58 times.

On Tuesday, researchers reported attackers are abusing a previously obscure method that delivers attacks 51,000 times their original size, making it by far the biggest amplification method ever used in the wild. The vector this time is memcached, a database caching system for speeding up websites and networks. Over the past week, attackers have started abusing it to deliver DDoSes with volumes of 500 gigabits per second and bigger, DDoS mitigation service Arbor Networks reported in a blog post.

Cloudflare blog post. BoingBoing post.

EDITED TO ADD (3/9): Brian Krebs covered this.

Krebs on SecurityLook-Alike Domains and Visual Confusion

How good are you at telling the difference between domain names you know and trust and impostor or look-alike domains? The answer may depend on how familiar you are with the nuances of internationalized domain names (IDNs), as well as which browser or Web application you’re using.

For example, how does your browser interpret the following domain? I’ll give you a hint: Despite appearances, it is most certainly not the actual domain for software firm CA Technologies (formerly Computer Associates Intl Inc.), which owns the original domain name:


Go ahead and click on the link above or cut-and-paste it into a browser address bar. If you’re using Google Chrome, Apple’s Safari, or some recent version of Microsoft‘s Internet Explorer or Edge browsers, you should notice that the address converts to “” This is called “punycode,” and it allows browsers to render domains with non-Latin alphabets like Cyrillic and Ukrainian.

Below is what it looks like in Edge on Windows 10; Google Chrome renders it much the same way. Notice what’s in the address bar (ignore the “fake site” and “Welcome to…” text, which was added as a courtesy by the person who registered this domain):

The domain https://www.са.com/ as rendered by Microsoft Edge on Windows 10. The rest of the text in the image (beginning with “Welcome to a site…”) was added by the person who registered this test domain, not the browser.

IE, Edge, Chrome and Safari all will convert https://www.са.com/ into its punycode output (, in part to warn visitors about any confusion over look-alike domains registered in other languages. But if you load that domain in Mozilla Firefox and look at the address bar, you’ll notice there’s no warning of possible danger ahead. It just looks like it’s loading the real

What the fake domain looks like when loaded in Mozilla Firefox. A browser certificate ordered from Comodo allows it to include the green lock (https://) in the address bar, adding legitimacy to the look-alike domain. The rest of the text in the image (beginning with “Welcome to a site…”) was added by the person who registered this test domain, not the browser. Click to enlarge.

The domain “” pictured in the first screenshot above is punycode for the Ukrainian letters for “s” (which is represented by the character “c” in Russian and Ukrainian), as well as an identical Ukrainian “a”.

It was registered by Alex Holden, founder of Milwaukee, Wis.-based Hold Security Inc. Holden’s been experimenting with how the different browsers handle punycodes in the browser and via email. Holden grew up in what was then the Soviet Union and speaks both Russian and Ukrainian, and he’s been playing with Cyrillic letters to spell English words in domain names.

Letters like A and O look exactly the same and the only difference is their Unicode value. There are more than 136,000 Unicode characters used to represent letters and symbols in 139 modern and historic scripts, so there’s a ton of room for look-alike or malicious/fake domains.

For example, “a” in Latin is the Unicode value “0061” and in Cyrillic is “0430.”  To a human, the graphical representation for both looks the same, but for a computer there is a huge difference. Internationalized domain names (IDNs) allow domain names to be registered in non-Latin letters (RFC 3492), provided the domain is all in the same language; trying to mix two different IDNs in the same name causes the domain registries to reject the registration attempt.
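
The difference is easy to demonstrate in code. A quick sketch (Python, added here purely for illustration, not part of the original article):

```python
# Visually identical glyphs, different Unicode code points.
latin_a = "a"          # U+0061 LATIN SMALL LETTER A
cyrillic_a = "\u0430"  # U+0430 CYRILLIC SMALL LETTER A

print(f"U+{ord(latin_a):04X}")     # U+0061
print(f"U+{ord(cyrillic_a):04X}")  # U+0430
print(latin_a == cyrillic_a)       # False -- a computer never confuses the two
```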

So, in the Cyrillic alphabet (Russian/Ukrainian), we can spell АТТ, УАНОО, ХВОХ, and so on. As you can imagine, the potential opportunity for impersonation and abuse are great with IDNs. Here’s a snippet from a larger chart Holden put together showing some of the more common ways that IDNs can be made to look like established, recognizable domains:

Image: Hold Security.

Holden also was able to register a valid SSL encryption certificate for https://www.са.com from, which would only add legitimacy to the domain were it to be used in phishing attacks against CA customers by bad guys, for example.


To be clear, the potential threat highlighted by Holden’s experiment is not new. Security researchers have long warned about the use of look-alike domains that abuse special IDN/Unicode characters. Most of the major browser makers have responded in some way by making their browsers warn users about potential punycode look-alikes.

With the exception of Mozilla, which by most accounts is the third most-popular Web browser. And I wanted to know why. I’d read the Mozilla Wiki’s “IDN Display Algorithm FAQ,” so I had an idea of what Mozilla was driving at in their decision not to warn Firefox users about punycode domains: Nobody wanted it to look like Mozilla was somehow treating the non-Western world as second-class citizens.

I wondered why Mozilla doesn’t just have Firefox alert users about punycode domains unless the user has already specified that he or she wants a non-English language keyboard installed. So I asked that in some questions I sent to their media team. They sent the following short statement in reply:

“Visual confusion attacks are not new and are difficult to address while still ensuring that we render everyone’s domain name correctly. We have solved almost all IDN spoofing problems by implementing script mixing restrictions, and we also make use of Safe Browsing technology to protect against phishing attacks. While we continue to investigate better ways to protect our users, we ultimately believe domain name registries are in the best position to address this problem because they have all the necessary information to identify these potential spoofing attacks.”

If you’re a Firefox user and would like Firefox to always render IDNs as their punycode equivalent when displayed in the browser address bar, type “about:config” without the quotes into a Firefox address bar. Then in the “search:” box type “punycode,” and you should see one or two options there. The one you want is called “network.IDN_show_punycode.” By default, it is set to “false”; double-clicking that entry should change that setting to “true.”
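
If you prefer configuration files to clicking through about:config, the same preference can be set in a profile’s user.js file (assuming you use that mechanism):

```js
// Always render internationalized domain names as punycode in the address bar.
user_pref("network.IDN_show_punycode", true);
```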

Incidentally, anyone using the Tor Browser to anonymize their surfing online is exposed to IDN spoofing because Tor by default uses Mozilla as well. I could definitely see spoofed IDNs being used in targeting phishing attacks aimed at Tor users, many of whom have significant assets tied up in virtual currencies. Fortunately, the same “about:config” instructions work just as well on Tor to display punycode in lieu of IDNs.

Holden said he’s still in the process of testing how various email clients and Web services handle look-alike IDNs. For example, it’s clear that Twitter sees nothing wrong with sending the look-alike domain in messages to other users without any context or notice. Skype, on the other hand, seems to truncate the IDN link, sending clickers to a non-existent page.

“I’d say that most email services and clients are either vulnerable or not fully protected,” Holden said.

For a look at how phishers or other scammers might use IDNs to abuse your domain name, check out this domain checker that Hold Security developed. Here’s the first page of results for, which indicate that someone at one point registered krebsoṇsecurity[dot]com (that domain includes a lowercase “n” with a tiny dot below it, a character used by several dozen scripts). The results in yellow are just possible (unregistered) domains based on common look-alike IDN characters.

The first page of warnings for from Hold Security’s IDN scanner tool.

I wrote this post mainly because I wanted to learn more about the potential phishing and malware threat from look-alike domains, and I hope the information here has been interesting if not also useful. I don’t think this kind of phishing is a terribly pressing threat (especially given how far less complex phishing attacks seem to succeed just fine for now). But it sure can’t hurt Firefox users to change the default “visual confusion” behavior of the browser so that it always displays punycode in the address bar (see the solution mentioned above).

[Author’s note: I am listed as an adviser to Hold Security on the company’s Web site. However this is not a role for which I have been compensated in any way now or in the past.]

Planet DebianAlexandre Viau: testeduploads - looking for GSOC mentor

I have been waiting for the right opportunity to participate in GSOC for a while. I have worked on a project idea that is just right for my skill set: it would be a great learning opportunity for me, and I hope that it can be useful to the wider Debian community.

Please take a look at the project description and let me know if you would be interested in mentoring me over the summer.

testeduploads: test your packages before they hit the archive


testeduploads is a service that provides a way to test Debian source packages. The main goal of the project is to empower Debian Developers by giving them easy access to more rigorous testing before they upload a package to the archive. It runs tests that Debian Developers don’t necessarily run because of lack of time and resources.

testeduploads can also be used to test a large number of packages in contexts such as:

  • detecting whether or not packages can be updated to a newer upstream version
  • detecting whether or not packages can be backported
  • testing new versions of compilers
  • testing new versions of debhelper add-ons


Packages can be submitted to testeduploads with dput. Depending on the upload queue that was used, it can also automatically forward the uploads to

dput testeduploads [changesfile] will upload a package to the configured testeduploads queue and trigger the following tests:

  • rebuild the source package from the .dsc and verify that the signature matches
  • build binary packages
  • run autopkgtests on the package
  • rebuild all reverse dependencies using the new package
  • run autopkgtests on all reverse dependencies

On success:

  • the uploader is notified
  • logs are made available
  • if the package was received through the test-and-upload queue, it is automatically forwarded to

On failure:

  • the uploader is notified
  • logs are made available

Results and statistics are accessible through a web interface and a REST API. All uploads are assigned an id. HTTP uploads immediately return an upload id that can be used to query test status and to perform other actions. This allows for other tools to build on top of testeduploads.

The service accepts uploads to several queues that define specific behaviours:

  • test-and-upload: test the package on all release architectures and forward it to on success
  • test-only: test the package on all release architectures but do not forward it to on success
  • amd64/test-and-upload: limit the tests to amd64 and apply test-and-upload behaviour
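
As a sketch of how clients might talk to such a service, a hypothetical ~/.dput.cf stanza for the proposed queues could look like this (the host name and incoming paths are invented placeholders, not a real deployment):

```ini
[testeduploads]
fqdn = testeduploads.example.org
method = https
incoming = /test-and-upload

[testeduploads-testonly]
fqdn = testeduploads.example.org
method = https
incoming = /test-only
```

With such a stanza in place, dput testeduploads foo_1.0-1_amd64.changes would submit the package to the test-and-upload queue.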

Why me

I have been contributing to Debian for a couple of years now and I have been a Debian Developer since 2015. So far, I have mostly been contributing to packaging new software and fixing packaging-related bugs.

Participating in Google Summer of Code would be a great opportunity for me to contribute to Debian in other areas. Starting a new project like testeduploads is a good learning opportunity, but it requires a lot of time. The summer would be more than enough for me to kick-start development of the service. Then, I can see myself maintaining and improving it for a long time.

For me, this summer is just the right time. There are very few classes I could take over the summer, so it is a good opportunity to take the summer off and work on GSOC.

For general GSOC questions, please refer to the debian-outreach mailing list or to #debian-outreach on

If you are interested in the project and want to mentor it over the summer, please get in touch with me at

Debian GSOC coordination guide

debian-outreach mailing list

testeduploads prototype

CryptogramHistory of the US Army Security Agency

Interesting history of the US Army Security Agency in the early years of Cold War Germany.

Planet DebianLars Wirzenius: New chapter of Hacker Noir on Patreon

For the 2016 NaNoWriMo I started writing a novel about software development, "Hacker Noir". I didn't finish it during that November, and I still haven't finished it. I had a year long hiatus, due to work and life being stressful, when I didn't write on the novel at all. However, inspired by both the Doctorow method and the Seinfeld method, I have recently started writing again.

I've just published a new chapter. However, unlike last year, I'm publishing it on my Patreon only, for the first month, and only for patrons. Then, next month, I'll be putting that chapter on the book's public site (, and another new chapter on Patreon.

I don't expect to make a lot of money, but I am hoping having active supporters will motivate me to keep writing.

I'm writing the first draft of the book. It's likely to be as horrific as every first-time author's first draft is. If you'd like to read it as raw as it gets, please do. Once the first draft is finished, I expect to read it myself, and be horrified, and throw it all away, and start over.

Also, I should go get some training on marketing.

Worse Than FailureCodeSOD: Let's Set a Date

Let’s imagine, for a moment, that you came across a method called setDate. Would you think, perhaps, that it stores a date somewhere? Of course it does. But what else does it do?

Matthias was fixing some bugs in a legacy project, and found himself asking exactly that question.

function setDate(objElement, strDate, objCalendar) {

    if (objElement.getAttribute("onmyfocus")) {
        eval(objElement.getAttribute("onmyfocus").replace(/this/g, "$('" + objElement.id + "')"));
    } else if (objElement.onfocus && objElement.onfocus.toString()) {
        eval(GetInnerFunction(objElement.onfocus.toString()).replace(/this/g, "$('" + objElement.id + "')"));
    }

    objElement.value = parseDate(strDate);

    if (objElement.getAttribute("onmyblur")) {
        eval(objElement.getAttribute("onmyblur").replace(/this/g, "$('" + objElement.id + "')"));
    } else if (objElement.onblur && objElement.onblur.toString()) {
        eval(GetInnerFunction(objElement.onblur.toString()).replace(/this/g, "$('" + objElement.id + "')"));
    }

    if (objCalendar) {
        // (calendar toggling logic elided)
    } else {
        // (calendar toggling logic elided)
    }
}
In this code, objElement and objCalendar are both expected to be DOM elements. strDate, as the name implies, is a string holding a date. You can see a few elements in the code which obviously have something to do with the actual function of setting a date: objElement.value = parseDate(strDate) and the conditional about trying to toggle the calendar object seem like they might have something to do with managing the date.

It’s the rest of the code that gets… weird. The purpose, at a guess, is that this setDate method is emulating a user interacting with a DOM element- perhaps this is part of some bolted-on calendar widget- so they want to fire the on-focus and on-blur methods of the underlying element. That, alone, would be an ugly but serviceable hack.

But that’s not what they do.

First, they’ve apparently created attributes onmyfocus and onmyblur. Should the element have those attributes, they extract the value there, and replace any references to this with a call to $(), passing in the objElementId… and then they eval it.

If there isn’t a special onmyfocus/onmyblur attribute, they instead check for the more normal onfocus/onblur event handlers. Which are functions. But this code doesn’t want functions, so it converts them to a string and replaces this again, before passing it back to eval.

Replacing this means that they were trying to reinvent function.apply, a JavaScript method that allows you to pass in whatever object you want to be this within the function you’re calling. But, at least in the case of the onfocus/onblur, this isn’t necessary, since every browser has had a method to dispatchEvent or createEvent since time immemorial. You don’t need to mangle a function to emulate an event.
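
As an illustrative sketch (not the original application’s code), the effect the eval/replace dance was after can be had with an explicit receiver and no source rewriting at all:

```javascript
// Run a handler with a chosen object bound to `this` -- what the original
// eval()/replace() gymnastics were trying to simulate.
function runHandlerAs(element, handler) {
  return handler.apply(element);
}

const fakeElement = { id: "datePicker" };        // stand-in for a DOM node
const onfocus = function () { return this.id; }; // a typical inline handler

console.log(runHandlerAs(fakeElement, onfocus)); // prints "datePicker"
```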

The jQuery experts might notice that $ and say, “Well, heck, if they’re using jQuery, that has a .trigger() method which fires events.” That’s a good thought, but this code is actually worse than it looks. I’ll allow Matthias to explain:

$ is NOT jQuery, but a global function that does a getElementById-lookup

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianElena Gjevukaj: Bug Squashing Party in Tirana

Bug Squashing Party was organized by Debian and OpenLabs in Tirana last weekend (3–4 March 2018). A BSP is a get-together of Debian Developers and Debian enthusiasts over a specified timeframe, during which participants try to fix as many bugs as possible.

Unusually for tech events, about 90% of the participants in this one were women, and I think anyone who saw us working together would have doubted it was a tech event. As in other fields, the tech world is no exception when it comes to discrimination and sexism, but luckily for us that was not the case at this event, organized by our friend Daniel Pocock (from Debian) and OpenLabs Tirana.

We were a large group of computer science students and graduates coming from Kosovo.

For me it was the first time at OpenLabs, and I must say it was an amazing time meeting the organizers and members and working with them.

After the presentation about OpenLabs and its events, we had some interesting topics and projects to choose from. Mainly, I worked with other girls on translating parts of Debian's text into Albanian, and we also did some research on bugs in the systems.

In the evening we had a nice dinner in an Italian restaurant in Tirana.

Discovering Tirana.


Planet DebianVincent Bernat: Packaging an out-of-tree module for Debian with DKMS

DKMS is a framework designed to allow individual kernel modules to be upgraded without changing the whole kernel. It is also very easy to rebuild modules as you upgrade kernels.

On Debian-like systems,1 DKMS enables the installation of various drivers, from ZFS on Linux to VirtualBox kernel modules or NVIDIA drivers. These out-of-tree modules are not distributed as binaries: once installed, they need to be compiled for your current kernel. Everything is done automatically:

# apt install zfs-dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  binutils cpp cpp-6 dkms fakeroot gcc gcc-6 gcc-6-base libasan3 libatomic1 libc-dev-bin libc6-dev
  libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgcc1 libgomp1 libisl15 libitm1 liblsan0 libmpc3
  libmpfr4 libmpx2 libnvpair1linux libquadmath0 libstdc++6 libtsan0 libubsan0 libuutil1linux libzfs2linux
  libzpool2linux linux-compiler-gcc-6-x86 linux-headers-4.9.0-6-amd64 linux-headers-4.9.0-6-common
  linux-headers-amd64 linux-kbuild-4.9 linux-libc-dev make manpages manpages-dev patch spl spl-dkms
  zfs-zed zfsutils-linux
3 upgraded, 44 newly installed, 0 to remove and 3 not upgraded.
Need to get 42.1 MB of archives.
After this operation, 187 MB of additional disk space will be used.
Do you want to continue? [Y/n]
# dkms status
spl,, 4.9.0-6-amd64, x86_64: installed
zfs,, 4.9.0-6-amd64, x86_64: installed
# modinfo zfs | head
filename:       /lib/modules/4.9.0-6-amd64/updates/dkms/zfs.ko
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
srcversion:     42C4AB70887EA26A9970936
depends:        spl,znvpair,zcommon,zunicode,zavl
retpoline:      Y
vermagic:       4.9.0-6-amd64 SMP mod_unload modversions
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)

If you install a new kernel, a compilation of the module is automatically triggered.

Building your own DKMS-enabled package🔗

Suppose you’ve gotten your hands on an Intel XXV710-DA2 NIC. This card is handled by the i40e driver. Unfortunately, it only got support from Linux 4.10 and you are using a stock 4.9 Debian Stretch kernel. DKMS provides here an easy solution!

Download the driver from Intel, unpack it in some directory and add a debian/ subdirectory with the following files:

  • debian/changelog:

    i40e-dkms (2.4.6-0) stretch; urgency=medium
      * Initial package.
     -- Vincent Bernat <>  Tue, 27 Feb 2018 17:20:58 +0100
  • debian/control:

    Source: i40e-dkms
    Maintainer: Vincent Bernat <>
    Build-Depends: debhelper (>= 9), dkms

    Package: i40e-dkms
    Architecture: all
    Depends: ${misc:Depends}
    Description: DKMS source for the Intel i40e network driver
  • debian/rules:

    #!/usr/bin/make -f
    include /usr/share/dpkg/

    %:
            dh $@ --with dkms

    override_dh_install:
            dh_install src/* usr/src/i40e-$(DEB_VERSION_UPSTREAM)/

    override_dh_dkms:
            dh_dkms -V $(DEB_VERSION_UPSTREAM)
  • debian/i40e-dkms.dkms:

  • debian/compat:


In debian/changelog, pay attention to the version. The version of the driver is 2.4.6. Therefore, we use 2.4.6-0 for the package. In debian/rules, we install the source of the driver in /usr/src/i40e-2.4.6—the version is extracted from debian/changelog.
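To see where the upstream version comes from, here is a minimal shell sketch of the version manipulation: the Debian package version carries a revision suffix, and stripping it yields the upstream driver version (the parsing below is an illustration of what the dpkg-provided makefile variable DEB_VERSION_UPSTREAM ends up containing, not the actual implementation):

```shell
# The Debian package version is "<upstream>-<revision>", e.g. "2.4.6-0".
# Stripping the trailing revision gives the upstream driver version,
# which is what DEB_VERSION_UPSTREAM holds in debian/rules.
deb_version="2.4.6-0"
upstream_version="${deb_version%-*}"
echo "$upstream_version"
```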

The content of debian/i40e-dkms.dkms is described in detail in the dkms(8) manual page. The i40e driver is fairly standard and dkms is able to figure out how to compile it. However, if your kernel module does not follow the usual conventions, this is the right place to override the build command.

Once all the files are in place, you can turn the directory into a Debian package with, for example, the dpkg-buildpackage command.2 At the end of this operation, you get your DKMS-enabled package, i40e-dkms_2.4.6-0_all.deb. Put it in your internal repository and install it on the target.

Avoiding compilation on target

If you feel uncomfortable installing compilation tools on the target servers, there is a simple solution. Thanks to Thijs Kinkhorst, recent versions of dkms3 can build lean binary packages containing only the built modules. For each kernel version, you build such a package in your CI system:

KERNEL_VERSION=4.9.0-6-amd64 # could be a Jenkins parameter
apt -qyy install \
      i40e-dkms \
      linux-image-${KERNEL_VERSION} \
      linux-headers-${KERNEL_VERSION}

DRIVER_VERSION=$(dkms status i40e | awk -F', ' '{print $2}')
dkms mkbmdeb i40e/${DRIVER_VERSION} -k ${KERNEL_VERSION}

cd /var/lib/dkms/i40e/${DRIVER_VERSION}/bmdeb/
dpkg -c i40e-modules-${KERNEL_VERSION}_*
dpkg -I i40e-modules-${KERNEL_VERSION}_*

Here is the shortened output of the last two commands:

# dpkg -c i40e-modules-${KERNEL_VERSION}_*
-rw-r--r-- root/root    551664 2018-03-01 19:16 ./lib/modules/4.9.0-6-amd64/updates/dkms/i40e.ko
# dpkg -I i40e-modules-${KERNEL_VERSION}_*
 new debian package, version 2.0.
 Package: i40e-modules-4.9.0-6-amd64
 Source: i40e-dkms-bin
 Version: 2.4.6
 Architecture: amd64
 Maintainer: Dynamic Kernel Modules Support Team <>
 Installed-Size: 555
 Depends: linux-image-4.9.0-6-amd64
 Provides: i40e-modules
 Section: misc
 Priority: optional
 Description: i40e binary drivers for linux-image-4.9.0-6-amd64
  This package contains i40e drivers for the 4.9.0-6-amd64 Linux kernel,
  built from i40e-dkms for the amd64 architecture.

The generated Debian package contains the pre-compiled driver and only depends on the associated kernel. You can safely install it without pulling dozens of packages.
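As an aside, the DRIVER_VERSION extraction in the CI snippet above depends on the field layout of dkms status output. On a sample status line (the version shown here is just for illustration), the awk filter picks out the second comma-separated field:

```shell
# A "dkms status <module>" line has the shape:
#   i40e, 2.4.6, 4.9.0-6-amd64, x86_64: installed
# Splitting on ", " and printing the second field yields the version.
sample_line="i40e, 2.4.6, 4.9.0-6-amd64, x86_64: installed"
driver_version=$(echo "$sample_line" | awk -F', ' '{print $2}')
echo "$driver_version"
```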

  1. DKMS is also compatible with RPM-based distributions but the content of this article is not suitable for these. ↩︎

  2. You may need to install some additional packages: build-essential, fakeroot and debhelper↩︎

  3. Available in Debian Stretch and in the backports for Debian Jessie. However, for Ubuntu Xenial, you need to backport a more recent version of dkms↩︎

Worse Than FailureCodeSOD: Just One More Point

Fermat Points Proof

Tim B. had been tasked with updating an older internal application implemented in Java. Its primary purpose was to read in and display files containing a series of XY points—around 100,000 points per file on average—which would then be rendered as a line chart. It was notoriously slow, taking 1-2 minutes to process each file, but otherwise remained fairly stable.

Except that lately, some newer files were failing during the loading process. Tim quickly identified the problem—date formats had changed—and fixed the necessary code. Since the code that read in the XY points happened to reside in the same class, Tim asked his boss whether he could take a crack at killing two birds with one stone. With her approval, he dug in to figure out why the loading process was so slow.

//Initial code, pulled from memory so forgive any errors.
try {
    //The 3rd party library we are passing the values to requires
    //an array of doubles
    double[][] points = null;
    BufferedReader input = new BufferedReader(new FileReader(aFile));
    try {
        String line = null;
        while (( line = input.readLine()) != null) {
            //First, get the XY points from line using a convenience class
            //to parse out the values.
            XYPoint p = new XYPoint(line);
            //Now, to store the points in the array.
            if ( points == null ) {
                //Okay, we've got one point so far.
                points = new double[1][2];
                points[0][0] = p.X;
                points[0][1] = p.Y;
            } else {
                //Uh oh, we need more room. Let's create an array that's one larger
                //and copy all of our points so far into it.
                double[][] newPointArray = new double[points.length + 1][2];
                for ( int i = 0; i < points.length; i++ ) {
                    newPointArray[i][0] = points[i][0];
                    newPointArray[i][1] = points[i][1];
                }
                //Now we can add the new point!
                newPointArray[points.length][0] = p.X;
                newPointArray[points.length][1] = p.Y;
                points = newPointArray;
            }
        }
        //Now, we can pass this to our next function
        drawChart( points );
    } catch (IOException ex)
//End original code

After scouring the code twice, Tim called over a few coworkers to have a look for themselves. Unfortunately, no, he wasn't reading it wrong. Apparently the original developer, who no longer worked there, had run into the problem of not knowing ahead of time how many points would be in each file. However, he'd needed an array of primitive doubles to pass to the library, so he couldn't simply use a List, which only accepted objects. Thus had he engineered a truly brillant workaround.

Tim determined that for the average file of 100,000 points, each import required a jaw-dropping 10 billion or so copy operations (about 5 billion for the Xs, 5 billion for the Ys). After a quick refactoring to use an ArrayList, followed by a single copy to a double array, the file load time went from minutes to nearly instantaneous.

//Re-factored code below.
try {
    //The 3rd party library we are passing the values to requires
    //an array of doubles
    double[][] points = null;
    ArrayList<XYPoint> xyPoints = new ArrayList<>();
    BufferedReader input = new BufferedReader(new FileReader(aFile));
    try {
        String line = null;
        while (( line = input.readLine()) != null) {
            xyPoints.add( new XYPoint(line) );
        }
        //Now, convert the list to an array
        points = new double[xyPoints.size()][2];
        for ( int i = 0; i < xyPoints.size(); i++ ) {
            points[i][0] = xyPoints.get(i).X;
            points[i][1] = xyPoints.get(i).Y;
        }
        //Now, we can pass this to our next function
        drawChart( points );
    } catch (IOException ex)
//End re-factored code.
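The quadratic cost of the original approach is easy to check with back-of-the-envelope arithmetic: appending the k-th point copies the k−1 points already stored (one X and one Y each), so loading n points performs about n·(n−1) individual coordinate copies:

```shell
# Coordinate copies performed by the grow-by-one array approach:
# sum over k of 2*(k-1) = n*(n-1). For n = 100,000 points that is
# on the order of 10^10 copies -- hence the minutes-long load times.
n=100000
copies=$(( n * (n - 1) ))
echo "$copies"
```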

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV March 2018 Workshop: Comparing window managers

Mar 17 2018 12:30
Mar 17 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Comparing window managers

We'll be looking at several of the many window managers available on Linux.

We're still looking for more people who can talk about the window manager they are using, what they like and dislike about it, and maybe demonstrate a little.

Please email me at <> with the name of your window manager if you think you could help!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet DebianCraig Sanders: brawndo-installer

Tired of being oppressed by the slack-arse distro package maintainers who waste time testing that new versions don’t break anything and then waste even more time integrating software into the system?

Well, so am I. So I’ve fixed it, and it was easy to do. Here’s the ultimate installation tool for any program:

brawndo() {
   curl $1 | sudo /usr/bin/env bash
}

I’ve never written a shell script before in my entire life, I spend all my time writing javascript or ruby or python – but shell’s not a real language so it can’t be that hard to get right, can it? Of course not, and I just proved it with the amazing brawndo installer (It’s got what users crave – it’s got electrolytes!)

So next time some lame sysadmin recommends that you install the packaged version of something, just ask them if apt-get or yum or whatever loser packaging tool they’re suggesting has electrolytes. That’ll shut ’em up.

brawndo-installer is a post from: Errata

Planet DebianElana Hashman: Stop streaming music from YouTube with this one weird trick

Having grown up on the internet long before the average connection speed made music streaming services viable, streaming has always struck me as wasteful. And I know that doesn't make much sense—it's not like there's a limited amount of bandwidth to go around! But if I'm going to listen to the same audio file five times, why not just download it once and listen to it forever? Particularly if I want to listen to it while airborne and avoid the horrors of plane wifi. Or if I want to remove NSFW graphics that seem to frequently accompany mixes I enjoy.

youtube-dl to the rescue

Luckily, at least as far as YouTube audio is concerned, there is plenty of free software available to help with this! I like to fetch and process music from YouTube using youtube-dl and ffmpeg. Both are packaged and available in Debian if you like to apt install things:

# apt install youtube-dl ffmpeg

Saving audio files from YouTube

Well, let's suppose you want to download some eurobeat. youtube-dl can help. The -x flag tells youtube-dl to skip downloading video and to only fetch audio.

$ youtube-dl -x https://www.youtube.com/watch?v=8B4guKLlbVU
[youtube] 8B4guKLlbVU: Downloading webpage
[youtube] 8B4guKLlbVU: Downloading video info webpage
[youtube] 8B4guKLlbVU: Extracting video information
[youtube] 8B4guKLlbVU: Downloading js player vflGUPF-i
[download] Destination: SUPER EUROBEAT MIX-8B4guKLlbVU.webm
[download] 100% of 60.68MiB in 20:57
Deleting original file SUPER EUROBEAT MIX-8B4guKLlbVU.webm (pass -k to keep)

YouTube sometimes throttles connections to approximate real-time buffer rates, so the download could take a while. If you need to interrupt the download for some reason, you can use SIGINT (Ctrl+C) to stop it. If you run youtube-dl again, it's smart enough to resume the download where you left off.

Once the download is complete, there's not much more to do. It will be saved with the appropriate file extension so you can determine what audio codec it uses. You might want to rename the file:

$ mv SUPER\ EUROBEAT\ MIX-8B4guKLlbVU.ogg super_eurobeat_mix.ogg

Now we can enjoy many plays of our super_eurobeat_mix.ogg file! 🚗🎶

Re-encoding audio to another format

Suppose that I have a really old MP3 player I'd like to put this file on, and it doesn't support the OGG Vorbis format. That's not a problem; we can use ffmpeg to re-encode the audio.

The -i flag to ffmpeg specifies an input file. -acodec mp3 says that we want to use the mp3 codec to re-encode our audio. The last positional argument, super_eurobeat_mix.mp3, is the name of the file we want to output.

$ ffmpeg -i super_eurobeat_mix.ogg -acodec mp3 super_eurobeat_mix.mp3
ffmpeg version 3.4.2-1+b1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7 (Debian 7.3.0-4)
  configuration: --prefix=/usr --extra-version=1+b1 --toolchain=hardened
 --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu
 --enable-gpl --disable-stripping --enable-avresample --enable-avisynth
 --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray
 --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite
 --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme
 --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg
 --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband
 --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr
 --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame
 --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp
 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq
 --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2
 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint
 --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Input #0, ogg, from 'super_eurobeat_mix.ogg':
  Duration: 01:06:35.57, start: 0.000000, bitrate: 124 kb/s
    Stream #0:0(eng): Audio: vorbis, 44100 Hz, stereo, fltp, 128 kb/s
      LANGUAGE        : eng
      ENCODER         : Lavf57.83.100
Stream mapping:
  Stream #0:0 -> #0:0 (vorbis (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
Output #0, mp3, to 'super_eurobeat_mix.mp3':
    TSSE            : Lavf57.83.100
    Stream #0:0(eng): Audio: mp3 (libmp3lame), 44100 Hz, stereo, fltp
      LANGUAGE        : eng
      encoder         : Lavc57.107.100 libmp3lame
size=   62432kB time=01:06:35.58 bitrate= 128.0kbits/s speed=22.4x    
video:0kB audio:62431kB subtitle:0kB other streams:0kB global headers:0kB
muxing overhead: 0.000396%

Voila! We now have a super_eurobeat_mix.mp3 file we can copy to our janky old MP3 player.

Extracting audio from an existing video file

Sometimes I forget to pass the -x flag to youtube-dl and get a video file instead of an audio track. Oops.

But that's okay! Extracting the audio track from the video file with ffmpeg is just a couple of commands away.

First, we should determine the encoding of the audio track. The combined video file is a .webm file, but we can peek inside using ffmpeg.

$ ffmpeg -i SUPER\ EUROBEAT\ MIX-8B4guKLlbVU.webm 
ffmpeg version 3.4.2-1+b1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7 (Debian 7.3.0-4)
  configuration: --prefix=/usr --extra-version=1+b1 --toolchain=hardened
 --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu
 --enable-gpl --disable-stripping --enable-avresample --enable-avisynth
 --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray
 --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite
 --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme
 --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg
 --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband
 --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr
 --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame
 --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp
 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq
 --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2
 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint
 --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Input #0, matroska,webm, from 'SUPER EUROBEAT MIX-8B4guKLlbVU.webm':
    ENCODER         : Lavf57.83.100
  Duration: 01:06:35.57, start: 0.000000, bitrate: 490 kb/s
    Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv,
bt709/unknown/unknown), 1920x1080, SAR 1:1 DAR 16:9, 29.97 fps, 29.97 tbr, 1k
tbn, 1k tbc (default)
      DURATION        : 01:06:35.558000000
    Stream #0:1(eng): Audio: vorbis, 44100 Hz, stereo, fltp (default)
      DURATION        : 01:06:35.572000000
At least one output file must be specified

The important line is here:

Stream #0:1(eng): Audio: vorbis, 44100 Hz, stereo, fltp (default)

The audio uses the "vorbis" codec. Hence, we should probably use the .ogg extension for our output file, to ensure we specify a compatible audio container format. If it were mp3-encoded, we'd use .mp3, and so on.
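If you script this kind of extraction, the codec-to-extension mapping can be captured in a tiny helper. The function below is my own sketch (not something ffmpeg provides), and the fallback choice of Matroska audio is an assumption on my part:

```shell
# Map an audio codec name, as reported by ffmpeg, to a matching
# container extension for lossless "-acodec copy" extraction.
ext_for_codec() {
  case "$1" in
    vorbis) echo "ogg" ;;
    opus)   echo "opus" ;;
    mp3)    echo "mp3" ;;
    aac)    echo "m4a" ;;
    *)      echo "mka" ;;   # Matroska audio can hold almost anything
  esac
}

ext_for_codec vorbis   # -> ogg
```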

Let's extract the audio track from our video file. We need a couple new flags for ffmpeg. The first is -vn, which tells ffmpeg to not include a video track. The second is -acodec copy, which says we want to copy the existing audio track, rather than re-encode it.

$ ffmpeg -i SUPER\ EUROBEAT\ MIX-8B4guKLlbVU.webm -vn -acodec copy super_eurobeat_mix.ogg
ffmpeg version 3.4.2-1+b1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7 (Debian 7.3.0-4)
  configuration: --prefix=/usr --extra-version=1+b1 --toolchain=hardened
 --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu
 --enable-gpl --disable-stripping --enable-avresample --enable-avisynth
 --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray
 --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite
 --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme
 --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg
 --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband
 --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr
 --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame
 --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp
 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq
 --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2
 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint
 --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Input #0, matroska,webm, from 'SUPER EUROBEAT MIX-8B4guKLlbVU.webm':
    ENCODER         : Lavf57.83.100
  Duration: 01:06:35.57, start: 0.000000, bitrate: 490 kb/s
    Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv,
bt709/unknown/unknown), 1920x1080, SAR 1:1 DAR 16:9, 29.97 fps, 29.97 tbr, 1k
tbn, 1k tbc (default)
      DURATION        : 01:06:35.558000000
    Stream #0:1(eng): Audio: vorbis, 44100 Hz, stereo, fltp (default)
      DURATION        : 01:06:35.572000000
Output #0, ogg, to 'super_eurobeat_mix.ogg':
    encoder         : Lavf57.83.100
    Stream #0:0(eng): Audio: vorbis, 44100 Hz, stereo, fltp (default)
      DURATION        : 01:06:35.572000000
      encoder         : Lavf57.83.100
Stream mapping:
  Stream #0:1 -> #0:0 (copy)
Press [q] to stop, [?] for help
size=   60863kB time=01:06:35.54 bitrate= 124.8kbits/s speed=1.03e+03x    
video:0kB audio:60330kB subtitle:0kB other streams:0kB global headers:4kB
muxing overhead: 0.884396%

We've successfully extracted super_eurobeat_mix.ogg from our video file! Go us!

Happy listening, and drive safely while you eurobeat. 🎵

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #149

Here's what happened in the Reproducible Builds effort between Sunday February 25 and Saturday March 3 2018:

diffoscope development

Version 91 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks as well as new ones from:

In addition, Juliana — our Outreachy intern — continued her work on parallel processing; the above work is part of it.

reproducible-website development

Packages reviewed and fixed, and bugs filed

An issue with the pydoctor documentation generator was merged upstream.

Reviews of unreproducible packages

73 package reviews have been added, 37 have been updated and 26 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (46)
  • Jeremy Bicha (4)


This week's edition was written by Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Linux AustraliaSimon Lyall: Audiobooks – Background and February 2018 list


I started listening to audiobooks around the start of January 2017, when I began walking to work (I previously caught the bus and read a book or read on my phone).

I currently get them for free from the Auckland Public Library using the Overdrive app on Android. However, while I download them to my phone using the Overdrive app, I listen to them using Listen Audiobook Player. I switched to the alternative player mainly because it supports playback speeds greater than 2x normal.

I’ve been posting a list of the books I listened to at the end of each month to twitter ( See lists from Jan 2018, Dec 2017, Nov 2017 ) but I thought I’d start posting them here too.

I mostly listen to history with some science fiction and other topics.

Books listened to in February 2018

The Three-Body Problem by Cixin Liu – Pretty good Sci-Fi and towards the hard-core end I like. Looking forward to the sequels 7/10

Destiny and Power: The American Odyssey of George Herbert Walker Bush by Jon Meacham – A very nicely done biography, comprehensive and giving a good positive picture of Bush. 7/10

Starship Troopers by Robert A. Heinlein – A pretty good version of the classic. The story works well although the politics are “different”. Enjoyable though 8/10

Uncommon People: The Rise and Fall of the Rock Stars 1955-1994 by David Hepworth – Read by the Author (who sounds like a classic Brit journalist). A Story or two plus a playlist from every year. Fascinating and delightful 9/10

The Long Haul: A Trucker’s Tales of Life on the Road by Finn Murphy – Very interesting and well written about the author’s life as a long distance mover. 8/10

Mornings on Horseback – David McCullough – The Early life of Teddy Roosevelt, my McCullough book for the month. Interesting but not as engaging as I’d have hoped. 7/10

The Battle of the Atlantic: How the Allies Won the War – Jonathan Dimbleby – Overview of the Atlantic Campaign of World War 2. The author works to stress it was one of the most important fronts and does so pretty well 7/10






Cory DoctorowHey, Wellington! I’m headed your way!

I’ve just finished a wonderful time at the Adelaide Festival and now I’m headed to the last stop on the Australia/New Zealand tour for Walkaway: Wellington!

I’m doing a pair of events at Writers & Readers Week at the New Zealand Festival; followed by a special one-day NetHui on copyright and then a luncheon seminar for the Privacy Commissioner on “machine learning, big data and being less wrong.”

It starts on the 9th of March and finishes on the 13th, and I really hope I see you there! Thanks to everyone who’s come out in Perth, Sydney, Melbourne and Adelaide; you’ve truly made this a tour to remember.

Harald WelteReport from the Geniatech vs. McHardy GPL violation court hearing

Today, I took some time off to attend the appeal hearing in a GPL infringement dispute between former netfilter colleague Patrick McHardy and Geniatech Europe.

I am not in any way legally involved in the lawsuit on either the plaintiff or the defendant side. However, as a fellow (former) Linux kernel developer myself, and a long-term Free Software community member who strongly believes in the copyleft model, I of course am very interested in this case.

History of the Case

This case is about GPL infringements in consumer electronics devices based on a GNU/Linux operating system, including the Linux kernel and, on at least some devices, netfilter/iptables. The specific devices in question are a series of satellite TV receivers built by the Shenzhen (China) based company Geniatech, which is represented in Europe by the Germany-based Geniatech Europe GmbH.

The Geniatech Europe CEO has openly admitted (out of court) that they had some GPL incompliance in the past, and that there was failure on their part that needed to be fixed. However, he was not willing to accept an overly wide claim in the preliminary injunction against his company.

The history of the case is that at some point in July 2017, Patrick McHardy made a test purchase of a Geniatech Europe product and found it infringing the GNU General Public License v2. Apparently no source code (and/or written offer) had been provided alongside the binary - a straightforward violation of the license terms and hence a violation of copyright. The plaintiff then asked the regional court of Cologne to issue a preliminary injunction against the defendant, which was granted on September 8th, 2017.

In terms of legal procedure in Germany, when a plaintiff applies for a preliminary injunction, it is granted by the court after a brief review of the filing, without the defendant first being heard orally. If the defendant (as in this case) wishes to appeal the preliminary injunction, it files an appeal, which then results in an oral hearing. This is what happened, after which the district court of Cologne (Landgericht Koeln) on October 20, 2017 issued ruling 14 O 188/17, partially upholding the injunction.

All in all, nothing particularly unusual about this. There is no dispute about a copyright infringement having existed, and this generally grants any of the copyright holders the right to have the infringing party cease and desist from any further infringement.

However, this injunction has a very wide scope, stating that the defendant was to cease and desist from ever publishing, selling, or offering for download any version of Linux (unless compliant with the license). It furthermore asked the defendant to cease and desist

  • from putting hyperlinks on their website to any version of Linux
  • from asking users to download any version of Linux

unless the conditions of the GPL are met, particularly the clauses related to providing the complete and corresponding source code.

The appeals case at OLG Cologne

The defendant now escalated this to the next higher court, the higher regional court of Cologne (OLG Koeln), asking to withdraw the earlier ruling of the lower court, i.e. removing the injunction with its current scope.

The first very positive surprise at the hearing was the depth to which the OLG court had studied the subject matter of the dispute prior to the hearing. In the many GPL related court cases I have witnessed so far, it was by far the most precise analysis of how Linux kernel development works, and this despite the more than 1000 pages of filings that the parties had made to the court to this point.

Just to give you some examples:

  • the court understood that Linux was created by Linus Torvalds in 1991 and released under GPL to facilitate the open and collaborative development
  • the court recognized that there is no co-authorship / joint authorship (German: Miturheber) in the Linux kernel as a whole, as it was not a group of people planning and developing a given program together; rather, it is a program that was released by Linus Torvalds and has since been edited by more than 15,000 developers, without any "grand joint plan", in successive iterations. This situation constitutes "editing authorship" (German: Bearbeiterurheber)
  • the court further recognized that being listed as "head of the netfilter core team" or a "subsystem maintainer" doesn't necessarily mean that one is contributing copyrightable works. Reviewing thousands of patches doesn't mean you own copyright on them, drawing an analogy to an editorial office at a publisher.
  • the court understood there are plenty of Linux versions that may not even contain any of Patrick McHardy's code (such as older versions)

After about 35 minutes of the presiding judge explaining the court's understanding of the case (and how kernel development works), he went on to summarize the court's internal deliberations prior to the hearing.

In this summary, the presiding judge stated very clearly that they believe there is some merit to the arguments of the defendant, and that they would be inclined in a ruling favorable to the defendant based on their current understanding of the case.

He cited the following main reasons:

  • The Linux kernel development model does not support the claim of Patrick McHardy having co-authored Linux. Insofar, he is only an editing author (Bearbeiterurheber), and not a co-author. Nevertheless, even an editing author has the right to ask for cease and desist, but only on those portions that he authored/edited, and not on the entire Linux kernel.
  • The plaintiff did not sufficiently show what exactly his contributions were and how they constituted copyrightable works in themselves
  • The plaintiff did not substantiate what copyrightable contributions he has made outside of netfilter/iptables. His mere listing as general networking subsystem maintainer does not clarify what his copyrightable contributions were
  • The plaintiff being a member of the netfilter core team, or even the head of the core team, still doesn't support the claim of being a co-author, as netfilter had substantially existed since 1999, three years before Patrick's first contribution to netfilter and five years before he joined the core team in 2004.

So all in all, it was clear that the court also thought the ruling covering all of Linux was too far-reaching.

The court suggested that it might be better to have regular main proceedings, in which expert witnesses can be called and real evidence has to be provided, as opposed to the constraints of the preliminary procedure that was applied currently.

Some other details that were mentioned somewhere during the hearing:

  • Patrick McHardy apparently unilaterally terminated the license to his works in an e-mail dated 26th of July 2017 towards the defendant. According to the defendant (and general legal opinion, including my own position), this is in turn a violation of the GPLv2, as it only allowed plaintiff to create and publish modified versions of Linux under the obligation that he licenses his works under GPLv2 to any third party, including the defendant. The defendant believes this is abuse of his rights (German: Rechtsmissbraeuchlich).
  • sworn affidavits of senior kernel developer Greg Kroah-Hartman and current netfilter maintainer Pablo Neira were presented in support of some of the defendant's claims. The contents of those are unfortunately not public, and neither are the contents of the sworn affidavits presented by the plaintiff.
  • The defendant has made substantiated claims in his filings that Patrick McHardy would perform his enforcement activities not with the primary motivation of achieving license compliance, but as a method to generate monetary gain. Such claims include that McHardy has acted in more than 38 cases, in at least one of which he has requested a contractual penalty of 1.8 million EUR. The total amount of monies received as contractual penalties was quoted as over 2 million EUR to this point. Please note that those are claims made by the defendant, which were just reproduced by the court. The court has not assessed their validity. However, the presiding judge explicitly stated that he received a phone call about this case from a lawyer known to him personally, who confirmed that large contractual penalties are being paid in other related cases.
  • One argument by the plaintiff seems to center around being listed as a general kernel networking maintainer until 2017 (despite his latest patches being from 2015, and those were netfilter only)

Withdrawal by Patrick McHardy

At some point, the court hearing was temporarily suspended to provide the legal representation of the plaintiff with the opportunity to have a phone call with the plaintiff to decide if they would want to continue with their request to uphold the preliminary injunction. After a few minutes, the hearing was resumed, with the plaintiff withdrawing their request to uphold the injunction.

As a result, the injunction is now withdrawn, and the plaintiff has to bear all legal costs (court fees, lawyers costs on both sides).

Personal Opinion

For me, this is all of course a difficult topic. With my history of being the first to enforce the GNU GPLv2 in (equally German) court, it is unsurprising that I am in favor of license enforcement being performed by copyright holders.

I believe individual developers who have contributed to the Linux kernel should have the right to enforce the license, if needed. It is important to have distributed copyright, and to avoid a situation where only one (possibly industry friendly) entity would be able to take [legal] action.

I'm not arguing for a "too soft" approach. It's almost 15 years since the first court cases on license violations on (embedded) Linux, and the fact that the problem still exists today clearly shows the industry is very far from having solved a seemingly rather simple problem.

On the other hand, such activities must always be oriented to compliance, and compliance only. Collecting huge amounts of contractual penalties is questionable. And if it was necessary to collect such huge amounts to motivate large corporations to be compliant, then this must be done in the open, with the community knowing about it, and the proceeds of such contractual penalties must be donated to free software related entities to prove that personal financial gain is not a motivation.

The rumors of Patrick performing GPL enforcement for personal financial gain have been around for years. It was initially very hard for me to believe. But as more and more about this became known, Patrick's refusal of any contact requests from his former netfilter team-mates as well as the wider kernel community made it hard to avoid drawing related conclusions.

We do need enforcement, both out of court and in court. But we need it to happen out of the closet, with the community in the picture, and without financial gain to individuals. The "principles of community oriented enforcement" of the Software Freedom Conservancy as well as the more recent (but much less substantial) kernel enforcement statement represent the most sane and fair approach for how we as a community should deal with license violations.

So am I happy with the outcome? Not entirely. It's good that an over-reaching injunction was removed. But then, a lot of money and effort was wasted on this, without any verdict/ruling. It would have been IMHO better to have a court ruling published, in which the injunction is substantially reduced in scope (e.g. only about netfilter, or specific versions of the kernel, or specific products, not about placing hyperlinks, etc.). It would also have been useful to have some of the other arguments end up in a written ruling of a court, rather than more or less "evaporating" in the spoken word of the hearing today, without advancing legal precedent.

Lessons learned for the developer community

  • In the absence of detailed knowledge on computer programming, legal folks tend to look at "metadata" more, as this is what they can understand.
  • It matters who has which title and when. Should somebody not be an active maintainer, make sure he's not listed as such.
  • If somebody ceases to be a maintainer or developer of a project, remove him or her from the respective lists immediately, not just several years later.
  • Copyright statements do matter. Make sure you don't merge any patches adding copyright statements without being sure they are actually valid.

Lessons learned for the IT industry

  • There may be people doing GPL enforcement for not-so-noble motives
  • Defending yourself against claims in court can very well be worth it, as opposed to simply settling out of court (presumably for some money). The Telefonica case in 2016 has shown this, as has this current Geniatech case. The legal system can work, if you give it a chance.
  • Nevertheless, if you have violated the license, and one of the copyright holders makes a properly substantiated claim, you still will get injunctions granted against you (and rightfully so). This was just not done in this case (not properly substantiated, scope of injunction too wide/coarse).

Dear Patrick

For years, your former netfilter colleagues and friends wanted to have a conversation with you. You have not returned our invitation so far. Please do reach out to us. We won't bite, but we want to share our views with you, and show you what implications your actions have not only on Linux, but also particularly on the personal and professional lives of the very developers that you worked hand-in-hand with for a decade. It's your decision what you do with that information afterwards, but please do give us a chance to talk. We would greatly appreciate if you'd take up that invitation for such a conversation. Thanks.

Planet DebianSteinar H. Gunderson: Skellam distribution likelihood

I wondered if it was possible to make a ranking system based on the Skellam distribution, taking point spread as the only input; first step is figuring out what the likelihood looks like, so here's an example for k=4 (ie., one team beat the other by four goals):

Skellam distribution likelihood surface plot

It's pretty, but unfortunately, it shows that the most likely combination is µ1 = 0 and µ2 = 4, which isn't really that realistic. I don't know what I expected, though :-)

Perhaps it's different when we start summing many of them (more games, more teams), but you get into too high dimensionality to plot. If nothing else, it shows that it's hard to solve symbolically by looking for derivatives, as the extreme point is on an edge, not on a hill.
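The single-observation likelihood surface can be reproduced numerically from the convolution form of the Skellam pmf (the difference of two independent Poissons, summed over the unobserved losing-side count). A minimal standard-library sketch; the grid bounds and resolution are arbitrary choices of mine, and depending on which team's rate you label µ1, the maximum lands at one rate near zero and the other near the observed margin:

```python
import math

def skellam_pmf(k, mu1, mu2, terms=60):
    """P(X1 - X2 = k) for X1 ~ Poisson(mu1), X2 ~ Poisson(mu2),
    computed as a convolution sum over the losing side's count n."""
    total = 0.0
    for n in range(terms):
        if n + k < 0:
            continue  # a team cannot score a negative number of goals
        total += (math.exp(-mu1) * mu1 ** (n + k) / math.factorial(n + k)
                  * math.exp(-mu2) * mu2 ** n / math.factorial(n))
    return total

# Likelihood surface for a single observed 4-goal margin
grid = [i / 10 for i in range(1, 81)]  # 0.1 .. 8.0 goals per game
best = max((skellam_pmf(4, m1, m2), m1, m2)
           for m1 in grid for m2 in grid)
print(f"max at mu1={best[1]:.1f}, mu2={best[2]:.1f}")
# the maximum sits on an edge of the grid: one rate pinned at the
# grid minimum, the other near the observed margin
```

This confirms the edge behaviour seen in the plot: with only one game observed, nothing penalizes driving one scoring rate to zero.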

Krebs on SecurityWhat Is Your Bank’s Security Banking On?

A large number of banks, credit unions and other financial institutions just pushed customers onto new e-banking platforms that asked them to reset their account passwords by entering a username plus some other static identifier — such as the first six digits of their Social Security number (SSN), or a mix of partial SSN, date of birth and surname. Here’s a closer look at what may be going on (spoiler: small, regional banks and credit unions have grown far too reliant on the whims of just a few major online banking platform providers).

You might think it odd that any self-respecting financial institution would seek to authenticate customers via static data like partial SSN for passwords, and you’d be completely justified for thinking that, too. Nobody has any business using these static identifiers for authentication because they are for sale on most Americans quite cheaply in the cybercrime underground. The Equifax breach might have “refreshed” some of those data stores for identity thieves, but most U.S. adults have had their static details (DOB/SSN/MMN, address, previous address, etc) on sale for years now.

On Feb. 16, KrebsOnSecurity reader Brent Hoeft shared a copy of an email he’d just received from his financial institution Associated Bank, which at $30+ billion in assets happens to be Wisconsin’s largest bank.

The notice advised:

“Please read and save this information (including the password below) to prepare for your online and mobile banking upgrade.

Our refreshed online and mobile banking experience is officially launching on Monday, February 26, 2018.

We’re excited to share it with you, and want you to be aware of some important details about the transition.


Use this temporary password the first time you sign in after the upgrade. Your temporary password is the first four letters of your last name plus the last four digits of your Social Security Number.

XXXX#### [redacted by me but included in the email]

Note: your password is all lowercase without spaces.

Once the upgrade is complete, you will need your temporary password to begin the re-enrollment process.
• Beginning Monday, February 26, you will need to sign in using your existing user ID and the temporary password included above in this email. Please note that you are only required to reenroll in online or mobile banking but can access both using the same user ID and password.
• Once you sign in, you will be prompted to create a new password and establish other security features. Your user ID will remain the same.”
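The weakness of the scheme quoted above is easy to quantify: with the customer's surname known (it is printed on the mailing label), only the four SSN digits vary. A hypothetical sketch (the name and digits below are made up, not from the article):

```python
# Temporary password per the bank's scheme: first four letters of the
# surname, lowercased, plus the last four digits of the SSN.
def temp_password(surname, ssn_last4):
    return surname[:4].lower() + f"{ssn_last4:04d}"

# An attacker who knows the surname needs at most 10,000 guesses to
# cover every possible temporary password for that customer.
candidates = [temp_password("Hoeft", n) for n in range(10_000)]
print(len(candidates), candidates[0], candidates[-1])
# 10000 hoef0000 hoef9999
```

Absent aggressive rate limiting and account lockout, that keyspace is trivially exhaustible, and the SSN suffix itself is often already for sale.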

Hoeft said Associated Bank seems to treat the customer username as a secret, something to be protected along with the password.

“I contacted Associated’s customer service via email and received a far less satisfying explanation that the user name is required for re-activation and, that since [the username] was not provided in the email, the process they are using is in fact secure,” Hoeft said.

After speaking with Hoeft, I tweeted about whether to name and shame the bank before it was too late, or perhaps to try and talk some sense into them privately. Most readers advised that calling attention to the problem before the transition could cause more harm than good, and that at least until after Feb. 26 contacting some of the banks privately was the best idea (which is what I did).

Associated Bank wouldn’t say who their new consumer online banking platform provider was, but they did say it was one of the big ones. I took that to mean either FIS, Fiserv or Jack Henry, which collectively control approximately 70 percent of the market for bank core processors (according to, Fiserv is by far the largest).


The bank’s chief information security officer Joe Smits said Associated’s new consumer online banking platform provider required that new and existing customers log in with a username and a temporary password, described as a choice among secondary, static data elements about customers, such as the first six digits of the customer’s SSN or date of birth.

Smits added that the bank originally started emailing customers the instructions for figuring out their temporary passwords, but then decided US mail would be a safer option and sent the rest out that way. He said only about 15 percent of Associated Bank customers (~50,000) received instructions about their temporary passwords through email.

I followed up with Hoeft to find out how his online banking upgrade went at Associated Bank. He told me that upon visiting the site, it asked for his username and the temporary password (the first four letters of his last name and the last four digits of his SSN).

“After entering that I was told to re-enter my temporary password and then create a new password,” Hoeft said. “I then was asked to select 5 security questions and provide answers. Next I was asked for a verification phone number. Upon entering that I received a text message with a 4 digit verification code. After entering the code it asked me to finish my profile information including name, email and daytime phone. After that it took me right into my online banking account.”

Hoeft said it seems like the “verification” step that was supposed to create an extra security check didn’t really add any security at all.

“If someone were able to get in with the temporary password, they would be able to create a new password, fill out all the security code information, and then provide their phone number to receive the verification code,” Hoeft said. “Armed with the verification code they then would be able to get right into my online banking account.”


A simple search online revealed Associated Bank wasn’t alone: Multiple institutions were moving to a new online banking platform all on the same day: Feb. 26, 2018.

My Credit Union also moved to a new online banking service in February, posting a notice stating that all customers will need to log in with their current username and the last four of their SSN as a temporary password.

Customers Bank, a $10 billion bank with nearly two dozen branches between Boston and Philadelphia, also told customers that starting Feb. 26 they would need to use a temporary password — the last six digits of their Social Security number — to re-enroll in online banking. Here’s part of their advice, which was published in a PDF on the bank’s site:

• You may notice a new co-branded logo for Customers Bank and BankMobile (Division Customers Bank).
• Your existing user name for Online Banking will remain the same within the new system; however, it must be entered as all lowercase letters.
• The first time you log into the new Online Banking system, your temporary password is the last 6-digits of your social security number. Your temporary password will expire on Friday, April 20, 2018. Please be sure to log in prior to that date.
• Online Banking includes multi-factor authentication which will need to be reestablished as part of the initial sign in to the system.
• Your username and password credentials for Online Banking will be the same for Mobile Banking. Note: Before accessing the new Mobile Banking services, you must first login to our enhanced Online Banking system to change your password.
• You will also need to enroll your mobile device, either through Online Banking by visiting the Mobile Banking Center option, or directly on the device through the app. Both options will require additional authentication.

Columbia Bank, which has 140 branches in Washington, Oregon and Idaho, also switched gears on Feb. 26, but used a more sensible approach: Sending customers a new user ID, organization ID and temporary password in two separate mailings.


My tweet about whether to name Associated Bank attracted the attention of at least two banking industry security regulators, each of whom spoke with KrebsOnSecurity on condition of not being identified by name or regulatory agency.

Both said their agencies would be using the above examples in briefings with member institutions as instructional on how not to do online banking securely. Both also said small to mid-sized banks are massively beholden to their platform providers, and many banks simply accept the defaults instead of pushing for stronger alternatives.

“I have a lot of communications directly with the chief information security officers, chief security officers, and chief information officers in many institutions,” one regulator said. “Many of them have massively dumbed down their password requirements. A lot of smaller institutions often don’t understand the risk involved in online banking, which is why they try to outsource the whole thing to someone else. But they can’t outsource accountability.”

One of the regulators I spoke with suggested that all of the banks they’d seen transitioning to a new online banking platform on Feb. 26 were customers of Fiserv — the nation’s largest online banking platform provider.

Fiserv did not respond to specific questions for this story, saying only in a written statement that: “Fiserv regularly partners with financial institutions to provide capabilities that help mitigate and manage risk, enhance the customer experience, and allow banks to remain competitive. A variety of methodologies are used by institutions to enroll and authenticate new users onto online banking platforms, and password authentication is one of multiple layers of security used to protect customers.”

Both banking industry regulators I spoke with said a basic problem is that many smaller institutions unfortunately still treat usernames as secret codes. I have railed against this practice for years, but far too many banks treat customer usernames as part of their security, even though most customers pick something very close to the first part of their email address (before the “@” sign). I’ve even skewered some of the airline industry giants for doing the same (United does this with its super-secret frequent flyer account number).

“I think this will be an opportunity for us to coach them on that,” one banking regulator said. “This process has to involve random password generation and that needs to be standard operating procedure. If you can shortcut security just by supplying static data like SSN, it’s all screwed. Some of these organizations have had such poor control structure for so long they don’t even understand how bad it is.”

The other regulator said another challenge is how long banks should wait before disabling accounts if consumers don’t log in to the new online banking system.

“What they’re going to do is set up all these users on this brand new system and give them default passwords,” the regulator said. “Some individuals will log into their bank account every day, others once a month and sometimes quite randomly. So, how are they going to control that window of opportunity? At some point, maybe after a couple of weeks, they need to just disable those other accounts and have people start from scratch.”

The first regulator said it appears many banks (and their platform providers) are singularly focused on making these transitions as seamless and painless as possible for the financial institution and its customers.

“I think they’re looking at making it easier for their customers and lessening the fallout as they get fewer angry and frustrated calls,” the regulator said. “That’s their incentive more than anything else.”


While it may appear that banks are more afraid of calls from their customers than of fallout from identity thieves and hackers, remember that you the consumer can shop with your wallet, and should move your funds to another bank if you’re unhappy with the security practices of your current institution.

Also, don’t re-use passwords. In fact, wherever possible don’t use passwords at all. Instead, choose passphrases over passwords (remember, length is key). Unfortunately, passphrases may not be possible because some banks have chosen to truncate passwords after a certain number of characters, and to disallow special symbols.
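The point that length is key is easy to put in numbers. A rough back-of-the-envelope comparison (the alphabet size and word-list size are illustrative assumptions, not figures from the article):

```python
import math

# Entropy of an 8-character password drawn from ~72 printable symbols
# versus a 5-word passphrase drawn from a 7,776-word diceware-style list.
password_bits = 8 * math.log2(72)      # ~49 bits
passphrase_bits = 5 * math.log2(7776)  # ~65 bits
print(round(password_bits), round(passphrase_bits))  # 49 65
```

Even though the passphrase uses only lowercase words, its extra length buys roughly 16 bits, about 65,000 times more guesses for an attacker, which is why truncating passwords is so damaging.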

If you’re the kind of person who likes to use the same password across multiple sites, then a password manager is definitely for you. That’s because password managers pick strong, long and secure passwords for you and the only thing you have to remember is a single master password.

Please consider any two-step or two-factor authentication options your financial institution may offer, and be sure to take full advantage of that when it’s available. Also, ask your bank to require a unique verbal password before discussing any of your account details over the phone; this prevents someone from calling in to your bank and convincing a customer service rep that he’s you just because he can regurgitate your static personal details.

Finally, take steps to prevent your security from being backdoored by your mobile provider: Check out last week’s tips on blocking mobile number port-out scams, which thieves sometimes use in cashing out hacked bank accounts.

Planet DebianArianit Dobroshi: Debian Bug Squashing Party in Tirana

On 3 March I attended a Debian Bug Squashing Party in Tirana. Organized by colleagues at Open Labs Albania Anisa and friends and Daniel. Debian is the second oldest GNU/Linux distribution still active and a launchpad for so many others.

A large number of participants from Kosovo took part, mostly female students. I chose to focus on adding Kosovo to country lists in Debian by verifying that Kosovo was missing and then filing bug reports or, even better, doing pull requests.

apt-cache rdepends iso-codes returns the list of packages that depend on the ISO code lists. However, this proved hard to examine simply by looking at these applications on Debian; one would have to search through their code to find out how the ISO 3166 codes are used. So I left that for another time.

I moved next to what I thought I would be able to complete within the event. Coding is becoming quite popular with children in Kosovo. I looked into MIT’s Scratch and Google’s Blockly, the second one being freer software and targeting younger children. They both work by snapping together logical building blocks into a program.

Translation of Blockly into Albanian is now complete and hopefully will get much use. You can improve on my work at Translatewiki.

Thank you for all the fish and see you at the next Debian BSP.


Planet DebianRaphaël Hertzog: My Free Software Activities in February 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

Since we switched to salsa, and with the arrival of prospective GSOC students interested in working on distro-tracker this summer, I have been rather active on this project, as can be seen in the project’s activity summary. Among the most important changes we can note:

  • The documentation and the code coverage analysis are updated on each push.
  • Unit tests, functional tests and style checks (flake8) are run on each push but also on merge requests, allowing contributors to have quick feedback on their code. Implemented with this Gitlab CI configuration.
  • Multiple bug fixes (more of it). Update code to use python3-gpg instead of deprecated python3-gpgme (I had to coordinate with DSA to get the new package installed).
  • More unit tests for team related code. Still a work in progress but I made multiple reviews already.

Debian Live

I created the live-team on to prepare for the move of the various Debian live repositories. The move itself has been done by Steve McIntyre. In the discussion, we also concluded that the live-images source package can go away. I thus filed its removal request.

Then I spent a whole day reviewing all the pending patches. I merged most of them and left comments on the remaining ones:

  • Merged #885453 cleaning up double slashes in some paths.
  • Merged #885466 allowing to set upperdir tmpfs mount point size.
  • Merged #885455 switching back the live-boot initrd to use busybox’s wget as it supports https now.
  • Merged #886328 simplifying the mount points handling by using /run/live instead of /lib/live/mount.
  • Merged #886337 adding options to build smaller initrd by disabling some features.
  • Merged #866009 fixing a race condition between live-config and systemd-tmpfiles-setup.
  • Reviewed #884355 implementing new hooks in live-boot’s initrd. Not ready for merge yet.
  • Reviewed #884553 implementing cross-architecture linux flavour selection. Not ready for merge yet.
  • Merged #891206 fixing a regression with local mirrors.
  • Merged #867539 lowering the process priority of mksquashfs to avoid rendering the machine completely unresponsive during this step.
  • Merged #885692 adding UEFI support for ARM64.
  • Merged #847919 simplifying the bootstrap of foreign architectures.
  • Merged #868559 fixing fuse mounts by switching back to klibc’s mount.
  • Wrote a patch to fix verify-checksums option in live-boot (see #856482).
  • I released a new version of live-config but wanted some external testing before releasing the new live-boot. This did not happen yet unfortunately.

Debian LTS

I started a discussion on debian-devel about how we could handle the extension of the LTS program that some LTS sponsors are asking us to do.

The responses have been rather mixed so far. It is unlikely that wheezy will be kept on the official mirror after its official EOL date, but it’s not clear whether it would be possible to host the wheezy updates on some other server for longer.

Debian Handbook

I moved the git repository of the book to salsa and released a new version in unstable to fix two recent bugs: #888575 asking us to implement some parallel building to speed up the build and #888578 informing us that a recent debhelper update broke the build process due to the presence of a build directory in the source package.

Debian Packaging

I moved all my remaining packages to and used the opportunity to clean them up:

  • dh-linktree, ftplib, gnome-shell-timer (fixed #891305 later), logidee-tools, publican, publican-debian, vboot-utils, rozofs
  • Some also got a new upstream release for the same price: tcpdf, lpctools, elastalert, notmuch-addrlookup.
  • I orphaned tcpdf in #889731 and I asked for the removal of feed2omb in #742601.
  • I updated django-modeltranslation to 0.12.2 to fix FTBFS bug #834667 (I submitted an upstream pull request at the same time).

Dolibarr. As a sponsor of dolibarr I filed its removal request and then I started a debian-devel discussion because we should be able to provide such applications to our users even though its development practice does not conform to some of our policies.

Bash. I uploaded a bash NMU (4.4.18-1.1) to fix a regression introduced by the PIE-enabled build (see #889869). I filed an upstream bug against bash but it turns out it’s actually a bug in qemu-user that really ought to be fixed. I reported the bug to qemu upstream but it hasn’t gotten much traction.

pkg-security team. I sponsored many updates over the month: rhash 1.3.5-1, medusa 2.2-5, hashcat, dnsrecon, btscanner, wfuzz 2.2.9, pixiewps 1.4.2-1, inetsim (new from kali). I also made a new upload of sslsniff with the OpenSSL 1.1 patch contributed by Hilko Bengen.

Debian bug reports

I filed a few bug reports:

  • #889814: lintian: Improve long description of epoch-change-without-comment
  • #889816: lintian: Complain when epoch has been bumped but upstream version did not go backwards
  • #890594: devscripts: Implement a salsa-configure script to configure project repositories
  • #890700 and #890701 about missing Vcs-Git fields to siridb-server and libcleri
  • #891301: lintian: privacy-breach-generic should not complain about <link rel=”generator”> and others

Misc contributions

Saltstack formulas. I pushed misc fixes to the munin-formula, the samba-formula and the openssh-formula. I submitted two other pull requests: on samba-formula and on users-formula.

QA’s carnivore database. I fixed a bug in a carnivore script that was spewing error messages about duplicate uids. This database links together multiple identifiers (emails, GPG key ids, LDAP entry, etc.) for the same Debian contributor.


See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

CryptogramSecurity Vulnerabilities in Smart Contracts

Interesting research: "Finding The Greedy, Prodigal, and Suicidal Contracts at Scale":

Abstract: Smart contracts -- stateful executable objects hosted on blockchains like Ethereum -- carry billions of dollars worth of coins and cannot be updated once deployed. We present a new systematic characterization of a class of trace vulnerabilities, which result from analyzing multiple invocations of a contract over its lifetime. We focus attention on three example properties of such trace vulnerabilities: finding contracts that either lock funds indefinitely, leak them carelessly to arbitrary users, or can be killed by anyone. We implemented MAIAN, the first tool for precisely specifying and reasoning about trace properties, which employs inter-procedural symbolic analysis and concrete validator for exhibiting real exploits. Our analysis of nearly one million contracts flags 34,200 (2,365 distinct) contracts vulnerable, in 10 seconds per contract. On a subset of 3,759 contracts which we sampled for concrete validation and manual analysis, we reproduce real exploits at a true positive rate of 89%, yielding exploits for 3,686 contracts. Our tool finds exploits for the infamous Parity bug that indirectly locked 200 million dollars worth in Ether, which previous analyses failed to capture.

Worse Than FailureThe Unbidden Password

English - Mortise Lock with Key - Walters 52173

So here's a thing that keeps me up at night: we get a lot of submissions about programmers who cannot seem to think like users. There's a type of programmer who has never not known how computers worked, whose theory of computers in their mind has been so accurate for so long that they can't look at things in a different way. Many times, they close themselves off from users, insisting that if the user had a problem with using the software, they just don't know how computers work and need to educate themselves. Rather than focus on what would make the software more usable, they program what is easiest for the computer to do, and call it a day.

The same is sometimes true of security concerns. Rather than focus on what would be secure, on what the best practices are in the industry, these programmers hammer out something easy and straightforward and consider it good enough. Today's submitter, Rick, recently ran across just such a "security system."

Rick was shopping at a small online retailer, and found some items he liked. He got through the "fill in all your personal information and hope they have good security" stage of the online check-out process and placed his order. At no time was he asked if he wanted an account—which is good, because he never signs up for accounts at small independent retailers, preferring for his card information not to be stored at all. He was asked to fill in his email, which is common enough; a receipt and shipping updates are usually sent to the email associated with the order.

Sure enough, Rick received an email from the retailer moments later. Only this wasn't a receipt. It was, in fact, confirmation of a new account creation ... complete with a password in plain text.

Rick was understandably alarmed. He headed back to the site immediately to change the password to a longer, more secure one-off he could store in a password manager and never, ever have emailed to him in plaintext. But once on the site, he could find no sign of a login button or secure area. So at this point, he had an insecure password he couldn't appear to use, for an account he didn't even want in the first place.

Rick sent an email, worried about this state of affairs. The reply came fairly rapidly, from someone who was likely the sole tech department for the company: this was by design. All Rick had to do next time he purchased any goods was to enter the password on the checkout screen, and it would remember his delivery address for him.

As Rick put it:


So you send a random password insecurely and don't allow the user to change it, only because you think users would rather leave your web page to login to their email, search for the email that includes the password and copy that password in your web page, instead of just filling in their address that they know by heart.

Of course in this case, it doesn't matter one bit: Rick isn't going back to buy anything else. He didn't name-and-shame, but I encourage you to do so in the comments if you know of a retailer with similarly bad security. After all, there's only one thing that can beat programmer arrogance in this kind of situation: losing customers.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

TEDStatement on incident at TEDxBrussels

March 5, 2018 — Today at TEDxBrussels, an independently organized TEDx event, speaker and performance artist Deborah De Robertis was forcibly removed from the stage by one of the event’s organizers, who objected to the talk’s content.

We have reviewed the situation and spoken with the organizer. While we know there are moments when it is difficult to decide how to respond to a situation, this response was deeply inappropriate. We are immediately revoking the TEDxBrussels license granted to this individual.

TEDFollow your dreams without fear: 4 questions with Zubaida Bai

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with women’s health advocate and TED Fellow Zubaida Bai about what inspires her work to improve the health and livelihoods of women worldwide.

TED: Tell us who you are.
Zubaida Bai: I am a women’s health advocate, a mother, a designer and innovator of health and livelihood solutions for underserved women and girls. I’ve traveled to the poorest communities in the world, listened compassionately to women and observed their challenges and indignities. As an entrepreneur and thought leader, I’m putting my passion into a movement that will address market failures, break taboos, and elevate the health of women and girls as a core topic in the world.

TED: What’s a bold move you’ve made in your career?
ZB: The decision I made with my husband and co-founder to make our company a for-profit venture. We wanted to prove that the poor are not poor in mind, and that if you offer them a quality product that they need and can afford, they will buy it. We also wanted to show that our business model — serving the bottom of the pyramid — was scalable. Being a sustainable social enterprise is tough, especially if you serve women and children. But relying on non-profit donations, especially for women’s health, comes with a price. And that price is often an endless cycle of fundraising that makes it hard to create jobs and economically lift up the very communities being served. We are proud that every woman in our facilities in Chennai receives healthcare in addition to her salary.

TED: Tell us about a woman who inspires you.
ZB: My mother. She worked very hard under social constraints in India that were not favorable towards women. She was always working side jobs and creating small enterprises to help keep our family going, and I learned a lot from her. She also pushed me and believed in me and always created opportunities for me that she was denied and didn’t have access to.

TED: If you could go back in time, what would you tell your 18-year-old self?
ZB: To believe in your true potential. To follow your dreams without fear, as success is believing in your dreams and having the courage to pursue them — not the end result.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

Sociological ImagesAre We Really Looking at Body Cameras?

The Washington Post has been collecting data on documented fatal police shootings of civilians since 2015, and they recently released an update to the data set with incidents through the beginning of 2018. Over at Sociology Toolbox, Todd Beer has a great summary of the data set and a number of charts on how these shootings break down by race.

One of the main policy reforms suggested to address this problem is body cameras—the idea being that video evidence will reduce the number of killings by monitoring police behavior. Of course, not all police departments implement these cameras and their impact may be quite small. One small way to address these problems is public visibility and pressure.

So, how often are body cameras incorporated into incident reporting? Not that often, it turns out. I looked at all the shootings of unarmed civilians in The Washington Post’s dataset, flagging the ones where news reports indicated a body camera was in use. The measure isn’t perfect, but it lends some important context.

(Click to Enlarge)

Body cameras were logged in only 37 of 219 cases—about 17% of the time—and a log doesn’t necessarily mean the camera present was even recording. Sociologists know that organizations are often slow to implement new policies, and they don’t often just bend to public pressure. But there also hasn’t been a change in the reporting of body cameras, and this highlights another potential stumbling block as we track efforts for police reform.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.



Cory DoctorowHow to be better at being pissed off at Big Tech

My latest Locus column, “Let’s Get Better at Demanding Better from Tech,” looks at how science fiction can make us better critics of technology by imagining how tech could be used in different social and economic contexts than the one we live in today.

The “pro-tech” side’s argument is some variation on, “You can’t get the social benefits of Facebook without letting us spy on you and manipulate you — if you want to stay in touch with your friends, that’s the price of admission.” All too often, the “anti-tech” side takes this premise at face value: “Since we can’t hang out with our friends online without being spied on and manipulated, you need to stop wanting to hang out with your friends online.”

But the science fiction version of this goes, “What kinds of systems could we build if we wanted to hang out with our friends without being spied on and manipulated — and what kinds of political, regulatory and technological interventions would make those systems easier to build?”

A critique of technology that focuses on its market conditions, rather than its code, yields up some interesting alternate narratives. It has become fashionable, for example, to say that advertising was the original sin of online publication. Once the norm emerged that creative work would be free and paid for through attention – that is, by showing ads – the wheels were set in motion, leading to clickbait, political polarization, and invasive, surveillant networks: “If you’re not paying for the product, you’re the product.”

But if we understand the contours of the advertising marketplace as being driven by market conditions, not “attention economics,” a different story emerges. Market conditions have driven incredible consolidation in every sector of the economy, meaning that fewer and fewer advertisers call the shots, and meaning that more and more of the money flows through fewer and fewer payment processors. Compound that with lax anti-trust enforcement, and you have companies that are poised to put pressure on publishers and control who sees which information.

In 2018, companies from John Deere to GM to Johnson & Johnson use digital locks and abusive license agreements to force you to submit to surveillance and control how you use their products. It’s true that if you don’t pay for the product, you’re the product – but if you’re a farmer who’s just shelled out $500,000 for a new tractor, you’re still the product.

The “original sin of advertising” story says that if only microtransactions had been technologically viable and commercially attractive, we could have had an attention-respecting, artist-compensating online world, but in a world of mass inequality, financializing culture and discourse means excluding huge swaths of the population from the modern public sphere. If the Supreme Court’s Citizens United decision has you convinced that money has had a corrupting influence on who gets to speak, imagine how corrupting the situation would be if you also had to pay to listen.

Let’s Get Better at Demanding Better from Tech [Cory Doctorow/Locus]

Rondam RamblingsIs it time to take the Hyperloop seriously? No.

Over four years since it was first introduced, Ars Technica asks if it is time to take the Hyperloop seriously.  And four years since I first gave it, the answer is still a resounding no.  Not only has the thermal expansion problem not been solved, there has been (AFAICT) absolutely no attention paid to simple operational concerns that could be show-stoppers.  Like terrorism.  If you think

CryptogramIntimate Partner Threat

Princeton's Karen Levy has a good article on computer security and the intimate partner threat:

When you learn that your privacy has been compromised, the common advice is to prevent additional access -- delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it's almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages -- but if you don't preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).

Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials -- like passwords and security questions -- are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you're in the same physical space. In many cases, the abuser might even own your phone -- or might have access to your communications data because you share a family plan. Things like security questions are unlikely to be effective tools for protecting your security, because the abuser knows or can guess at intimate details about your life -- where you were born, what your first job was, the name of your pet.

Worse Than FailureCodeSOD: A Very Private Memory

May the gods spare us from “clever” programmers.

Esben found this little block of C# code:

System.Diagnostics.Process proc = System.Diagnostics.Process.GetCurrentProcess();
long check = proc.PrivateMemorySize64;
if (check > 1150000000)

Even before you check on the objects and methods in use, it’s hard to figure out what the heck this method is supposed to do. If some memory stat is over a certain size, pop up a message box and break out of the method? Why? Isn’t this more the case for an exception? Since the base value is hard coded, what happens if I run this code on a machine with way more RAM? Or configure the CLR to give my process more memory? Or…

If the goal was to prevent an operation from starting if there wasn’t enough free memory, this code is dumb. It’s “clever”, in the sense that the original developer said, “Hey, I’m about to do something memory intensive, let me make sure there’s enough memory” and then patted themselves on the head for practicing defensive programming techniques.

But that isn’t exactly what this code does. PrivateMemorySize64 simply reports how much memory is allocated to the process. Not how much is free, not how much is used, just… how much there is. That number may grow as the process allocates objects, so if it’s too large relative to available memory… you’ll be paging a bunch, which isn’t great, I suppose, but it still doesn’t explain this check.
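
The distinction the original author missed is easier to see with concrete counters. A sketch using the JVM's analogous numbers (not the .NET API from the article): "memory allocated to the process" is a different quantity from "memory free for use."

```java
// Illustrates allocated vs. free vs. used memory using the JVM's own
// counters -- the same distinction PrivateMemorySize64 blurs in .NET.
public class MemoryCounters {
    // Used heap = what's allocated to the process minus the unused
    // portion of that allocation.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory(); // heap currently allocated to the process
        long free  = rt.freeMemory();  // unused portion of that allocation
        long max   = rt.maxMemory();   // ceiling the allocation may grow to
        System.out.printf("used=%d free=%d total=%d max=%d%n",
                usedHeap(), free, total, max);
    }
}
```

A guard against "not enough memory for this operation" would have to compare the expected allocation against the free and maximum figures, not against the total currently allocated, and even then it races against the garbage collector.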

This almost certainly means it was a case of “works on my/one machine”, where the developer tuned the number 1150000000 based on one specific machine. Either that, or there was a memory leak in the code (and yes, even garbage-collected languages can have memory leaks if you’re an aggressively “clever” programmer) and this was the developer’s way of telling people, “Hey, restart the program.”


Planet Linux AustraliaRussell Coker: Compromised Guest Account

Some of the workstations I run are sometimes used by multiple people. Having multiple people share an account is bad for security so having a guest account for guest access is convenient.

If a system doesn’t allow logins over the Internet then a strong password is not needed for the guest account.

If such a system later allows logins over the Internet then hostile parties can try to guess the password. This happens even if you don’t use the default port for ssh.

This recently happened to a system I run. The attacker logged in as guest, changed the password, and installed a cron job to run every minute and restart their blockchain mining program if it had been stopped.

In 2007 a bug was filed against the Debian package openssh-server requesting that an AllowUsers directive be added to the default /etc/ssh/sshd_config file [1]. If that bug hadn’t been marked as “wishlist” and left alone for 11 years, then I would probably have set it to only allow ssh connections to the one account I desired, which always had a strong password.
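
The directive in question is a one-liner. A minimal sketch (the username is a placeholder; list every account that should ever be reachable over ssh):

```
# /etc/ssh/sshd_config
# Only the listed accounts may authenticate over ssh; every other
# account (including "guest") is refused before password
# authentication is even attempted.
AllowUsers alice
```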

I’ve been a sysadmin for about 25 years (since before ssh was invented). I have been a Debian Developer for almost 20 years, including working on security related code. The fact that I stuffed up in regard to this issue suggests that there are probably many other people making similar mistakes, and probably most of them aren’t monitoring things like system load average and temperature which can lead to the discovery of such attacks.


Cory DoctorowA key to page-numbers in the Little Brother audiobook

Mary Kraus teaches my novel Little Brother to health science interns learning about cybersecurity; to help a student who has a print disability, Mary created a key that maps the MP3 files in the audiobook to the Tor paperback edition. She was kind enough to make her doc public to help other people move easily from the audiobook to the print edition — thanks, Mary!


Krebs on SecurityPowerful New DDoS Method Adds Extortion

Attackers have seized on a relatively new method for executing distributed denial-of-service (DDoS) attacks of unprecedented disruptive power, using it to launch record-breaking DDoS assaults over the past week. Now evidence suggests this novel attack method is fueling digital shakedowns in which victims are asked to pay a ransom to call off crippling cyberattacks.

On March 1, DDoS mitigation firm Akamai revealed that one of its clients was hit with a DDoS attack that clocked in at 1.3 Tbps, which would make it the largest publicly recorded DDoS attack ever.

The type of DDoS method used in this record-breaking attack abuses a legitimate and relatively common service called “memcached” (pronounced “mem-cash-dee”) to massively amp up the power of their DDoS attacks.

Installed by default on many Linux operating system versions, memcached is designed to cache data and ease the strain on heavier data stores, like disk or databases. It is typically found in cloud server environments and it is meant to be used on systems that are not directly exposed to the Internet.

Memcached communicates using the User Datagram Protocol or UDP, which allows communications without any authentication — pretty much anyone or anything can talk to it and request data from it.

Because memcached doesn’t support authentication, an attacker can “spoof” or fake the Internet address of the machine making that request so that the memcached servers responding to the request all respond to the spoofed address — the intended target of the DDoS attack.

Worse yet, memcached has a unique ability to take a small amount of attack traffic and amplify it into a much bigger threat. Most popular DDoS tactics that abuse UDP connections can amplify the attack traffic 10 or 20 times — allowing, for example, a 1 MB file request to generate a response that includes between 10 MB and 20 MB of traffic.

But with memcached, an attacker can force the response to be thousands of times the size of the request. All of the responses get sent to the target specified in the spoofed request, and it requires only a small number of open memcached servers to create huge attacks using very few resources.
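
The arithmetic behind "thousands of times" is simple bytes-out over bytes-in. A back-of-the-envelope sketch (the byte counts are illustrative assumptions, not measurements from these attacks):

```java
// Amplification factor = bytes reflected at the victim per byte the
// attacker actually sends in spoofed requests.
public class AmplificationSketch {
    static double factor(long requestBytes, long responseBytes) {
        return (double) responseBytes / requestBytes;
    }

    public static void main(String[] args) {
        // Classic UDP reflector: 1 MB of requests yields roughly
        // 10-20 MB of response traffic.
        System.out.println(factor(1_000_000, 20_000_000)); // 20.0
        // Memcached: a request of a few dozen bytes can retrieve a
        // cached value of up to ~1 MB, so the factor explodes.
        System.out.println(factor(15, 1_000_000)); // tens of thousands
    }
}
```

This is why a handful of exposed servers and a home connection suffice: the attacker's bandwidth is multiplied by whatever ratio the reflectors will serve.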

Akamai believes there are currently more than 50,000 known memcached systems exposed to the Internet that can be leveraged at a moment’s notice to aid in massive DDoS attacks.

Both Akamai and Qrator — a Russian DDoS mitigation company — published blog posts on Feb. 28 warning of the increased threat from memcached attacks.

“This attack was the largest attack seen to date by Akamai, more than twice the size of the September, 2016 attacks that announced the Mirai botnet and possibly the largest DDoS attack publicly disclosed,” Akamai said [link added]. “Because of memcached reflection capabilities, it is highly likely that this record attack will not be the biggest for long.”

According to Qrator, this specific possibility of enabling high-value DDoS attacks was disclosed in 2017 by a Chinese group of researchers from the cybersecurity 0Kee Team. The larger concept was first introduced in a 2014 Black Hat U.S. security conference talk titled “Memcached injections.”


On Thursday, KrebsOnSecurity heard from several experts from Cybereason, a Boston-based security company that’s been closely tracking these memcached attacks. Cybereason said its analysis reveals the attackers are embedding a short ransom note and payment address into the junk traffic they’re sending to memcached services.

Cybereason said it has seen memcached attack payloads that consist of little more than a simple ransom note requesting payment of 50 XMR (Monero virtual currency) to be sent to a specific Monero account. In these attacks, Cybereason found, the payment request gets repeated until the file reaches approximately one megabyte in size.

The ransom demand (50 Monero) found in the memcached attacks by Cybereason on Thursday.

Memcached can accept files and host files in temporary memory for download by others. So the attackers will place the 1 MB file full of ransom requests onto a server with memcached, and request that file thousands of times — all the while telling the service that the replies should all go to the same Internet address — the address of the attack’s target.

“The payload is the ransom demand itself, over and over again for about a megabyte of data,” said Matt Ploessel, principal security intelligence researcher at Cybereason. “We then request the memcached ransom payload over and over, and from multiple memcached servers to produce an extremely high volume DDoS with a simple script and any normal home office Internet connection. We’re observing people putting up those ransom payloads and DDoSsing people with them.”

Because it only takes a handful of memcached servers to launch a large DDoS, security researchers working to lessen these DDoS attacks have been focusing their efforts on getting Internet service providers (ISPs) and Web hosting providers to block traffic destined for the UDP port used by memcached (port 11211).

Ofer Gayer, senior product manager at security firm Imperva, said many hosting providers have decided to filter port 11211 traffic to help blunt these memcached attacks.

“The big packets here are very easy to mitigate because this is junk traffic and anything coming from that port (11211) can be easily mitigated,” Gayer said.

Several different organizations are mapping the geographical distribution of memcached servers that can be abused in these attacks. Here’s the world at-a-glance, from our friends at

The geographic distribution of memcached servers exposed to the Internet. Image:

Here are the Top 20 networks that are hosting the most number of publicly accessible memcached servers at this moment, according to data collected by Cybereason:

The global ISPs with the most number of publicly available memcached servers.

DDoS monitoring site publishes a live, running list of the latest targets getting pelted with traffic in these memcached attacks.

What do the stats at tell us? According to netlab@360, memcached attacks were not super popular as an attack method until very recently.

“But things have greatly changed since February 24th, 2018,” netlab wrote in a Mar. 1 blog post, noting that in just a few days memcached-based DDoS went from less than 50 events per day, up to 300-400 per day. “Today’s number has already reached 1484, with an hour to go.”

Hopefully, the global ISP and hosting community can come together to block these memcached DDoS attacks. I am encouraged by what I have heard and seen so far, and hope that can continue in earnest before these attacks start becoming more widespread and destructive.

Here’s the Cybereason video from which that image above with the XMR ransom demand was taken:

Cory DoctorowI’m coming to the Adelaide Festival this weekend (and then to Wellington, NZ!)

I’m on the last two cities in my Australia/NZ tour for my novel Walkaway: today, I’m flying to Adelaide for the Adelaide Festival, where I’m appearing in several program items: Breakfast with Papers on Sunday at 8AM; a book signing on Monday at 10AM in Dymocks at Rundle Mall; “Dust Devils,” a panel followed by a signing on Monday at 5PM on the West Stage at Pioneer Women’s Memorial Garden; and “Craphound,” a panel/signing on Tuesday at 5PM on the East Stage at Pioneer Women’s Memorial Garden.

After Adelaide, I’m off to Wellington for Writers and Readers Week and then the NetHui one-day copyright event.

I’ve had a fantastic time in Perth, Melbourne and Sydney and it’s been such a treat to meet so many of you — I’m looking so forward to these last two stops!

CryptogramFriday Squid Blogging: Searching for Humboldt Squid with Electronic Bait

Video and short commentary.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramMalware from Space

Since you don't have enough to worry about, here's a paper postulating that space aliens could send us malware capable of destroying humanity.

Abstract: A complex message from space may require the use of computers to display, analyze and understand. Such a message cannot be decontaminated with certainty, and technical risks remain which can pose an existential threat. Complex messages would need to be destroyed in the risk averse case.

I think we're more likely to be enslaved by malicious AIs.

Worse Than FailureError'd: I Don't Always Test my Code, but When I do...

"Does this mean my package is here or is it also in development?" writes Nariim.


Stuart L. wrote, "Who needs a development environment when you can just test in production on the 'Just In' feed?"


"It was so nice of Three to unexpectedly include me - a real user - in their User Acceptance Testing. Yeah, it's still not fixed," wrote Paul P.


"I found this great nearby hotel option that transcended into the complex plane," Rosenfield writes.


Stuart L. also wrote in, "I can't think of a better place for BoM to test out cyclone warnings than in production."


"The Ball Don't Lie blog at Yahoo! Sports seems to have run out of content during the NBA Finals so they started testing instead," writes Carlos S.



CryptogramCellebrite Unlocks iPhones for the US Government

Forbes reports that the Israeli company Cellebrite can probably unlock all iPhone models:

Cellebrite, a Petah Tikva, Israel-based vendor that's become the U.S. government's company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department for Homeland Security back in November 2017, most likely with Cellebrite technology.


It also appears the feds have already tried out Cellebrite tech on the most recent Apple handset, the iPhone X. That's according to a warrant unearthed by Forbes in Michigan, marking the first known government inspection of the bleeding edge smartphone in a criminal investigation. The warrant detailed a probe into Abdulmajid Saidi, a suspect in an arms trafficking case, whose iPhone X was taken from him as he was about to leave America for Beirut, Lebanon, on November 20. The device was sent to a Cellebrite specialist at the DHS Homeland Security Investigations Grand Rapids labs and the data extracted on December 5.

This story is based on some excellent reporting, but leaves a lot of questions unanswered. We don't know exactly what was extracted from any of the phones. Was it metadata or data, and what kind of metadata or data was it?

The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents. There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.

EDITED TO ADD (3/1): Another article, with more information. It looks like there's an arms race going on between Apple and Cellebrite. At least, if Cellebrite is telling the truth -- which they may or may not be.

Planet Linux AustraliaFrancois Marier: Redirecting an entire site except for the certbot webroot

In order to be able to use the webroot plugin for certbot and automatically renew the Let's Encrypt certificate for, I had to put together an Apache config that would do the following on port 80:

  • Let /.well-known/acme-challenge/* through on the bare domain (
  • Redirect anything else to

The reason for this is that the main Libravatar service listens on and not, but certbot needs to ascertain control of the bare domain.

This is the configuration I ended up with:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>

    RewriteEngine on
    RewriteCond "/var/www/acme%{REQUEST_URI}" !-f
    RewriteRule ^(.*)$ [last,redirect=301]
</VirtualHost>

The trick I used here is to make the redirection RewriteRule conditional on the requested file (%{REQUEST_URI}) not existing in the /var/www/acme directory, the one where I tell certbot to drop its temporary files.

Here are the relevant portions of /etc/letsencrypt/renewal/

authenticator = webroot
account = 

webroot_map = /var/www/acme = /var/www/acme


Krebs on SecurityFinancial Cyber Threat Sharing Group Phished

The Financial Services Information Sharing and Analysis Center (FS-ISAC), an industry forum for sharing data about critical cybersecurity threats facing the banking and finance industries, said today that a successful phishing attack on one of its employees was used to launch additional phishing attacks against FS-ISAC members.

The fallout from the back-to-back phishing attacks appears to have been limited and contained, as many FS-ISAC members who received the phishing attack quickly detected and reported it as suspicious. But the incident is a good reminder to be on your guard, remember that anyone can get phished, and that most phishing attacks succeed by abusing the sense of trust already established between the sender and recipient.

The confidential alert FS-ISAC sent to members about a successful phishing attack that spawned phishing emails coming from the FS-ISAC.

Notice of the phishing incident came in an alert FS-ISAC shared with its members today and obtained by KrebsOnSecurity. It describes an incident on Feb. 28 in which an FS-ISAC employee “clicked on a phishing email, compromising that employee’s login credentials. Using the credentials, a threat actor created an email with a PDF that had a link to a credential harvesting site and was then sent from the employee’s email account to select members, affiliates and employees.”

The alert said while FS-ISAC was already planning and implementing a multi-factor authentication (MFA) solution across all of its email platforms, “unfortunately, this incident happened to an employee that was not yet set up for MFA. We are accelerating our MFA solution across all FS-ISAC assets.”

The FS-ISAC also said it upgraded its Office 365 email version to provide “additional visibility and security.”

In an interview with KrebsOnSecurity, FS-ISAC President and CEO Bill Nelson said his organization has grown significantly in new staff over the past few years to more than 75 people now, including Greg Temm, the FS-ISAC’s chief information risk officer.

“To say I’m disappointed this got through is an understatement,” Nelson said. “We need to accelerate MFA extremely quickly for all of our assets.”

Nelson observed that “The positive messaging out of this I guess is anyone can become victimized by this.” But according to both Nelson and Temm, the phishing attack that tricked the FS-ISAC employee into giving away email credentials does not appear to have been targeted — nor was it particularly sophisticated.

“I would classify this as a typical, routine, non-targeted account harvesting and phishing,” Temm said. “It did not affect our member portal, or where our data is. That’s 100 percent multifactor. In this case it happened to be an asset that did not have multifactor.”

In this incident, it didn’t take a sophisticated actor to gain privileged access to an FS-ISAC employee’s inbox. But attacks like these raise the question: How successful might such a phishing attack be if it were only slightly more professional and/or organized?

Nelson said his staff members all participate in regular security awareness training and testing, but that there is always room to fill security gaps and move the needle on how many people click when they shouldn’t with email.

“The data our members share with us is fully protected,” he said. “We have a plan working with our board of directors to make sure we have added security going forward,” Nelson said. “But clearly, recognizing where some of these softer targets are is something every company needs to take a look at.”

CryptogramRussians Hacked the Olympics

Two weeks ago, I blogged about the myriad of hacking threats against the Olympics. Last week, the Washington Post reported that Russia hacked the Olympics network and tried to cast the blame on North Korea.

Of course, the evidence is classified, so there's no way to verify this claim. And while the article speculates that the hacks were a retaliation for Russia being banned due to doping, that doesn't ring true to me. If they tried to blame North Korea, it's more likely that they're trying to disrupt something between North Korea, South Korea, and the US. But I don't know.

Worse Than FailureCodeSOD: What a Stream

In Java 8, they added the Streams API. Coupled with lambdas, this means that developers can write the concise and expressive code traditionally associated with functional programming. It’s the best bits of Java blended with the best bits of Clojure! The good news is that it allows you to write less code! The better news is that you can abuse it to write more code, if you’re so inclined.

Antonio inherited some code written by “Frenk”, who was thus inclined. Frenk wasn’t particularly happy with their job, but was one of the “rockstar programmers” in the eyes of management, so Frenk was given the impossible-to-complete tasks and given complete freedom in the solution.

Frenk had a problem, though. Nothing Frenk was doing was actually all that impossible. If they solved everything with code that anyone else could understand, they wouldn’t look like an amazing genius. So Frenk purposefully obfuscated every line of code, ignoring indentation, favoring one-character variable names, and generally trying to solve each problem in the most obtuse way possible.

Which yielded this.

    Resource[] r; //@Autowired ndr
    Map<File, InputStream> m = null;
    if (file != null)
    {m = new HashMap<>();
    m.put(file, new FileInputStream(file));}else

    m = Arrays.stream(r).collect(Collectors.toMap(x -> { try { return x.getFile(); }
catch (Exception e) { throw new IllegalStateException(e);}},
    x -> {try{return x.getInputStream();}catch (Exception e){throw new IllegalStateException(e);}}));

As purposefully unreadable code, I’d say that Frenk fails. That’s not to say that it isn’t bad, but Frenk’s attempts to make it unreadable… just make it annoying. I understand what the code does, but I’m just left wondering at why.

I can definitely say that this has never been tested in a case where the file variable is non-null, because that wouldn’t work. Antonio confirms that their IDE was throwing up plenty of warnings about calling a method on a variable that was probably null on the m.put(…) line. It’s nice that they half-way protect against nulls: one variable is checked, but the other isn’t.

Frenk’s real artistry is in employing streams to convert an array to a map. On its surface, it’s not an objectively wrong approach- this is the kind of things streams are theoretically good at. Examine each element in the array, and apply a lambda that extracts the key and another lambda that extracts the value and put it into a map.
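Done cleanly, the pattern is perfectly reasonable. Here is a hedged sketch, using a hypothetical `Resource`-like record with key and value accessors rather than the actual framework type from the story:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamToMap {
    // Hypothetical stand-in for the real Resource type: just enough
    // surface area (a key accessor and a value accessor) for the demo.
    record Resource(String file, String contents) {}

    public static void main(String[] args) {
        Resource[] resources = {
            new Resource("a.txt", "alpha"),
            new Resource("b.txt", "beta"),
        };
        // One lambda extracts the key, another extracts the value,
        // and Collectors.toMap assembles the result.
        Map<String, String> m = Arrays.stream(resources)
                .collect(Collectors.toMap(Resource::file, Resource::contents));
        System.out.println(m.get("a.txt")); // alpha
    }
}
```

One caveat on the collector itself: `Collectors.toMap` throws an `IllegalStateException` on duplicate keys, so this form only fits when keys are known to be unique.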

There are many real-world cases where I might use this exact technique. But in this case, Antonio refactored it to something a bit cleaner:

        Resource[] resources; //@Autowired again
        Map<File, InputStream> resourceMap = new HashMap<>();
        if (file != null)
            resourceMap.put(file, new FileInputStream(file));
        else
            for (Resource res : resources)
                resourceMap.put(res.getFile(), res.getInputStream());

Here, the iterative approach is much simpler, and the intent of the code is more clear. Just because you have a tool doesn’t make it the right tool for the job. And before you wonder about the lack of exception handling: both the original block and the refactored version were already wrapped up in an exception handling block that can handle the IOException that failed access to the files would throw.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Cory DoctorowHey, Sydney! I’m coming to see you tonight (then Adelaide and Wellington!)

I’m just about to go to the airport to fly to Sydney for tonight’s event, What should we do about Democracy?

It’s part of the Australia/New Zealand tour for Walkaway, and from Sydney, I’m moving on to the Adelaide Festival and then to Wellington for Writers and Readers Week and the NetHui one-day event on copyright.

It feels like democracy is under siege, even in rich, peaceful countries like Australia that have escaped financial shocks and civil strife. Populist impulses have been unleashed in the UK and USA. There is a record lack of trust in the institutions of politics and government, exacerbated by the ways in which social media and digital technology can spread ‘fake news’ and are being harnessed by foreign powers to meddle in politics. Important issues that citizens care about, like climate change, are sidelined by professional politicians, enhancing the appeal of outsider figures. Do these problems add up to the failure of democracy? Are Brexit and Trump outliers, or the new normal? Join a lively panel of experts and commentators as they explore some big questions about the future of democracy, and think more clearly about what we ought to do.

Speakers Cory Doctorow, A.C. Grayling, Rebecca Huntley and Lenore Taylor

Chair Jeremy Moss

Cory DoctorowMy short story about better cities, where networks give us the freedom to schedule our lives to avoid heat-waves and traffic jams

I was lucky enough to be invited to submit a piece to Ian Bogost’s Atlantic series on the future of cities (previously: James Bridle, Bruce Sterling, Molly Sauter, Adam Greenfield); I told Ian I wanted to build on my 2017 Locus column about using networks to allow us to coordinate our work and play in a way that maximized our freedom, so that we could work outdoors on nice days, or commute when the traffic was light, or just throw an impromptu block party when the neighborhood needed a break.

The story is out today, with a gorgeous illustration by Molly Crabapple; the Atlantic called it “The City of Coordinated Leisure,” but in my heart it will always be “Coase’s Day Off: a microeconomics of coordinated leisure.”

There had been some block parties on Lima Street when Arturo had been too small to remember them, but then there had been a long stretch of unreasonably seasonable weather and no one had tried it, not until the year before, on April 18, a Thursday after a succession of days that vied to top each other for inhumane conditions, the weather app on the hallway wall showing 112 degrees before breakfast.

Mr. Papazian was the block captain for that party, and the first they’d known of it was when Arturo’s dad called out to his mom that Papazian had messaged them about a block party, and there was something funny in Dad’s tone, a weird mix of it’s so crazy and let’s do it.

That had been a day to remember, and Arturo had remembered, and watched the temperature.

The City of Coordinated Leisure [Cory Doctorow/The Atlantic]

Krebs on SecurityHow to Fight Mobile Number Port-out Scams

T-Mobile, AT&T and other mobile carriers are reminding customers to take advantage of free services that can block identity thieves from easily “porting” your mobile number out to another provider, which allows crooks to intercept your calls and messages while your phone goes dark. Tips for minimizing the risk of number porting fraud are available below for customers of all four major mobile providers, including Sprint and Verizon.

Unauthorized mobile phone number porting is not a new problem, but T-Mobile said it began alerting customers about it earlier this month because the company has seen a recent uptick in fraudulent requests to have customer phone numbers ported over to another mobile provider’s network.

“We have been alerting customers via SMS that our industry is experiencing a phone number port out scam that could impact them,” T-Mobile said in a written statement. “We have been encouraging them to add a port validation feature, if they’ve not already done so.”

Crooks typically use phony number porting requests when they have already stolen the password for a customer account (either for the mobile provider’s network or for another site), and wish to intercept the one-time password that many companies send to the mobile device to perform two-factor authentication.

Porting a number to a new provider shuts off the phone of the original user, and forwards all calls to the new device. Once in control of the mobile number, thieves can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.

In these cases, the fraudsters can call a customer service specialist at a mobile provider and pose as the target, providing the mark’s static identifiers like name, date of birth, social security number and other information. Often this is enough to have a target’s calls temporarily forwarded to another number, or ported to a different provider’s network.

“Port out fraud has been an industry problem for a long time, but recently we’ve seen an uptick in this illegal activity,” T-Mobile said. “We’re not providing specific metrics, but it’s been enough that we felt it was important to encourage customers to add extra security features to their accounts.”

In a blog post published Tuesday, AT&T said bad guys sometimes use illegal porting to steal your phone number, transfer the number to a device they control and intercept text authentication messages from your bank, credit card issuer or other companies.

“You may not know this has happened until you notice your mobile device has lost service,” reads a post by Brian Rexroad, VP of security relations at AT&T. “Then, you may notice loss of access to important accounts as the attacker changes passwords, steals your money, and gains access to other pieces of your personal information.”

Rexroad says in some cases the thieves just walk into an AT&T store and present a fake ID and your personal information, requesting to switch carriers. Porting allows customers to take their phone number with them when they change phone carriers.

The law requires carriers to provide this number porting feature, but there are ways to reduce the risk of this happening to you.

T-Mobile suggests adding its port validation feature to all accounts. To do this, call 611 from your T-Mobile phone or dial 1-800-937-8997 from any phone. The T-Mobile customer care representative will ask you to create a 6-to-15-digit passcode that will be added to your account.

“We’ve included alerts in the T-Mobile customer app and on, but we don’t want customers to wait to get an alert to take action,” the company said in its statement. “Any customer can call 611 at any time from their mobile phone and have port validation added to their accounts.”

Verizon requires a match on a password or a PIN associated with the account for a port to go through. Subscribers can set their PIN via their Verizon Wireless website account or by visiting a local shop.

Sprint told me that in order for a customer to port their number to a different carrier, they must provide the correct Sprint account number and PIN number for the port to be approved. Sprint requires all customers to create a PIN during their initial account setup.

AT&T calls its two-factor authentication “extra security,” which involves creating a unique passcode on your AT&T account that requires you to provide that code before any changes can be made — including ports initiated through another carrier. Follow this link for more information. And don’t use something easily guessable like your SSN (the last four of your SSN is the default PIN, so make sure you change it quickly to something you can remember but that’s non-obvious).

Bigger picture, these porting attacks are a good reminder to use something other than a text message or a one-time code that gets read to you in an automated phone call. Whenever you have the option, choose the app-based alternative: Many companies now support third-party authentication apps like Google Authenticator and Authy, which can act as powerful two-factor authentication alternatives that are not nearly as easy for thieves to intercept.
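What makes app-based codes harder to intercept is that nothing is transmitted at login time: both the server and the app derive the code locally from a shared secret and the clock. A minimal sketch of that derivation (HOTP per RFC 4226, with TOTP’s 30-second time step from RFC 6238; the secret shown is the test key published in the RFC, not anything from a real account):

```java
import java.nio.charset.StandardCharsets;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Totp {
    // HOTP (RFC 4226): HMAC-SHA1 over an 8-byte counter, dynamically
    // truncated to a 31-bit integer, reduced to six digits.
    static int hotp(byte[] key, long counter) throws Exception {
        byte[] msg = new byte[8];
        for (int i = 7; i >= 0; i--) { msg[i] = (byte) counter; counter >>>= 8; }
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] h = mac.doFinal(msg); // 20 bytes for SHA-1
        int off = h[19] & 0x0f;      // dynamic truncation offset
        int bin = ((h[off] & 0x7f) << 24) | ((h[off + 1] & 0xff) << 16)
                | ((h[off + 2] & 0xff) << 8) | (h[off + 3] & 0xff);
        return bin % 1_000_000;      // 6-digit code
    }

    // TOTP (RFC 6238): HOTP keyed to the current 30-second interval.
    static int totp(byte[] key, long unixSeconds) throws Exception {
        return hotp(key, unixSeconds / 30);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "12345678901234567890".getBytes(StandardCharsets.US_ASCII);
        System.out.printf("%06d%n", totp(secret, System.currentTimeMillis() / 1000L));
    }
}
```

Because the code is computed on the device, there is no SMS or voice call to redirect when a number is ported; an attacker would need the secret itself.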

Several of the mobile companies referred me to the work of a Mobile Authentication task force created by the carriers last fall. They say the issue of unauthorized ports to commit fraud is being addressed by this initiative.

For more on tightening your mobile security stance, see last year’s story, “Is Your Mobile Carrier Your Weakest Link?”

CryptogramApple to Store Encryption Keys in China

Apple is bowing to pressure from the Chinese government and storing encryption keys in China. While I would prefer it if it would take a stand against China, I really can't blame it for putting its business model ahead of its desires for customer privacy.

Two more articles.

Worse Than FailureCodeSOD: The Part Version

Once upon a time, there was a project. Like most projects, it was understaffed, under-budgeted, under-estimated, and under the gun. Death marches ensued, and 80 hour weeks became the norm. The attrition rate was so high that no one who was there at the start of the project was there at the end of the project. Like the Ship of Theseus, each person was replaced at least once, but it was still the same team.

Eric wasn’t on that team. He was, however, a consultant. When the project ended and nothing worked, Eric got called in to fix it. And then called back to fix it some more. And then called back to implement new features. And called back…

While diagnosing one problem, Eric stumbled across the method getPartVersions. A part number was always something like “123456-1”, where the first group of numbers was the part number itself, and the portion after the “-” was the version of that part.

So, getPartVersions, then, should be something like:

String getPartVersions(String part) {
    //sanity checks omitted
    return part.split("-")[1];
}
The first hint that things weren’t implemented in a sane way was the method’s signature:

    private List<Integer> getPartVersions(final String searchString)

Why was it returning a list? The calling code always used the first element in the list, and the list was always one element long.

    private List<Integer> getPartVersions(final String searchString) {
        final List<Integer> partVersions = new ArrayList<>();
        if (StringUtils.indexOfAny(searchString, DELIMITER) != -1) {
            final String[] splitString = StringUtils.split(searchString, DELIMITER);
            if (splitString != null && splitString.length > 1) {
                //this is the partIdentifier, we make it empty it so it will not be parsed as a version
                splitString[0] = "";
                for (String s : splitString) {
                    s = s.trim();
                    try {
                        if (s.length() <= 2) {
                            partVersions.add(Integer.parseInt(s));
                        }
                    } catch (final NumberFormatException ignored) {
                        //Do nothing probably not an partVersion
                    }
                }
            }
        }
        return partVersions;
    }

A part number is always in the form “{PART}-{VERSION}”. That is what the variable searchString should contain. So, they do their basic sanity checks: is there a dash there, does it split into two pieces, etc. Even these sanity checks hint at a WTF, as StringUtils is obviously just a wrapper around built-in string functions.

Things get really odd, though, with this:

                splitString[0] = "";
                for (String s : splitString) //…

Throw away the part number, then iterate across the entire series of strings we made by splitting. Check the length- if it’s less than or equal to two, it must be the part version. Parse it into an integer and put it in the list. The real “genius” element of this code is that since the first entry in the splitString array is set to an empty string, Integer.parseInt will throw an exception, thus ensuring we don’t accidentally put the part number in our list.

I’ve personally written methods that have this sort of tortured logic, and given what Eric tells us about the history of the project, I suspect I know what happened here. This method was written before the requirement it fulfilled was finalized. No one, including the business users, actually knew the exact format or structure of a part number. The developer got five different explanations, which turned out to be wrong in 15 different ways, and implemented a compromise that just kept getting tweaked until someone looked at the results and said, “Yes, that’s right.” The dev then closed out the requirement and moved onto the next one.

Eric left the method alone: he wasn’t being paid to refactor things, and too much downstream code depended on the method signature returning a List<Integer>.



Krebs on SecurityBot Roundup: Avalanche, Kronos, NanoCore

It’s been a busy few weeks in cybercrime news, justifying updates to a couple of cases we’ve been following closely at KrebsOnSecurity. In Ukraine, the alleged ringleader of the Avalanche malware spam botnet was arrested after eluding authorities in the wake of a global cybercrime crackdown there in 2016. Separately, a case that was hailed as a test of whether programmers can be held accountable for how customers use their product turned out poorly for 27-year-old programmer Taylor Huddleston, who was sentenced to almost three years in prison for making and marketing a complex spyware program.

First, the Ukrainian case. On Nov. 30, 2016, authorities across Europe coordinated the arrest of five individuals thought to be tied to the Avalanche crime gang, in an operation that the FBI and its partners abroad described as an unprecedented global law enforcement response to cybercrime. Hundreds of malicious web servers and hundreds of thousands of domains were blocked in the coordinated action.

The global distribution of servers used in the Avalanche crime machine. Source:

The alleged leader of the Avalanche gang — 33-year-old Russian Gennady Kapkanov — did not go quietly at the time. Kapkanov allegedly shot at officers with a Kalashnikov assault rifle through the front door as they prepared to raid his home, and then attempted to escape from his 4th-floor apartment balcony. He was later released, after police allegedly failed to file proper arrest records for him.

But on Monday Agence France-Presse (AFP) reported that Ukrainian authorities had once again collared Kapkanov, who was allegedly living under a phony passport in Poltava, a city in central Ukraine. No word yet on whether Kapkanov has been charged, which was supposed to happen Monday.

Kapkanov’s drivers license. Source:


Lawyers for Taylor Huddleston, a 27-year-old programmer from Hot Springs, Ark., originally asked a federal court to believe that the software he sold on the sprawling hacker marketplace Hackforums — a “remote administration tool” or “RAT” designed to let someone remotely administer one or many computers — was just a benign tool.

The bad things done with Mr. Huddleston’s tools, the defendant argued, were not Mr. Huddleston’s doing. Furthermore, no one had accused Mr. Huddleston of even using his own software.

The Daily Beast first wrote about Huddleston’s case in 2017, and at the time suggested his prosecution raised questions of whether a programmer could be held criminally responsible for the actions of his users. My response to that piece was “Dual-Use Software Criminal Case Not So Novel.”

Photo illustration by Lyne Lucien/The Daily Beast

The court was swayed by evidence that yes, Mr. Huddleston could be held criminally responsible for those actions. It sentenced him to 33 months in prison after the defendant acknowledged that he knew his RAT — a Remote Access Trojan dubbed “NanoCore RAT” — was being used to spy on webcams and steal passwords from systems running the software.

Of course Huddleston knew: He didn’t market his wares on some Craigslist software marketplace ad, or via video promos on his local cable channel: He marketed the NanoCore RAT and another software licensing program called Net Seal exclusively on Hackforums[dot]net.

This sprawling, English language forum has a deep bench of technical forum discussions about using RATs and other tools to surreptitiously record passwords and videos of “slaves,” the derisive term for systems secretly infected with these RATs.

Huddleston knew what many of his customers were doing because many NanoCore users also used Huddleston’s Net Seal program to keep their own RATs and other custom hacking tools from being disassembled or “cracked” and posted online for free. In short: He knew what programs his customers were using Net Seal on, and he knew what those customers had done or intended to do with tools like NanoCore.

The sentencing suggests that where you choose to sell something online says a lot about what you think of your own product and who’s likely buying it.

Daily Beast author Kevin Poulsen noted in a July 2017 story that Huddleston changed his tune and pleaded guilty. The story pointed to an accompanying plea in which Huddleston stipulated that he “knowingly and intentionally aided and abetted thousands of unlawful computer intrusions” in selling the program to hackers and that he “acted with the purpose of furthering these unauthorized computer intrusions and causing them to occur.”


Bleeping Computer’s Catalin Cimpanu observes that Huddleston’s case is similar to another being pursued by U.S. prosecutors against Marcus “MalwareTech” Hutchins, the security researcher who helped stop the spread of the global WannaCry ransomware outbreak in May 2017. Prosecutors allege Hutchins was the author and proprietor of “Kronos,” a strain of malware designed to steal online banking credentials.

Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm. Image:

On Sept. 5, 2017, KrebsOnSecurity published “Who is Marcus Hutchins?“, a breadcrumbs research piece on the public user profiles known to have been wielded by Hutchins. The data did not implicate him in the Kronos trojan, but it chronicles the evolution of a young man who appears to have sold and published online quite a few unique and powerful malware samples — including several RATs and custom exploit packs (as well as access to hacked PCs).

MalwareTech declined to be interviewed by this publication in light of his ongoing prosecution. But Hutchins has claimed he never had any customers because he didn’t write the Kronos trojan.

Hutchins has pleaded not guilty to all four counts against him, including conspiracy to distribute malicious software with the intent to cause damage to 10 or more affected computers without authorization, and conspiracy to distribute malware designed to intercept protected electronic communications.

Hutchins said through his @MalwareTechBlog account on Twitter Feb. 26 that he wanted to publicly dispute my Sept. 2017 story. But he didn’t specify why other than saying he was “not allowed to.”

MWT wrote: “mrw [my reaction when] I’m not allowed to debunk the Krebs article so still have to listen to morons telling me why I’m guilty based on information that isn’t even remotely correct.”

Hutchins’ tweet on Feb. 26, 2018.

According to a story at BankInfoSecurity, the evidence submitted by prosecutors for the government includes:

  • Statements made by Hutchins after he was arrested.
  • A CD containing two audio recordings from a county jail in Nevada where he was detained by the FBI.
  • 150 pages of Jabber chats between the defendant and an individual.
  • Business records from Apple, Google and Yahoo.
  • Statements (350 pages) by the defendant from another internet forum, which were seized by the government in another district.
  • Three to four samples of malware.
  • A search warrant executed on a third party, which may contain some privileged information.

The case against Hutchins continues apace in Wisconsin. A scheduling order for pretrial motions filed Feb. 22 suggests the court wishes to have a speedy trial that concludes before the end of April 2018.

TEDYou are here for a reason: 4 questions with Halla Tómasdóttir

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with financier, entrepreneur and onetime candidate for president of Iceland, Halla Tómasdóttir, about what influences, inspires and drives her to be bold.

TED: Tell us who you are.
Halla Tómasdóttir: I think of myself first and foremost as a change catalyst who is passionate about good leadership and a gender-balanced world. My leadership career started in corporate America with Mars and Pepsi Cola, but since then I have served as an entrepreneur, educator, investor, board director, business leader and presidential candidate. I am married, a proud mother of two teenagers and a dog and am perhaps best described by the title given to me by the New Yorker: “A Living Emoji of Sincerity.”

TED: What’s a bold move you’ve made in your career?
HT: I left a high-profile position as the first female CEO of the Iceland Chamber of Commerce to become an entrepreneur with the vision to incorporate feminine values into finance. I felt the urge to show a different way in a sector that felt unsustainable to me, and I longed to work in line with my own values.

TED: Tell us about a woman who inspires you.
HT: The women of Iceland inspired me at an early age, when they showed incredible courage, solidarity and sisterhood and “took the day off” (went on strike) and literally brought the country to its knees — as nothing worked when women didn’t do any work. Five years later, Iceland was the first country in the world to democratically elect a woman as president. I was 11 years old at the time, and her leadership has inspired me ever since. Her clarity on what she cares about and her humble way of serving those causes is truly remarkable.

TED: If you could go back in time, what would you tell your 18-year-old self?
HT: I would say: Halla, just be you and know that you are enough. People will frequently tell you things like: “This is the way we do things around here.” Don’t ever take that as a valid answer if it doesn’t feel right to you. We are not here to continue to do more of the same if it doesn’t work or feel right anymore. We are here to grow, ourselves and our society. You are here for a reason: make your life and leadership matter.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

Worse Than Failure-0//

In software development, there are three kinds of problems: small, big and subtle. The small ones are usually fairly simple to track down; a misspelled label, a math error, etc. The large ones usually take longer to find; a race condition that you just can't reproduce, an external system randomly feeding you garbage, and so forth.


The subtle problems are an entirely different beast. It can be as simple as somebody entering 4321 instead of 432l (432L), or a similar mix-up among 'i', 'l', '1', '0' and 'O'. It can be an interchanged comma and period. It can be something more complex, such as an unsupported third party library that throws back errors for undefined conditions, yet randomly provides so little information that it is useful to neither user nor developer.
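The 432l example is worth spelling out: in Java, a trailing lowercase letter quietly turns the digits into a different number entirely. This is an illustration of the homoglyph trap, not code from the story:

```java
public class Homoglyph {
    public static void main(String[] args) {
        long withLetter = 432l;  // lowercase 'l': the long literal 432
        int withDigit = 4321;    // the digit '1': four thousand three hundred twenty-one
        System.out.println(withLetter); // 432
        System.out.println(withDigit);  // 4321
    }
}
```

In a fixed-width editor font the two literals can be nearly indistinguishable, which is why style guides insist on the uppercase `L` suffix.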

Brujo B encountered such a beast back in 2003 in a sub-equatorial bank that had been especially fond of VB6. This bank had tried to implement standards. In particular, they wanted all of their error messages to appear consistently for their users. To this end, they put a great deal of time and effort into building a library to display error messages in a consistent format. Specifically:

  Description - Number / Details / Source
An example error message might be:

  File Not Found - 127 / File 'your.file' could not be found / FileImporter

Unfortunately, the designers of this routine could not compensate for all of the third party tools and libraries that did NOT set some/most/all of those variables. This led to interesting presentations of errors to both users and developers:

  - 34 / Network Connection Lost /
  Unauthorized - 401 //

Crystal Reports was particularly unhelpful, in that it refused to populate any field from which error details could be obtained, leading to the completely unhelpful:

  -0//
...which could only be interpreted as Something really bad happened, but we don't know what that is and you have no way to figure that out. It didn't matter what Brujo and peers did. Everything that they tried to cajole Crystal Reports into giving context information failed to varying degrees; they could only patch specific instances of errors; but the Ever-Useless™ -0// error kept popping up to bite them in the arse.
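The library itself is never shown, but its observable behavior is easy to reconstruct. This hypothetical sketch (the field names and spacing rules are assumptions) reproduces every error string quoted above, including the degenerate -0// that appears when nothing is set:

```java
public class ErrorFormat {
    // Assumed layout: "Description - Number / Details / Source".
    // Fields that a third-party tool never populated arrive as "" (or 0),
    // so the separators survive even when the content does not.
    static String format(String desc, int number, String details, String source) {
        return (desc.isEmpty() ? "" : desc + " ")
             + "-"
             + (number == 0 ? "0" : " " + number + " ")
             + "/"
             + (details.isEmpty() ? "" : " " + details + " ")
             + "/"
             + (source.isEmpty() ? "" : " " + source);
    }

    public static void main(String[] args) {
        // Everything set: the format works as designed.
        System.out.println(format("File Not Found", 127,
                "File 'your.file' could not be found", "FileImporter"));
        // Partially set: separators start to outnumber content.
        System.out.println(format("", 34, "Network Connection Lost", ""));
        System.out.println(format("Unauthorized", 401, "", ""));
        // Nothing set at all.
        System.out.println(format("", 0, "", "")); // -0//
    }
}
```

The lesson generalizes: a formatter that assumes its inputs are always populated degrades into noise exactly when an error message is needed most.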

After way too much time trying to slay the beast, they gave up, accepted it as one of their own and tried their best to find alternate ways of figuring out what the problems were.

Several years after moving on to saner pastures, Brujo returned to visit old friends. On the wall they had added a cool painting with many words that "describe the company culture". Layered in were management approved words, like "Trust" and "Loyalty". Some were more specific in-jokes, names of former employees, or references to big achievements the organization had made.

One of them was -0//


Don MartiWhat I don't get about Marketing

I want to try to figure out something I still don't understand about Marketing.

First, read this story by Sarah Vizard at Marketing Week: Why Google and Facebook should heed Unilever’s warnings.

All good points, right?

With the rise of fake news and revelations about how the Russians used social platforms to influence both the US election and EU referendum, the need for change is pressing, both for the platforms and for the advertisers that support them.

We know there's a brand equity crisis going on. Brand-unsafe placements are making mainstream brands increasingly indistinguishable from scams. So the story makes sense so far. But here's what I don't get.

For the call to action to work, Unilever really needs other brands to rally round but these have so far been few and far between.

Other brands? Why?

If brands are worth anything, they can at least help people tell one product apart from another.

Think Small VW ad

Saying that other brands need to participate in saving Unilever's brands from the three-ring shitshow of brand-unsafe advertising is like saying that Volkswagen really needs other brands to get into simple layouts and natural-sounding copy just because Volkswagen's agency did.

Not everybody has to make the same stuff and sell it the same way. Brands being different from each other is a good thing. (Right?)

generic food

Sometimes a problem on the Internet isn't a "let's all work together" kind of problem. Sometimes it's an opportunity for one brand to get out ahead of another.

What if every brand in a category kept on playing in the trash fire except one?

Planet Linux AustraliaLev Lafayette: Drupal "Access denied" Message

It happens rarely enough, but on occasion (such as after an upgrade to a database system (e.g., MySQL, MariaDB) or to the system version of a web-scripting language (e.g., PHP)), you can end up with your Drupal site failing to load, displaying only an error message similar to:

PDOException: SQLSTATE[HY000] [1044] Access denied for user 'username'@'localhost' to database 'database' in lock_may_be_available() (line 167 of /website/includes/

read more


CryptogramE-Mail Leaves an Evidence Trail

If you're going to commit an illegal act, it's best not to discuss it in e-mail. It's also best to Google tech instructions rather than asking someone else to do it:

One new detail from the indictment, however, points to just how unsophisticated Manafort seems to have been. Here's the relevant passage from the indictment. I've bolded the most important bits:

Manafort and Gates made numerous false and fraudulent representations to secure the loans. For example, Manafort provided the bank with doctored [profit and loss statements] for [Davis Manafort Inc.] for both 2015 and 2016, overstating its income by millions of dollars. The doctored 2015 DMI P&L submitted to Lender D was the same false statement previously submitted to Lender C, which overstated DMI's income by more than $4 million. The doctored 2016 DMI P&L was inflated by Manafort by more than $3.5 million. To create the false 2016 P&L, on or about October 21, 2016, Manafort emailed Gates a .pdf version of the real 2016 DMI P&L, which showed a loss of more than $600,000. Gates converted that .pdf into a "Word" document so that it could be edited, which Gates sent back to Manafort. Manafort altered that "Word" document by adding more than $3.5 million in income. He then sent this falsified P&L to Gates and asked that the "Word" document be converted back to a .pdf, which Gates did and returned to Manafort. Manafort then sent the falsified 2016 DMI P&L .pdf to Lender D.

So here's the essence of what went wrong for Manafort and Gates, according to Mueller's investigation: Manafort allegedly wanted to falsify his company's income, but he couldn't figure out how to edit the PDF. He therefore had Gates turn it into a Microsoft Word document for him, which led the two to bounce the documents back-and-forth over email. As attorney and blogger Susan Simpson notes on Twitter, Manafort's inability to complete a basic task on his own seems to have effectively "created an incriminating paper trail."

If there's a lesson here, it's that the Internet constantly generates data about what people are doing on it, and that data is all potential evidence. The FBI is 100% wrong that they're going dark; it's really the golden age of surveillance, and the FBI's panic is really just its own lack of technical sophistication.

Krebs on SecurityUSPS Finally Starts Notifying You by Mail If Someone is Scanning Your Snail Mail Online

In October 2017, KrebsOnSecurity warned that ne’er-do-wells could take advantage of a relatively new service offered by the U.S. Postal Service that provides scanned images of all incoming mail before it is slated to arrive at its destination address. We advised that stalkers or scammers could abuse this service by signing up as anyone in the household, because the USPS wasn’t at that point set up to use its own unique communication system — the U.S. mail — to alert residents when someone had signed up to receive these scanned images.

Image: USPS

The USPS recently told this publication that beginning Feb. 16 it started alerting all households by mail whenever anyone signs up to receive these scanned notifications of mail delivered to that address. The notification program, dubbed “Informed Delivery,” includes a scan of the front of each envelope destined for a specific address each day.

The Postal Service says consumer feedback on its Informed Delivery service has been overwhelmingly positive, particularly among residents who travel regularly and wish to keep close tabs on any bills or other mail being delivered while they’re on the road. It has been available to select addresses in several states since 2014 under a targeted USPS pilot program, but it has since expanded to include many ZIP codes nationwide. U.S. residents can find out if their address is eligible by visiting

According to the USPS, some 8.1 million accounts have been created via the service so far (Oct. 7, 2017, the last time I wrote about Informed Delivery, there were 6.3 million subscribers, so the program has grown more than 28 percent in five months).

Roy Betts, a spokesperson for the USPS’s communications team, says post offices handled 50,000 Informed Delivery notifications the week of Feb. 16, and are delivering an additional 100,000 letters to existing Informed Delivery addresses this coming week.

Currently, the USPS allows address changes via the USPS Web site or in-person at any one of more than 35,000 USPS retail locations nationwide. When a request is processed, the USPS sends a confirmation letter to both the old address and the new address.

If someone already signed up for Informed Delivery later posts a change of address request, the USPS does not automatically transfer the Informed Delivery service to the new address: Rather, it sends a mailer with a special code tied to the new address and to the username that requested the change. To resume Informed Delivery at the new address, that code needs to be entered online using the account that requested the address change.

A review of the methods used by the USPS to validate new account signups last fall suggested the service was wide open to abuse by a range of parties, mainly because of weak authentication and because it is not easy to opt out of the service.

Signing up requires an eligible resident to create a free user account at, which asks for the resident’s name, address and an email address. The final step in validating residents involves answering four so-called “knowledge-based authentication” or KBA questions.

The USPS told me it uses two ID proofing vendors (Lexis Nexis and, naturally, recently breached big three credit bureau Equifax) to ask the magic KBA questions, rotating between them randomly.

KrebsOnSecurity has assailed KBA as an unreliable authentication method because so many answers to the multiple-guess questions are available on sites like Spokeo and Zillow, or via social networking profiles.

It’s also nice when Equifax gives away a metric truckload of information about where you’ve worked, how much you made at each job, and what addresses you frequented when. See: How to Opt Out of Equifax Revealing Your Salary History for how much leaks from this lucrative division of Equifax.

All of the data points in an employee history profile from Equifax will come in handy for answering the KBA questions, or at least whittling away those that don’t match salary ranges or dates and locations of the target identity’s previous addresses.

Once signed up, a resident can view scanned images of the front of each piece of incoming mail in advance of its arrival. Unfortunately, anyone able to defeat those automated KBA questions from Equifax and Lexis Nexis — be they stalkers, jilted ex-partners or private investigators — can see who you’re communicating with via the Postal mail.

Maybe this is much ado about nothing: Maybe it’s just a reminder that people in the United States shouldn’t expect more than a post card’s privacy guarantee (which can leak the “who” and “when” of any correspondence, and sometimes the “what” and “why” of the communication). We’d certainly all be better off if more people kept that guarantee in mind for email in addition to snail mail. At least now the USPS will deliver to your address a piece of paper letting you know when someone signs up to look at those W’s in your snail mail online.

Cory DoctorowPodcast: The Man Who Sold the Moon, Part 05

Here’s part five of my reading (MP3) (part four, part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.


Worse Than FailureCodeSOD: Waiting for the Future

One of the more interesting things about human psychology is how bad we are at thinking about the negative consequences of our actions if those consequences are in the future. This is why the death penalty doesn’t deter crime, why we dump massive quantities of greenhouse gases into the atmosphere, why the Y2K bug happened in the first place, and why we’re going to do it again when every 32-bit Unix system explodes in 2038. If the negative consequence happens well after the action which caused it, humans ignore the obvious cause and effect and go on making problems that have to be fixed later.
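That 2038 deadline isn’t hand-waving, by the way: a signed 32-bit `time_t` can only count 2^31 − 1 seconds past the Unix epoch before it wraps. A quick Python sketch of where that runs out:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t tops out at 2**31 - 1 seconds after the
# Unix epoch (1970-01-01 00:00:00 UTC); one second later it wraps.
limit = 2**31 - 1
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```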

Fran inherited a bit of technical debt. Specifically, there’s an auto-numbered field in the database. Due to their business requirements, when the field hits 999,999, it needs to wrap back around to 000,001. Many many years ago, the original developer “solved” that problem thus:

function getstan($callingMethod = null)
{
    $sequence = 1;

    // get insert id back
    $rs = db()->insert("sequence", array(
        'processor' => 'widgetsinc',
        'RID'       => $this->data->RID,
        'method'    => $callingMethod,
        'card'      => $this->data->cardNumber
    ), false, false);
    if ($rs) { // if query succeeded...
        $sequence = $rs;
        if ($sequence > 999999) {
            db()->q("delete from sequence where processor='widgetsinc'");
            db()->insert("sequence",
                array('processor' => 'widgetsinc', 'RID' => $this->data->RID, 'card' => $this->data->cardNumber), false,
                false);
            $sequence = 1;
        }
    }

    return (substr(str_pad($sequence, 6, "0", STR_PAD_LEFT), -6));
}

The sequence table uses an auto-numbered column. They insert a row into the table, which returns the generated ID used. If that ID is greater than 999,999, they… delete the old rows. They then insert a new row. Then they return “000001”.

Unfortunately, sequences don’t work this way in MySQL, or honestly any other database. They keep counting up unless you alter or otherwise reset the sequence. So, the counter keeps ticking up, and this method keeps deleting the old rows and returning “000001”. The original developer almost certainly never tested what this code does when the counter breaks 999,999, because that day was so far out into the future that they could put off the problem.
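You can demonstrate the failure mode in a few lines of Python against SQLite (a stand-in here; the original code ran on MySQL, but an AUTOINCREMENT column behaves the same way for this purpose): deleting the rows leaves the counter untouched.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE seq (id INTEGER PRIMARY KEY AUTOINCREMENT, processor TEXT)")

# insert five rows: they get ids 1 through 5
for _ in range(5):
    con.execute("INSERT INTO seq (processor) VALUES ('widgetsinc')")

# "reset" the sequence the way the code above does: delete the rows
con.execute("DELETE FROM seq WHERE processor = 'widgetsinc'")

# the counter is unaffected; the next row still gets id 6, not 1
cur = con.execute("INSERT INTO seq (processor) VALUES ('widgetsinc')")
print(cur.lastrowid)  # 6
```

Actually resetting the counter takes an explicit statement (in MySQL, something like `ALTER TABLE sequence AUTO_INCREMENT = 1`), which the code never issues.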

Speaking of putting off solving problems, Fran also adds:

For the past 2 years this function has been returning 000001 and is starting to screw up reports.

Broken for at least two years, but only now is it screwing up reports badly enough that anyone wants to do anything to fix it.


Planet Linux AustraliaOpenSTEM: At Mercy of the Weather

It is the time of year when Australia often experiences extreme weather events. February is renowned as the hottest month and, in some parts of the country, also the wettest month. It often brings cyclones to our coasts and storms, which conversely enough, may trigger fires as lightening strikes the hot, dry bush. Aboriginal people […]


Planet Linux AustraliaChris Samuel: Vale Dad

[I’ve been very quiet here for over a year for reasons that will become apparent in the next few days when I finish and publish a long post I’ve been working on for a while – difficult to write, hence the delay]

It’s 10 years ago today that my Dad died, and Alan and I lost the father who had meant so much to both of us. It’s odd realising that it’s over 1/5th of my life since he died, it doesn’t seem that long.

Vale dad, love you…

This item originally posted here:

Vale Dad


Rondam RamblingsDevin Nunes doesn't realize that he's part of the government

I was reading about the long anticipated release of the Democratic rebuttal to the famous Republican dossier memo.  I've been avoiding writing about this, or any aspect of the Russia investigation, because there is just so much insanity going on there and I didn't want to get sucked into that tar pit.  But I could not let this slide: [O]n Saturday, committee chairman Devin Nunes (R-Calif.)

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main March 2018 Meeting: Unions - Hacking society's operating system

Mar 6 2018 18:30
Mar 6 2018 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Tuesday, March 6, 2018


Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.


read more


Sam VargheseJoyce affair: incestuous relationship between pollies and journos needs some exposure

Barnaby Joyce has come (no pun intended) and Barnaby Joyce has gone, but one issue that is intimately connected with the circus that surrounded him for the last three weeks has yet to be subjected to any scrutiny.

And that is the highly incestuous relationship that exists between Australian journalists and politicians and often results in news being concealed from the public.

The Australian media examined the scandal around Deputy Prime Minister Joyce from many angles, ever since a picture of his pregnant mistress, Vikki Campion, appeared on the front page of the The Daily Telegraph.

Various high-profile journalists tried to offer mea culpas to justify their non-reporting of the affair.

This is not the first time that journalists in Canberra have known about newsworthy stories connected to politicians and kept quiet.

In 2005, journalists Michael Brissenden, Tony Wright and Paul Daley were at a dinner with former treasurer Peter Costello at which he told them he had set next April (2006) as the absolute deadline “that is, mid-term,” for John Howard to stand aside; if not, he would challenge him.

Costello was said by Brissenden to have declared that a challenge “will happen then” if “Howard is still there”. “I’ll do it,” he said. He said he was “prepared to go the backbench”. He said he’d “carp” at Howard’s leadership “from the backbench” and “destroy it” until he “won” the leadership.

But the three journalists kept mum about what would have been a big scoop, because Costello’s press secretary asked them not to write the yarn.

There was a great deal of speculation in the run-up to the 2007 election as to whether Howard would step down; one story in July 2006 said there had been an unspoken 1994 agreement between him and Costello to vacate the PM’s seat and make way for Costello to get the top job.

Had the three journalists at that 2005 dinner gone ahead and reported the story — as journalists are supposed to do — it is unlikely that Howard would have been able to carry on as he did. It would have forced Costello to challenge for the leadership or quit. In short, it would have changed the course of politics.

But Brissenden, Daley and Wright kept mum.

In the case of Joyce, it has been openly known since at least April 2017 that he was schtupping Campion. Indeed, the picture of Campion on the front page of the Telegraph indicates she was at least seven months pregnant — later it became known that the baby is due in April — which means Joyce must have been sleeping with her at least from June onwards.

The story was in the public interest, because Joyce and Campion are both paid from the public purse. When their affair became an issue, Joyce had her moved around to the offices of his National Party mates, Matt Canavan and Damian Drum, at salaries that went as high as $190,000. Joyce is also no ordinary politician – he is the deputy prime minister and thus acts as the head of the country whenever the prime minister is out of the country. Thus anything that affects his functioning is of interest to the public as he can make decisions that affect them.

But journalists like Katharine Murphy of the Guardian and Jacqueline Maley of the Sydney Morning Herald kept mum. A female journalist who is not part of this clique, Sharri Markson, broke the story. She was roundly criticised by many who belong to the Murphy-Maley school of thinking.

Chris Uhlmann kept mum. So did Malcolm Farr and a host of others like Fran Bailey.

Both Murphy and Maley cited what they called “ethics” to justify keeping mum. But after the story broke, they leapt on it with claws extended. Another journalist, Julia Baird, tried to spin the story as one that showed how a woman in Joyce’s position would have been treated – much worse, was her opinion. She chose former prime minister Julia Gillard as her case study but did not offer the fact that Gillard was also a highly incompetent prime minister and that the flak she earned was also due to this aspect of her character.

Baird once was a columnist for Fairfax’s Weekend magazine and her profile pic in the publication at the time showed her in Sass & Bide jeans – the very business in which her husband was involved. Given that, when she moralises, one needs to take it with a kilo of salt.

But the central point is that, though she has a number of platforms to break a story, Baird never wrote a word about Joyce’s philandering. He promoted himself as a man who espoused family values by being photographed with his wife and four daughters repeatedly. He moralised more times than any other about the sanctity of marriage. Thus, he was fair game. Or so commonsense would dictate.

Why do these journalists and many others keep quiet and try to stay in the good books of politicians? The answer is simple: though the jobs of journalists and public relations people are diametric opposites, journalists have no qualms about crossing the divide because the money in PR is much more.

Salaries are much higher if a journalist gets onto the PR team of a senior politician. And with jobs in journalism disappearing at a rate of knots year after year, journalists like Murphy, Maley and Baird hedge their bets in order to stay in politicians’ good books. Remember Mark Simkin, a competent news reporter at the ABC? He joined the staff of — hold your breath — Tony Abbott when the man was prime minister. Simkin is rarely seen in public these days.

Nobody calls journalists on this deception and fraud. It emboldens them to continue to pose as people who act in the public interest when in reality they are no different from the average worker. Yet they climb on pulpits week after week and pontificate to the masses.

It has been said that journalists are like prostitutes: first, they do it for the fun of it, then they do it for a few friends, and finally they end up doing it for money. You won’t find too many arguments from me about that characterisation.

CryptogramFriday Squid Blogging: The Symbiotic Relationship Between the Bobtail Squid and a Particular Microbe

This is the story of the Hawaiian bobtail squid and Vibrio fischeri.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesDigital Drag?

Screenshot used with permission

As I was scrolling through Facebook a few weeks ago, I noticed a new trend: Several friends posted pictures (via an app) of what they would look like as “the opposite sex.” Some of them were quite funny—my female-identified friends sported mustaches, while my male-identified friends revealed long flowing locks. But my sociologist-brain was curious: What makes this app so appealing? How does it decide what the “opposite sex” looks like? Assuming it grabs the users’ gender from their profiles, what would it do with users who listed their genders as non-binary, trans, or genderqueer? Would it assign them male or female? Would it crash? And, on a basic level, why are my friends partaking in this “game?”

Gender is deeply meaningful for our social world and for our identities—knowing someone’s gender gives us “cues” about how to categorize and connect with that person. Further, gender is an important way our social world is organized, for better or worse. Those who use the app engage with a part of their own identities and the world around them that is extremely significant and meaningful.

Gender is also performative. We “do” gender through the way we dress, talk, and take up space. In the same way, we read gender on people’s bodies and in how they interact with us. The app “changes people’s gender” by changing their gender performance; it alters their hair, face shape, eyes, and eyebrows. The app is thus an outlet to “play” with gender performance. In other words, it’s a way of doing digital drag. Drag is a term that is often used to refer to male-bodied people dressing in a feminine way (“drag queens”) or female-bodied people dressing in a masculine way (“drag kings”), but not all people who do drag fit within this definition. Drag is ultimately about assuming and performing a gender. Drag is increasingly coming into the mainstream, as the popular reality TV series RuPaul’s Drag Race has been running for almost a decade now. As more people are exposed to the idea of playing with gender, we might see more of them trying it out in semi-public spaces like Facebook.

While playing with gender may be more common, it’s not all fun and games. The Facebook app in particular assumes a gender binary with clear distinctions between men and women, and this leaves many people out. While data on individuals outside of the gender binary is limited, a 2016 report from The Williams Institute estimated that 0.6% of the U.S. adult population — 1.4 million people — identify as transgender. Further, a Minnesota study of high schoolers found about 3% of the student population identify as transgender or gender nonconforming, and researchers in California estimate that 6% of adolescents are highly gender nonconforming and 20% are androgynous (equally masculine and feminine) in their gender performances.

The problem is that the stakes for challenging the gender binary are still quite high. Research shows people who do not fit neatly into the gender binary can face serious negative consequences, like discrimination and violence (including at least 28 killings of transgender individuals in 2017 and 4 already in 2018).  And transgender individuals who are perceived as gender nonconforming by others tend to face more discrimination and negative health outcomes.

So, let’s all play with gender. Gender is messy and weird and mucking it up can be super fun. Let’s make a digital drag app that lets us play with gender in whatever way we please. But if we stick within the binary of male/female or man/woman, there are real consequences for those who live outside of the gender binary.

Recommended Readings:

Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.

(View original at

Planet Linux AustraliaTim Serong: Strange Bedfellows

The Tasmanian state election is coming up in a week’s time, and I’ve managed to do a reasonable job of ignoring the whole horrible thing, modulo the promoted tweets, the signs on the highway, the junk the major (and semi-major) political parties pay to dump in my letterbox, and occasional discussions with friends and neighbours.

Promoted tweets can be blocked. The signs on the highway can (possibly) be re-purposed for a subsequent election, or can be pulled down and used for minor windbreak/shelter works for animal enclosures. Discussions with friends and neighbours are always interesting, even if one doesn’t necessarily agree. I think the most irritating thing is the letterbox junk; at best it’ll eventually be recycled, at worst it becomes landfill or firestarters (and some of those things do make very satisfying firestarters).

Anyway, as I live somewhere in the wilds division of Franklin, I thought I’d better check to see who’s up for election here. There’s no independents running this time, so I’ve essentially got the choice of four parties; Shooters, Fishers and Farmers Tasmania, Tasmanian Greens, Tasmanian Labor and Tasmanian Liberals (the order here is the same as on the TEC web site; please don’t infer any preference based on the order in which I list parties in this blog post).

I feel like I should be setting party affiliations aside and voting for individuals, but of the sixteen candidates listed, to the best of my knowledge I’ve only actually met and spoken with two of them. Another I noticed at random in a cafe, and I was ignored by a fourth who was milling around with some cronies at a promotional stand out the front of Woolworths in Huonville a few weeks ago. So, party affiliations it is, which leads to an interesting thought experiment.

When you read those four party names above, what things came most immediately to mind? For me, it was something like this:

  • Shooters, Fishers & Farmers: Don’t take our guns. Fuck those bastard Greenies.
  • Tasmanian Greens: Protect the natural environment. Renewable energy. Try not to kill anything. Might collaborate with Labor. Liberals are big money and bad news.
  • Tasmanian Labor: Mellifluous babble concerning health, education, housing, jobs, pokies and something about workers rights. Might collaborate with the Greens. Vehemently opposed to the Liberals.
  • Tasmanian Liberals: Mellifluous babble concerning jobs, health, infrastructure, safety and the Tasmanian way of life, peppered with something about small business and family values. Vehemently opposed to Labor and the Greens.

And because everyone usually automatically thinks in terms of binaries (e.g. good vs. evil, wrong vs. right, one vs. zero), we tend to end up imagining something like this:

  • Shooters, Fishers & Farmers vs. Greens
  • Labor vs. Liberal
  • …um. Maybe Labor and the Greens might work together…
  • …but really, it’s going to be Labor or Liberal in power (possibly with some sort of crossbench or coalition support from minor parties, despite claims from both that it’ll be majority government all the way).

It turns out that thinking in binaries is remarkably unhelpful, unless you’re programming a computer (it’s zeroes and ones all the way down), or are lost in the wilderness (is this plant food or poison? is this animal predator or prey?) The rest of the time, things tend to be rather more colourful (or grey, depending on your perspective), which leads back to my thought experiment: what do these “naturally opposed” parties have in common?

According to their respective web sites, the Shooters, Fishers & Farmers and the Greens have many interests in common, including agriculture, biosecurity, environmental protection, tourism, sustainable land management, health, education, telecommunications and addressing homelessness. There are differences in the policy details of course (some really are diametrically opposed), but in broad strokes these two groups seem to care strongly about – and even agree on – many of the same things.

Similarly, Labor and Liberal are both keen to tell a story about putting the people of Tasmania first, about health, education, housing, jobs and infrastructure. Honestly, for me, they just kind of blend into one another; sure there’s differences in various policy details, but really if someone renamed them Labal and Liberor I wouldn’t notice. These two are the status quo, and despite fighting it out with each other repeatedly, are, essentially, resting on their laurels.

Here’s what I’d like to see: a minority Tasmanian state government formed from a coalition of the Tasmanian Greens plus the Shooters, Fishers & Farmers party, with the Labor and Liberal parties together in opposition. It’ll still be stuck in that irritating Westminster binary mode, but at least the damn thing will have been mixed up sufficiently that people might actually talk to each other rather than just fighting.

CryptogramElection Security

I joined a letter supporting the Secure Elections Act (S. 2261):

The Secure Elections Act strikes a careful balance between state and federal action to secure American voting systems. The measure authorizes appropriation of grants to the states to take important and time-sensitive actions, including:

  • Replacing insecure paperless voting systems with new equipment that will process a paper ballot;

  • Implementing post-election audits of paper ballots or records to verify electronic tallies;

  • Conducting "cyber hygiene" scans and "risk and vulnerability" assessments and supporting state efforts to remediate identified vulnerabilities.

The legislation would also create needed transparency and accountability in elections systems by establishing clear protocols for state and federal officials to communicate regarding security breaches and emerging threats.

Worse Than FailureError'd: Everybody's Invited!

"According to Outlook, it seems that I accidentally invited all of the EU and US citizens combined," writes Wouter.


"Just an array a month sounds like a pretty good deal to me! And I do happen to have some arrays to spare..." writes Rutger W.


Lucas wrote, "VMWare is on the cutting edge! They can support TWICE as much Windows 10 as their competitors!"


"I just wish it was CurrentMonthName so that I could take advantage of the savings!" Ken wrote.


Mark B. "I had no idea that Redboxes were so cultured."


"I'm a little uncomfortable about being connected to an undefined undefined," writes Joel B.



Krebs on SecurityChase ‘Glitch’ Exposed Customer Accounts

Multiple customers have reported logging in to their bank accounts, only to be presented with another customer’s bank account details. Chase has acknowledged the incident, saying it was caused by an internal “glitch” Wednesday evening that did not involve any kind of hacking attempt or cyber attack.

Trish Wexler, director of communications for the retail side of JP Morgan Chase, said the incident happened Wednesday evening, for “a pretty limited number of customers” between 6:30 pm  and 9 pm ET who “sporadically during that time while logged in to could see someone else’s account details.”

“We know for sure the glitch was on our end, not from a malicious actor,” Wexler said, noting that Chase is still trying to determine how many customers may have been affected. “We’re going through Tweets from customers and making sure that if anyone is calling us with issues we’re working one on one with customers. If you see suspicious activity you should give us a call.”

Wexler urged customers to “practice good security hygiene” by regularly reviewing their account statements, and promptly reporting any discrepancies. She said Chase is still working to determine the precise cause of the mix-up, and that there have been no reports of JPMC commercial customers seeing the account information of other customers.

“This was all on our side,” Wexler said. “I don’t know what did happen yet but I know what didn’t happen. What happened last night was 100 percent not the result of anything malicious.”

The account mix-up was documented on Wednesday by Fly & Dine, an online publication that chronicles the airline food industry. Fly & Dine included screenshots of one of their writer’s spouses logged into the account of a fellow Chase customer with an Amazon and Chase card and a balance of more than $16,000.

Kenneth White, a security researcher and director of the Open Crypto Audit Project, said the reports he’s seen on Twitter and elsewhere suggested the screwup was somehow related to the bank’s mobile apps. He also said the Chase retail banking app offered an update first thing Thursday morning.

Chase says the oddity occurred for both Web browser users and users of the Chase mobile app.

“We don’t have any evidence it was related to any update,” Wexler said.

“There’s only so many kind of logic errors where Kenn logs in and sees Brian’s account,” White said.  “It can be a devil to track down because every single time someone logs in it’s a roll of the dice — maybe they get something in the warmed up cache or they get a new hit. It’s tricky to debug, but this is like as bad as it gets in terms of screwup of the app.”
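The bug class White is describing can be sketched in a few lines: a response cache whose key omits the user's identity will hand the first user's data to everyone who follows. The code below is a hypothetical illustration of that failure mode (invented names and values, not Chase's actual code):

```javascript
// Hypothetical illustration: a response cache keyed only by URL, with no
// user identity in the key, serves whichever user's data primed it first.
const cache = new Map();

// Stand-in for a real per-user database lookup.
function fetchAccount(userId) {
  return { owner: userId, balance: 16000 };
}

// Buggy: the cache key omits userId, so every caller shares one entry.
function getAccountBuggy(userId) {
  const key = "/api/account"; // should have included the user's identity
  if (!cache.has(key)) cache.set(key, fetchAccount(userId));
  return cache.get(key);
}

// Kenn logs in first and primes the cache...
const first = getAccountBuggy("kenn");
// ...and Brian then sees Kenn's account details.
const second = getAccountBuggy("brian");
```

Whether a given login shows the right or the wrong account depends entirely on which request warmed the cache, which is why the bug looks intermittent and is miserable to reproduce.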

White said the incident is reminiscent of a similar glitch at online game giant Steam, which caused many customers to see account information for other Steam users for a few hours. He said he suspects the problem was a configuration error someplace within “caching servers,” which are designed to ease the load on a Web application by periodically storing some common graphical elements on the page — such as images, videos and GIFs.

“The images, the site banner, all that’s fine to be cached, but you never want to cache active content or raw data coming back,” White said. “If you’re CNN, you’re probably caching all the content on the homepage. But for a banking app that has access to live data, you never want that to be cached.”
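In HTTP terms, the rule White states maps onto the Cache-Control response header: shared caches may keep static assets, but per-user responses must never be stored. A hypothetical helper (illustrative paths and values, not from any real banking app) might pick the policy like this:

```javascript
// Choose a Cache-Control header by resource type: static assets are safe
// for shared caches, but per-user responses must never be cached anywhere.
function cacheControlFor(path) {
  const staticExtensions = [".png", ".jpg", ".gif", ".css", ".js", ".mp4"];
  const isStatic = staticExtensions.some((ext) => path.endsWith(ext));
  // "public" lets intermediaries (CDNs, caching servers) keep a copy for a
  // day; "private, no-store" forbids storing the response at all.
  return isStatic ? "public, max-age=86400" : "private, no-store";
}
```

A single misconfigured layer that treats live data as "public" is enough to reproduce the kind of mix-up described above.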

“It’s fairly easy to fix once you identify the problem,” he added. “I can imagine just getting the basics of the core issue [for Chase] would be kind of tricky and might mean a lot of non techies calling your Tier 1 support people.”

Update, 8:10 p.m. ET: Added comment from Chase about the incident affecting both mobile device and Web browser users.


Planet Linux AustraliaRussell Coker: Dell PowerEdge T30

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment, it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM, that’s unusually small but it wouldn’t be the first time a vendor crippled a low end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM. It apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only has 4*SATA connectors on the motherboard. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature, it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002 so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low end tower servers from Dell didn’t have built in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225 which is slower than some systems people are throwing out nowadays) the system would be a decent gaming system.

It has lots of USB ports which is handy for a file server, I can attach lots of backup devices. Also most of the ports support “super speed”, I haven’t yet tested out USB devices that support such speeds but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, this is really great for a system on or under your desk, any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room. That increases the effective cost of the system by 20%. It has a PC serial port on the motherboard which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration displays an option for enabling charging of devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for non-laptop hardware or have the BIOS detect at run-time that it’s not running on a laptop and hide that option.


The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors, this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system not run in any sort of server room. However it is cheaper than any rack mounted systems from Dell so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms, all it takes is a small business moving to a serviced office that has a proper server room and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door and you just can’t have the noise of a rack mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then instead of starting with a tower server and ending up with tower systems in racks a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

Google AdsenseIntroducing AdSense Auto ads

Finding the time to create great content for your users is an essential part of growing your publishing business. Today we are introducing AdSense Auto ads, a powerful new way to place ads on your site. Auto ads use machine learning to make smart placement and monetization decisions on your behalf, saving you time. Place one piece of code just once to all of your pages, and let Google take care of the rest.
Some of the benefits of Auto ads include:
  • Optimization: Using machine learning, Auto ads show ads only when they are likely to perform well and provide a good user experience.
  • Revenue opportunities: Auto ads will identify any available ad space and place new ads there, potentially increasing your revenue.
  • Easy to use: With Auto ads you only need to place the ad code on your pages once. When you’re ready to use new features and ad formats, simply turn them on and off with the flick of a switch -- there’s no need to change the code again.

How do Auto ads work?

  1. Select the ad formats you want to show on your pages by switching them on with a simple toggle.
  2. Place the Auto ads code on your pages.

Auto ads will now start working for you by analyzing your pages, finding potential ad placements, and showing new ads when they’re likely to perform well and provide a good user experience.
And if you want to have different formats on different pages you can use the new Advanced URL settings feature (e.g. you can choose to place In-feed ads on some pages but not on others).
Getting started with AdSense Auto ads
Auto ads can work equally well on new sites and on those already showing ads.
Have you manually placed ads on your page?
There’s no need to remove them if you don’t want to. Auto ads will take into account all existing Google ads on your pages.

Already using Anchor or Vignette ads?
Auto ads include Anchor and Vignette ads and many more additional formats such as Text and display, In-feed, and Matched content. Note that all users that used Page-level ads are automatically migrated over to Auto ads without any need to add code to their pages again.

To get started with AdSense Auto ads:
  1. Sign in to your AdSense account.
  2. In the left navigation panel, visit My ads and select Get Started.
  3. On the "Choose your global settings" page, select the ad formats that you'd like to show and click Save.
  4. On the next page, click Copy code.
  5. Paste the ad code between the <head> and </head> tags of each page where you want to show Auto ads.
  6. Auto ads will start to appear on your pages in about 10-20 minutes.

We'd love to hear what you think about Auto ads in the comments section below this post.

Posted by:
Tom Long, AdSense Engineering Manager
Violetta Kalathaki, AdSense Product Manager

CryptogramHarassment By Package Delivery

People harassing women by delivering anonymous packages purchased from Amazon.

On the one hand, there is nothing new here. This could have happened decades ago, pre-Internet. But the Internet makes this easier, and the article points out that using prepaid gift cards makes this anonymous. I am curious how much these differences make a difference in kind, and what can be done about it.

Worse Than FailureCodeSOD: Functional IsFunction

Julio S recently had to attempt to graft a third-party document viewer onto an internal web app. The document viewer was from a company which specialized in enterprise “document solutions”, which can be purchased for enterprise-sized licensing fees.

Gluing the document viewer onto their internal app didn’t go terribly well. While debugging, and browsing through the vendor’s javascript, he saw a lot of calls to a function called IsFunction. It was loaded from a “utilities.js”-type do-everything library file. Curious, Julio pulled up the implementation.

function IsFunction ( func ) {
    var bChk=false;
    if (func != "undefined") bChk=true;
    else bChk=false;
    return bChk;
}
I cannot emphasize enough how beautiful this block of code is, by the standards of bad code. There’s so much there. One variable, bChk, uses Hungarian notation. Nothing else seems to. It’s a totally superfluous variable, as we could just do return func != "undefined".

Then again why would we even do that? The real beauty, though, is how the name of the function and its implementation have no relationship to each other, and the implementation is utterly useless. For example:

IsFunction("Hello World"); //true
IsFunction({spam: "eggs"}); //true
IsFunction(function() {}); //true, but it was probably an accident
IsFunction(undefined); //true
IsFunction("undefined"); //false

Yes, the only time this function returns false is the specific case where you pass it the string “undefined”. Everything else IsFunction apparently. The useless function sounds important. Someone wrote it, probably as a quick attempt at vaguely defensive programming. “I should make sure my inputs are valid”. They didn’t test it. They certainly didn’t think about it. But they wrote it. And then someone else saw the function in use, and said, “Oh… I should probably use that, too.” Somewhere, there’s probably a “Style Guide”, which mandates that, before attempting to invoke a variable that should contain a function, you use IsFunction to confirm it does. It comes up in code reviews, and code has been held from going into production because someone didn't use IsFunction.
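For the record, the check this function's name promises is a one-liner. JavaScript's typeof operator returns the string "function" only for callable values, so a working version (sketched here for contrast; not the vendor's actual fix) looks like this:

```javascript
// What IsFunction was presumably meant to do: typeof yields the
// string "function" only for callable values, and nothing else.
function isActuallyFunction(func) {
  return typeof func === "function";
}

isActuallyFunction("Hello World");    // false
isActuallyFunction({ spam: "eggs" }); // false
isActuallyFunction(function () {});   // true
isActuallyFunction(undefined);        // false
isActuallyFunction("undefined");      // false
```

This also handles arrow functions and class constructors, since typeof reports "function" for any callable.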

And Julio probably is the first person to actually check the implementation since it was first written.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


TEDRemembering pastor Billy Graham, and more news in brief

Behold, your recap of TED-related news:

Remembering Billy Graham. For more than 60 years, pastor Billy Graham inspired countless people around the world with his sermons. On Wednesday, February 21, he passed away at his home in North Carolina after struggling with numerous illnesses over the past few years. He was 99 years old. Raised on a dairy farm in N.C., Graham used the power of new technologies, like radio and television, to spread his message of personal salvation to an estimated 215 million people globally, while simultaneously reflecting on technology’s limitations. Reciting the story of King David to audiences at TED1998, “David found that there were many problems that technology could not solve. There were many problems still left. And they’re still with us, and you haven’t solved them, and I haven’t heard anybody here speak to that,” he said, referring to human evil, suffering, and death. To Graham, the answer to these problems was to be found in God. Even after his death, through the work of the Billy Graham Evangelistic Association, led by his son Franklin, his message of personal salvation will live on. (Watch Graham’s TED Talk)

Fashion inspired by Black Panther. TED Fellow and fashion designer Walé Oyéjidé draws on aesthetics from around the globe to create one-of-a-kind pieces that dismantle bias and celebrate often-marginalized groups. For New York Fashion Week, Oyéjidé designed a suit with a coat and scarf for a Black Panther-inspired showcase, sponsored by Marvel Studios. One of Oyéjidé’s scarves is also worn in the movie by its protagonist, King T’Challa. “The film is very much about the joy of seeing cultures represented in roles that they are generally not seen in. There’s villainy and heroes, tech genius and romance,” Oyéjidé told the New York Times. “People of color are generally presented as a monolithic image. I’m hoping it smashes the door open to show that people can occupy all these spaces.” (Watch Oyéjidé’s TED Talk)

Nuclear energy advocate runs for governor. Environmentalist and nuclear energy advocate Michael Shellenberger has launched his campaign for governor of California as an independent candidate. “I think both parties are corrupt and broken. We need to start fresh with a fresh agenda,” he says. Shellenberger intends to run on an energy and environmental platform, and he hopes to involve student environmental activists in his campaign. California’s gubernatorial election will be held in November 2018. (Watch Shellenberger’s TED Talk)

Can UV light help us fight the flu? Radiation scientist David Brenner and his research team at Columbia University’s Irving Medical Center are exploring whether a type of ultraviolet light known as far-UVC could be used to kill the flu virus. To test their theory, they released a strain of the flu virus called H1N1 in an enclosed chamber and exposed it to low doses of UVC. In a paper published in Nature’s Scientific Reports, they report that far-UVC successfully deactivated the virus. Previous research has shown that far-UVC doesn’t penetrate the outer layer of human skin or eyes, unlike conventional UV rays, which means that it appears to be safe to use on humans. Brenner suggests that far-UVC could be used in public spaces to fight the flu. “Think about doctors’ waiting rooms, schools, airports and airplanes—any place where there’s a likelihood for airborne viruses,” Brenner told Time. (Watch Brenner’s TED Talk.)

A beautiful sculpture for Madrid. For the 400th anniversary of Madrid’s Plaza Mayor, artist Janet Echelman created a colorful, fibrous sculpture, which she suspended above the historic space. The sculpture, titled “1.78 Madrid,” aims to provoke contemplation of the interconnectedness of time and our spatial reality. The title refers to the number of microseconds by which a day on Earth was shortened as a result of the 2011 earthquake in Japan, which was so strong it caused the planet’s rotation to accelerate. At night, colorful lights are projected onto the sculpture, which makes it an even more dynamic, mesmerizing sight for the city’s residents. (Watch Echelman’s TED Talk)

A graduate program that doesn’t require a high school degree. Economist Esther Duflo’s new master’s program at MIT is upending how we think about graduate school admissions. Rather than requiring the usual test scores and recommendation letters, the program allows anyone to take five rigorous, online courses for free. Students only pay to take the final exam, the cost of which ranges from $100 to $1,000 depending on income. If they do well on the final exam, they can apply to MIT’s master’s program in data, economics and development policy. “Anybody could do that. At this point, you don’t need to have gone to college. For that matter, you don’t need to have gone to high school,” Duflo told WBUR. Already, more than 8,000 students have enrolled online. The program intends to raise significant aid to cover the cost of the master’s program and living in Cambridge, with the first class arriving in 2020. (Watch Duflo’s TED Talk)

Have a news item to share? Write us at and you may see it included in this weekly round-up.

CryptogramNew Spectre/Meltdown Variants

Researchers have discovered new variants of Spectre and Meltdown. The software mitigations for Spectre and Meltdown seem to block these variants, although the eventual CPU fixes will have to be expanded to account for these new attacks.

Worse Than FailureShiny Side Up


It feels as though disc-based media have always been with us, but the 1990s were when researchers first began harvesting these iridescent creatures from the wild in earnest, pressing data upon them to create the beast known as CD-ROM. Click-and-point adventure games, encyclopedias, choppy full-motion video ... in some cases, ambition far outweighed capability. Advances in technology made the media cheaper and more accessible, often for the worst. There are some US households that still burn America Online 7.0 CDs for fuel.

But we’re not here to delve into the late-90s CD marketing glut. We’re nestling comfortably into the mid-90s, when Internet was too slow and unreliable for anyone to upload installers onto a customer portal and call it a day. Software had to go out on physical media, and it had to be as bug-free as possible before shipping.

Chris, a developer fresh out of college, worked on product catalog database applications that were mailed to customers on CDs. It was a small shop with no Tech Support department, so he and the other developers had to take turns fielding calls from customers having issues with the admittedly awful VB4 installer. It was supposed to launch automatically, but if the auto-play feature was disabled in Windows 95, or the customer canceled the installer pop-up without bothering to read it, Chris or one of his colleagues was likely to hear about it.

And then came the caller who had no clue what Chris meant when he suggested, "Why don't we open up the CD through the file system and launch the installer manually?"

These were the days before remote desktop tools, and the caller wasn't the savviest computer user. Talking him through minimizing his open programs, double-clicking on My Computer, and browsing into the CD drive took Chris over half an hour.

"There's nothing here," the caller said.

So close to the finish line, and yet so far. Chris stifled his exasperation. "What do you mean?"

"I opened the CD like you said, and it's completely empty."

This was new. Chris frowned. "You're definitely looking at the right drive? The one with the shiny little disc icon?"

"Yes, that's the one. It's empty."

Chris' frown deepened. "Then I guess you got a bad copy of the CD. I'm sorry about that! Let me copy down your name and address, and I'll get a new one sent out to you."

The customer provided his mailing address accordingly. Chris finished scribbling it onto a Post-it square. "OK, lemme read that back to—"

"The shiny side is supposed to be turned upwards, right?" the customer blurted. "Like a gramophone record?"

Chris froze, then slapped the mute button before his laughter spilled out over the line. After composing himself, he returned to the call as the model of professionalism. "Actually, it should be shiny-side down."

"Really? Huh. The little icon's lying, then."

"Yeah, I guess it is," Chris replied. "Unfortunately, that's on Microsoft to fix. Let's turn the disc over and try again."

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaColin Charles: MariaDB Developer’s unconference & M|18

Been a while since I wrote anything MySQL/MariaDB related here, but there’s the column on the Percona blog, which has weekly updates.

Anyway, I’ll be at the developer’s unconference this weekend in NYC. Even managed to snag a session on the schedule, MySQL features missing in MariaDB Server (Sunday, 12.15–13.00). Sign up on Meetup?

Due to the prevalence of “VIP tickets”, I too signed up for M|18. If you need a discount code, I’ll happily offer them up to you to see if they still work (though I’m sure a quick Google will solve this problem for you). I’ll publish notes, probably in my weekly column.

If you’re in New York and want to say hi, talk shop, etc. don’t hesitate to drop me a line.