Planet Russell


Planet Debian: Petter Reinholdtsen: Web browser integration of VLC with Bittorrent support

Bittorrent is, as far as I know, currently the most efficient way to distribute content on the Internet. It is used by all sorts of content providers, from national TV stations like NRK to Linux distributors like Debian and Ubuntu, and of course the Internet Archive.

Almost a month ago a new package adding Bittorrent support to VLC became available in Debian testing and unstable. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

Since the plugin was first made available in Debian, several improvements have been made to it. Version 2.2-4, now available in both testing and unstable, provides a desktop file to teach browsers to start VLC when the user clicks on torrent files or magnet links. The last part is thanks to me finally understanding what the strange x-scheme-handler style MIME types in desktop files are used for. By adding x-scheme-handler/magnet to the MimeType entry in the desktop file, at least Firefox and Chromium will suggest starting VLC when a magnet URI is selected on a web page. The end result is that now, with the plugin installed in Buster and Sid, one can visit any Internet Archive page with movies using a web browser and click on the torrent link to start streaming the movie.
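For reference, the relevant part of such a desktop file looks roughly like this (a simplified sketch to show where x-scheme-handler/magnet goes, not the exact file shipped in the package):

```ini
[Desktop Entry]
Type=Application
Name=VLC media player
Exec=vlc %U
# application/x-bittorrent covers .torrent files; the x-scheme-handler
# entry is what makes browsers offer VLC for magnet: links.
MimeType=application/x-bittorrent;x-scheme-handler/magnet;
```

After installing such a file, running update-desktop-database refreshes the MIME cache so browsers can pick up the association.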

Note, there are still some misfeatures in the plugin. One is that it will hang and block VLC from exiting until the torrent streaming starts. Another is that it will pick and play a random file in a multi-file torrent, which is not always the video file you want. Combined with the first, this can make it a bit hard to get the video streaming going. But when it works, it seems to do a good job.

For the Debian packaging, I would love to find a good way to test whether the plugin works with VLC using autopkgtest. I tried, but I do not know enough about the inner workings of VLC to get it working. For now the autopkgtest script only checks whether the .so file was successfully loaded by VLC. If you have any suggestions, please submit a patch to the Debian bug tracking system.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.


Planet Debian: Michal Čihař: Weblate 3.2.2

Weblate 3.2.2 has been released today. It is the second bugfix release for 3.2, fixing several minor issues which appeared in the release.

Full list of changes:

  • Remove no longer needed Babel dependency.
  • Updated language definitions.
  • Improve documentation for addons, LDAP and Celery.
  • Fixed enabling new dos-eol and auto-java-messageformat flags.
  • Fixed running test from PyPI package.
  • Improved plurals handling.
  • Fixed translation upload API failure in some corner cases.
  • Fixed updating Git configuration in case it was changed manually.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on its website; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also used as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared, and you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Planet Debian: Steve Kemp: So I wrote a basic BASIC

So back in June I challenged myself to write a BASIC interpreter in a weekend. The next time I mentioned it was to admit defeat. I didn't really explain in any detail, because I thought I'd wait a few days and try again and I was distracted at the time I wrote my post.

As it happened, that was over four months ago, so clearly it didn't work out. The reason was that I was getting too bogged down in the wrong kind of details. I'd got my heart set on doing this the "modern" way:

  • Write a lexer to split the input into tokens
    • LINE-NUMBER:10, PRINT, "Hello, World"
  • Then I'd take those tokens and form an abstract syntax tree.
  • Finally I'd walk the tree evaluating as I went.

The problem is that almost immediately I ran into problems: my naive approach didn't have a good solution for identifying line-numbers. So I was too paralysed to proceed much further.

I sidestepped the initial problem and figured maybe I should just have a series of tokens, somehow, which would be keyed off line-number. Obviously when you're interpreting "traditional" BASIC you need to care about lines, and treat them as important because you need to handle fun-things like this:

20 GOTO 10

Anyway, I'd parse each line. Assuming only a single statement per line (ha!), you can divide each line into:

  • Number - i.e. line-number.
  • Statement.
  • Newline to terminate.

Then you could have:

code{blah} ..
code[10] = "PRINT STEVE ROCKS"
code[20] = "GOTO 10"

Obviously you spot the problem there, if you think it through. Anyway. I've been thinking about it off and on since then, and the end result is that for the past two evenings I've been mostly writing a BASIC interpreter, in golang, in 20-30 minute chunks.

The way it works is as you'd expect (don't make me laugh, bitterly):

  • Parse the input into tokens.
  • Store those as an array.
  • Interpret each token.
    • No AST
    • No complicated structures.
    • Your program is literally an array of tokens.

I cheated, horribly, in parsing line-numbers, which turned out to be exactly the right thing to do. The output of my naive lexer was:

INT:10, PRINT, STRING:"Hello World", NEWLINE, INT:20, GOTO, INT:10

Guess what? If you (secretly) prefix a newline to the program you're given, you can identify line-numbers just by keeping track of your previous token in the lexer: a line-number is any number that follows a newline. You don't even have to care if they're sequential. (Hrm. Bug-report?)
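That trick can be sketched in Go, the language the interpreter is written in (the token names here are made up for illustration, not the project's actual ones):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// Token kinds for a toy lexer; the names are invented for this sketch.
type Kind int

const (
	NEWLINE Kind = iota
	LINENO
	INT
	WORD
)

type Token struct {
	Kind Kind
	Text string
}

// isNumber reports whether s consists only of digits.
func isNumber(s string) bool {
	for _, r := range s {
		if !unicode.IsDigit(r) {
			return false
		}
	}
	return len(s) > 0
}

// lex classifies a number as a line-number whenever the previous token
// was a newline - starting from an implicit (secretly prefixed) newline,
// so the very first number of the program is caught too.
func lex(src string) []Token {
	var toks []Token
	prev := Token{Kind: NEWLINE} // the secretly-prefixed newline
	for _, line := range strings.Split(src, "\n") {
		for _, f := range strings.Fields(line) {
			t := Token{Kind: WORD, Text: f}
			if isNumber(f) {
				if prev.Kind == NEWLINE {
					t.Kind = LINENO // number after newline: a line-number
				} else {
					t.Kind = INT // any other number is plain data
				}
			}
			toks = append(toks, t)
			prev = t
		}
		toks = append(toks, Token{Kind: NEWLINE})
		prev = Token{Kind: NEWLINE}
	}
	return toks
}

func main() {
	for _, t := range lex("10 PRINT 42\n20 GOTO 10") {
		if t.Kind == LINENO {
			fmt.Println("line-number:", t.Text)
		}
	}
}
```

Note how the "10" after GOTO comes out as a plain INT, while the "10" and "20" at the start of each line come out as line-numbers, with no grammar needed at all.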

Once you have an array of tokens it becomes almost insanely easy to process the stream and run your interpreter:

 program[] = { LINE_NUMBER:10, PRINT, "Hello", NEWLINE, LINE_NUMBER:20 .. }

 let offset := 0
 for ( offset < len(program) ) {
    token = program[offset]

    if ( token == GOTO )  { handle_goto() }
    if ( token == PRINT ) { handle_print() }
    .. handlers for every other statement

    offset++
 }

Make offset a global. And suddenly GOTO 10 becomes:

  • Scan the array, again, looking for "LINE_NUMBER:10".
  • Set offset to that index.

Magically it all just works. Add a stack, and GOSUB/RETURN are handled with ease too by pushing/popping the offset to it.
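A minimal Go sketch of that mechanism, assuming a global offset and a token array as described (all names here are hypothetical, not the project's actual ones):

```go
package main

import "fmt"

// A token is just a kind plus an optional number, as in the text.
type Tok struct {
	Kind string // "LINENO", "GOTO", "GOSUB", "RETURN", "PRINT", "END"
	Num  int
}

var (
	program []Tok
	offset  int   // global instruction pointer, as in the text
	stack   []int // return addresses for GOSUB/RETURN
)

// findLine scans the whole array for the LINENO token with value n.
func findLine(n int) int {
	for i, t := range program {
		if t.Kind == "LINENO" && t.Num == n {
			return i
		}
	}
	return -1
}

// handleGosub pushes the current offset before jumping, so RETURN
// can pop it and resume just after the GOSUB.
func handleGosub(target int) {
	stack = append(stack, offset)
	offset = findLine(target)
}

// handleReturn pops the saved offset; the main loop's offset++ then
// moves execution to the token after the original GOSUB.
func handleReturn() {
	offset = stack[len(stack)-1]
	stack = stack[:len(stack)-1]
}

func main() {
	// 10 GOSUB 30 / 20 END / 30 PRINT / 40 RETURN, as tokens:
	program = []Tok{
		{"LINENO", 10}, {"GOSUB", 30},
		{"LINENO", 20}, {"END", 0},
		{"LINENO", 30}, {"PRINT", 0},
		{"LINENO", 40}, {"RETURN", 0},
	}
	for offset = 0; offset < len(program); offset++ {
		switch program[offset].Kind {
		case "GOSUB":
			handleGosub(program[offset].Num)
		case "RETURN":
			handleReturn()
		case "PRINT":
			fmt.Println("hello from the subroutine")
		case "END":
			return
		}
	}
}
```

GOTO is the same jump without the push, and everything falls out of the fact that the program is just an indexable array.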

In fact even the FOR-loop is handled in only a few lines of code - most of the magic happening in the handler for the "NEXT" statement (because that's the part that needs to decide if it needs to jump back to the body of the loop, or continue running).
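That NEXT handler can be sketched like this (again with hypothetical names; the loop state just has to remember the variable, its bound and step, and the offset of the loop body):

```go
package main

import "fmt"

// loopState records everything NEXT needs: the loop variable, its
// terminal value, the step, and the offset of the loop body's start.
type loopState struct {
	name       string
	end, step  int
	bodyOffset int
}

var (
	vars   = map[string]int{}
	loops  []loopState // innermost loop is last
	offset int
)

// handleNext increments the loop variable; if the loop is not yet
// finished it jumps back to the body, otherwise it pops the loop
// and lets execution fall through.
func handleNext() {
	l := loops[len(loops)-1]
	vars[l.name] += l.step
	if vars[l.name] <= l.end {
		offset = l.bodyOffset // jump back to the body
	} else {
		loops = loops[:len(loops)-1] // loop done, fall through
	}
}

func main() {
	// Simulate: FOR i = 0 TO 3 ... NEXT i, with the body at offset 5.
	vars["i"] = 0
	loops = []loopState{{name: "i", end: 3, step: 1, bodyOffset: 5}}
	iterations := 0
	for {
		iterations++ // one body execution per NEXT
		handleNext()
		if len(loops) == 0 {
			break
		}
	}
	fmt.Println("body ran", iterations, "times, i =", vars["i"])
}
```

FOR itself only has to push a loopState and record where the body starts; all the decision making lives in NEXT, just as described above.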

OK, this is a basic-BASIC, as it is missing primitives (CHR(), LEN, etc.) and it only cares about integers. But the code is wonderfully simple to understand, and the test-case coverage is pretty high.

I'll leave with an example:

00 REM
 01 REM This program should produce 126 * 126 * 10
 02 REM  = 158760
 03 REM
 05 GOSUB 100
 10 FOR i = 0 TO 126
 20  FOR j = 0 TO 126 STEP 1
 30   FOR k = 0 TO 10
 40    LET a = i * j * k
 50   NEXT k
 60  NEXT j
 70 NEXT i
 75 PRINT a, "\n"
 80 END
100 PRINT "Hello, I'm multiplying your integers"

Loops indented for clarity. Tokens in upper-case only for retro-nostalgia.

Find it here, if you care:

I had fun. Worth it.

I even "wrote" a "game":

Rondam Ramblings: Where is the body?

The Washington Post reports: The Saudi government acknowledged early Saturday that journalist Jamal Khashoggi was killed while visiting the Saudi consulate in Istanbul, saying he died during a fist fight. ... The announcement marks the first time that Saudi officials have acknowledged that Khashoggi was killed inside the consulate. Ever since he disappeared on Oct. 2 while visiting the mission,


Cryptogram: Friday Squid Blogging: Roasted Squid with Tomatillo Salsa

Recipe and commentary.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian: Robert McQueen: GNOME Foundation Hackfest 2018

This week, the GNOME Foundation Board of Directors met at the Collabora office in Cambridge, UK, for the second annual Foundation Hackfest. We were also joined by the Executive Director, Neil McGovern, and Director of Operations, Rosanna Yuen. This event was started by last year’s board and is a great opportunity for the newly-elected board to set out goals for the coming year and get some uninterrupted hacking done on policies, documents, etc. While it’s fresh in our mind, we wanted to tell you about some of the things we have been working on this week and what the community can hope to see in the coming months.

Wednesday: Goals

On Wednesday we set out to define the overall goals of the Foundation, so we could focus our activities for the coming years, ensuring that we were working on the right priorities. Neil helped to facilitate the discussion using the Charting Impact process. With that input, we went back to the purpose of the Foundation and mapped that to ten and five year goals, making sure that our current strategies and activities would be consistent with reaching those end points. This is turning out to be a very detailed and time-consuming process. We have made a great start, and hope to have something we can share for comments and input soon. The high level 10-year goals we identified boiled down to:

  • Sustainable project and foundation
  • Wider awareness and mindshare – being a thought leader
  • Increased user base

As we looked at the charter and bylaws, we identified a long-standing issue which we need to solve — there is currently no formal process to cover the “scope” of the Foundation in terms of which software we support with our resources. There is the release team, but that is only a subset of the software we support. We have some examples such as GIMP which “have always been here”, but at present there is no clear process to apply or be included in the Foundation. We need a clear list of projects that use resources such as CI, or have the right to use the GNOME trademark for the project. We have a couple of similar proposals from Allan Day and Carlos Soriano for how we could define and approve projects, and we are planning to work with them over the next couple of weeks to make one proposal for the board to review.

Thursday: Budget forecast

We started the second day with a review of the proposed forecast from Neil and Rosanna, because the Foundation’s financial year starts in October. We have policies in place to allow staff and committees to spend money against their budget without further approval being needed, which means that with no approved budget, it’s very hard for the Foundation to spend any money. The proposed budget was based on the previous year’s actual figures, with changes to reflect the increased staff headcount, increased spend on CI, increased staff travel costs, etc, and to ensure that, after the year’s spending, we follow the reserves policy to keep enough cash to pay the foundation staff for a further year. We’re planning to go back and adjust a few things (internships, marketing, travel, etc) to make sure that we have the right resources for the goals we identified.

We had some “hacking time” in smaller groups to re-visit and clarify various policies, such as the conference and hackfest proposal/approval process, travel sponsorship process and look at ways to support internationalization (particularly to indigenous languages).

Friday: Foundation Planning

The Board started Friday with a board-only (no staff) meeting to make sure we were aligned on the goals that we were setting for the Executive Director during the coming year, informed by the Foundation goals we worked on earlier in the week. To avoid the “seven bosses” problem, there is one board member (myself) responsible for managing the ED’s priorities and performance. It’s important that I take advantage of the opportunity of the face to face meeting to check in with the Board about their feedback for the ED and things I should work together with Neil on over the coming months.

We also discussed a related topic, which is the length of the term that directors serve on the Foundation Board. With 7 staff members, the Foundation needs consistent goals and management from one year to the next, and the time demands on board members should be reduced from previous periods where the Foundation hasn’t had an Executive Director. We want to make sure that our “ten year goals” don’t change every year and undermine the strategies that we put in place and spend the Foundation resources on. We’re planning to change the Board election process so that each director has a two year term, so half of the board will be re-elected each year. This also prevents the situation where the majority of the Board is changed at the same election, losing continuity and institutional knowledge, and taking months for people to get back up to speed.

We finished the day with a formal board meeting to approve the budget, plus more hack time on various policies (and this blog!). Thanks to Collabora for the use of their office space, food, and snacks – and thanks to my fellow Board members and the Foundation’s wonderful and growing staff team.

Planet Debian: Michal Čihař: translation-finder 0.1

Setting up translation components in Weblate can be tricky in some cases, especially if you lack knowledge of the translation format you are using. It is also something we wanted to automate from the very beginning, but there were always more pressing things to implement. Now the time has come: I've just made the first beta release of translation-finder, a tool to help with this.

The translation-finder looks at a filesystem (e.g. a checked-out repository) and tries to find translatable files. So far the heuristics are pretty simple, but they still detect most of the projects currently hosted on our hosted localization platform just fine. If you find an issue with it, you're welcome to provide feedback in our issue tracker.

The integration into Weblate will come in the next weeks, and you will be able to enjoy this new feature in the 3.3 release.

Filed under: Debian English SUSE Weblate

Cryptogram: West Virginia Using Internet Voting

This is crazy (and dangerous). West Virginia is allowing people to vote via a smart-phone app. Even crazier, the app uses blockchain -- presumably because they have no idea what the security issues with voting actually are.

Worse Than Failure: Error'd: Real Formatting Advice

"VMware Team decided to send me some useful advice via e-mail," writes Antti T.


"Costco and Dell have teamed up to offer the latest in gaming storage technology...the Hard Frive?" wrote Scott H.


William K. wrote, "Yeah, more like City Faillines."


Sam writes, "They don't look so angry to me...or very bird-like for that matter either."


Mike S. writes, "Yeah, but what if I want to pay in GBP?"


"Well, to be fair, they did not list graphing as an area of expertise," wrote Louis G.



Planet Debian: Daniel Pocock: Debian GSoC 2018 report

One of my major contributions to Debian in 2018 has been participation as a mentor and admin for Debian in Google Summer of Code (GSoC).

Here are a few observations about what happened this year, from my personal perspective in those roles.

Making a full report of everything that happens in GSoC is close to impossible. Here I consider issues that span multiple projects and the mentoring team. For details on individual projects completed by the students, please see their final reports posted in August on the mailing list.

Thanking our outgoing administrators

Nicolas Dandrimont and Sylvestre Ledru retired from the admin role after GSoC 2016, and Tom Marble has retired from the Outreachy administration role. We should be enormously grateful for the effort they have put in, as these are very demanding roles.

When the last remaining member of the admin team, Molly, asked for people to step in for 2018, knowing the huge effort involved, I offered to help out on a very temporary basis. We drafted a new delegation but didn't seek to have it ratified until the team evolves. We started 2018 with Molly, Jaminy, Alex and myself. The role needs at least one new volunteer with strong mentoring experience for 2019.

Project ideas

Google encourages organizations to put project ideas up for discussion and also encourages students to spontaneously propose their own ideas. This latter concept is a significant difference between GSoC and Outreachy that has caused unintended confusion for some mentors in the past. I have frequently put teasers on my blog, without full specifications, to see how students would try to respond. Some mentors are much more precise, telling students exactly what needs to be delivered and how to go about it. Both approaches are valid early in the program.

Student inquiries

Students start sending inquiries to some mentors well before GSoC starts. When Google publishes the list of organizations to participate (that was on 12 February this year), the number of inquiries increases dramatically, in the form of personal emails to the mentors, inquiries on the debian-outreach mailing list, the IRC channel and many project-specific mailing lists and IRC channels.

Over 300 students contacted me personally or through the mailing list during the application phase (between 12 February and 27 March). This is a huge number and makes it impossible to engage in a dialogue with every student. In the last years where I have mentored, 2016 and 2018, I've personally put a bigger effort into engaging other mentors during this phase and introducing them to some of the students who had already made a good first impression.

As an example, Jacob Adams first inquired about my PKI/PGP Clean Room idea back in January. I was really excited about his proposals but I knew I simply didn't have the time to mentor him personally, so I added his blog to Planet Debian and suggested he put out a call for help. One mentor, Daniele Nicolodi replied to that and I also introduced him to Thomas Levine. They both generously volunteered and together with Jacob, ensured a successful project. While I originally started the clean room, they deserve all the credit for the enhancements in 2018 and this emphasizes the importance of those introductions made during the early stages of GSoC.

In fact, there were half a dozen similar cases this year where I have interacted with a really promising student and referred them to the mentor(s) who appeared optimal for their profile.

After my recent travels in the Balkans, a number of people from Albania and Kosovo expressed an interest in GSoC and Outreachy. The students from Kosovo found that their country was not listed in the application form but the Google team very promptly added it, allowing them to apply for GSoC for the first time. Kosovo still can't participate in the Olympics or the World Cup, but they can compete in GSoC now.

At this stage, I was still uncertain if I would mentor any project myself in 2018 or only help with the admin role, which I had only agreed to do on a very temporary basis until the team evolves. Nonetheless, the day before student applications formally opened (12 March) and after looking at the interest areas of students who had already made contact, I decided to go ahead mentoring a single project, the wizard for new students and contributors.

Student selections

The application deadline closed on 27 March. At this time, Debian had 102 applications, an increase over the 75 applications from 2016. Five applicants were female, including three from Kosovo.

One challenge we've started to see is that since Google reduced the stipend for GSoC, Outreachy appears to pay more in many countries. Some women put more effort into an Outreachy application or don't apply for GSoC at all, even though there are far more places available in GSoC each year. GSoC typically takes over 1,000 interns in each round while Outreachy can only accept approximately 50.

Applicants are not evenly distributed across all projects. Some mentors/projects only receive one applicant and then mentors simply have to decide if they will accept the applicant or cancel the project. Other mentors receive ten or more complete applications and have to spend time studying them, comparing them and deciding on the best way to rank them and make a decision.

Given the large number of project ideas in Debian, we found that the Google portal didn't allow us to use enough category names to distinguish them all. We contacted the Google team about this and they very quickly increased the number of categories we could use, which made it much easier to tag the large number of applications so that each mentor could filter the list and see only their own applicants.

The project I mentored personally, a wizard for helping new students get started, attracted interest from 3 other co-mentors and 10 student applications. To help us compare the applications and share data we gathered from the students, we set up a shared spreadsheet using Debian's Sandstorm instance and Ethercalc. Thanks to Asheesh and Laura for setting up and maintaining this great service.

Slot requests

Switching from the mentor hat to the admin hat, we had to coordinate the requests from each mentor to calculate the total number of slots we wanted Google to fund for Debian's mentors.

Once again, Debian's Sandstorm instance, running Ethercalc, came to the rescue.

All mentors were granted access, reducing the effort for the admins and allowing a distributed, collective process of decision making. This ensured mentors could see that their slot requests were being counted correctly but it means far more than that too. Mentors put in a lot of effort to bring their projects to this stage and it is important for them to understand any contention for funding and make a group decision about which projects to prioritize if Google doesn't agree to fund all the slots.

Management tools and processes

Various topics were discussed by the team at the beginning of GSoC.

One discussion was about the definition of "team". Should the new delegation follow the existing pattern, reserving the word "team" for the admins, or should we move to the convention followed by the DebConf team, where the word "team" encompasses a broader group of the volunteers? A draft delegation text was prepared but we haven't asked for it to be ratified; this is a pending task for the 2019 team (more on that later).

There was discussion about the choice of project management tools, keeping with Debian's philosophy of only using entirely free tools. We compared various options, including Redmine with the Agile (Kanban) plugin, Kanboard (as used by DebConf team), and more Sandstorm-hosted possibilities, such as Wekan and Scrumblr. Some people also suggested ideas for project management within their Git repository, for example, using Org-mode. There was discussion about whether it would be desirable for admins to run an instance of one of these tools to manage our own workflow and whether it would be useful to have all students use the same tool to ease admin supervision and reporting. Personally, I don't think all students need to use the same tool as long as they use tools that provide public read-only URLs, or even better, a machine-readable API allowing admins to aggregate data about progress.

Admins set up a Git repository for admin and mentor files on Debian's new GitLab instance, Salsa. We tried to put in place a process to synchronize the mentor list on the wiki, the list of users granted team access in Salsa and the list of mentors maintained in the GSoC portal. This could be taken further by asking mentors and students to put a Moin Category tag on the bottom of their personal pages on the wiki, allowing indexes to be built automatically.

Students accepted

On 23 April, the list of selected students was confirmed. Shortly afterward, a Debian blog appeared welcoming the students.

OSCAL 2018, Albania and Kosovo visit

I traveled to Tirana, Albania for OSCAL'18 where I was joined by two of the Kosovan students selected by Debian. They helped run the Debian booth, comprising a demonstration of software defined radio from Debian Hams.

Enkelena Haxhiu and I gave a talk together about communications technology. This was Enkelena's first talk. In the audience was Arjen Kamphuis; he was one of the last people to ask a question at the end. His recent disappearance is a disturbing mystery.


A GSoC session took place at DebConf18, the video is available here and includes talks from GSoC and Outreachy participants past and present.

Final results

Many of the students have already been added to Planet Debian where they have blogged about what they did and what they learned in GSoC. More will appear in the near future.

If you like their project, if you have ideas for an event where they could present it or if you simply live in the same region, please feel free to contact the students directly and help them continue their free software adventure with us.

Meeting more students

Google's application form for organizations like Debian asks us what we do to stay in contact with students after GSoC. Crossing multiple passes in the Swiss and Italian alps to find Sergio Alberti at Capo di Lago is probably one of the more exotic answers to that question.

Looking back at past internships

I first mentored students in GSoC 2013. Since then, I've been involved in mentoring a total of 12 students in GSoC and 3 interns in Outreachy as well as introducing many others to mentors and organizations. Several of them stay in touch and it's always interesting to hear about their successes as they progress in their careers and in their enjoyment of free software.

The Outreachy organizers have chosen a picture of two of my former interns, Urvika Gola (Outreachy 2016) and Pranav Jain (GSoC 2016) for the mentors page of their web site. This is quite fitting as both of them have remained engaged and become involved in the mentoring process.

Lessons from GSoC 2018, preparing for 2019

One of the big challenges we faced this year is that as the new admin team was only coming together for the first time, we didn't have any policies in place before mentors and students started putting significant effort into their proposals.

Potential mentors start to put in significant effort from February, when the list of participating organizations is usually announced by Google. Therefore, it seems like a good idea to make any policies clear to potential mentors before the end of January.

We faced a similar challenge with selecting mentors to attend the GSoC mentor summit. While some ideas were discussed about the design of a selection process or algorithm, the admins fell back on the previous policy based on a random selection as mentors may have anticipated that policy was still in force when they signed up.

As I mentioned already, there are several areas where GSoC and Outreachy are diverging. This has already led to some unfortunate misunderstandings in both directions, for example when people familiar with Outreachy rules have been unaware of GSoC differences and vice versa, and I'll confess to being one of several people who has been confused at least once. Mentors often focus on the projects and candidates and don't always notice the annual rule changes. Unfortunately, this requires involvement and patience from both the organizers and admins to guide the mentors through any differences at each step.

The umbrella organization question

One of the most contentious topics in Debian's GSoC 2018 program was the discussion of whether Debian can and should act as an umbrella organization for smaller projects who are unlikely to participate in GSoC in their own right.

As an example, in 2016, four students were mentored by Savoir Faire Linux (SFL), makers of the Ring project, under the Debian umbrella. In 2017, Ring joined the GNU Project and they mentored students under the GNU Project umbrella organization. DebConf17 coincidentally took place in Montreal, Canada, not far from the SFL headquarters and SFL participated as a platinum sponsor.

Google's Mentor Guide explicitly encourages organizations to consider this role, but does not oblige them to do so either:

Google’s program administrators actually look quite fondly on the umbrella organizations that participate each year.

For an organization like Debian, with our philosophy, independence from the cloud and distinct set of tools, such as the Salsa service mentioned earlier, being an umbrella organization gives us an opportunity to share the philosophy and working methods for mutual benefit while also giving encouragement to related projects that we use.

Some people expressed concern that this may cut into resources for Debian-centric projects, but it appears that Google has not limited the number of additional places in the program for this purpose. This is one of the significant differences with Outreachy, where the number of places is limited by funding constraints.

Therefore, if funding is not a constraint, I feel that the most important factor to evaluate when considering this issue is the size and capacity of the admin team. Google allows up to five people to be enrolled as admins and if enough experienced people volunteer, it can be easier for everybody whereas with only two admins, the minimum, it may not be feasible to act as an umbrella organization.

Within the team, we observed various differences of opinion: for example some people were keen on the umbrella role while others preferred to restrict participation to Debian-centric projects. We have the same situation with Outreachy: some mentors and admins only want to do GSoC, while others only do Outreachy and there are others, like myself, who have supported both programs equally. In situations like this, nobody is right or wrong.

Once that fundamental constraint, the size of the admin team, is considered, I personally feel that any related projects engaged on this basis can be evaluated for a wide range of synergies with the Debian community, including the people, their philosophy, the tools used and the extent to which their project will benefit Debian's developers and users. In other words, this doesn't mean any random project can ask to participate under the Debian umbrella but those who make the right moves may have a chance of doing so.


Google pays each organization an allowance of USD 500 for each slot awarded to the organization, plus some additional funds related to travel. This generally corresponds to the number of quality candidates identified by the organization during the selection process, regardless of whether the candidate accepts an internship or not. Where more than one organization requests funding (a slot) for the same student, both organizations receive a bounty; we had at least one case like this in 2018.

For 2018, Debian has received USD 17,200 from Google.

GSoC 2019 and beyond

Personally, as I indicated in January that I would only be able to do this on a temporary basis, I'm not going to participate as an admin in 2019, so it is a good time for other members of the community to think about the role. Each organization that wants to participate needs to propose a full list of admins to Google in January 2019; therefore, now is the time for potential admins to step forward, decide how they would like to work together as a team and work out how to recruit mentors and projects.

Thanks to all the other admins, mentors, the GSoC team at Google, the Outreachy organizers and members of the wider free software community who supported this initiative in 2018. I'd particularly like to thank all the students, though; it is really exciting to work with people who are so open minded and patient, and who remain committed even when faced with unanticipated challenges and adversity.


Sociological ImagesHorror Films Are Our Collective Nightmares

Sociology reveals the invisible in our world. Sociologists explore the parts of our society that remain “in the dark,” and this has a lot in common with the horror genre. Both sociologists and horror fans find value in delving into the qualities and behaviors of people that others would rather not address. Both focus on things we don’t want to confront. More than many other genres, horror films are rife with sociological implications.

We are sociologists who host the Collective Nightmares podcast. Our podcast examines horror films from a sociological perspective. We focus on issues such as the representation of individuals of different genders, sexualities, and racial/ethnic backgrounds as well as the ideological messages of the film narratives.

Horror movies are a great teaching tool for undergraduate classes. For example, two recent films, Summer of ’84 and The First Purge, are a good fit for sociology courses focusing on gender, sexuality, deviance, and social problems. We’ve used discussion of horror films in our classes with great success – and what better time than Halloween to inspire students to think sociologically about horror?!

Summer of ’84 (2018)

Summer of ’84 models itself on popular media of the 1980s in look, tone, and story (The Goonies (1985), Explorers (1985), Stand By Me (1986), etc.). Our lead protagonist, an upper-middle-class, white, heterosexual boy named Davey, played by Graham Verchere, suspects his neighbor of being a serial killer. He convinces his friends to help him spy and investigate. Hijinks and horror ensue.

A Reagan Bush ’84 campaign sign in a neighborhood yard, signaling the political era of the film.

In our discussion of Summer of ‘84, we examine the representation of young women in adolescent-boy-centric summer adventure movies. We also discuss the ubiquity of troublesome but “oh so palatable” tropes: representations of women, people of color, and political ideology that, when couched in a nostalgic 1980s setting (which we both grew up smack in the middle of), can feel homey. The cultural climate of our youth seems to have clouded our ability to see how Summer of ’84 depicted, first and foremost, women, but also racial inequity and the political climate of the 1980s.

To address these ideas in your classroom, consider a discussion centered on the following argument, which we make in the podcast: Summer of ‘84 presented women largely as sexual currency for young men’s bonding.

Davey and his friends in their clubhouse discussing women while looking at an adult magazine.

Horror is a genre that relies on stigmatized topics and transgressing boundaries, and it therefore has unique potential to challenge or reinforce common conceptions of normalcy. One of the ways the core group of boys are cast as normal, good, and moral, in contrast to the suspicious neighbor, is via their hegemonic heterosexuality. This is largely done by showing them discussing women as potential sexual trophies, engaging the male gaze toward adult magazines, and taking advantage of Davey’s vantage point to watch his neighbor Nikki, played by Tiera Skovbye, undressing.

Nikki is relegated to the role of “love interest” as a willing participant in these exchanges. She takes pride in her ability to give the boys status through her flirtations, exalting them as her only true friends. She finds their covert attempts to see her naked flattering, rather than a stark invasion of privacy. For a deeper discussion, we take this argument a step further and ask ourselves why we, both trained sociologists (one of whom specializes in gender), found the film enjoyable in spite of these deeply problematic behaviors. What does that say about the pervasiveness of these gender ideologies in our society?

The First Purge (2018)

The annual purge announcement from The Purge: Election Year (2016)
Staten Island residents rallying against the proposal to enact The Purge in their neighborhood.

The concept of The Purge (2013) film and now TV series is that once a year in the U.S. for 12 hours, all crime, including murder, is legal. The most recent film in the series, The First Purge, arrived in theaters this summer. In the film, the right-wing New Founding Fathers of America political party conducts an experiment on Staten Island, a borough of primarily poor people of color. This experiment is a trial run of the Purge concept that is rolled out nationally in the other films.

This premise offers director Gerard McMurray an allegory to explore a host of sociological issues relevant to current U.S. society. The film works as a basis for a discussion of class inequality, racial injustices, gendered violence, and social control. In our discussion of the film, we address deviance, racial stereotypes, anomie, solidarity, and the social psychological influences on behavior, especially the internalization of norms.

Though the horror genre is notorious for being particularly white-dominated, The First Purge is directed by a Black man (Gerard McMurray) and the primary stars of the film are people of color (Y’lan Noel, Lex Scott Davis, Joivan Wade). While critical and thought-provoking in many ways, the film is also disappointing when it comes to portrayals of gender and sexuality. Questions for class discussion could include how social structure influences individual agency within the film’s narrative. How does the film perpetuate and challenge racial and gender stereotypes? What is the role of intersectionality in these stereotypes?

In preparation for Halloween, we will soon have a follow-up post detailing which of our prior podcasts are relevant to different sociology courses. We will also have an example assignment to share that instructors can adapt to their own needs/classes to help you discuss horror films with your students.

Marshall Smith earned his PhD in sociology from the University of Colorado at Boulder in 2011 focusing on gender, sexuality, youth, and media. He currently teaches sociology classes at CU Boulder for the Farrand Residential Academic Program. 

Laura Patterson earned her PhD from the University of Colorado at Boulder in 2011, focusing on environmental issues and the impacts of HIV/AIDS in rural South Africa.  She’s currently a research consultant with a Colorado-based pregnancy prevention program and other federally-funded evaluation efforts, in addition to teaching at CU Boulder and Adams State University.


Planet DebianPetter Reinholdtsen: Release 0.2 of free software archive system Nikita announced

This morning, the new release of the Nikita Noark 5 core project was announced on the project mailing list. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These are the changes in version 0.2 since version 0.1.1:

  • Fix typos in REL names
  • Tidy up error message reporting
  • Fix issue where we used Integer.valueOf(), not Integer.getInteger()
  • Change some String handling to StringBuffer
  • Fix error reporting
  • Code tidy-up
  • Fix issue using static non-synchronized SimpleDateFormat to avoid race conditions
  • Fix problem where deserialisers were treating integers as strings
  • Update methods to make them null-safe
  • Fix many issues reported by coverity
  • Improve equals(), compareTo() and hash() in domain model
  • Improvements to the domain model for metadata classes
  • Fix CORS issues when downloading document
  • Implementation of case-handling with registryEntry and document upload
  • Better support in Javascript for OPTIONS
  • Adding concept description of mail integration
  • Improve setting of default values for GET on ny-journalpost
  • Better handling of required values during deserialisation
  • Changed tilknyttetDato (M620) from date to dateTime
  • Corrected some opprettetDato (M600) (de)serialisation errors.
  • Improve parse error reporting.
  • Started on OData search and filtering.
  • Added Contributor Covenant Code of Conduct to project.
  • Moved repository and project from Github to Gitlab.
  • Restructured repository, moved code into src/ and web/.
  • Updated code to use Spring Boot version 2.
  • Added support for OAuth2 authentication.
  • Fixed several bugs discovered by Coverity.
  • Corrected handling of date/datetime fields.
  • Improved error reporting when rejecting during deserialization.
  • Adjusted default values provided for ny-arkivdel, ny-mappe, ny-saksmappe, ny-journalpost and ny-dokumentbeskrivelse.
  • Several fixes for korrespondansepart*.
  • Updated web GUI:
    • Now handle both file upload and download.
    • Uses new OAuth2 authentication for login.
    • Forms now fetches default values from API using GET.
    • Added RFC 822 (email), TIFF and JPEG to list of possible file formats.

The changes and improvements are extensive. Running diffstat on the changes between git tags 0.1.1 and 0.2 shows 1098 files changed, 108666 insertions(+), 54066 deletions(-).
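Two of the fixes listed above point at classic Java pitfalls worth spelling out: Integer.getInteger() looks up a system property rather than parsing its argument, and SimpleDateFormat is mutable and not thread-safe, so a shared static instance can corrupt dates under concurrent requests. A minimal sketch of the safe alternatives (illustrative only, not Nikita's actual code):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class SafeParsing {
    // DateTimeFormatter is immutable and thread-safe, unlike SimpleDateFormat,
    // so one shared static instance is fine even with concurrent requests.
    static final DateTimeFormatter ISO_DATE = DateTimeFormatter.ofPattern("yyyy-MM-dd");

    public static Integer parseId(String value) {
        // Integer.valueOf parses the string itself; Integer.getInteger("42")
        // would instead look up the system property named "42" (usually null).
        return Integer.valueOf(value);
    }

    public static LocalDate parseDate(String value) {
        return LocalDate.parse(value, ISO_DATE);
    }
}
```

The same pattern applies anywhere a formatter or parser is shared between request handlers.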

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

CryptogramGovernment Perspective on Supply Chain Security

This is an interesting interview with a former NSA employee about supply chain security. I consider this to be an insurmountable problem right now.

Worse Than FailureThe Theater of the Mind

Hamza has some friends in the theater business. These friends had an in-house developed Java application to manage seating arrangements, and they had some problems with it. They had lots of problems with it. So Hamza cut them a deal and agreed to take a look.

There was the usual litany of problems: performance was garbage, features bugged out if you didn’t precisely follow a certain path, it crashed all the time, and so on. There was also an important missing feature.

Seats in a theater section are laid out roughly in a grid. Roughly, because not every row has exactly the same number of seats, and sometimes there are gaps in the middle of a row to make way for the requirements of the theater itself. The application could handle that, for exactly one arrangement. Seats could be removed to create standing-room sections, seats could be added to rows, seats could be reserved for various purposes, and so on. None of this was supported by the application.

Hamza dug into the code which rendered the seating arrangements. Did it read from a database? Did it read from a config file? Did it have a hard-coded array?

for (int r = 0; r < NUMROWS; r++) {
  for (int c = 0; c < NUMCOLS; c++) {
    Rectangle rec = new Rectangle(24, 24, Color.rgb(204, 0, 0));
    fat[r][c] = new Seat();
    changeRecColor(rec, r, c);
    StackPane s = new StackPane();
    gridPane.add(s, c, r);
    if (
        r == 14 ||
            c == 0 ||
            c == 1 && r != 23 ||
            c == 2 && r != 23 ||
            c == 3 && (r != 22 && r != 23) ||
            c == 4 && (r != 20 && r != 21 && r != 22 && r != 23) ||
            c == 5 && (r != 12 && r != 13 && r != 18 && r != 19 && r != 20 && r != 21 && r != 22 && r != 23) ||
            c == 6 && (r != 10 && r != 11 && r != 12 && r != 13 && r != 16 && r != 17 && r != 18 && r != 19 && r != 20 && r != 21 && r != 22 && r != 23) ||
            c == 7 && (r != 8 && r != 9 && r != 10 && r != 11 && r != 12 && r != 13 && r != 16 && r != 17 && r != 18 && r != 19 && r != 20 && r != 21 && r != 22 && r != 23) ||
            c == 8 && (r == 0 || r == 1 || r == 2 || r == 3 || r == 4 || r == 5 || r == 24) ||
            c == 9 && (r == 0 || r == 1 || r == 2 || r == 3 || r == 24) ||
            c == 10 && (r == 1 || r == 24) ||
            (c == 11 || c == 12 || c == 13) && r == 24 ||
            (c == 14 || c == 15 || c == 16) && r == 24 ||
            c == 17 ||
            c == 18 ||
            c == 19 && (r != 0) ||
            c == 20 && (r == 15 || r == 23 || r == 24) ||
            c == 21 && (r == 23 || r == 24) ||
            (c == 22 || c == 23) && (r == 23 || r == 24) ||
            (c == 24 || c == 25 || c == 26 || c == 27 || c == 28 || c == 29 || c == 30) && (r == 23 || r == 24) ||
            (c == 31 || c == 32 || c == 33 || c == 34 || c == 35) && (r == 23 || r == 24) ||
            c == 36 && (r != 0 && r != 16 && r != 17 && r != 18 && r != 19 && r != 20 && r != 21 && r != 22) ||
            c == 37 && (r != 23) ||
            c == 38 && (r != 23) ||
            c == 39 && (r == 15 || r == 16 || r == 17 || r == 18 || r == 19 || r == 20 || r == 21 || r == 22 || r == 24) ||
            c == 40 && r == 24 ||
            (c == 41 || c == 42 || c == 43 || c == 44) && r == 24 ||
            c == 45 && (r == 1 || r == 24) ||
            c == 46 && (r < 14 && (r == 0 || r == 1 || r == 2 || r == 3) || r == 24) ||
            c == 47 && (r < 14 && (r == 0 || r == 1 || r == 2 || r == 3 || r == 4 || r == 5) || r == 24) ||
            c == 48 && (r != 8 && r != 9 && r != 10 && r != 11 && r != 12 && r != 13 && r != 15 && r != 16 && r != 17 && r != 18 && r != 19 && r != 20 && r != 21 && r != 22 && r != 23) ||
            c == 49 && (r != 10 && r != 11 && r != 12 && r != 13 && r != 16 && r != 17 && r != 18 && r != 19 && r != 20 && r != 21 && r != 22 && r != 23) ||
            c == 50 && (r != 12 && r != 13 && r != 16 && r != 17 && r != 18 && r != 19 && r != 20 && r != 21 && r != 22 && r != 23) ||
            c == 51 && (r < 14 || r > 14 && (r == 15 || r == 16 || r == 17 || r == 24)) ||
            c == 52 && (r != 20 && r != 21 && r != 22 && r != 23) ||
            c == 53 && (r != 22 && r != 23) ||
            c == 54 ||
            r > 24
    ) {
      fat[r][c] = null;
    }
  }
}

No, it had a 37 line if condition.

Hamza writes: “Refactoring this application was fun. Not.”
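For contrast, a data-driven layout (a sketch under assumed requirements, not the fix that was actually shipped) would store which grid positions lack seats, so each arrangement becomes a table to edit, loaded from a config file or database, rather than a 37-line boolean expression:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SeatLayout {
    // Map of row -> columns with no seat; in a real application this would
    // be populated from a per-arrangement config file or database table.
    private final Map<Integer, Set<Integer>> missing = new HashMap<>();

    public void removeSeat(int row, int col) {
        missing.computeIfAbsent(row, r -> new HashSet<>()).add(col);
    }

    public boolean hasSeat(int row, int col) {
        return !missing.getOrDefault(row, Set.of()).contains(col);
    }
}
```

In the rendering loop, the giant condition then collapses to something like `if (!layout.hasSeat(r, c)) { fat[r][c] = null; }`, and supporting a new arrangement no longer requires a recompile.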

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Don MartiConsent management: can it even work?

Read the whole thing: Why Data Privacy Based on Consent Is Impossible, an interview with Helen Nissenbaum.

The farce of consent as currently deployed is probably doing more harm as it gives the misimpression of meaningful control that we are guiltily ceding because we are too ignorant to do otherwise and are impatient for, or need, the proffered service. There is a strong sense that consent is still fundamental to respecting people’s privacy. In some cases, yes, consent is essential. But what we have today is not really consent.

And, in Big Data's End Run Around Anonymity and Consent (PDF):

So long as a data collector can overcome sampling bias with a relatively small proportion of the consenting population, this minority will determine the range of what can be inferred for the majority and it will discourage firms from investing their resources in procedures that help garner the willing consent of more than the bare minimum number of people. In other words, once a critical threshold has been reached, data collectors can rely on more easily observable information to situate all individuals according to these patterns, rendering irrelevant whether or not those individuals have consented to allowing access to the critical information in question. Withholding consent will make no difference to how they are treated!

Is consent management even possible? Is a large company that seeks consent from an individual similar to a Freedom Monster?

What would happen if consent had to be informed?

And what's going on with Judge Judy and skin care products? There are thousands of skin care scams on Facebook and other places on the internet that falsely state that their product is endorsed by celebrities. These scams all advertise a free sample of their product if you pay $4.95 for the shipping. Along the way, you have to agree to the terms and conditions....The terms and conditions are only viewable through a link you have to click, which most of these people never do.

Or Martin Lewis and fake bitcoin ads? He launched a lawsuit in April 2018, claiming scammers are using his trusted reputation to ensnare people into bitcoin and Cloud Trader "get-rich-quick schemes" on Facebook.

The problem is that ad media that have more data, and are better at facilitating targeting, are also better for deceptive advertisers. Somehow an ad-supported medium needs consent for just enough data to make the ads saleable, no more. As soon as excess consent enters the system, the incentive to produce ad-supported news and cultural works goes down, and the returns to scamming go up.

See you at Mozfest? Related sessions: Consent management at Mozfest 2018

FBI Brings Gun to Advertising Knife Fight

John Hegarty: Globalisation has hurt the marketing industry

What's There To Laugh About?

Advertising only ever works by consent

Mainstream Advertising is Still Showing Up on Conspiracy and Extremist Websites

Some dark thoughts on content marketing.

Planet Linux AustraliaLev Lafayette: Performance Improvements with GPUs for Marine Biodiversity: A Cross-Tasman Collaboration

Identifying probable dispersal routes for marine populations is a data- and processing-intensive task for which traditional high performance computing systems are suitable, even for single-threaded applications. Whilst processing dependencies between the datasets exist, a large level of independence between sets allows the use of job arrays to significantly improve processing time. Identification of bottlenecks within the code base suitable for GPU optimisation has led to additional performance improvements, which can be coupled with the existing benefits from job arrays. This case study offers an example of how to optimise single-threaded applications for GPU architectures for significant performance improvements. Further development is suggested with the expansion of the GPU capability of the University of Melbourne’s “Spartan” HPC system.

A presentation to EResearchAustralasia 2018.


TEDPreview our new podcast: The TED Interview

TED is launching a new way for curious audiences to immerse themselves more deeply in some of the most compelling ideas on our platform: The TED Interview, a long-form TED original podcast series. Beginning October 16, weekly episodes of The TED Interview will feature head of TED Chris Anderson deep in conversation with TED speakers about the ideas they shared in their TED Talks. Guests will include Elizabeth Gilbert and Sir Ken Robinson, as well as Sam Harris, Mellody Hobson, Daniel Kahneman, Ray Kurzweil and more. Listen to the trailer here.

NEW: Listen to the first episode, our conversation with Elizabeth Gilbert.

“If you look at the cast of characters who have given TED Talks over the past few years, it’s a truly remarkable group of people, and includes many of the world’s most remarkable minds,” Chris said. “We got a glimpse of their thinking in their TED Talk, but there is so much more there. That’s what this podcast series seeks to uncover. We get to dive deeper, much deeper than was possible in their original talk, allowing them to further explain, amplify, illuminate and, in some cases, defend their thinking. For anyone turned on by ideas, these conversations are a special treat.”

The launch comes at an exciting time when TED is testing multiple new formats and channels to reach even wider global audiences. In the past year TED has experimented with original podcasts, including WorkLife with Adam Grant, Facebook Watch series like Constantly Curious, primetime international television in India with TED Talks India Nayi Soch and more.

“We’ve been very ambitious in our goal of developing and testing new formats and channels that can support TED’s mission of Ideas Worth Spreading,” said Colin Helms, head of media at TED. “A decade after TED began posting talks online, there are so many more differing media habits to contend with—and, lucky for us, so many more formats to play with. The TED Interview is an exciting new way for us to offer curious audiences a front-row seat to some of the day’s most fascinating and challenging conversations.”

The first episode of the TED Interview debuts Tuesday, October 16, on Apple Podcasts, the TED Android app or wherever you like to listen to podcasts. Season 1 features eleven episodes, roughly 40 minutes each. New episodes will be made available every Tuesday. Subscribe and check out the trailer here.

Planet DebianMichal Čihař: wlc 0.9

wlc 0.9, a command line utility for Weblate, has just been released. There are several new commands, such as translation file upload and repository cleanup. The codebase has also been migrated to use requests instead of urllib.

Full list of changes:

  • Switched to requests.
  • Added support for cleanup command.
  • Added support for upload command.

wlc is built on the API introduced in Weblate 2.6, which is still in development; you need at least Weblate 2.10 (or use it on our hosting offering). You can find usage examples in the wlc documentation.

Filed under: Debian English SUSE Weblate

Worse Than FailureCodeSOD: A Load of ProductCodes

“Hey, Kim H, can you sit in on a tech-screen for a new hire?”

The Big Boss had a candidate they wanted hired, but before the hiring could actually happen, a token screening process needed to happen. Kim and a few other staffers were pulled in to screen the candidate, and the screen turned into a Blue Screen, because the candidate crashed hard. Everyone in the room gave them a thumbs down, and passed their report up the chain to the Big Boss.

The Big Boss ignored their comments and hired the candidate anyway. A week later, this ended up in source control:

public static ProductCodeModel GetProductCode(int id) {
    for (int i = 0; i < GetProductCodes().size(); i++) {
        if (i == id) return GetProductCodes().get(i);
    }
    return null;
}
Obviously, the loop is unnecessary. The real kicker is that GetProductCodes loads its data out of a file each time it’s called. The file contains thousands of lines, which means that to access any individual product the entire file has to be read into memory over and over, roughly id times per lookup, and if you want the last product code in the datafile, you have read the file N times.
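The fix most readers would reach for (a sketch only; ProductCodeModel and the file-loading routine here are stand-ins, since the original API isn't shown) is to read the file once, cache the parsed list, and index into it directly:

```java
import java.util.ArrayList;
import java.util.List;

public class ProductCodes {
    // Placeholder for the real model class.
    static class ProductCodeModel {
        final String code;
        ProductCodeModel(String code) { this.code = code; }
    }

    private static List<ProductCodeModel> cache;

    // Stand-in for the existing file-reading code; in the real application
    // this is the expensive call that parsed thousands of lines per invocation.
    private static List<ProductCodeModel> loadProductCodesFromFile() {
        List<ProductCodeModel> codes = new ArrayList<>();
        codes.add(new ProductCodeModel("A-100"));
        codes.add(new ProductCodeModel("B-200"));
        return codes;
    }

    // Read the file once; every later call reuses the parsed list.
    private static synchronized List<ProductCodeModel> getProductCodes() {
        if (cache == null) {
            cache = loadProductCodesFromFile();
        }
        return cache;
    }

    public static ProductCodeModel getProductCode(int id) {
        List<ProductCodeModel> codes = getProductCodes();
        // Direct index access instead of looping and re-reading the file.
        return (id >= 0 && id < codes.size()) ? codes.get(id) : null;
    }
}
```

With the cache in place each lookup is a constant-time index access, and the file is parsed exactly once no matter how many product codes are requested.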

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!


TEDTitus Kaphar and Vijay Gupta named MacArthur Fellows, a musical tribute to #MeToo and other TED news

As usual, the TED community is busy with new projects and news — here are a few highlights.

Meet two newly minted MacArthur “geniuses.” Visual artist Titus Kaphar and violinist Vijay Gupta have been named 2018 MacArthur Fellows! The fellowship, established in 1981, awards $625,000 over the course of five years to individuals of exemplary creative merit, to spend as they like. Kaphar’s recent projects include The Jerome Project, a painting series on mass incarceration, and The Next Haven project, a community space that offers fellowships to artists and curators and mentorship to local high-schoolers. In a video profile, Kaphar said, “I think merging art and history can help motivate social change.” Gupta is a social justice advocate who founded Street Symphony, a nonprofit that centers homeless and incarcerated communities through creative and educational programming in downtown Los Angeles. On his work, Gupta said, “It is as much our job to heal and inspire as it is to disrupt and provoke. It is our job to be the truth tellers of our time.” Congratulations to them both! (Watch Kaphar’s TED Talk and Gupta’s TED Talk.)

SpaceX achieves first California ground landing. Rocket company SpaceX, led by CEO Elon Musk and President Gwynne Shotwell, has landed one of their previously used Falcon 9 rockets on California land for the first time. The Falcon 9 was launched on October 7 to deliver the first of two 3,500-pound Argentinian satellites into low Earth orbit; following the drop-off, the rocket returned to Earth faster than the speed of sound and landed on SpaceX’s new landing pad at Vandenberg Air Force Base, north of LA. The full video of the launch and landing is about 30 minutes long and well worth the watch — it’s history in the making! (Watch Musk’s TED Talk and Shotwell’s TED Talk.)

A powerful musical tribute to #MeToo. In collaboration with singer Jasmine Power, Amanda Palmer has released a new song and video called “Mr. Weinstein Will See You Now,” marking the one-year anniversary of The New York Times exposé on Harvey Weinstein that catalyzed the #MeToo movement. Directed and choreographed by Noémie Lafrance, the video (NSFW) weaves striking visuals and haunting lyrics into a poignant reflection on sexual violence. In a statement, Palmer said, “As we directed the chorus members through our song chorus, I felt this overwhelming emotion come over me as I gazed into the eyes of each and every woman singing along … Women are rising up, everywhere. Change is happening at every level.” All proceeds from the song’s sales on Bandcamp will be forwarded to the Time’s Up legal defense fund. (Watch Palmer’s TED Talk.)

Rethink Robotics shutters. Widely regarded as the company that introduced the world to collaborative robots, Rethink Robotics, co-founded by Rodney Brooks, has closed. Rethink’s flagship products, the Sawyer and Baxter robots, were breakthroughs, the first industrial robots built to work safely with people, rather than operated at a distance. The robots were designed to be used by factory floor workers who could program them by moving their “arms” to complete repetitive or dangerous tasks; they also had animated faces to communicate with their human co-workers. In The Verge, Rethink’s lack of commercial success was listed as the main reason for closing. (Watch Brooks’ TED Talk.)

Spittin’ Venom. Musician Reggie Watts debuted a new track on The Late Late Show with James Corden, paying homage to ’90s hip-hop with a hilarious take on Marvel’s new thriller Venom. Written by Demi Adejuyigbe and featuring Jenny Slate, along with a slew of aggressively ’90s outfits, the skit is a fun, quick watch with a surprisingly catchy beat. (Watch Watts’ TED Talk.)

Planet DebianMatthew Garrett: Initial thoughts on MongoDB's new Server Side Public License

MongoDB just announced that they were relicensing under their new Server Side Public License. This is basically the Affero GPL except with section 13 largely replaced with new text, as follows:

If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License. Making the functionality of the Program or modified version available to third parties as a service includes, without limitation, enabling third parties to interact with the functionality of the Program or modified version remotely through a computer network, offering a service the value of which entirely or primarily derives from the value of the Program or modified version, or offering a service that accomplishes for users the primary purpose of the Software or modified version.

“Service Source Code” means the Corresponding Source for the Program or the modified version, and the Corresponding Source for all programs that you use to make the Program or modified version available as a service, including, without limitation, management software, user interfaces, application program interfaces, automation software, monitoring software, backup software, storage software and hosting software, all such that a user could run an instance of the service using the Service Source Code you make available.

MongoDB admit that this license is not currently open source in the sense of being approved by the Open Source Initiative, but say: “We believe that the SSPL meets the standards for an open source license and are working to have it approved by the OSI.”

At the broadest level, AGPL requires you to distribute the source code to the AGPLed work[1] while the SSPL requires you to distribute the source code to everything involved in providing the service. Having a license place requirements around things that aren't derived works of the covered code is unusual but not entirely unheard of - the GPL requires you to provide build scripts even if they're not strictly derived works, and you could probably make an argument that the anti-Tivoisation provisions of GPL3 fall into this category.

A stranger point is that you're required to provide all of this under the terms of the SSPL. If you have any code in your stack that can't be released under those terms then it's literally impossible for you to comply with this license. I'm not a lawyer, so I'll leave it up to them to figure out whether this means you're now only allowed to deploy MongoDB on BSD because the license would require you to relicense Linux away from the GPL. This feels sloppy rather than deliberate, but if it is deliberate then it's a massively greater reach than any existing copyleft license.

You can definitely make arguments that this is just a maximalist copyleft license, the AGPL taken to extreme, and therefore it fits the open source criteria. But there's a point where something is so far from the previously accepted scenarios that it's actually something different, and should be examined as a new category rather than already approved categories. I suspect that this license has been written to conform to a strict reading of the Open Source Definition, and that any attempt by OSI to declare it as not being open source will receive pushback. But definitions don't exist to be weaponised against the communities that they seek to protect, and a license that has overly onerous terms should be rejected even if that means changing the definition.

In general I am strongly in favour of licenses ensuring that users have the freedom to take advantage of modifications that people have made to free software, and I'm a fan of the AGPL. But my initial feeling is that this license is a deliberate attempt to make it practically impossible to take advantage of the freedoms that the license nominally grants, and this impression is strengthened by it being something that's been announced with immediate effect rather than something that's been developed with community input. I think there's a bunch of worthwhile discussion to have about whether the AGPL is strong and clear enough to achieve its goals, but I don't think that this SSPL is the answer to that - and I lean towards thinking that it's not a good faith attempt to produce a usable open source license.

(It should go without saying that this is my personal opinion as a member of the free software community, and not that of my employer)

[1] There's some complexities around GPL3 code that's incorporated into the AGPLed work, but if it's not part of the AGPLed work then it's not covered

comment count unavailable comments

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #181

Here’s what happened in the Reproducible Builds effort between Sunday October 7 and Saturday October 13 2018:

Another brief reminder that another Reproducible Builds summit will be taking place between 11th—13th December 2018 in Mozilla’s offices in Paris. If you are interested in attending please send an email to More details can also be found on the corresponding event page of our website.

diffoscope development

diffoscope (our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages) was updated this week, including contributions from:

Packages reviewed and fixed, and bugs filed

Test framework development

There were a large number of updates to our Jenkins-based testing framework by Holger Levsen this month, including:

In addition, Mattia Rizzolo performed some node administration (1, 2).


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Rondam RamblingsYet another ominous development

A federal judge has dismissed Stormy Daniels' defamation lawsuit against Donald Trump and ordered her to pay his legal fees: The Court agrees with Mr. Trump's argument because the tweet in question constitutes 'rhetorical hyperbole' normally associated with politics and public discourse in the United States. The First Amendment protects this type of rhetorical statement.  [Emphasis added.] This

Sociological ImagesSocial Psych-ics

One of the most important ideas in social psychology is that there are different ways to think. Sometimes we consciously process information by reasoning through it. Other times we rely on snap judgements, emotional reactions, habit and instinct. These two ways of thinking (sometimes called “cold” and “hot”, “discursive” and “practical”, or System 1 and System 2) are important for studying society and culture. Is an advertisement trying to persuade you with an argument, or just trying to get you to feel a certain way when you pick up a product? We all think that System 1 is thinking, but once you start noticing System 2 at work, plain old thinking can seem a bit more magical.

Photo Credit: Robbi Baba, Flickr CC

Psychics are a fun way to see these ideas at work. Check out this short clip of actor Orson Welles talking about his experience with “cold reading”—learning and practicing the techniques that psychics use to draw conclusions and make predictions about people. Notice how the story he tells moves across the different kinds of thinking.

At first, cold readers consciously rely on a set of observations and rules, but as they get better this process becomes instinctual. They start relying on snap judgements, and they sometimes start believing that their instincts reflect actual psychic abilities. What’s actually happening is a practical insight from their training; it is just packaged and sold as if it came from carefully considering a mystical knowledge or power.

But if a psychic doesn’t believe in what they are doing, is selling readings unethical? If the insights they get are based on real observations and instincts, are they just helping people think about their lives in a different way? If you have a little more time to ponder this, check out this cool documentary about Tarot reader Enrique Enriquez. He makes no claims to a mystical power or secret knowledge here; he just lays out cards and talks to people about what they bring to mind. The commentators say this is closer to poetry or performance art than psychic work. What kinds of thinking are going on here?

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Planet DebianJulien Danjou: More GitHub workflow automation


The more you use computers, the more you see the potential for automating everything. Who doesn't love that? Building Mergify these last months, we've decided it was time to bring more automation to the development workflow.

Mergify's first version was a minimal viable product around automating the merge of pull requests. As I wrote a few months ago, we wanted to automate the merge of pull requests when they were ready to be merged. For most projects, this is easy and consists of a simple rule: "it must be approved by a developer and pass the CI".

Evolving on Feedback

For the first few months, we received a lot of feedback from our users. They were enthusiastic about the product but were frustrated by a couple of things.

First, Mergify would mess with branch protections. We thought that people wanted the GitHub UI to match their rules. As I'll explain later, that turned out to be only partially true, and we found a workaround.

Then, Mergify's abilities were capped by some of the limitations of the GitHub workflow and API. For example, GitHub would only allow rules per branch, whereas our users wanted to have rules applied based on a lot of different criteria.

Building the Next Engine

We rolled up our sleeves and started to build that new engine. The first thing was to get rid of the GitHub branch protection feature altogether and leverage the Checks API to render something useful to users instead. You can now have a complete overview of the rules that will be applied to your pull requests in the UI, making it easy to understand what's happening.


Then, we wrote a new matching engine that would allow matching any pull requests based on any of its attributes. You can now automate your workflow with a finer-grained configuration.

What Does It Look Like?

Here's a simple rule you could write:

  - name: automatic merge on approval and CI pass
    conditions:
      - "#approved-reviews-by>=1"
      - status-success=continuous-integration/travis-ci/pr
      - label!=work-in-progress
    actions:
      merge:
        method: merge

With that, any pull request that has been approved by a collaborator, passes the Travis CI job and does not have the label work-in-progress will be automatically merged by Mergify.

You could use even more actions to backport this pull request to another branch, close the pull request or add/remove labels. We're starting to see users building amazing workflows with that engine!
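To make that concrete, here is a purely illustrative sketch of a backport rule in the same configuration style (the rule name, label and branch below are invented for illustration, not taken from the post):

```yaml
  - name: backport merged fixes to the stable branch
    conditions:
      - merged
      - label=backport-stable
    actions:
      backport:
        branches:
          - stable
```

With a rule like this, once a labelled pull request is merged, Mergify would open a new pull request carrying the same commits against the stable branch.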

We're thrilled by this new version we launched this week and glad we're getting amazing feedback (again) from our users.

When you give it a try, drop me a note and let me know what you think about it!

CryptogramPrivacy for Tigers

Ross Anderson has some new work:

As mobile phone masts went up across the world's jungles, savannas and mountains, so did poaching. Wildlife crime syndicates can not only coordinate better but can mine growing public data sets, often of geotagged images. Privacy matters for tigers, for snow leopards, for elephants and rhinos, and even for tortoises and sharks. Animal data protection laws, where they exist at all, are oblivious to these new threats, and no-one seems to have started to think seriously about information security.

Video here.

Worse Than FailureCodeSOD: My Condition is Complicated

Anneke’s organization is the sort of company where “working” takes precedence over “working well”. Under-staffed, under-budgeted, and under unrealistic deadlines, there simply isn’t any emphasis on code quality. The result is your pretty standard pile of badness: no tests, miles of spaghetti code, fragile features and difficult to modify implementations.

Recently, the powers that be discovered that they could hire half a dozen fresh-out-of-school developers on the cheap, and threw a bunch of fresh-faced kids into that mountain of garbage with no supervision. And that’s how this happened.

XmlNodeList nodeList = NodeListFromSomewhere();

while (index < nodeList.Count
       && (nodeList.Item(index).CloneNode(true).SelectSingleNode(xpathPolicyNo, XmlNsMngr).InnerText.Replace("\n", "").Replace("\t", "").Replace("\r", "") == policyNumberSystem)
       && (nodeList.Item(index).CloneNode(true).SelectSingleNode(xpathDocumentType, XmlNsMngr).InnerText.Replace("\n", "").Replace("\t", "").Replace("\r", "").ToUpper().Trim() != packageContent)
       && (nodeList.Item(index).CloneNode(true).SelectSingleNode(xpathOfferNumber, XmlNsMngr).InnerText.Replace("\n", "").Replace("\t", "").Replace("\r", "") == offerNumber)
       && (nodeList.Item(index).CloneNode(true).SelectSingleNode(xpathDocumentType, XmlNsMngr).InnerText.Replace("\n", "").Replace("\t", "").Replace("\r", "").ToUpper().Trim() != packageContent)
       && (nodeList.Item(index).CloneNode(true).SelectSingleNode(xpathPolicyNo, XmlNsMngr).InnerText.Replace("\n", "").Replace("\t", "").Replace("\r", "") == ""))
{
    // A short operation which is very cheap to perform
}


While Anneke hasn’t had a chance to profile this code, they’re almost certain that the body of the loop is cheaper to evaluate than the loop condition, and given the number of string replacements chained on, and the nodes getting cloned, that’s not implausible. When Anneke found this code, whitespace wasn’t used to make it readable; it was just one extremely long line of code. Anneke’s only contribution to this section of code was to improve the indenting, because as mentioned before: if it works, it ships, and the code, as it stands, works.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

TEDRemembering Paul Allen

The directory of the Allen Brain Atlas, a huge collection of data from brains (mouse and human) that any scientist can use. The Allen Institute for Brain Science, like several other scientific and technical institutes funded by Paul Allen, does fundamental research that is made openly available.

What’s an appropriate second act after co-founding Microsoft? When Paul Allen left the massive software company, sure, he bought a sports team or two, founded a museum, funded schools and a telescope array, built some lovely buildings. But his deepest impact — even beyond the game-changing software he brought to market — may turn out to be his funding of foundational research in science and tech, driven by public spiritedness and a passion for inquiry.

The Allen Institute — composed of the Allen Institute for Brain Science, the Allen Institute for Cell Science and The Paul G. Allen Frontiers Group — explores fundamental questions like Can we build an atlas of the brain? and How does a single cell work within a complex system? Meanwhile, the Allen Institute for Artificial Intelligence, led by Oren Etzioni, conducts AI research and engineering “all for the common good.” In exploring ground-level questions and openly sharing their findings, these efforts empower future scientists and technologists to push further, faster.

Allen was a longtime TEDster, and this evening, Chris Anderson wrote on Twitter: “It’s been such an honor to have Paul as part of the TED community the past two decades. Despite being so smart and so powerful, he was extraordinarily humble, and contributed to numerous ideas and projects with zero fanfare. We’ll miss him terribly. RIP, Paul.”

CryptogramHow DNA Databases Violate Everyone's Privacy

If you're an American of European descent, there's a 60% chance you can be uniquely identified by public information in DNA databases. This is not information that you have made public; this is information your relatives have made public.

Research paper:

"Identity inference of genomic data using long-range familial searches."

Abstract: Consumer genomics databases have reached the scale of millions of individuals. Recently, law enforcement authorities have exploited some of these databases to identify suspects via distant familial relatives. Using genomic data of 1.28 million individuals tested with consumer genomics, we investigated the power of this technique. We project that about 60% of the searches for individuals of European-descent will result in a third cousin or closer match, which can allow their identification using demographic identifiers. Moreover, the technique could implicate nearly any US-individual of European-descent in the near future. We demonstrate that the technique can also identify research participants of a public sequencing project. Based on these results, we propose a potential mitigation strategy and policy implications to human subject research.

A good news article.

Planet Linux AustraliaOpenSTEM: Children in Singapore will no longer be ranked by exam results. Here’s why | World Economic Forum

The island nation is changing its educational focus to encourage school children to develop the life skills they will need when they enter the world of work.


Planet DebianMichal Čihař: uTidylib 0.4

Two years ago, I took over uTidylib maintainership. Two years have passed without any bigger contributions, but today there is a new version with support for recent html-tidy and Python 3.

The release still can't be uploaded to PyPI (see, but it's available for download from my website or tagged in the GitHub repository.

Full list of changes is quite small:

  • Compatibility with html-tidy 5.6.0.
  • Added support for Python 3.

Anyway, as I cannot update the PyPI entry, the downloads are currently available only on my website:

Filed under: Debian English SUSE uTidylib

Planet DebianRobert McQueen: Flatpaks, sandboxes and security

Last week the Flatpak community woke to the “news” that we are making the world a less secure place and we need to rethink what we’re doing. Personally, I’m not sure this is a fair assessment of the situation. The “tl;dr” summary is: Flatpak confers many benefits besides the sandboxing, and even looking just at the sandboxing, improving app security is a huge problem space and so is a work in progress across multiple upstream projects. Much of what has been achieved so far already delivers incremental improvements in security, and we’re making solid progress on the wider app distribution and portability problem space.

Sandboxing, like security in general, isn’t a binary thing – you can’t just say because you have a sandbox, you have 100% security. Like having two locks on your front door, two front doors, or locks on your windows too, sensible security is about defense in depth. Each barrier that you implement precludes some invalid or possibly malicious behaviour. You hope that in total, all of these barriers would prevent anything bad, but you can never really guarantee this – it’s about multiplying together probabilities to get a smaller number. A computer which is switched off, in a locked faraday cage, with no connectivity, is perfectly secure – but it’s also perfectly useless because you cannot actually use it. Sandboxing is very much the same – whilst you could easily take systemd-nspawn, Docker or any other container technology of choice and 100% lock down a desktop app, you wouldn’t be able to interact with it at all.

Network services have incubated and driven most of the container usage on Linux up until now but they are fundamentally different to desktop applications. For services you can write a simple list of permissions like, “listen on this network port” and “save files over here” whereas desktop applications have a much larger number of touchpoints to the outside world which the user expects and requires for normal functionality. Just thinking off the top of my head you need to consider access to the filesystem, display server, input devices, notifications, IPC, accessibility, fonts, themes, configuration, audio playback and capture, video playback, screen sharing, GPU hardware, printing, app launching, removable media, and joysticks. Without making holes in the sandbox to allow access to these in to your app, it either wouldn’t work at all, or it wouldn’t work in the way that people have come to expect.

What Flatpak brings to this is understanding of the specific desktop app problem space – most of what I listed above is to a greater or lesser extent understood by Flatpak, or support is planned. The Flatpak sandbox is very configurable, allowing the application author to specify which of these resources they need access to. The Flatpak CLI asks the user about these during installation, and we provide the flatpak override command to allow the user to add or remove these sandbox escapes. Flatpak has introduced portals into the Linux desktop ecosystem, which we’re really pleased to be sharing with snap since earlier this year, to provide runtime access to resources outside the sandbox based on policy and user consent. For instance, document access, app launching, input methods and recursive sandboxing (“sandbox me harder”) have portals.

The starting security position on the desktop was quite terrible – anything in your session had basically complete access to everything belonging to your user, and many places to hide.

  • Access to the X socket allows arbitrary input and output to any other app on your desktop, but without it, no app on an X desktop would work. Wayland fixes this, so Flatpak has a fallback setting to allow Wayland to be used if present, and the X socket to be shared if not.
  • Unrestricted access to the PulseAudio socket allows you to reconfigure audio routing, capture microphone input, etc. To ensure user consent we need a portal to control this, where by default you can play audio back but device access needs consent and work is under way to create this portal.
  • Access to the webcam device node means an app can capture video whenever it wants – solving this required a whole new project.
  • Sandboxing access to configuration in dconf is a priority for the project right now, after the 1.0 release.

Even with these caveats, Flatpak brings a bunch of default sandboxing – IPC filtering, a new filesystem, process and UID namespace, seccomp filtering, an immutable /usr and /app – and each of these is already a barrier to certain attacks.
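Concretely, these sandbox holes are declared by the app author in the manifest's finish-args, and the user can tighten or loosen them per app. As a hedged illustration (the app ID here is invented, and the exact flags a real app needs will vary), a media player following the defaults described above might declare:

```json
{
    "app-id": "org.example.Player",
    "finish-args": [
        "--socket=wayland",
        "--socket=fallback-x11",
        "--socket=pulseaudio",
        "--share=ipc"
    ]
}
```

A user who disagrees with a particular hole can still revoke it, e.g. with flatpak override --nosocket=pulseaudio org.example.Player.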

Looking at the specific concerns raised:

  • Hopefully from the above it’s clear that sandboxing desktop apps isn’t just a switch we can flick overnight, but what we already have is far better than having nothing at all. It’s not the intention of Flatpak to somehow mislead people that sandboxed means somehow impervious to all known security issues and can access nothing whatsoever, but we do want to encourage the use of the new technology so that we can work together on driving adoption and making improvements together. The idea is that over time, as the portals are filled out to cover the majority of the interfaces described, and supported in the major widget sets / frameworks, the criteria for earning a nice “sandboxed” badge or submitting your app to Flathub will become stricter. Many of the apps that access --filesystem=home are because they use old widget sets like Gtk2+ and frameworks like Electron that don’t support portals (yet!). Contributions to improve portal integration into other frameworks and desktops are very welcome and as mentioned above will also improve integration and security in other systems that use portals, such as snap.
  • As Alex has already blogged, the 1.6 runtime was something we threw together because we needed something distro agnostic to actually be able to bootstrap the entire concept of Flatpak and runtimes. A confusing mishmash of Yocto with flatpak-builder, it’s thankfully nearing some form of retirement after a recent round of security fixes. The replacement freedesktop-sdk project has just released its first stable 18.08 release, and rather than “one or two people in their spare time because something like this needs to exist”, is backed by a team from Codethink and with support from the Flatpak, GNOME and KDE communities.
  • I’m not sure how fixing and disclosing a security problem in a relatively immature pre-1.0 program (in June 2017, Flathub had less than 50 apps) is considered an ongoing problem from a security perspective. The wording in the release notes?

Zooming out a little bit, I think it’s worth also highlighting some of the other reasons why Flatpak exists at all – these are far bigger problems with the Linux desktop ecosystem than app security alone, and Flatpak brings a huge array of benefits to the table:

  • Allowing apps to become agnostic of their underlying distribution. The reason that runtimes exist at all is so that apps can specify the ABI and dependencies that they need, and you can run it on whatever distro you want. Flatpak has had this from day one, and it’s been hugely reliable because the sandboxed /usr means the app can rely on getting whatever they need. This is the foundation on which everything else is built.
  • Separating the release/update cadence of distributions from the apps. The flip side of this, which I think is huge for more conservative platforms like Debian or enterprise distributions which don’t want to break their ABIs, hardware support or other guarantees, is that you can still get new apps into users’ hands. Wider than this, I think it allows us huge new freedoms to move in a direction of reinventing the distro – once you start to pull the gnarly complexity of apps and their dependencies into sandboxes, your constraints are hugely reduced and you can slim down or radically rethink the host system underneath. At Endless OS, Flatpak literally changed the structure of our engineering team, and for the first time allowed us to develop and deliver our OS, SDK and apps in independent teams each with their own cadence.
  • Disintermediating app developers from their users. Flathub now offers over 400 apps, and (at a rough count by Nick Richards over the summer) over half of them are directly maintained by or maintained in conjunction with the upstream developers. This is fantastic – we get the releases when they come out, the developers can choose the dependencies and configuration they need – and they get to deliver this same experience to everyone.
  • Decentralised. Anyone can set up a Flatpak repo! We started our own at Flathub because there needs to be a center of gravity and a complete story to build out a user and developer base, but the idea is that anyone can use the same tools that we do, and publish whatever/wherever they want. GNOME uses GitLab CI to publish nightly Flatpak builds, KDE is setting up the same in their infrastructure, and Fedora is working on completely different infrastructure to build and deliver their packaged applications as Flatpaks.
  • Easy to build. I’ve worked on Debian packages, RPMs, Yocto, etc and I can honestly say that flatpak-builder has done a very good job of making it really easy to put your app manifest together. Because the builds are sandboxed and each runtime brings with it a consistent SDK environment, they are very reliably reproducible. It’s worth just calling this out because when you’re trying to attract developers to your platform or contributors to your app, hurdles like complex or fragile tools and build processes to learn and debug all add resistance and drag, and discourage contributions. GNOME Builder can take any flatpak’d app and build it for you automatically, ready to hack within minutes.
  • Different ways to distribute apps. Using OSTree under the hood, Flatpak supports single-file app .bundles, pulling from OSTree repos and OCI registries, and at Endless we’ve been working on peer-to-peer distribution like USB sticks and LAN sharing.

Nobody is trying to claim that Flatpak solves all of the problems at once, or that what we have is anywhere near perfect or completely secure, but I think what we have is pretty damn cool (I just wish we’d had it 10 years ago!). Even just in the security space, the overall effort we need is huge, but this is a journey that we are happy to be embarking together with the whole Linux desktop community. Thanks for reading, trying it out, and lending us a hand.

Worse Than FailureCodeSOD: Eine Kleine ProductListItems

Art received a job offer that had some generous terms, and during the interview process, there was an ominous sense that the hiring team was absolutely desperate for someone who had done anything software related.

Upon joining the team, Art found out why. Two years ago, someone had decided they needed to create a web-based storefront, and in a fit of NIH syndrome, it needed to be built from scratch. Unfortunately, they didn't have anyone working at the company with a web development background or even a software development background, so they just threw a book on JavaScript at the network admin and hoped for the best.

Two years on, and they didn't have a working storefront. But they did have code like this:

productListItems = function(zweier, controll) {
    var cartProductListItems = [];
    if (zweier == "zweier") {
        controll.destroyItems();
        for (var y = 0; y <= 999; y = y + 2) {
            controll.addItem(new sap.ui.core.ListItem({ text: y + 2, key: y + 2 }));
        }
        controll.setSelectedKey(2)
    } else if (zweier == "einer") {
        controll.destroyItems();
        for (var _i = 0; _i <= 999; _i++) {
            controll.addItem(new sap.ui.core.ListItem({ text: _i + 1, key: _i + 1 }));
        }
        controll.setSelectedKey(1)
    } else {
        for (var _i = 0; _i <= 999; _i++) {
            cartProductListItems.push(new sap.ui.core.ListItem({ text: _i + 1, key: _i + 1 }));
        }
    }
    return cartProductListItems;
};

controll is the on-screen control we're populating with list items. zweier controls the skip pattern: if it's "zweier", then skip by two. If it's "einer", skip by one, and if it's neither, populate an array. Return the array, populated or otherwise, at the end.

Now, someone didn't like that function, so they implemented an alternative which takes more parameters:

productListItems2 = function(zweier, index, response, model) {
    var cartProductListItems = [];
    if (model != "" && model != undefined) {
        var data = response.oModel.getProperty(response.sPath)
        var mindestbestellmenge = data.BOMRABATT;
        if (mindestbestellmenge != "" && mindestbestellmenge != undefined) {
            if (mindestbestellmenge.toString().length == 1) {
                //do nothing
            } else {
                mindestbestellmenge = mindestbestellmenge.split(".")[0]
            }
        }
        if (mindestbestellmenge != "1" && mindestbestellmenge != "0" && mindestbestellmenge != undefined && data.VERPACKUNGSEINHEIT == "ZS") {
            var mindestbestellmenge = parseInt(mindestbestellmenge);
            cartProductListItems.push(new sap.ui.core.ListItem({ text: mindestbestellmenge, key: mindestbestellmenge }));
        } else {
            cartProductListItems.push(new sap.ui.core.ListItem({ text: 1, key: 1 }));
        }
        return cartProductListItems
    } else {
        if (zweier == "zweier") {
            cartProductListItems.push(new sap.ui.core.ListItem({ text: 2, key: 2 }));
        } else if (zweier == "einer") {
            cartProductListItems.push(new sap.ui.core.ListItem({ text: 1, key: 1 }));
        } else {
            cartProductListItems.push(new sap.ui.core.ListItem({ text: 1, key: 1 }));
        }
    }
    return cartProductListItems;
};

Okay, once again, we do some weird munging to populate a list, but we still have this bizarre zweier variable. Which, by the way, ein is one, zwei is two, so they're obviously using the spelled out version of a number instead of the actual number, and I could do with a little g'suffa at this point.

But you know what? This wasn't enough. They had to add another version of productListItems.

productListItems4 = function( control, min, max, step ) {
    step = +step || 1;
    min = +min ? +min + ( +min % +step ) : +step;
    max = +max || 999;
    var items = [];
    var ListItem = sap.ui.core.ListItem;
    var i;
    for ( i = min; i <= max; i += step )
        items.push( new ListItem({ text: i, key: i }) );
    if ( control ) {
        control.removeAllItems();
        items.forEach( control.addItem.bind( control ) );
        control.setSelectedKey( min );
    }
    return items;
};

This one is kinda okay. I don't love it, but it just about makes sense. But wait a second, why is it productListItems4? What happened to 3?

productListItems3 = function(oControll, sLosgroesse) {
    var anzahl = window.anzahl;
    var cartProductListItems = [];
    if (oControll) {
        if (sLosgroesse > 1 || anzahl > 1) {
            if (sLosgroesse > 1) {
                oControll.destroyItems();
                for (var y = 1; y <= 999; y++) {
                    oControll.addItem(new sap.ui.core.ListItem({ text: y * sLosgroesse, key: y * sLosgroesse }));
                }
                oControll.setSelectedKey( sLosgroesse || anzahl || 1 );
                // oControll.setSelectedKey(2)
            }
            if (anzahl)
                if (anzahl > 1) {
                    oControll.destroyItems();
                    for (var y = 1; y <= 999; y++) {
                        oControll.addItem(new sap.ui.core.ListItem({ text: y * anzahl, key: y * anzahl }));
                    }
                    oControll.setSelectedKey( sLosgroesse || anzahl || 1 );
                    // oControll.setSelectedKey(2)
                }
        } else {
            oControll.destroyItems();
            for (var y = 1; y <= 999; y++) {
                oControll.addItem(new sap.ui.core.ListItem({ text: y, key: y }));
            }
        }
    } else {
        if (sLosgroesse > 1) {
            for (var _i = 1; _i <= 999; _i++) {
                cartProductListItems.push(new sap.ui.core.ListItem({ text: _i * sLosgroesse, key: _i * sLosgroesse }));
            }
        } else {
            for (var _i = 1; _i <= 999; _i++) {
                cartProductListItems.push(new sap.ui.core.ListItem({ text: _i, key: _i }));
            }
        }
    }
    return cartProductListItems;
};

Oh. There it is. I'm sorry I asked. Nice use of the anzahl global variable. Some global variables are exactly what this needed. In this case, it holds the skip number again, so a riff on einer and zweier but without spelling things out.

A quick search for calls to any variation of productListItems shows that there are 2,000 different invocations of one of these methods, and a sampling shows that it's a little of all of them, depending on the relative age of a given code module.
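All four variants ultimately generate the same arithmetic sequence of item keys. As a hedged sketch (this helper is not part of the original codebase; the SAPUI5 ListItem construction and control updates are deliberately left out), the shared logic boils down to a single pure function:

```javascript
// Illustrative only: a pure helper capturing what every productListItems
// variant computes - an arithmetic sequence of keys from `step` up to `max`,
// in increments of `step`. Defaults mirror productListItems4: step 1, max 999.
function productKeys(step, max) {
    step = +step || 1;
    max = +max || 999;
    var keys = [];
    for (var i = step; i <= max; i += step) {
        keys.push(i);
    }
    return keys;
}
```

Mapping this sequence into sap.ui.core.ListItem objects (and optionally replacing a control's items) would then be one small wrapper, instead of four divergent copies.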

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

CryptogramUpcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Planet DebianLars Wirzenius: Rewrote summain from Python to Rust

I've been learning Rust lately. As part of that, I rewrote my summain program from Python to Rust (see summainrs). It's not quite a 1:1 rewrite: the Python version outputs RFC822-style records, the Rust one uses YAML. The Rust version is my first attempt at using multithreading, something I never added to the Python version.


  • Input is a directory tree with 8.9 gigabytes of data in 9650 files and directories.
  • Each file gets stat'd, and regular files get SHA256 computed.
  • Run on a Thinkpad X220 laptop with a rotating hard disk. Two CPU cores, 4 hyperthreads. Mostly idle, but desktop-y things running in the background. (Not a very systematic benchmark.)
  • Python version: 123 seconds wall clock time, 54 seconds user, 6 second system time.
  • Rust version: 61 seconds wall clock time (50%), 56 seconds user (104%), and 4 seconds system time (67%).

A nice speed improvement, I think. Especially since the difference between the single and multithreaded version of the Rust program is four characters (par_iter instead of iter in the process_chunk function).
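The shape of that change can be sketched like this (illustrative code only, not the actual summainrs source, and with a toy checksum standing in for SHA256):

```rust
// Illustrative only - not the real summainrs code. A per-file scan mapped
// over a list of inputs. With the rayon crate in scope, the sequential
// `files.iter()` below becomes the parallel `files.par_iter()`, which is
// the small change the post refers to.
fn checksum(data: &[u8]) -> u64 {
    data.iter()
        .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(u64::from(b)))
}

fn scan_all(files: &[Vec<u8>]) -> Vec<u64> {
    files.iter().map(|f| checksum(f)).collect()
}
```

Because each per-file computation is independent, the parallel version needs no other coordination.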

Planet DebianLouis-Philippe Véronneau: A Good Harvest of the Devil's Lettuce

Hop cones laid out for drying

You might have heard that Canada's legalising marijuana in 2 days. Even though I think it's a pretty good idea, this post is not about pot, but about another type of Devil's Lettuce: hops.

As we all know, homebrewing beer is a gateway into growing hops, a highly suspicious activity that attracts only marginals and deviants. Happy to say I've been successfully growing hops for two years now and this year's harvest has been bountiful.

Two years ago, I planted two hops plants, one chinook and one triple pearl. A year prior to this, I had tried to grow a cascade plant in a container on my balcony, but it didn't work out well. This time I got around to planting the rhizomes in the ground under my balcony and had the bines grow on ropes.

Although I've been having trouble with the triple pearl (the soil where I live is thick and heavy clay - not the best for hops), the chinook has been growing pretty well.

Closeup of my chinook hops on the bines

Harvest time is always fun and before taking the bines down, I didn't know how many cones I would get this year. I'd say compared to last year, I tripled my yield. With some luck (and better soil), I should be able to get my triple pearl to produce cones next year.

Here a nice poem about the usefulness of hops written by Thomas Tusser in 1557:

      The hop for his profit I thus do exalt,
      It strengtheneth drink and it flavoureth malt;
      And being well-brewed long kept it will last,
      And drawing abide, if ye draw not too fast.

So remember kids, don't drink and upload and if you decide to grow some of the Devil's Lettuce, make sure you use it to flavoureth malt and not your joint. The ones waging war on drugs might not like it.

CryptogramAccess Now Is Looking for a Chief Security Officer

The international digital human rights organization Access Now (I am on the board) is looking to hire a Chief Security Officer.

I believe that, somewhere, there is a highly qualified security person who has had enough of corporate life and wants instead to make a difference in the world. If that's you, please consider applying.


Planet DebianDirk Eddelbuettel: RcppCCTZ 0.2.5

A new bugfix release 0.2.5 of RcppCCTZ got onto CRAN this morning – just a good week after the previous release.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—but decided in their infinite wisdom to copy the sources yet again into their packages. Sigh.

This version corrects two bugs. We were not properly accounting for those poor systems that do not natively have nanosecond resolution. And I missed a feature in the Rcpp DatetimeVector class by not setting the timezone on newly created variables; this too has been fixed.

Changes in version 0.2.5 (2018-10-14)

  • Parsing to Datetime was corrected on systems that do not have nanosecond support in C++11 chrono (#28).

  • DatetimeVector objects are now created with their timezone attribute when available.

  • The toTz function is now vectorized (#29).

  • More unit tests were added, and some conditioning on Solaris (mostly due to missing timezone info) was removed.

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Valerie AuroraDouble Union is dead, long live Double Union!

As of today, I am certain Double Union is no longer a space that prioritizes people who identify as a woman in a way that is significant to them. This is because I have been permanently banned from Double Union for refusing to prioritize the inclusion of people who do not identify that way.

Some background: In 2013, I co-founded the feminist hackerspace Double Union. At that time, we envisioned a tiny space where the constant hurricane roar of sexism was damped down to a gentle breeze while we worked on our funky art and science projects. We decided to restrict membership to people who identified as a woman in a way that was significant to them for many reasons, one of which was to make sure we always prioritized that group of people.

This year, Double Union changed its membership criteria to “identifies as a woman or non-binary in a way that is significant to you.” To explain what happened next, I need to talk a little about gender and privilege and society.

As any trans person can tell you, the mere act of identifying in your own mind that you are a particular gender does not automatically result in society treating you as that gender–that’s one reason transitioning is so significant. Telling other people that you’re a man won’t make society at large treat you like one–that is, grant you male privilege–you’ll also have to look and act in certain stereotyped ways to get that male privilege from others. At the same time, because we define masculinity as fragile and easily destroyed, when someone publicly identifies as a woman–even partially–they pretty quickly lose most of any male privilege they previously had.

The problem for me personally with the change to the Double Union membership criteria is that some non-binary people are granted significant male privilege by society despite being non-binary, and that has repercussions for a group that, until recently, only included people with relatively little male privilege. It’s possible to be non-binary and receive more male privilege from society than someone who is a cis man–masculinity is complex and fragile. All non-binary people are the targets of transphobia and cis-sexism (the idea that your gender is determined by certain bodily features), but only some non-binary people are the primary targets of sexism (the systemic oppression of women).

After this change, an open question in my mind was: what happens when the interests of members who do identify as women in a way that is significant to them come into conflict with the interests of members who do not? Who will be prioritized?

Last week, I sent an email to the Double Union members list talking about how the Kavanaugh hearings reminded me of one reason why I want a space that, by default, does not include people with significant male privilege. I want a break from the constant background threat of violence–violence that will be covered up and swept under the rug because the person committing it has significant male privilege, the way Kavanaugh’s assault of Dr. Ford was swept under the rug. I talked about how Double Union was no longer that space, and asked if anyone else had experience dealing with this problem, specifically in a group only for members of a marginalized group that includes people who can pass as the privileged group.

The code of conduct committee let me know that I could not even discuss this topic on the members mailing list because it violated the code of conduct by making people who did not identify as women in a way that was significant to them feel excluded and harmed. I told them I would not agree to this restriction. They banned me.

Whatever Double Union is now, it’s no longer an organization that prioritizes people who identify as a woman in a way that is significant to them. That’s fine; change happens and groups evolve. Double Union is now a place for women and all non-binary people and when the interests of those groups clash, they’ll probably continue to prioritize the inclusion of people who don’t identify as women in a way that is significant to them. I hope they will update the code of conduct to make this clearer; I certainly didn’t understand that my question broke the code of conduct, despite writing a good chunk of it.

Double Union is dead, long live Double Union! It was a fun experiment, and now it is a new, different experiment.

Post-script: Here are a few common criticisms, along with my response:

“You think non-binary people who present as masculine aren’t non-binary.” Nope, I believe non-binary people are non-binary regardless of their presentation. I also observe that our society grants privileges to people based on a wide variety of signals, and someone’s gender identity is only one of those signals. I wish it weren’t so.

“Non-binary people are more oppressed on the basis of gender than women, all other things being equal. So when the interests of women and non-binary people conflict, we should prioritize non-binary people to fight oppression.” I definitely don’t agree with this. When it comes to gender, the less you are perceived as a woman, the better off you are, roughly. (“Woman” is the “marked” state of gender and “man” is the “unmarked” state–think of a cartoon character, unless it is “marked” female, it is assumed to be male, not female or non-binary.) Some but not all non-binary people experience more oppression than women, all other things being equal. Non-binary people who are granted a lot of male privilege are less likely to experience more oppression on the basis of gender than women.

“Even talking about non-binary people with significant male privilege reinforces the oppressive idea that those non-binary people are really male.” I don’t understand this. Some trans people can pass as cis; talking about that doesn’t reinforce transphobia. Some people of color can pass as white; talking about that doesn’t reinforce racism. This sounds similar to the idea that talking about oppression reinforces oppression, which I also disagree with.

“Non-binary people with significant male privilege don’t have the same experience as men because the privilege doesn’t match their gender identity, which can be oppressive to non-binary people.” I agree, it’s not the same experience and it can be oppressive. That doesn’t stop society from prioritizing their needs over those of people who identify as women in a way that is significant to them.

“All women’s groups should include all non-binary people.” I disagree. I think there are a lot of valid groupings of people along the lines of gender or features we currently associate with gender. I am in favor of groups only for trans women, people with uteruses, non-binary trans masculine people, people assigned female at birth, people questioning their gender, and people who identify as women in a way that is significant to them, to name just a few appropriate groupings.

“Some Double Union members are afraid of people with white privilege or cis privilege, but they don’t get to exclude all white or cis people. Therefore we should not exclude people because they have male privilege.” I don’t get this one; as far as I can tell the argument is that you can never eliminate differences in privilege between members of a marginalized group, so… you should never create a group that excludes people based on any element of identity? If one person is afraid… the group can’t have any boundaries at all? By this argument, Double Union should start including people of all genders. Personally, I’d rather put more effort into stopping racism and cis-sexism and other forms of oppression at Double Union, which is why I budgeted a significant fraction of our income for that when I was on the board of directors. I support groups for people at the intersection of oppressed groups, such as Black Girls Code. Double Union already has events only for members who are people of color and other marginalized groups; I want more of those events. My best guess for why this argument keeps coming up is that many people are socialized to think it is wrong for people who identify as women in a way that is significant to them to prioritize themselves as a group.

Planet DebianJeremy Bicha: Google Cloud Print in Ubuntu

There is an interesting hidden feature available in Ubuntu 18.04 LTS and newer. To enable this feature, first install cpdb-backend-gcp.

sudo apt install cpdb-backend-gcp

Make sure you are signed in to Google with GNOME Online Accounts. Open the Settings app to the Online Accounts page. If your Google account is near the top above the Add an account section, then you’re all set.

Currently, only LibreOffice is supported. Hopefully, for 19.04, other GTK+ apps will be able to use the feature.

This feature was developed by Nilanjana Lodh and Abhijeet Dubey when they were Google Summer of Code 2017 participants. Their mentors were Till Kamppeter, Aveek Basu, and Felipe Borges.

Till has been trying to get this feature installed by default in Ubuntu since 18.04 LTS, but it looks like it won’t make it in until 19.04.

I haven’t seen this feature packaged in any other Linux distros yet. That might be because people don’t know about this feature, so that’s why I’m posting about it today! If you are a distro packager, the 3 packages you need are cpdb-libs, cpdb-backend-gcp, and cpdb-backend-cups. The final package enables easy printing to any IPP printer. (I didn’t mention it earlier because I believe Ubuntu 18.04 LTS already supports that feature through a different package.)

Save to Google Drive

In my original blog post, I confused the cpdb feature with a feature that already exists in GTK3 built with GNOME Online Accounts support. This should already work on most distros.

When you print a document, there will be an extra Save to Google Drive option. Saving to Google Drive saves a PDF of your document to your Google Drive account.

This post was edited on October 16 to mention that cpdb only supports LibreOffice now and that Save to Google Drive is a GTK3 feature instead.

October 17: Please see Felipe’s comments. It turns out that even Google Cloud Print works fine in distros with recent GTK3. The point of the cpdb feature is to make this work in apps that don’t use GTK3. So I guess the big benefit now is that you can use Google Cloud Print or Save to Google Drive from LibreOffice.


Planet DebianJulian Andres Klode: The demise of G+ and return to blogging (w/ mastodon integration)

I’m back to blogging, after shutting down my hosted blog in spring. This time, fully privacy aware, self hosted, and integrated with mastodon.

Let’s talk details: In spring, I shut down my hosted blog, due to concerns about GDPR implications with comment hosting and ads and stuff. I’d like to apologize for using that; back when I did this (in 2007), it was the easiest way to get into blogging. Please forgive me for subjecting you to that!

Recently, Google announced the end of Google+. As some of you might know, I posted a lot of medium-long posts there, rather than doing blog posts; especially after I disabled the wordpress site.

With the end of Google+, I want to try something new: I’ll host longer pieces on this blog, and post shorter messages on Mastodon. If you follow the Mastodon account, you will see toots for each new blog post as well, linking to the blog post.

Mastodon integration and privacy

Now comes the interesting part: if you reply to the toot, your reply will be shown on the blog itself. This works with a tiny bit of JavaScript that talks to a simple server-side script which finds toots from me mentioning the blog post, and then the replies to those.

This protects your privacy, because does not see which blog post you are looking at, because it is contacted by the server, not by you. Rendering avatars requires loading images from’s file server, however - to improve your privacy, all avatars are loaded with referrerpolicy='no-referrer', so assuming your browser is half-way sane, it should not be telling which post you visited either. In fact, the entire domain also sets Referrer-Policy: no-referrer as an http header, so any link you follow will not have a referrer set.

The integration was originally written by – I have done some moderate improvements to adapt it to my theme, make it more reusable, and replace and extend the caching done in a JSON file with a Redis database.

Source code

This blog is free software; generated by the Hugo snap. All source code for it is available:

(Yes I am aware that hosting the repositories on GitHub is a bit ironic given the whole focus on privacy and self-hosting).

The theme makes use of Hugo pipes to minify and fingerprint JavaScript, and vendorizes all dependencies instead of embedding CDN links, to, again, protect your privacy.

Future work

I think I want to make the theme dark, to be more friendly to the eyes. I also might want to make the mastodon integration a bit more friendly to use. And I want to get rid of jQuery, it’s only used for a handful of calls in the Mastodon integration JavaScript.

If you have any other idea for improvements, feel free to join the conversation in the mastodon toot, send me an email, or open an issue at the github projects.

Closing thoughts

I think the end of Google+ will be an interesting time, requiring a lot of people in the open source world to replace one of their main communication channels with a different approach.

Mastodon and Diaspora are both in the race, and I fear the community will split or everyone will have two accounts in the end. I personally think that Mastodon + syndicated blogs provide a good balance: You can quickly write short posts (up to 500 characters), and you can host long articles on your own and link to them.

I hope that one day diaspora* and mastodon federate together. If we end up with one federated network that would be the best outcome.

Planet DebianIngo Juergensmann: Xen & Databases

I'm running PostgreSQL and MySQL on my server that both serve different databases to Wordpress, Drupal, Piwigo, Friendica, Mastodon, whatever...

In the past the databases were colocated in my mailserver VM whereas the webserver was running on a different VM. At some point I moved the databases from domU to dom0, maybe because I thought that the databases would be faster running on direct disk I/O in the dom0 environment, but I can't remember the exact reasons anymore.

However, in the meantime the size of the databases grew, and so did the number of VMs. MySQL and PostgreSQL are both configured/optimized to run with 16 GB of memory in dom0, but in the last months I experienced high disk I/O, especially for MySQL, and slow I/O performance in all the domU VMs because of that.

Currently iotop shows something like this:

Total DISK READ :     131.92 K/s | Total DISK WRITE :    1546.42 K/s
Actual DISK READ:     131.92 K/s | Actual DISK WRITE:       2.40 M/s
 6424 be/4 mysql       0.00 B/s    0.00 B/s  0.00 % 60.90 % mysqld
18536 be/4 mysql      43.97 K/s   80.62 K/s  0.00 % 35.59 % mysqld
 6499 be/4 mysql       0.00 B/s   29.32 K/s  0.00 % 13.18 % mysqld
20117 be/4 mysql       0.00 B/s    3.66 K/s  0.00 % 12.30 % mysqld
 6482 be/4 mysql       0.00 B/s    0.00 B/s  0.00 % 10.04 % mysqld
 6495 be/4 mysql       0.00 B/s    3.66 K/s  0.00 % 10.02 % mysqld
20144 be/4 postgres    0.00 B/s   73.29 K/s  0.00 %  4.87 % postgres: hubzilla hubzi~
 2920 be/4 postgres    0.00 B/s 1209.28 K/s  0.00 %  3.52 % postgres: wal writer process
11759 be/4 mysql       0.00 B/s   25.65 K/s  0.00 %  0.83 % mysqld
18736 be/4 mysql       0.00 B/s   14.66 K/s  0.00 %  0.17 % mysqld
21768 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.02 % [kworker/1:0]
 2922 be/4 postgres    0.00 B/s   69.63 K/s  0.00 %  0.00 % postgres: stats collector process

The MySQL data size is below the configured maximum memory size for MySQL, so everything should more or less fit into memory. Yet there is still a large amount of disk I/O by MySQL, much more than by PostgreSQL. Of course, much of that I/O comes from writes to the database.

However, I'm thinking of changing my setup back to a domU-based database setup, maybe one dedicated VM for both DBMSs, or even two dedicated VMs, one for each of them. I'm not quite sure how Xen reacts to the current workload.

Back in the days when I did 3D computer graphics I did a lot of testing with different settings in regard to priorities and such. Basically one would think that giving the renderer more CPU time would speed up the rendering, but this turned out to be wrong: the higher the render task's priority was, the slower the rendering got, because disk I/O (and other tasks that were necessary for the render task to work) got slowed down. When running the render task at the lowest priority, all the other necessary tasks could run at higher speed and return the CPU more quickly, which resulted in shorter render times.

So maybe I'm experiencing something similar with the databases on dom0 here as well: dom0 is busy doing database work, and this slows down all the other tasks (== domU VMs). If I moved the databases back to a domU, would that enable dom0 to better do its basic job of taking care of the domUs?

Of course, this is also a quite philosophical question, but what is the recommended setup? Is it better to separate the databases in two different VMs or just one? Or is running the databases on dom0 the best option?

I'm interested in your feedback, so please comment! :-)

UPDATE: you can also contact me on Mastodon or on Friendica at


Planet DebianJeremy Bicha: Shutter removed from Debian & Ubuntu

This week, the popular screenshot app Shutter was removed from Debian Unstable & Ubuntu 18.10. (It had already been removed from Debian “Buster” 6 months ago and some of its “optional” dependencies had already been removed from Ubuntu 18.04 LTS).

Shutter will need to be ported to gtk3 before it can return to Debian. (Ideally, it would support Wayland desktops too but that’s not a blocker for inclusion in Debian.)

See the Debian bug for more discussion.

I am told that flameshot is a nice well-maintained screenshot app.

I believe Snap or Flatpak are great ways to make apps that use obsolete libraries available on modern distros that can no longer keep those libraries around. There isn’t a Snap or Flatpak version of Shutter yet, so hopefully someone interested in that will help create one.

CryptogramSecurity Vulnerabilities in US Weapons Systems

The US Government Accountability Office just published a new report: "Weapons Systems Cyber Security: DOD Just Beginning to Grapple with Scale of Vulnerabilities" (summary here). The upshot won't be a surprise to any of my regular readers: they're vulnerable.

From the summary:

Automation and connectivity are fundamental enablers of DOD's modern military capabilities. However, they make weapon systems more vulnerable to cyber attacks. Although GAO and others have warned of cyber risks for decades, until recently, DOD did not prioritize weapon systems cybersecurity. Finally, DOD is still determining how best to address weapon systems cybersecurity.

In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic. Using relatively simple tools and techniques, testers were able to take control of systems and largely operate undetected, due in part to basic issues such as poor password management and unencrypted communications. In addition, vulnerabilities that DOD is aware of likely represent a fraction of total vulnerabilities due to testing limitations. For example, not all programs have been tested and tests do not reflect the full range of threats.

It is definitely easier, and cheaper, to ignore the problem or pretend it isn't a big deal. But that's probably a mistake in the long run.

CryptogramSecurity in a World of Physically Capable Computers

It's no secret that computers are insecure. Stories like the recent Facebook hack, the Equifax hack and the hacking of government agencies are remarkable for how unremarkable they really are. They might make headlines for a few days, but they're just the newsworthy tip of a very large iceberg.

The risks are about to get worse, because computers are being embedded into physical devices and will affect lives, not just our data. Security is not a problem the market will solve. The government needs to step in and regulate this increasingly dangerous space.

The primary reason computers are insecure is that most buyers aren't willing to pay -- in money, features, or time to market -- for security to be built into the products and services they want. As a result, we are stuck with hackable internet protocols, computers that are riddled with vulnerabilities and networks that are easily penetrated.

We have accepted this tenuous situation because, for a very long time, computer security has mostly been about data. Banking data stored by financial institutions might be important, but nobody dies when it's stolen. Facebook account data might be important, but again, nobody dies when it's stolen. Regardless of how bad these hacks are, it has historically been cheaper to accept the results than to fix the problems. But the nature of how we use computers is changing, and that comes with greater security risks.

Many of today's new computers are not just screens that we stare at, but objects in our world with which we interact. A refrigerator is now a computer that keeps things cold; a car is now a computer with four wheels and an engine. These computers sense us and our environment, and they affect us and our environment. They talk to each other over networks, they are autonomous, and they have physical agency. They drive our cars, pilot our planes, and run our power plants. They control traffic, administer drugs into our bodies, and dispatch emergency services. These connected computers and the network that connects them -- collectively known as "the internet of things" -- affect the world in a direct physical manner.

We've already seen hacks against robot vacuum cleaners, ransomware that shut down hospitals and denied care to patients, and malware that shut down cars and power plants. These attacks will become more common, and more catastrophic. Computers fail differently than most other machines: It's not just that they can be attacked remotely -- they can be attacked all at once. It's impossible to take an old refrigerator and infect it with a virus or recruit it into a denial-of-service botnet, and a car without an internet connection simply can't be hacked remotely. But that computer with four wheels and an engine? It -- along with all other cars of the same make and model -- can be made to run off the road, all at the same time.

As the threats increase, our longstanding assumptions about security no longer work. The practice of patching a security vulnerability is a good example of this. Traditionally, we respond to the never-ending stream of computer vulnerabilities by regularly patching our systems, applying updates that fix the insecurities. This fails in low-cost devices, whose manufacturers don't have security teams to write the patches: if you want to update your DVR or webcam for security reasons, you have to throw your old one away and buy a new one. Patching also fails in more expensive devices, and can be quite dangerous. Do we want to allow vulnerable automobiles on the streets and highways during the weeks before a new security patch is written, tested, and distributed?

Another failing assumption is the security of our supply chains. We've started to see political battles about government-placed vulnerabilities in computers and software from Russia and China. But supply chain security is about more than where the suspect company is located: we need to be concerned about where the chips are made, where the software is written, who the programmers are, and everything else.

Last week, Bloomberg reported that China inserted eavesdropping chips into hardware made for American companies like Amazon and Apple. The tech companies all denied the accuracy of this report, which precisely illustrates the problem. Everyone involved in the production of a computer must be trusted, because any one of them can subvert the security. As everything becomes a computer and those computers become embedded in national-security applications, supply-chain corruption will be impossible to ignore.

These are problems that the market will not fix. Buyers can't differentiate between secure and insecure products, so sellers prefer to spend their money on features that buyers can see. The complexity of the internet and of our supply chains make it difficult to trace a particular vulnerability to a corresponding harm. The courts have traditionally not held software manufacturers liable for vulnerabilities. And, for most companies, it has generally been good business to skimp on security, rather than sell a product that costs more, does less, and is on the market a year later.

The solution is complicated, and it's one I devoted my latest book to answering. There are technological challenges, but they're not insurmountable -- the policy issues are far more difficult. We must engage with the future of internet security as a policy issue. Doing so requires a multifaceted approach, one that requires government involvement at every step.

First, we need standards to ensure that unsafe products don't harm others. We need to accept that the internet is global and regulations are local, and design accordingly. These standards will include some prescriptive rules for minimal acceptable security. California just enacted an Internet of Things security law that prohibits default passwords. This is just one of many security holes that need to be closed, but it's a good start.

We also need our standards to be flexible and easy to adapt to the needs of various companies, organizations, and industries. The National Institute of Standards and Technology's Cybersecurity Framework is an excellent example of this, because its recommendations can be tailored to suit the individual needs and risks of organizations. The Cybersecurity Framework -- which contains guidance on how to identify, prevent, recover, and respond to security risks -- is voluntary at this point, which means nobody follows it. Making it mandatory for critical industries would be a great first step. An appropriate next step would be to implement more specific standards for industries like automobiles, medical devices, consumer goods, and critical infrastructure.

Second, we need regulatory agencies to penalize companies with bad security, and a robust liability regime. The Federal Trade Commission is starting to do this, but it can do much more. It needs to make the cost of insecurity greater than the cost of security, which means that fines have to be substantial. The European Union is leading the way in this regard: they've passed a comprehensive privacy law, and are now turning to security and safety. The United States can and should do the same.

We need to ensure that companies are held accountable for their products and services, and that those affected by insecurity can recover damages. Traditionally, United States courts have declined to enforce liabilities for software vulnerabilities, and those affected by data breaches have been unable to prove specific harm. Here, we need statutory damages -- harms spelled out in the law that don't require any further proof.

Finally, we need to make it an overarching policy that security takes precedence over everything else. The internet is used globally, by everyone, and any improvements we make to security will necessarily help those we might prefer remain insecure: criminals, terrorists, rival governments. Here, we have no choice. The security we gain from making our computers less vulnerable far outweighs any security we might gain from leaving insecurities that we can exploit.

Regulation is inevitable. Our choice is no longer between government regulation and no government regulation, but between smart government regulation and ill-advised government regulation. Government regulation is not something to fear. Regulation doesn't stifle innovation, and I suspect that well-written regulation will spur innovation by creating a market for security technologies.

No industry has significantly improved the security or safety of its products without the government stepping in to help. Cars, airplanes, pharmaceuticals, consumer goods, food, medical devices, workplaces, restaurants, and, most recently, financial products -- all needed government regulation in order to become safe and secure.

Getting internet safety and security right will depend on people: people who are willing to take the time and expense to do the right things; people who are determined to put the best possible law and policy into place. The internet is constantly growing and evolving; we still have time for our security to adapt, but we need to act quickly, before the next disaster strikes. It's time for the government to jump in and help. Not tomorrow, not next week, not next year, not when the next big technology company or government agency is hacked, but now.

This essay previously appeared in the New York Times. It's basically a summary of what I talk about in my new book.

Planet Debian - Dirk Eddelbuettel: RcppNLoptExample 0.0.1: Use NLopt from C/C++

A new package of ours, RcppNLoptExample, arrived on CRAN yesterday after a somewhat longer-than-usual wait for new packages as CRAN seems really busy these days. As always, a big and very grateful Thank You! for all they do to keep this community humming.

So what does it do?

NLopt is a very comprehensive library for nonlinear optimization. The nloptr package by Jelmer Ypma has long been providing an excellent R interface.

Starting with its 1.2.0 release, the nloptr package exports several C symbols in a way that makes them accessible to other R packages without linking, which eases installation on all operating systems.

The new package RcppNLoptExample illustrates this facility with an example drawn from the NLopt tutorial. See the (currently single) file src/nlopt.cpp.

How / Why?

R uses C interfaces. These C interfaces can be exported between packages. So when the usual library(nloptr) call (or an import via NAMESPACE) happens, we now also get a number of C functions registered.

And those are enough to run the optimization from C++, as we simply rely on the C interface provided. Look carefully at the example code: the objective function and the constraint functions are C functions, and the body of our example invokes C functions from NLopt. This just works, for either C code or C++ (where we rely on Rcpp to marshal data back and forth with ease).

On the other hand, if we tried to use the NLopt C++ interface, which brings with it some interface code, we would require linking to that code (which R cannot easily export across packages using its C interface). So C it is.


The package is pretty basic but fully workable. Some more examples should get added, and a helper class or two for state would be nice. The (very short) NEWS entry follows:

Changes in version 0.0.1 (2018-10-01)

  • Initial basic package version with one example from NLopt tutorial

Code, issue tickets etc are at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Don Marti - Zero-click bookstore?

Random idea for a way to make the local bookstore easier to use than the big one-click Internet bookstore.

I walk by the local bookstore all the time but I don't always have the list of books I want to read with me.

So what about this?

  1. I keep a list of books I'm reading, or want to read, on Github.

  2. When I find out about a book I want to read, I add it to the list, make a GitHub issue, and assign the issue to someone at the local bookstore.

  3. The local bookstore gets the book and changes the status of the issue.

  4. I go pick up the book when I'm walking by the bookstore anyway.
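Step 2 is the only part that needs any tooling; a minimal sketch using the GitHub REST API could look like the following (the repository name, assignee, and token are all hypothetical placeholders):

```python
# Sketch of step 2: file a GitHub issue for a book and assign it to the
# bookstore's account. Repo, assignee, and token below are made up.
import json
import urllib.request

def make_issue_request(repo, title, assignee, token):
    """Build (but do not send) the POST request for a new issue."""
    url = f"https://api.github.com/repos/{repo}/issues"
    payload = {"title": title, "assignees": [assignee]}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )

req = make_issue_request("dmarti/books", "Get: Example Book Title",
                         "local-bookstore", "TOKEN")
print(req.full_url)
# Actually sending it is one line: urllib.request.urlopen(req)
```

The nice part is that the reading list itself stays a plain file in the repo; the issues are just the ordering channel.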

Don Marti - measuring happiness

Another one of those employee happiness reports is out. This kind of thing always makes me wonder: what are these numbers really measuring?

It seems like happiness ratings by employees would depend on:

  • expected cost of retaliation for low scores

  • expected benefit of management response to low scores

The expected cost of retaliation is the probability that an employee's ratings will be exposed to management, multiplied by the negative impact that the employee will suffer in the event of disclosure. An employee who believes that the survey's security has problems, that management will retaliate severely in the event of disclosure, or both, is likely to assign high scores to management.

Some employers make changes in compensation or working conditions when they score poorly on happiness (or employee engagement) surveys. If an employee believes that management is likely to make changes, then the employee is likely to assign low scores in areas where improvement would have the greatest impact. An employee might choose to answer honestly except in a specific area where they believe improvement is possible.
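The trade-off described above is just an expected-value calculation; a toy sketch (all probabilities and costs below are made-up numbers, not measurements) makes the asymmetry concrete:

```python
# Toy expected-value model of whether an employee gives honest low scores.
# All numbers are illustrative assumptions.

def expected_cost(p_exposed, retaliation):
    """Expected cost of a low score: P(ratings exposed) x harm if exposed."""
    return p_exposed * retaliation

def expected_benefit(p_response, improvement):
    """Expected benefit: P(management responds) x value of the change."""
    return p_response * improvement

# "Evil" company: likely de-anonymization, harsh retaliation, no response.
evil_payoff = expected_benefit(0.0, 10) - expected_cost(0.8, 100)
# "Good" company: anonymous survey, responsive management.
good_payoff = expected_benefit(0.7, 10) - expected_cost(0.0, 100)

print(evil_payoff, good_payoff)
```

With these numbers, honest low scores only have a positive payoff at the good company, which is exactly why the evil company's survey comes back glowing.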

An evil company where management makes an effort to de-anonymize the happiness survey results, retaliates against employees who give low scores, and will not make changes to improve scores, will appear to have high employee happiness.

A good company where management does not retaliate, and will make changes in response to low scores, will appear to have low employee happiness.

Of course, this all changes the more that people figure out that getting low happiness scores means that you have responsive management.

Hacks.Mozilla.Org: Calls between JavaScript and WebAssembly are finally fast 🎉

Tech Workers Now Want To Know: What Are We Building This For?, by Cade Metz, Kate Conger, New York Times

Alt-right culture jamming

One small change to New York’s intersections is saving pedestrians’ lives

The Effectiveness of Publicly Shaming Bad Security

Commons Clause is a Legal Minefield and a Very Bad Idea

Planet Debian - Elana Hashman: PyGotham 2018 Talk Resources

At PyGotham in 2018, I gave a talk called "The Black Magic of Python Wheels". I based this talk on my two years of work on auditwheel and the manylinux platform, hoping to share some dark details of how the proverbial sausage is made.

It was a fun talk, and I enjoyed the opportunity to wear my Python Packaging Authority hat:

The Black Magic of Python Wheels

Follow-up readings

All the PEPs referenced in the talk

In increasing numeric order.

  • PEP 376 "Database of Installed Python Distributions"
  • PEP 426 "Metadata for Python Software Packages 2.0"
  • PEP 427 "The Wheel Binary Package Format 1.0"
  • PEP 513 "A Platform Tag for Portable Linux Built Distributions" (aka manylinux1)
  • PEP 571 "The manylinux2010 Platform Tag"

Image licensing info

Valerie Aurora - Hiring a facilitator for the Ally Skills Workshop

Frame Shift Consulting is getting so much business that I need another facilitator to help me teach Ally Skills Workshops! Short version: We are searching for a second part-time facilitator to help teach the popular Ally Skills Workshop at tech companies, primarily in the San Francisco Bay Area as well as around the world and online.

I’m especially interested in interviewing people who have some significant personal experience as a member of a marginalized group (person of color, queer, disabled, etc.). If that’s you and you’re even a little interested in the job, please consider spending 5 minutes slapping together an email with a link to your out-of-date typo-ridden résumé. What’s the worst that could happen, you end up with a part-time gig being paid lots of money to teach people ally skills?

Here are the basic requirements:

  • Software experience, broadly defined (infosec, data science, testing, design, UX/UI, etc.)
  • Teaching experience, broadly defined (speaking at conferences, volunteer teaching, etc.)
  • Strong grasp of research and terminology around multiple axes of oppression
  • Residence in the San Francisco Bay Area
  • Work rights in the U.S.

If you’d like to learn more, including how to apply, check out the detailed job description.

Planet Debian - Dirk Eddelbuettel: GitHub Streak: Round Five

Four years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking:

[github activity october 2013 to october 2014]

And three years ago a first follow-up appeared in this post:

[github activity october 2014 to october 2015]

And two years ago we had a follow-up:

[github activity october 2015 to october 2016]

And last year we had another one:

[github activity october 2016 to october 2017]

As today is October 12, here is the newest one from 2017 to 2018:

[github activity october 2017 to october 2018]

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on Security - Supply Chain Security 101: An Expert’s View

Earlier this month I spoke at a cybersecurity conference in Albany, N.Y. alongside Tony Sager, senior vice president and chief evangelist at the Center for Internet Security and a former bug hunter at the U.S. National Security Agency. We talked at length about many issues, including supply chain security, and I asked Sager whether he’d heard anything about rumors that Supermicro — a high tech firm in San Jose, Calif. — had allegedly inserted hardware backdoors in technology sold to a number of American companies.

Tony Sager, senior vice president and chief evangelist at the Center for Internet Security.

The event Sager and I spoke at was prior to the publication of Bloomberg Businessweek‘s controversial story alleging that Supermicro had duped almost 30 companies into buying backdoored hardware. Sager said he hadn’t heard anything about Supermicro specifically, but we chatted at length about the challenges of policing the technology supply chain.

Below are some excerpts from our conversation. I learned quite a bit, and I hope you will, too.

Brian Krebs (BK): Do you think Uncle Sam spends enough time focusing on the supply chain security problem? It seems like a pretty big threat, but also one that is really hard to counter.

Tony Sager (TS): The federal government has been worrying about this kind of problem for decades. In the 70s and 80s, the government was more dominant in the technology industry and didn’t have this massive internationalization of the technology supply chain.

But even then there were people who saw where this was all going, and there were some pretty big government programs to look into it.

BK: Right, the Trusted Foundry program I guess is a good example.

TS: Exactly. That was an attempt to help support a U.S.-based technology industry so that we had an indigenous place to work with, and where we have only cleared people and total control over the processes and parts.

BK: Why do you think more companies aren’t insisting on producing stuff through code and hardware foundries here in the U.S.?

TS: Like a lot of things in security, the economics always win. And eventually the cost differential for offshoring parts and labor overwhelmed attempts at managing that challenge.

BK: But certainly there are some areas of computer hardware and network design where you absolutely must have far greater integrity assurance?

TS: Right, and this is how they approach things at Sandia National Laboratories [one of three national nuclear security research and development laboratories]. One of the things they’ve looked at is this whole business of whether someone might sneak something into the design of a nuclear weapon.

The basic design principle has been to assume that one person in the process may have been subverted somehow, and the whole design philosophy is built around making sure that no one person gets to sign off on what goes into a particular process, and that there is never unobserved control over any one aspect of the system. So, there are a lot of technical and procedural controls there.

But the bottom line is that doing this is really much harder [for non-nuclear electronic components] because of all the offshoring now of electronic parts, as well as the software that runs on top of that hardware.

BK: So is the government basically only interested in supply chain security so long as it affects stuff they want to buy and use?

TS: The government still has regular meetings on supply chain risk management, but there are no easy answers to this problem. The technical ability to detect something wrong has been outpaced by the ability to do something about it.

BK: Wait…what?

TS: Suppose a nation state dominates a piece of technology and in theory could plant something inside of it. The attacker in this case has a risk model, too. Yes, he could put something in the circuitry or design, but his risk of exposure also goes up.

Could I as an attacker control components that go into certain designs or products? Sure, but it’s often not very clear what the target is for that product, or how you will guarantee it gets used by your target. And there are still a limited set of bad guys who can pull that stuff off. In the past, it’s been much more lucrative for the attacker to attack the supply chain on the distribution side, to go after targeted machines in targeted markets to lessen the exposure of this activity.

BK: So targeting your attack becomes problematic if you’re not really limiting the scope of targets that get hit with compromised hardware.

TS: Yes, you can put something into everything, but all of a sudden you have this massive big data collection problem on the back end where you as the attacker have created a different kind of analysis problem. Of course, some nations have more capability than others to sift through huge amounts of data they’re collecting.

BK: Can you talk about some of the things the government has typically done to figure out whether a given technology supplier might be trying to slip in a few compromised devices among an order of many?

TS: There’s this concept of the “blind buy,” where if you think the threat vector is someone gets into my supply chain and subverts the security of individual machines or groups of machines, the government figures out a way to purchase specific systems so that no one can target them. In other words, the seller doesn’t know it’s the government who’s buying it. This is a pretty standard technique to get past this, but it’s an ongoing cat and mouse game to be sure.

BK: I know you said before this interview that you weren’t prepared to comment on the specific claims in the recent Bloomberg article, but it does seem that supply chain attacks targeting cloud providers could be very attractive for an attacker. Can you talk about how the big cloud providers could mitigate the threat of incorporating factory-compromised hardware into their operations?

TS: It’s certainly a natural place to attack, but it’s also a complicated place to attack — particularly the very nature of the cloud, which is many tenants on one machine. If you’re attacking a target with on-premise technology, that’s pretty simple. But the purpose of the cloud is to abstract machines and make more efficient use of the same resources, so that there could be many users on a given machine. So how do you target that in a supply chain attack?

BK: Is there anything about the way these cloud-based companies operate….maybe just sheer scale…that makes them perhaps uniquely more resilient to supply chain attacks vis-a-vis companies in other industries?

TS: That’s a great question. The counter positive trend is that in order to get the kind of speed and scale that the Googles and Amazons and Microsofts of the world want and need, these companies are far less inclined now to just take off-the-shelf hardware and they’re actually now more inclined to build their own.

BK: Can you give some examples?

TS: There’s a fair amount of discussion among these cloud providers about commonalities — what parts of design could they cooperate on so there’s a marketplace for all of them to draw upon. And so we’re starting to see a real shift from off-the-shelf components to things that the service provider is either designing or pretty closely involved in the design, and so they can also build in security controls for that hardware. Now, if you’re counting on people to exactly implement designs, you have a different problem. But these are really complex technologies, so it’s non-trivial to insert backdoors. It gets harder and harder to hide those kinds of things.

BK: That’s interesting, given how much each of us have tied up in various cloud platforms. Are there other examples of how the cloud providers can make it harder for attackers who might seek to subvert their services through supply chain shenanigans?

TS: One factor is they’re rolling this technology out fairly regularly, and on top of that the shelf life of technology for these cloud providers is now a very small number of years. They all want faster, more efficient, powerful hardware, and a dynamic environment is much harder to attack. This actually turns out to be a very expensive problem for the attacker because it might have taken them a year to get that foothold, but in a lot of cases the short shelf life of this technology [with the cloud providers] is really raising the costs for the attackers.

When I looked at what Amazon and Google and Microsoft are pushing for it’s really a lot of horsepower going into the architecture and designs that support that service model, including the building in of more and more security right up front. Yes, they’re still making lots of use of non-U.S. made parts, but they’re really aware of that when they do. That doesn’t mean these kinds of supply chain attacks are impossible to pull off, but by the same token they don’t get easier with time.

BK: It seems to me that the majority of the government’s efforts to help secure the tech supply chain come in the form of looking for counterfeit products that might somehow wind up in tanks and ships and planes and cause problems there — as opposed to using that microscope to look at commercial technology. Do you think that’s accurate?

TS: I think that’s a fair characterization. It’s a logistical issue. This problem of counterfeits is a related problem. Transparency is one general design philosophy. Another is accountability and traceability back to a source. There’s this buzzphrase that if you can’t build in security then build in accountability. Basically the notion there was you often can’t build in the best or perfect security, but if you can build in accountability and traceability, that’s a pretty powerful deterrent as well as a necessary aid.

BK: For example….?

TS: Well, there’s this emphasis on high quality and unchangeable logging. If you can build strong accountability that if something goes wrong I can trace it back to who caused that, I can trace it back far enough to make the problem more technically difficult for the attacker. Once I know I can trace back the construction of a computer board to a certain place, you’ve built a different kind of security challenge for the attacker. So the notion there is while you may not be able to prevent every attack, this causes the attacker different kinds of difficulties, which is good news for the defense.

BK: So is supply chain security more of a physical security or cybersecurity problem?

TS: We like to think of this as we’re fighting in cyber all the time, but often that’s not true. If you can force attackers to subvert your supply chain, then you first off take away the mid-level criminal elements, and you force the attackers to do things that are outside the cyber domain, such as setting up front companies, bribing humans, etc. And in those domains — particularly the human dimension — we have other mechanisms that are detectors of activity there.

BK: What role does network monitoring play here? I’m hearing a lot right now from tech experts who say organizations should be able to detect supply chain compromises because at some point they should be able to see truckloads of data leaving their networks if they’re doing network monitoring right. What do you think about the role of effective network monitoring in fighting potential supply chain attacks?

TS:  I’m not so optimistic about that. It’s too easy to hide. Monitoring is about finding anomalies, either in the volume or type of traffic you’d expect to see. It’s a hard problem category. For the US government, with perimeter monitoring there’s always a trade off in the ability to monitor traffic and the natural movement of the entire Internet towards encryption by default. So a lot of things we don’t get to touch because of tunneling and encryption, and the Department of Defense in particular has really struggled with this.

Now obviously what you can do is man-in-the-middle traffic with proxies and inspect everything there, and the perimeter of the network is ideally where you’d like to do that, but the speed and volume of the traffic is often just too great.

BK: Isn’t the government already doing this with the “trusted internet connections” or Einstein program, where they consolidate all this traffic at the gateways and try to inspect what’s going in and out?

TS: Yes, so they’re creating a highest volume, highest speed problem. To monitor that and to not interrupt traffic you have to have bleeding edge technology to do that, and then handle a ton of it which is already encrypted. If you’re going to try to proxy that, break it out, do the inspection and then re-encrypt the data, a lot of times that’s hard to keep up with technically and speed-wise.

BK: Does that mean it’s a waste of time to do this monitoring at the perimeter?

TS: No. The initial foothold by the attacker could have easily been via a legitimate tunnel and someone took over an account inside the enterprise. The real meaning of a particular stream of packets coming through the perimeter you may not know until that thing gets through and executes. So you can’t solve every problem at the perimeter. Some things only become obvious and make sense to catch them when they open up at the desktop.

BK: Do you see any parallels between the challenges of securing the supply chain and the challenges of getting companies to secure Internet of Things (IoT) devices so that they don’t continue to become a national security threat for just about any critical infrastructure, such as with DDoS attacks like we’ve seen over the past few years?

TS: Absolutely, and again the economics of security are so compelling. With IoT we have the cheapest possible parts, devices with a relatively short life span and it’s interesting to hear people talking about regulation around IoT. But a lot of the discussion I’ve heard recently does not revolve around top-down solutions but more like how do we learn from places like the Food and Drug Administration about certification of medical devices. In other words, are there known characteristics that we would like to see these devices put through before they become in some generic sense safe.

BK: How much of addressing the IoT and supply chain problems is about being able to look at the code that powers the hardware and finding the vulnerabilities there? Where does accountability come in?

TS: I used to look at other peoples’ software for a living and find zero-day bugs. What I realized was that our ability to find things as human beings with limited technology was never going to solve the problem. The deterrent effect that people believed someone was inspecting their software usually got more positive results than the actual looking. If they were going to make a mistake – deliberately or otherwise — they would have to work hard at it and if there was some method of transparency, us finding the one or two and making a big deal of it when we did was often enough of a deterrent.

BK: Sounds like an approach that would work well to help us feel better about the security and code inside of these election machines that have become the subject of so much intense scrutiny of late.

TS: We’re definitely going through this now in thinking about the election devices. We’re kind of going through this classic argument where hackers are carrying the noble flag of truth and vendors are hunkering down on liability. So some of the vendors seem willing to do something different, but at the same time they’re kind of trapped now by the good intentions of the open vulnerability community.

The question is, how do we bring some level of transparency to the process, but probably short of vendors exposing their trade secrets and the code to the world? What is it that they can demonstrate in terms of cost effectiveness of development practices to scrub out some of the problems before they get out there. This is important, because elections need one outcome: Public confidence in the outcome. And of course, one way to do that is through greater transparency.

BK: What, if anything, are the takeaways for the average user here? With the proliferation of IoT devices in consumer homes, is there any hope that we’ll see more tools that help people gain more control over how these systems are behaving on the local network?

TS: Most of [the supply chain problem] is outside the individual’s ability to do anything about, and beyond ability of small businesses to grapple with this. It’s in fact outside of the autonomy of the average company to figure it out. We do need more national focus on the problem.

It’s now almost impossible for consumers to buy electronics that aren’t Internet-connected. The chipsets are so cheap and the ability for every device to have its own Wi-Fi chip built in means that [manufacturers] are adding them whether it makes sense to or not. I think we’ll see more security coming into the marketplace to manage devices. So for example you might define rules that say appliances can talk to the manufacturer only.

We’re going to see more easy-to-use tools available to consumers to help manage all these devices. We’re starting to see the fight for dominance in this space already at the home gateway and network management level. As these devices get more numerous and complicated, there will be more consumer oriented ways to manage them. Some of the broadband providers already offer services that will tell what devices are operating in your home and let users control when those various devices are allowed to talk to the Internet.

Since Bloomberg’s story broke, The U.S. Department of Homeland Security and the National Cyber Security Centre, a unit of Britain’s eavesdropping agency, GCHQ, both came out with statements saying they had no reason to doubt vehement denials by Amazon and Apple that they were affected by any incidents involving Supermicro’s supply chain security. Apple also penned a strongly-worded letter to lawmakers denying claims in the story.

Meanwhile, Bloomberg reporters published a follow-up story citing new, on-the-record evidence to back up claims made in their original story.


Planet Debian - Bastian Venthur: Introducing Litestats

Profiling in Python has always been easy; analyzing the profiler's output, however, not so much. After the profile has been created you can use Python's pstats module, but it feels quite clumsy and not really empowering. For Python 2 there was RunSnakeRun, a very convenient tool for analyzing the profiler output; unfortunately, that tool hasn't been updated since 2014. I recently ported it to Python 3 and wxPython 4, but I'm probably not going to maintain that code properly as I'm not very comfortable with wxPython.

I still wanted something nicer than pstats for profiling, so I wrote litestats. Litestats is a simple command line tool that takes the output of the Python profiler and transforms the data into a sqlite3 database. You can now easily analyze the profiler output using sqlite on the command line, use sqlitebrowser for a graphical interface, or use the database as the foundation of your very own tooling around the analysis.

How does it work?

Litestats reads the dump of the profiler and creates a normalized database with three tables:

  • functions: contains each function (callers and callees) with filename, line number and function name
  • stats: contains the statistics (primitive/total calls, total/cumulative time) for all functions
  • calls: a caller-callee mapping

While this provides an exact representation of the dump, those tables would be cumbersome to use. So litestats additionally creates three views emulating pstats print_stats(), print_callers() and print_callees() functionality:

  • pstats
  • callers
  • callees


Litestats has no requirements other than Python itself and is available on PyPI:

$ pip install litestats


$ # run the profiler and dump the output
$ python3 -m cProfile -o example.prof example.py
$ # convert dump to sqlite3 db
$ litestats example.prof
$ # created example.prof.sqlite

You can now use the sqlite3 database to investigate the profiler dump:

sqlite> select *
   ...> from pstats
   ...> order by cumtime desc
   ...> limit 20;

ncalls      tottime     ttpercall             cumtime     ctpercall   filename:lineno(function)
----------  ----------  --------------------  ----------  ----------  ------------------------------------
18/1        0.000161    8.94444444444444e-06  0.067797    0.067797    ~:0(<built-in method builtins.exec>)
1           1.0e-06     1.0e-06               0.067755    0.067755    <string>:1(<module>)
1           4.0e-06     4.0e-06               0.067754    0.067754    /usr/lib/python3.7/
1           6.0e-06     6.0e-06               0.066135    0.066135    /usr/lib/python3.7/
1           1.1e-05     1.1e-05               0.066113    0.066113    /home/venthur/Documents/projects/lit
1           6.6e-05     6.6e-05               0.055152    0.055152    /home/venthur/Documents/projects/lit
1           4.1e-05     4.1e-05               0.0549      0.0549      /home/venthur/Documents/projects/lit
1           0.050196    0.050196              0.050196    0.050196    ~:0(<method 'executescript' of 'sqli
20/3        8.9e-05     4.45e-06              0.011064    0.003688    <frozen importlib._bootstrap>:978(_f
20/3        4.8e-05     2.4e-06               0.011005    0.00366833  <frozen importlib._bootstrap>:948(_f
20/3        7.5e-05     3.75e-06              0.01083     0.00361     <frozen importlib._bootstrap>:663(_l
15/3        3.5e-05     2.33333333333333e-06  0.01073     0.00357666  <frozen importlib._bootstrap_externa
29/5        2.5e-05     8.62068965517241e-07  0.010215    0.002043    <frozen importlib._bootstrap>:211(_c
3           6.0e-06     2.0e-06               0.010087    0.00336233  ~:0(<built-in method builtins.__impo
28/6        9.0e-06     3.21428571428571e-07  0.008977    0.00149616  <frozen importlib._bootstrap>:1009(_
1           9.0e-06     9.0e-06               0.00841     0.00841     /home/venthur/Documents/projects/lit
16          0.000138    8.625e-06             0.004802    0.00030012  <frozen importlib._bootstrap_externa
1           4.5e-05     4.5e-05               0.004143    0.004143    /usr/lib/python3.7/logging/__init__.
1           0.004038    0.004038              0.004038    0.004038    ~:0(<method 'commit' of 'sqlite3.Con
13          3.3e-05     2.53846153846154e-06  0.002368    0.00018215  <frozen importlib._bootstrap_externa
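The same query also works programmatically from Python's standard sqlite3 module, which is handy as a starting point for your own tooling (the database file name below is a placeholder for whatever litestats produced):

```python
import sqlite3

def top_by_cumtime(db_path, limit=20):
    """Return the rows of the pstats view, sorted by cumulative time."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT * FROM pstats ORDER BY cumtime DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        con.close()

# rows = top_by_cumtime("example.prof.sqlite")
```

Because it is just SQL, filtering by filename or joining against the callers/callees views is a one-line change to the query.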

Cryptogram - Friday Squid Blogging: Eat Less Squid

The UK's Marine Conservation Society is urging people to eat less squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian - Dirk Eddelbuettel: binb 0.0.3: Now with Monash

The third release of the binb package just arrived on CRAN, and it comes with a new (and very crispy) theme: Monash. With that we are also thrilled to welcome Rob Hyndman as a co-author.

Here is a quick demo combining all (by now four) themes:

Also, Ista made the IQSS theme more robust to font selection. Other changes:

Changes in binb version 0.0.3 (2018-10-12)

  • The IQSS theme now has a fallback font if Libertinus is unavailable (Ista in #7)

  • Added support for 'Monash' theme (Rob Hyndman in #10 and #11 closing #9)

  • Simplified some options for the 'Monash' theme (Dirk in #13)

  • The IQSS theme can now set an alternate titlegraphic (Ista in #14)
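For anyone wanting to try it: binb themes are selected via the output format in the R Markdown YAML header, so (assuming the usual binb conventions, with placeholder title and author) switching to the new theme should be a one-line change:

```yaml
---
title: "An Example Talk"
author: "Jane Doe"
output: binb::monash
---
```

Rendering with rmarkdown then produces the Beamer PDF as with the other themes.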

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureError'd: Latin is Making a Comeback?

"Well, if I need an email template, lucky me, I now have one handy," writes Paul C.


"I was shopping around for refrigerators and after seeing this, I would definitely take delivery on the $19.99 model!" writes Daniel P.


Ryan L. wrote, "I should probably consider renewing the warranty, but I have %days% to think about it."


"Thus are the side effects of having a currency converter extension installed," Shahim M. writes.


Yuna M. wrote, "Ever check news sites sometime between 4:00 AM and when people start getting in to work? You may see some weird stuff."


"'Yes' means yes, and 'Print Info' means yes," writes Michael R., "Immediately after selecting 'Print Info' I received an SMS text from CVS, and then a comically small receipt printed with opt-out instructions."


[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

CryptogramAnother Bloomberg Story about Supply-Chain Hardware Attacks from China

Bloomberg has another story about hardware surveillance implants in equipment made in China. This implant is different from the one Bloomberg reported on last week. That story has been denied by pretty much everyone else, but Bloomberg is sticking by its story and its sources. (I linked to other commentary and analysis here.)

Again, I have no idea what's true. The story is plausible. The denials are about what you'd expect. My lone hesitation to believing this is not seeing a photo of the hardware implant. If these things were in servers all over the US, you'd think someone would have come up with a photograph by now.

EDITED TO ADD (10/12): Three more links worth reading.

Planet Linux AustraliaLev Lafayette: International HPC Certification Program

The HPC community has always considered the training of new and existing HPC practitioners to be of high importance to its growth. The significance of training will increase even further in the era of Exascale, when HPC encompasses even more scientific disciplines. This diversification of HPC practitioners challenges the traditional training approaches, which are not able to satisfy the specific needs of users, often coming from non-traditional HPC disciplines and only interested in learning a particular set of skills. HPC centres are struggling to identify and overcome the gaps in users' knowledge. How should we support prospective and existing users who are not aware of their own knowledge gaps? We are working towards the establishment of an International HPC Certification program that would clearly categorize, define and examine these skills, similarly to a school curriculum. Ultimately, we aim for the certificates to be recognized and respected by the HPC community and industry.

International HPC Certification Program, International Supercomputing Conference, Frankfurt, June, 2018

Julian Kunkel (University of Reading), Kai Himstedt (Universität Hamburg), Weronika Filinger (University of Edinburgh), Jean-Thomas Acquaviva (DDN), William Jalby (Université de Versailles Saint-Quentin), Lev Lafayette (University of Melbourne)


Planet DebianDeepanshu Gajbhiye: Google Summer of code at Debian Final Report

Google Summer of code 2018

Table of contents

  1. Introduction
  2. Final Summary
  3. Deliverable
  4. Weekly reports & Blog posts
  5. Other contributions
  6. Thank you


The Virtual LTSP server project automates the installation and configuration of an LTSP server with Vagrant. It is the easiest way to create an LTSP setup. We developed the project to do the same for Linux Mint 19 and Debian 9. We also created several scripts for testing, creating LTSP clients, managing accounts, etc., as well as packer scripts to create the vagrant boxes that we use in the project.
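The provisioning flow described above can be pictured with a minimal Vagrantfile sketch. This is a hypothetical illustration only: the box name, memory size, and script path are assumptions, not the project's actual configuration.

```ruby
# Hypothetical sketch -- box name and script path are assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "debian/stretch64"        # base box for the LTSP server
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048                        # building the LTSP image needs RAM
  end
  # A shell provisioner installs and configures ltsp-server on "vagrant up".
  config.vm.provision "shell", path: "provision/ltsp-server.sh"
end
```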

Final Summary

Google Summer of Code was a great opportunity to work with really smart and amazing people, and I learned a lot in the process. It took my understanding of Vagrant, bash scripting, packer, LTSP, and Debian packaging to a whole other level. I started with a basic provisioner script to install LTSP into a vagrant box, and then we made several improvements to it.

Later I added several features and improvements. After that, a major issue was that the client was unable to boot. To solve it, I searched through everything I could find about LTSP on the internet, and even asked the active LTSP developers how to fix it. They have been working on LTSP since 2006, but they had not encountered this problem. After struggling for a long time, I solved it! Given the complexity and the time it took, my mentors suggested I write a separate blog post about it. We have also created a Virtual LTSP server for Debian 9, and one for Linux Mint 19.

I had to create a new Linux Mint vagrant box with Xfce, which was really fun, and I automated its creation with packer scripts. We also ported the Edubuntu packages to Debian; they are built locally and installed via a small script. In the end, we added features like automatic login, guest login, and several scripts to optimize the workflow for the user. This is only a short summary of the work done; more details can be found in the weekly reports.


Virtual LTSP Server


Pull request made


Commits made

  • bionic branch —
  • buster branch —

Issues worked on


Packer scripts to create vagrant box


Linux mint tara vagrant box

Vagrant Cloud by HashiCorp

Ported Edubuntu packages from Ubuntu to Debian



Weekly reports & Blog posts

  • Week1:

GSoC weekly report of Deepanshu Gajbhiye week 1

  • Week2:
  • Week3:
  • Week4:
  • Week5:
  • Week6:
  • Week7:
  • Week8:
  • Week9:
  • Week10:
  • Week11:
  • Week12:

Other contributions

Thank you

I am very thankful to Google and Debian for accepting me into Google Summer of Code. Working on GSoC was an amazing experience. I will definitely participate next year as well.

Thanks to my mentors Dashamir Hoxha and Akash Shende for their solid support, quick responses, patience, and trust in me.

Thank you Daniel Pocock for the encouragement, and big thanks to the Debian, LTSP, and Vagrant communities for being very helpful.

Planet DebianAntoine Beaupré: Archived a part of my CD collection

After about three days of work, I've finished archiving a part of my old CD collection. There were about 200 CDs in a cardboard box that were gathering dust. After reading Jonathan Dowland's post about CD archival, I got (rightly) worried it would be damaged beyond rescue so I sat down and did some research on the rescue mechanisms. My notes are in rescue and I hope to turn this into a more agreeable LWN article eventually.

I post this here so I can put a note in the box with a permanent URL for future reference as well.

Remaining work

All the archives created were dumped in the ~/archive or ~/mp3 directories on curie. Data needs to be deduplicated, replicated, and archived somewhere more logical.


I have a bunch of piles:

  • a spindle of disks that consists mostly of TV episodes, movies, distro and Windows images/ghosts. not imported.
  • a pile of tapes and Zip drives. not imported.
  • about forty backup disks. not imported.
  • about five "books" disks of various sorts. ISOs generated. partly integrated in my collection, others failed to import or were in formats that were considered non-recoverable
  • a bunch of orange seeds piles
    • Burn Your TV masters and copies
    • apparently live and unique samples - mostly imported in mp3
    • really old stuff with tons of dupes - partly sorted through, in jams4, the rest still in the pile
  • a pile of unidentified disks

All disks were eventually identified as trash, blanks, perfect, finished, defective, or not processed. A special "needs attention" stack was the "to do" pile, which would get sorted into the other piles. Each pile was labeled with a sticky note and taped together summarily.

A post-it pointing to the blog post was included in the box, along with a printed version of the blog post summarizing a snapshot of this inventory.

Here is a summary of what's in the box.

Type           Count  Note
trash          13     non-recoverable; not detected by the Linux kernel at all, and no further attempt was made to recover them
blanks         3      never written to, still usable
perfect        28     successfully archived, without errors
finished       4      almost perfect, but mixed-mode or multi-session
defective      21     found to have errors but not considered important enough to re-process
total          69
not processed  ~100   visual estimate

Worse Than FailureCodeSOD: Boldly Leaping Over the Span

No one writes HTML anymore. We haven’t for years. These days, your HTML is snippets and components, templates and widgets. Someplace in your application chain, whether server-side or client-side, or even as part of a deployment step, if you’re using a static site generator, some code mashes those templates together and you get some output.

This has some side effects, like div abuse. Each component needs its own container tag, but we often nest components inside each other. Maybe there’s a span in there. If the application is suitably HTML5-y, maybe it’s sections instead.

Andy stumbled across a site which was blocking right clicking, so Andy did what any of us would do: pulled up the debugging tools and started exploring the DOM tree.

<h2 class="content-title" style="text-align:center;">
    <span style="font-weight:400">
      <span style="font-weight:400">
        <span style="font-weight:400">
          <span style="font-weight:400">
            <span style="font-weight:400">
              <span style="font-weight:400">
                <span style="font-weight:400">
                  <span style="font-weight:400">
                    <span style="font-weight:400">
                      <span style="font-weight:400">
                        <span style="font-weight:400">
                          <span style="font-weight:400">
                            <span style="font-weight:400">
                              <span style="font-weight:400">
                                <span style="font-weight:400">
                                  <span style="font-weight:400">
                                    <span style="font-weight:400">
                                                              <font color="#a82e2e" size="6">Welcome to the [redacted]</font>

Maybe this is a chain of components, maybe it’s a runaway loop, maybe it’s just stupid.

With all those font-weight directives stacked up there, I’m tempted to say that’s mighty bold, but with a font-weight: 400, that’s honestly not bold at all. Well, it’s a bold use of span tags, I suppose.
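Since font-weight: 400 is just the normal weight, those wrappers are visual no-ops, and a cleanup pass could simply drop them. A minimal sketch: it assumes, as in the snippet above, that the redundant spans wrap the content symmetrically; arbitrary markup would need a real HTML parser.

```python
import re

# Matches an opening span whose only styling is the default font weight.
NOOP_SPAN = re.compile(r'<span style="font-weight:\s*400">\s*')

def strip_noop_spans(html: str) -> str:
    """Drop no-op opening spans, then remove an equal number of </span>
    closers from the right-hand end of the string."""
    stripped, n = NOOP_SPAN.subn("", html)
    for _ in range(n):
        i = stripped.rfind("</span>")
        if i == -1:
            break
        stripped = stripped[:i] + stripped[i + len("</span>"):]
    return stripped
```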

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Planet DebianMario Lang: "Learning from machines" by Ashi Krishnan

I need to share this talk by Ashi Krishnan because I think it is very well done, and I find the content extremely fascinating and interesting.

If you are into consciousness and/or dreaming, do yourself a favour and allocate 40 minutes of your life for this one.

Thanks Ashi, you made my day!

Planet DebianLars Wirzenius: On flatpak, snap, distros, and software distribution

I don't think any of Flatpak, Snappy, traditional Linux distros, non-traditional Linux distros, containers, online services, or other forms of software distribution are a good solution for all users. They all fail in some way, and each of them requires continued, ongoing effort to be acceptable even within their limitations.

This week, there's been some discussion about Flatpak, a software distribution approach that's (mostly) independent of traditional Linux distributions. There's also Snappy, which is Canonical's similar offering.

The discussion started with the launch of a new website attacking Flatpak as a technology. I'm not going to link to it, since it's an anonymous attack and rant, and not constructive. I'd rather have a constructive discussion. I'm also not going to link to rebuttals, and will just present my own view, which I hope is different enough to be interesting.

The website raises the issue that Flatpak's sandboxing is not as good as it should be. This seems to be true. Some of Flatpak's defenders respond that it's an evolving technology, which seems fair. It's not necessary to be perfect; it's important to be better than what came before, and to constantly improve.

The website also raises the point that a number of flatpaks themselves contain unfixed security problems. I find this to be more worrying than an imperfect sandbox. A security problem inside a perfect sandbox can still be catastrophic: it can leak sensitive data, join a distributed denial of service attack, use excessive CPU and power, and otherwise cause mayhem. The sandbox may help in containing the problem somewhat, but to be useful for valid use, the sandbox needs to allow things that can be used maliciously.

As a user, I want software that's...

  • easy to install and update
  • secure to install (what I install is what the developers delivered)
  • always up to date with security fixes, including for any dependencies (embedded in the software or otherwise)
  • reasonably up to date with other bug fixes
  • sufficiently up to date with features I want (but I don't care about newer features that I don't have a use for)
  • protective of my freedoms and privacy and other human rights, which includes (but is not restricted to) being able to self-host services and work offline

As a software developer, I additionally want my own software to be...

  • effortless to build
  • automatically tested in a way that gives me confidence it works for my users
  • easy to deliver to my users
  • easy to debug
  • not be broken by changes to build and runtime dependencies, or at least make such changes be extremely obvious, meaning they result in a build error or at least an error during automated tests

These are requirements that are hard to satisfy. They require a lot of manual effort, and discipline, and I fear the current state of software development isn't quite there yet. As an example, the Linux kernel development takes great care to never break userland, but that requires a lot of care when making changes, a lot of review, and a lot of testing, and a willingness to go to extremes to achieve that. As a result, upgrading to a newer kernel version tends to be a low-risk operation. The glibc C library, used by most Linux distributions, has a similar track record.

But Linux and glibc are system software. Flatpak is about desktop software. Consider instead LibreOffice, the office suite. There's no reason why it couldn't be delivered to users as a Flatpak (and indeed it is). It's a huge piece of software, and it needs a very large number of libraries and other dependencies to work. These need to be provided inside the LibreOffice Flatpak, or by one or more of the Flatpak "runtimes", which are bundles of common dependencies. Making sure all of the dependencies are up to date can be partly automated, but not fully: someone, somewhere, needs to make the decision that a newer version is worth upgrading to right now, even if it requires changes in LibreOffice for the newer version to work.

For example, imagine LO uses a library to generate PDFs. A new version of the library reduces CPU consumption by 10%, but requires changes, because the library's API (programming interface) has changed radically. The API changes are necessary to allow the speedup. Should LibreOffice upgrade to the new version or not? If 10% isn't enough of a speedup to warrant the effort to make the LO changes, is 90%? An automated system could upgrade the library, but that would then break the LO build, resulting in something that doesn't work anymore.

Security updates are easier, since they usually don't involve API changes. An automated system could upgrade dependencies for security updates, and then trigger an automated build, test, and publish of a new Flatpak. However, this is made difficult by the fact that there is often no way to automatically and reliably find out that a security fix has been released. Again, manual work is required to find the security problem, to fix it, to communicate that there is a fix, and to upgrade the dependency. Some projects have partial solutions for that, but there seems to be nothing universal.

I'm sure most of this can be solved, some day, in some manner. It's definitely an interesting problem area. I don't have a solution, but I do think it's much too simplistic to say "Flatpaks will solve everything", or "the distro approach is best", or "just use the cloud".

Krebs on SecurityPatch Tuesday, October 2018 Edition

Microsoft this week released software updates to fix roughly 50 security problems with various versions of its Windows operating system and related software, including one flaw that is already being exploited and another for which exploit code is publicly available.

The zero-day bug — CVE-2018-8453 — affects Windows versions 7, 8.1, 10 and Server 2008, 2012, 2016 and 2019. According to security firm Ivanti, an attacker first needs to log into the operating system, but then can exploit this vulnerability to gain administrator privileges.

Another vulnerability patched on Tuesday — CVE-2018-8423 — was publicly disclosed last month along with sample exploit code. This flaw involves a component shipped on all Windows machines and used by a number of programs, and could be exploited by getting a user to open a specially-crafted file — such as a booby-trapped Microsoft Office document.

KrebsOnSecurity has frequently suggested that Windows users wait a day or two after Microsoft releases monthly security updates before installing the fixes, with the rationale that occasionally buggy patches can cause serious headaches for users who install them before all the kinks are worked out.

This month, Microsoft briefly paused updates for Windows 10 users after many users reported losing all of the files in their “My Documents” folder. The worst part? Rolling back to previous saved versions of Windows prior to the update did not restore the files.

Microsoft appears to have since fixed the issue, but these kinds of incidents illustrate the value of not only waiting a day or two to install updates but also manually backing up your data prior to installing patches (i.e., not just simply counting on Microsoft’s System Restore feature to save the day should things go haywire).

Mercifully, Adobe has spared us an update this month for its Flash Player software, although it has shipped a non-security update for Flash.

For more on this month’s Patch Tuesday batch, check out posts from Ivanti and Qualys.

As always, if you experience any issues installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips. My apologies for the tardiness of this post; I have been traveling in Australia this past week with only sporadic access to the Internet.

Downtown Melbourne, Australia.

Planet DebianNorbert Preining: Debian/TeX Live updates 20181009

More than a month has passed, and we went through a CVE and some other complications, but finally I managed to build and upload a new set of TeX Live packages for Debian.

During this update some color profiles (ICC) that had unclear licenses have been removed, which for now creates problems with the pdfx package. So if you use the pdfx package, please explicitly specify a color profile. The next upload will again allow using pdfx without specifying a profile, in which case a default profile is used. I have already uploaded a set of free profiles to CTAN and they have arrived in TeX Live, but the pdfx package has not been updated yet.

Other than this I don’t recall anything spectacular new or changed, but it is still a long list of packages being updated 😉

Please enjoy.

New packages

bxwareki, chs-physics-report, ctanbib, dehyph, firamath, firamath-otf, jigsaw, kalendarium, kvmap, libertinus, libertinus-fonts, libertinus-type1, metapost-colorbrewer, pst-feyn, pst-lsystem, pst-marble, ptex-manual, quantikz, rank-2-roots, tex-locale, utexasthesis, widows-and-orphans.

Updated packages

achemso, apa6, arabluatex, archaeologie, arydshln, axodraw2, babel, babel-belarusian, babel-french, bangorcsthesis, beamer, beamerswitch, bezierplot, biblatex-anonymous, biblatex-chem, biblatex-ext, biblatex-manuscripts-philology, biblatex-publist, bidi, breqn, bxjscls, bxorigcapt, bxwareki, caption, catechis, clrstrip, cochineal, context-handlecsv, cooking-units, covington, csplain, cstex, doi, ducksay, duckuments, dvipdfmx, eplain, epstopdf, europecv, exercisebank, fei, fira, fontawesome5, gentombow, hyperref, hyphen-german, hyphen-latin, hyphen-thai, hyph-utf8, ifluatex, jadetex, jlreq, jslectureplanner, l3build, l3experimental, l3kernel, l3packages, latex-bin, latexindent, latex-make, lettrine, libertinus-otf, libertinust1math, libertinus-type1, listings, lshort-chinese, lualibs, luamplib, luaotfload, luatexja, lwarp, make4ht, memdesign, mltex, mptopdf, nicematrix, nimbus15, oberdiek, ocgx2, onedown, overpic, parskip, pdftex, pdfx, perception, platex, platex-tools, plautopatch, poetry, pst-eucl, pst-plot, pstricks, ptex, reledmac, returntogrid, sourceserifpro, svg, tableof, tetex, tex4ht, textualicomma, thesis-gwu, thumbpdf, tkz-base, tkz-doc, tkz-graph, tkz-kiviat, tkz-linknodes, tkz-tab, tlcockpit, tugboat, tugboat-plain, ucsmonograph, ulthese, univie-ling, updmap-map, uri, witharrows, xcharter, xepersian, xetex, xits, xmltex, yafoot, zhlipsum.

Planet DebianDirk Eddelbuettel: digest 0.6.18

Earlier today, digest version 0.6.18 arrived on CRAN. It will get uploaded to Debian in due course.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64 and murmur32 algorithms) permitting easy comparison of R language objects.
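For readers more at home in Python than R, the same idea (serialize the object, then hash the bytes) can be sketched with the standard library. This is a rough analogue of what digest does, not its implementation:

```python
import hashlib
import pickle

def object_digest(obj, algo="sha256"):
    """Serialize a picklable object and hash the resulting bytes.

    A rough Python analogue of R's digest(); note that pickle output is
    only guaranteed stable within a single Python version, so digests
    should not be compared across interpreter versions.
    """
    h = hashlib.new(algo)
    h.update(pickle.dumps(obj, protocol=4))
    return h.hexdigest()
```

Two structurally equal objects then compare cheaply by their digests, which is the point of hashing in the first place.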

As I wrote when announcing the 0.6.17 release about a month ago

[…] it also squashed one UBSAN error under the standard gcc setup. But two files remain with UBSAN issues, help would be welcome!

And lo and behold, within a day or two, Jim Hester saw this, looked at it, and updated xxHash (which had been contributed to digest 0.6.5 in 2014) to the newest version, taking care of one part. And Radford Neal took one good hard look at the remaining issue and suggested a cast for pmurhash. In testing against the UBSAN instance at RHub, both issues appear to be taken care of. So a big Thank You to both Jim and Radford!

No other changes were made.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Sociological ImagesWhat’s Trending? Trust in Institutions

Social institutions are powerful on their own, but they still need buy-in to work. When people don’t feel like they can trust institutions, they are more likely to find ways to opt out of participating in them. Low voting rates, religious disaffiliation, and other kinds of civic disengagement make it harder for people to have a voice in the organizations that influence their lives.

And, wow, have we seen some good reasons not to trust institutions over the past few decades. The latest political news only tops a list running from Watergate to Whitewater, Bush v. Gore, the 2008 financial crisis, clergy abuse scandals, and more.

Using data from the General Social Survey, we can track how confidence in these institutions has changed over time. For example, recent controversy over the Kavanaugh confirmation is a blow to the Supreme Court’s image, but strong confidence in the Supreme Court has been on the decline since 2000. Now, attitudes about the Court are starting to look similar to the way Americans see the other branches of government.

(Click to Enlarge)
Source: General Social Survey Cumulative File
LOESS-Smoothed trend lines follow weighted proportion estimates for each response option.
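The LOESS smoothing mentioned in the caption fits a small weighted linear regression around each point, with weights falling off by the tricube function. A minimal self-contained sketch (not the chart authors' actual code; it assumes distinct x values):

```python
import numpy as np

def loess_smooth(x, y, frac=0.5):
    """Local linear regression with tricube weights (a LOESS sketch)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    k = max(2, int(frac * n))                # neighbourhood size
    fitted = np.empty(n)
    for i in range(n):
        dist = np.abs(x - x[i])
        idx = np.argsort(dist)[:k]           # k nearest neighbours
        h = dist[idx].max()                  # local bandwidth (assumes h > 0)
        w = (1 - (dist[idx] / h) ** 3) ** 3  # tricube weights
        sw = np.sqrt(w)
        # weighted least squares fit of y ~ 1 + x on the neighbourhood
        A = np.column_stack([np.ones(k), x[idx]]) * sw[:, None]
        beta, *_ = np.linalg.lstsq(A, y[idx] * sw, rcond=None)
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted
```

For survey data like the GSS one would first compute the weighted proportion for each year and response option, then smooth those points, which is what the trend lines in the charts show.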

Over time, you can see trust in the executive and legislative branches drop as the proportion of respondents who say they have a great deal of confidence in each declines. The Supreme Court has enjoyed higher confidence than the other two branches, but even this has started to look more uncertain.

For context, we can also compare these trends to other social institutions like the market, the media, and organized religion. Confidence in these groups has been changing as well.

(Click to Enlarge)
Source: General Social Survey Cumulative File

It is interesting to watch the high and low trend lines switch over time, but we should also pay attention to who sits on the fence by choosing some confidence on these items. More people are taking a side on the press, for example, but the middle is holding steady for organized religion and the Supreme Court.

These charts raise an important question about the nature of social change: are the people who lose trust in institutions moderate supporters who are driven away by extreme changes, or “true believers” who feel betrayed by scandals? When political parties argue about capturing the middle or motivating the base, or the church worries about recruiting new members, these kinds of trends are central to the conversation.

Inspired by demographic facts you should know cold, “What’s Trending?” is a post series at Sociological Images featuring quick looks at what’s up, what’s down, and what sociologists have to say about it.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at

Planet DebianNeil Williams: Code Quality & Formatting for Python

I've recently added two packages (and their dependencies) to Debian and thought I'd cover a bit more about why.


black, the uncompromising Python code formatter, has arrived in Debian unstable and testing.

black is being adopted by the LAVA Software Community Project in a gradual way and the new CI will be checking that files which have been formatted by black stay formatted by black in merge requests.

There are endless ways to format Python code, and pycodestyle and pylint are often too noisy to use without long lists of ignored errors and warnings. Black takes the stress out of maintaining a large Python codebase, as long as a few simple steps are taken:

  • Changes due to black are not functional changes. A merge request applying black to a source code file must not include functional changes. Just the change done by black. This makes code review manageable.
  • Changes made by black are recorded and once made, CI is used to ensure that there are no regressions.
  • Black is only run on files which are not currently being changed in existing merge requests. This is a simple sanity provision; rebasing functional changes after running black is not fun.

Consistent formatting goes a long way to helping humans spot problematic code.
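In GitLab CI terms, the "stays formatted by black" check can be a single short job. This fragment is a hypothetical sketch (the job name, image, and paths are assumptions), using black's real --check and --diff flags:

```yaml
# Hypothetical job definition; adjust the image and paths to the project.
black-check:
  stage: test
  image: python:3
  script:
    - pip install black
    # --check fails the job if any file would be reformatted;
    # --diff shows what black would have changed.
    - black --check --diff lava/ tests/
```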

See or apt-get install python-black-doc for a version which doesn't "call home".


So much for code formatting; that's nice and all, but what can matter more is an overview of the complexity of the codebase.

We're experimenting with running radon as part of our CI to get a CodeClimate report which GitLab should be able to understand.

(Take a bow - Vince gave me the idea by mentioning his use of Cyclomatic Complexity.)

What we're hoping to achieve here is a failed CI test if the complexity of critical elements increases and a positive indication if the code complexity of areas which are currently known to be complex can be reduced without losing functionality.

Initially, just having the data is a bonus. The first try at CodeClimate support took the best part of an hour to scan our code repository. radon took 3 seconds.
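Cyclomatic complexity is, roughly, one plus the number of decision points in a function. As a toy illustration of the metric radon measures (this is a crude approximation for the idea, not radon's algorithm):

```python
import ast

# Node types that open an extra path through the code.
_DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Crude McCabe-style count: 1 + number of decision points.

    A toy approximation of what radon reports; radon itself handles
    boolean operators, comprehensions, and per-function scoping.
    """
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _DECISIONS) for node in ast.walk(tree))
```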

See or apt-get install python-radon-doc for a version which doesn't "call home".

(It would be really nice for upstreams to understand that putting badges in their sphinx documentation templates makes things harder to distribute fairly. Fine, have a nice web UI for your own page but remove the badges from the pages in the released tarballs, e.g. with a sphinx build time option.)

One more mention - bandit

I had nothing to do with introducing this to Debian but I am very grateful that it exists in Debian. bandit is proving to be very useful in our CI, providing SAST reports in GitLab. As with many tools of its kind, it is noisy at first. However, with a few judicious changes and the use of the # nosec comment to rule out scanning of things like unit tests which deliberately try to be insecure, we have substantially reduced the number of reports produced by bandit.
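For example, a unit test that deliberately uses shell=True would normally trip bandit's subprocess check (B602); a line-level # nosec comment silences just that one finding without disabling the check globally. A small sketch:

```python
import subprocess

# shell=True is exactly what bandit's B602 check flags; the trailing
# "# nosec" comment suppresses that single finding while leaving the
# check enabled for the rest of the codebase.
result = subprocess.run("echo ok", shell=True,  # nosec
                        capture_output=True, text=True)
print(result.stdout.strip())
```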

Having the tools available is so important to actually fixing problems before the software gets released.

Planet DebianMichal Čihař: Weblate 3.2.1

Weblate 3.2.1 has been released today. It's a bugfix release for 3.2 fixing several minor issues which appeared in the release.

Full list of changes:

  • Document dependency on backports.csv on Python 2.7.
  • Fix running tests under root.
  • Improved error handling in gitexport module.
  • Fixed progress reporting for newly added languages.
  • Correctly report Celery worker errors to Sentry.
  • Fixed creating new translations with Qt Linguist.
  • Fixed occasional fulltext index update failures.
  • Improved validation when creating new components.
  • Added support for cleanup of old suggestions.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on, the code is hosted on Github. If you are curious how it looks, you can try it out on demo server. Weblate is also being used on as official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared, and you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Worse Than FailureCodeSOD: Round Two

John works for a manufacturing company which has accrued a large portfolio of C++ code. Developed over the course of decades, by many people, there’s more than a little legacy cruft and coding horrors mixed in. Frustrated with the ongoing maintenance, and in the interests of “modernization”, John was tasked with converting the legacy C++ into C#.

Which meant he had to read through the legacy C++.

In the section for creating TPS reports, there were two functions, TpsRound and TpsRound2. The code between the two of them was nearly identical: someone had clearly copy/pasted and made minor tweaks.

CString EDITFORM::TpsRound2(double dbIntoConvert)
{
    // This Stub calculates the rounding based
    // upon this company standards as
    // outlined in TPS 101 Conversion Rules and
    // dual dimensioning practices

    CString csHold1,csHold2;
    int  decimal, sign,ChkDigit;
    long OutVal;
    char *buffer2;
    char *outbuff;
    outbuff = "                          ";
    buffer2 = "                     ";

    if (dbIntoConvert == 0)
//		return CString("0.00");

    buffer2 = _fcvt( dbIntoConvert, 7, &decimal, &sign );
    buffer2[decimal] = '.';

    csHold2 = XvertDecValues(dbIntoConvert); 

    if (m_round	== FALSE)
        return csHold2;
    ChkDigit = atoi(csHold2.Mid(2,1));
    OutVal = atol(csHold2.Left(2));

    if (ChkDigit >= 5)

    if (OutVal >= 100)
        buffer2[decimal] = '0';
        buffer2[decimal] = '.';

    int jj=2;  // this value is the ONLY difference to `TpsRound()`
    while (jj < decimal)
    csHold1 = CString(buffer2).Left(decimal);
    return csHold1;
}

At its core, this is just string-mangling its way through some basic rounding operations. Writing your own C++ rounding is less of a WTF than it might seem, simply because C++ didn’t include standard rounding methods for most of its history. The expectation was that you’d implement it yourself, for your specific cases, as there’s no one “right” way to round.
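
The company's actual TPS rounding rules aren't spelled out in the article, but code like this is usually trying to implement round-half-up (round half away from zero) to a fixed number of decimal places. A minimal sketch of that intent, in Python for clarity:

```python
# A minimal sketch (not the company's actual TPS rules, which aren't
# public) of what hand-rolled rounding code like this usually aims for:
# round-half-up to a fixed number of decimal places.
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value, places=2):
    """Round half away from zero: 2.675 -> 2.68 at two places."""
    quantum = Decimal(10) ** -places
    return float(Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP))
```

Going through Decimal (rather than binary floats) is what makes cases like 2.675 round "up" the way a human doing commercial rounding expects.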

This, however, is obviously the wrong way. The code is actually pretty simple, just cluttered with a mix of terrible variable names, loads of string conversion calls, and a thick layer of not understanding what they’re doing.

To add to the confusion, buffer2 holds the results of _fcvt- a method which converts a floating point to a string. csHold2 holds the results of XvertDecValues, which also returns a floating point converted to a string, just… a little differently.

CString EDITFORM::XvertDecValues(double incon1)
{
    char *buffer3;
    char *outbuff;
    int  decimal, sign;
    buffer3 = "                     ";
    outbuff = "00000000000000000000";
    buffer3 = _fcvt(incon1, 7, &decimal, &sign );

    if (incon1 == 0)
//		return CString("0.00");

    int cnt1,cnt2,cnt3;
    cnt1 = 0;
    cnt2 = 0;

        cnt3 = decimal;
        if (cnt3 <= 0)
            while (cnt3 < 0)
            while (cnt1 < decimal)
                outbuff[cnt2] = buffer3[cnt1];
            outbuff[cnt2] = '.';


        while (cnt1 < 15)
            outbuff[cnt2] = buffer3[cnt1];

        outbuff[cnt2] = '\0';

    return CString(outbuff);
}

So, back in TpsRound2, csHold2 and buffer2 both hold a string version of a floating point number, but they both do it differently. They’re both referenced. Note also the check if (OutVal >= 100)- OutVal holds the leftmost two characters of csHold2- so it will never be greater than or equal to 100.
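
That dead branch is easy to demonstrate: parsing the leftmost two characters of a string, as the atol(csHold2.Left(2)) call does, can never produce a value of 100 or more. A quick Python check:

```python
# A quick demonstration of the dead branch: atol(csHold2.Left(2)) parses
# at most two characters, so the result can never reach 100.
def left2_as_int(s):
    """Mimic atol on the first two characters of a numeric string."""
    head = s[:2]
    return int(head) if head.isdigit() else 0

assert max(left2_as_int(str(n)) for n in range(1, 100000)) == 99
```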

John’s orders were to do a 1-to-1 port of functionality. “Get it working in C#, then we can refactor.” So John did. And once it was working in C#, he threw all this code away and replaced it with calls to C#’s built-in rounding and string-formatting methods, which were perfectly fine for their actual problems.
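
The article doesn't show John's final C# (which would presumably use Math.Round and ToString), but the "just use the library" replacement for the whole TpsRound/XvertDecValues tangle is essentially one line. A Python sketch of the same idea:

```python
# What the entire tangle boils down to with built-in formatting.
# Note: float formatting rounds half-to-even on the binary value, so
# exact half-cases can differ from hand-rolled "commercial" rounding.
def tps_round(value, places=2):
    return f"{value:.{places}f}"
```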

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet Linux AustraliaSimon Lyall: Audiobooks – September 2018

Lone Star: A History of Texas and the Texans by T. R. Fehrenbach

About 80% of the 40 hour book covers the period 1820-1880. Huge amounts of detail during then but skips over the rest quickly. Great stories though. 8/10

That’s Not English – Britishisms, Americanisms, and What Our English Says About Us by Erin Moore

A series of short chapters (usually one per word) about how the English language is used differently in England from the US. Fun light read. 7/10

The Complacent Class: The Self-Defeating Quest for the American Dream by Tyler Cowen

How American culture (and I’d extend that to countries like NZ) has stopped innovating and gone the safe route in most areas. Main thesis is that pressure is building up and things may break hard. Interesting 8/10

A History of Britain, Volume 2 : The British Wars 1603 – 1776 by Simon Schama

Covering the Civil War, Glorious Revolution and bits of the early empire and American revolution. A nice overview. 7/10

I Find Your Lack of Faith Disturbing: Star Wars and the Triumph of Geek Culture by A. D. Jameson

A personal account of the author’s journey through Geekdom (mainly of the Sci-Fi sort) mixed in with a bit of analysis of how the material is deeper than critics usually credit. 7/10


Planet DebianLars Wirzenius: New job: WMF release engineering

I've started my new job. I now work in the release engineering team at Wikimedia, the organisation that runs sites such as Wikipedia. We help put new versions of the software that runs the sites into production. My role is to help make that process more automated and frequent.

Krebs on SecurityNaming & Shaming Web Polluters: Xiongmai

What do we do with a company that regularly pumps metric tons of virtual toxic sludge onto the Internet and yet refuses to clean up their act? If ever there were a technology giant that deserved to be named and shamed for polluting the Web, it is Xiongmai — a Chinese maker of electronic parts that power a huge percentage of cheap digital video recorders (DVRs) and Internet-connected security cameras.

A rendering of Xiongmai’s center in Hangzhou, China. Source:

In late 2016, the world witnessed the sheer disruptive power of Mirai, a powerful botnet strain fueled by Internet of Things (IoT) devices like DVRs and IP cameras that were put online with factory-default passwords and other poor security settings.

Security experts soon discovered that a majority of Mirai-infected devices were chiefly composed of components made by Xiongmai (a.k.a. Hangzhou Xiongmai Technology Co., Ltd.) and a handful of other Chinese tech firms that seemed to have a history of placing product market share and price above security.

Since then, two of those firms — Huawei and Dahua — have taken steps to increase the security of their IoT products out-of-the-box. But Xiongmai — despite repeated warnings from researchers about deep-seated vulnerabilities in its hardware — has continued to ignore such warnings and to ship massively insecure hardware and software for use in products that are white-labeled and sold by more than 100 third-party vendors.

On Tuesday, Austrian security firm SEC Consult released the results of extensive research into multiple, lingering and serious security holes in Xiongmai’s hardware.

SEC Consult said it began the process of working with Xiongmai on these problems back in March 2018, but that it finally published its research after it became clear that Xiongmai wasn’t going to address any of the problems.

“Although Xiongmai had seven months notice, they have not fixed any of the issues,” the researchers wrote in a blog post published today. “The conversation with them over the past months has shown that security is just not a priority to them at all.”


A core part of the problem is the peer-to-peer (P2P) communications component called “XMEye” that ships with all Xiongmai devices and automatically connects them to a cloud network run by Xiongmai. The P2P feature is designed so that consumers can access their DVRs or security cameras remotely from anywhere in the world without having to configure anything.

The various business lines of Xiongmai. Source:

To access a Xiongmai device via the P2P network, one must know the Unique ID (UID) assigned to each device. The UID is essentially derived in an easily reproducible way using the device’s built-in MAC address (a string of numbers and letters, such as 68ab8124db83c8db).

Electronics firms are assigned ranges of MAC addresses that they may use, but SEC Consult discovered that Xiongmai for some reason actually uses MAC address ranges assigned to a number of other companies, including tech giant Cisco Systems, German printing press maker Koenig & Bauer AG, and Swiss chemical analysis firm Metrohm AG.

SEC Consult learned that it was trivial to find Xiongmai devices simply by computing all possible ranges of UIDs for each range of MAC addresses, and then scanning Xiongmai’s public cloud for XMEye-enabled devices. Based on scanning just two percent of the available ranges, SEC Consult conservatively estimates there are around 9 million Xiongmai P2P devices online.
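
The exact XMEye UID derivation is not public, but the enumeration attack works against any deterministic function of the MAC address: a single 3-byte OUI prefix leaves only 2^24 (about 16.7 million) possible device MACs to walk through. A hypothetical sketch of that enumeration (the OUI below is illustrative, taken from the article's example string):

```python
# Hypothetical sketch of why MAC-derived UIDs are enumerable. The real
# XMEye derivation is not public; the point is only that any deterministic
# function of a MAC inherits the tiny per-OUI keyspace of 2**24 values.
def macs_in_oui(oui_hex, start=0, count=4):
    """Yield candidate 12-hex-digit MACs under a 3-byte OUI prefix."""
    for n in range(start, start + count):
        yield oui_hex + format(n, "06x")

candidates = list(macs_in_oui("68ab81", start=0x24db83, count=2))
```

An attacker would feed each candidate through the UID derivation and probe Xiongmai's public cloud for a response, which is exactly the scan SEC Consult describes.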

[For the record, KrebsOnSecurity has long advised buyers of IoT devices to avoid those that advertise P2P capabilities for just this reason. The Xiongmai debacle is yet another example of why this remains solid advice.]


While one still needs to provide a username and password to remotely access XMEye devices via this method, SEC Consult notes that the default password of the all-powerful administrative user (username “admin”) is blank (i.e., no password).

The admin account can be used to do anything to the device, such as changing its settings or uploading software — including malware like Mirai. And because users are not required to set a secure password in the initial setup phase, it is likely that a large number of devices are accessible via these default credentials.

The raw, unbranded electronic components of an IP camera produced by Xiongmai.

Even if a customer has changed the default admin password, SEC Consult discovered there is an undocumented user with the name “default,” whose password is “tluafed” (default in reverse). While this user account can’t change system settings, it is still able to view any video streams.

Normally, hardware devices are secured against unauthorized software updates by requiring that any new software pushed to the devices be digitally signed with a secret cryptographic key that is held only by the hardware or software maker. However, XMEye-enabled devices have no such protections.

In fact, the researchers found it was trivial to set up a system that mimics the XMEye cloud and push malicious firmware updates to any device. Worse still, unlike with the Mirai malware — which gets permanently wiped from memory when an infected device powers off or is rebooted — the update method devised by SEC Consult makes it so that any software uploaded survives a reboot.
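
The missing protection is firmware signature verification: a device should only accept an update whose signature checks out against a vendor-held key. A minimal sketch of that check (HMAC is used here for brevity; a real device would verify an asymmetric signature so the on-device verification key cannot be used to forge updates, and the key below is a placeholder):

```python
# Sketch of the protection XMEye-enabled devices lack: accept a firmware
# image only if its signature verifies. HMAC stands in for a real
# asymmetric signature scheme; VENDOR_KEY is illustrative only.
import hmac, hashlib

VENDOR_KEY = b"placeholder-vendor-secret"

def sign_firmware(image: bytes) -> str:
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).hexdigest()

def accept_update(image: bytes, signature: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign_firmware(image), signature)
```

Without any such check, anything that can impersonate the XMEye cloud can push arbitrary code, which is precisely what SEC Consult demonstrated.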


In the wake of the Mirai botnet’s emergence in 2016 and the subsequent record denial-of-service attacks that brought down chunks of the Internet at a time (including this Web site and my DDoS protection provider at times), multiple security firms said Xiongmai’s insecure products were a huge contributor to the problem.

Among the company’s strongest critics was New York City-based security firm Flashpoint, which pointed out that even basic security features built into Xiongmai’s hardware had completely failed at basic tasks.

For example, Flashpoint’s analysts discovered that the login page for a camera or DVR running Xiongmai hardware and software could be bypassed just by navigating to a page called “DVR.htm” prior to login.

Flashpoint’s researchers also found that any changes to passwords for various user accounts accessible via the Web administration page for Xiongmai products did nothing to change passwords for accounts that were hard-coded into these devices and accessible only via more obscure, command-line communications interfaces like Telnet and SSH.

Not long after Xiongmai was publicly shamed for failing to fix obvious security weaknesses that helped contribute to the spread of Mirai and related IoT botnets, Xiongmai lashed out at multiple security firms and journalists, promising to sue its critics for defamation (it never followed through on that threat, as far as I can tell).

At the same time, Xiongmai promised that it would be issuing a product recall on millions of devices to ensure they were not deployed with insecure settings and software. But according to Flashpoint’s Zach Wikholm, Xiongmai never followed through with the recall, either. Rather, it was all a way for the company to save face publicly and with its business partners.

“This company said they were going to do a product recall, but it looks like they never got around to it,” Wikholm said. “They were just trying to cover up and keep moving.”

Wikholm said Flashpoint discovered a number of additional glaring vulnerabilities in Xiongmai’s hardware and software that left them wide open to takeover by malicious hackers, and that several of those weaknesses still exist in the company’s core product line.

“We could have kept releasing our findings, but it just got really difficult to keep doing that because Xiongmai wouldn’t fix them and it would only make it easier for people to compromise these devices,” Wikholm said.

The Flashpoint analyst said he believes SEC Consult’s estimates of the number of vulnerable Xiongmai devices to be extremely conservative.

“Nine million devices sounds quite low because these guys hold 25 percent of the world’s DVR market,” to say nothing of the company’s share in the market for cheapo IP cameras, Wikholm said.

What’s more, he said, Xiongmai has turned a deaf ear to reports about dangerous security holes across its product lines principally because it doesn’t answer directly to customers who purchase the gear.

“The only reason they’ve maintained this level of [not caring] is they’ve been in this market for a long time and established very strong regional sales channels to dozens of third-party companies,” that ultimately rebrand Xiongmai’s products as their own, he said.

Also, the typical consumer of cheap electronics powered by Xiongmai’s kit doesn’t really care how easily these devices can be commandeered by cybercriminals, Wikholm observed.

“They just want a security system around their house or business that doesn’t cost an arm and a leg, and Xiongmai is by far the biggest player in that space,” he said. “Most companies at least have some sort of incentive to make things better when faced with public pressure. But they don’t seem to have that drive.”


SEC Consult concluded its technical advisory about the security flaws by saying Xiongmai “does not provide any mitigations and hence it is recommended not to use any products associated with the XMeye P2P Cloud until all of the identified security issues have been fixed and a thorough security analysis has been performed by professionals.”

While this may sound easy enough, acting on that advice is difficult in practice because very few devices made with Xiongmai’s deeply flawed hardware and software advertise that fact on the label or product name. Rather, the components that Xiongmai makes are sold downstream to vendors who then use it in their own products and slap on a label with their own brand name.

How many vendors? It’s difficult to say for sure, but a search on the term XMEye via the e-commerce sites where Xiongmai’s white-labeled products typically are sold (Amazon and Walmart, among others) reveals more than 100 companies that you’ve probably never heard of which brand Xiongmai’s hardware and software as their own. That list is available here (PDF) and is also pasted at the conclusion of this post for the benefit of search engines.

SEC Consult’s technical advisory about their findings lists a number of indicators that system and network administrators can use to quickly determine whether any of these vulnerable P2P Xiongmai devices happen to be on your network.

For end users concerned about this, one way of fingerprinting Xiongmai devices is to search online merchants for the brand on the side of your device together with the term “XMEye.” If you get a hit, chances are excellent you’ve got a device built on Xiongmai’s technology.

Another option: open a browser and navigate to the local Internet address of your device. If you have one of these devices on your local network, the login page should look like the one below:

The administrative login screen for IoT devices powered by Xiongmai’s software and hardware.

Another giveaway on virtually all Xiongmai devices is that pasting “http://IP/err.htm” into a browser address bar should display the following error message (where IP = the local IP address of the device):

Ironically, even the error page for Xiongmai devices contains errors.
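
The err.htm check above is easy to script across a LAN. The URL construction below is pure; the actual probe (commented out) is shown only as a hypothetical usage, since it needs a device on your local network:

```python
# Sketch of the err.htm fingerprint described above. Only the URL
# building is exercised here; the probe itself is left as a comment.
def err_page_url(ip: str) -> str:
    return f"http://{ip}/err.htm"

# import urllib.request
# body = urllib.request.urlopen(err_page_url("192.168.1.50"), timeout=5).read()
# A Xiongmai-based device typically answers this path with its error page.
```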

According to SEC Consult, Xiongmai’s electronics and hardware make up the guts of IP cameras and DVRs marketed and sold under the company names below.

What’s most remarkable about many of the companies listed below is that about half of them don’t even have their own Web sites, and instead simply rely on direct-to-consumer product listings at e-commerce outlets. Among those that do sell Xiongmai’s products directly via the Web, very few of them seem to even offer secure (https://) Web sites.

SEC Consult’s blog post about their findings has more technical details, as does the security advisory they released today.

In response to questions about the SEC Consult reports, Xiongmai said it is now using a new encryption method to generate the UID for its XMEye devices, and will no longer be relying on MAC addresses.

Xiongmai also said users will be asked to change a device’s default username and password when they use the XMEye Internet Explorer plugin or mobile app. The company also said it had removed the “default” account in firmware versions after August 2018. It also disputed SEC Consult’s claims that it doesn’t encrypt traffic handled by the devices.

In response to criticism that any settings changed by the user in the Web interface will not affect user accounts that are only accessible via telnet, Xiongmai said it was getting ready to delete telnet completely from its devices “soon.”

KrebsOnSecurity is unable to validate the veracity of Xiongmai’s claims, but it should be noted that this company has made a number of such claims and promises in the past that never materialized.

Johannes Greil, head of SEC Consult Vulnerability Lab, said as far as he could tell none of the proclaimed fixes have materialized.

“We are looking forward for Xiongmai to fix the vulnerabilities for new devices as well as all devices in the field,” Greil said.

Here’s the current list of companies that white label Xiongmai’s insecure products, according to SEC Consult:

Shell film
Unique Vision
WNK Security Technology

Update, 3:44 p.m.: Updated story to include Xiongmai’s statement.


Planet DebianBenjamin Mako Hill: What we lose when we move from social to market exchange

Couchsurfing and Airbnb are websites that connect people with an extra guest room or couch with random strangers on the Internet who are looking for a place to stay. Although Couchsurfing predates Airbnb by about five years, the two sites are designed to help people do the same basic thing and they work in extremely similar ways. They differ, however, in one crucial respect. On Couchsurfing, the exchange of money in return for hosting is explicitly banned. In other words, Couchsurfing only supports the social exchange of hospitality. On Airbnb, users must use money: the website is a market on which people can buy and sell hospitality.

Comparison of yearly sign-ups of trusted hosts on Couchsurfing and Airbnb. Hosts are “trusted” when they have any form of references or verification on Couchsurfing and at least one review on Airbnb.

The figure above compares the number of people with at least some trust or verification on both  Couchsurfing and Airbnb based on when each user signed up. The picture, as I have argued elsewhere, reflects a broader pattern that has occurred on the web over the last 15 years. Increasingly, social-based systems of production and exchange, many like Couchsurfing created during the first decade of the Internet boom, are being supplanted and eclipsed by similar market-based players like Airbnb.

In a paper led by Max Klein that was recently published and will be presented at the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW) which will be held in Jersey City in early November 2018, we sought to provide a window into what this change means and what might be at stake. At the core of our research were a set of interviews we conducted with “dual-users” (i.e. users experienced on both Couchsurfing and Airbnb). Analyses of these interviews pointed to three major differences, which we explored quantitatively from public data on the two sites.

First, we found that users felt that hosting on Airbnb requires higher-quality services than Couchsurfing. For example, we found that people who at some point only hosted on Couchsurfing often said that they did not host on Airbnb because they felt that their homes weren’t of sufficient quality. One participant explained that:

“I always wanted to host on Airbnb but I didn’t actually have a bedroom that I felt would be sufficient for guests who are paying for it.”

Another interviewee said:

“If I were to be paying for it, I’d expect a nice stay. This is why I never Airbnb-hosted before, because recently I couldn’t enable that [kind of hosting].”

We conducted a quantitative analysis of rates of Airbnb and Couchsurfing hosting in different cities in the United States and found that median home prices are positively related to the number of per-capita Airbnb hosts and negatively related to the number of Couchsurfing hosts. Our exploratory models predicted that for each $100,000 increase in median house price in a city, there will be about 43.4 more Airbnb hosts per 100,000 citizens, and 3.8 fewer hosts on Couchsurfing.
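
Since the exploratory model is linear in median home price, with the quoted effects expressed as slopes per $100,000, the prediction arithmetic is just a scaled multiply (coefficients taken from the figures quoted above):

```python
# Prediction arithmetic for the linear model described above.
AIRBNB_SLOPE = 43.4         # hosts per 100,000 citizens, per $100k price
COUCHSURFING_SLOPE = -3.8

def predicted_host_change(price_delta_usd, slope_per_100k):
    return slope_per_100k * (price_delta_usd / 100_000)
```

For instance, a $250,000 rise in median house price predicts roughly 108.5 more Airbnb hosts and 9.5 fewer Couchsurfing hosts per 100,000 citizens.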

A second major theme we identified was that, while Couchsurfing emphasizes people, Airbnb places more emphasis on places. One of our participants explained:

“People who go on Airbnb, they are looking for a specific goal, a specific service, expecting the place is going to be clean […] the water isn’t leaking from the sink. I know people who do Couchsurfing even though they could definitely afford to use Airbnb every time they travel, because they want that human experience.”

In a follow-up quantitative analysis we conducted of the profile text from hosts on the two websites with a commonly-used system for text analysis called LIWC, we found that, compared to Couchsurfing, a lower proportion of words in Airbnb profiles were classified as being about people while a larger proportion of words were classified as being about places.
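
A toy version of that computation makes the method concrete. The actual LIWC lexicons are proprietary; the two word lists and the sample profile below are made up purely to illustrate the "share of words per category" measure:

```python
# Toy people-vs-places comparison in the spirit of the LIWC analysis.
# Word lists and sample text are illustrative only.
PEOPLE_WORDS = {"friend", "guest", "people", "family", "host"}
PLACE_WORDS = {"apartment", "room", "kitchen", "neighborhood", "beach"}

def category_share(text, lexicon):
    """Fraction of whitespace-separated words that fall in the lexicon."""
    words = text.lower().split()
    return sum(w in lexicon for w in words) / len(words)

airbnb_style = "sunny room near the beach perfect for a guest"
```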

Finally, our research suggested that although hosts are the powerful parties in exchange on Couchsurfing, social power shifts from hosts to guests on Airbnb. Reflecting a much broader theme in our interviews, one of our participants expressed this concisely, saying:

“On Airbnb the host is trying to attract the guest, whereas on Couchsurfing, it works the other way round. It’s the guest that has to make an effort for the host to accept them.”

Previous research on Airbnb has shown that guests tend to give their hosts lower ratings than vice versa. Sociologists have suggested that this asymmetry in ratings will tend to reflect the direction of underlying social power balances.

Average sentiment score of reviews in Airbnb and Couchsurfing, separated by direction (guest-to-host or host-to-guest). Error bars show the 95% confidence interval.

We both replicated this finding from previous work and found that, as suggested in our interviews, the relationship is reversed on Couchsurfing. As shown in the figure above, Airbnb guests typically give a less positive review to their host than vice versa, while on Couchsurfing guests typically give a more positive review to their host.
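
The figure's per-direction summary is a mean with a normal-approximation 95% confidence interval. A sketch of that computation (the sentiment scores below are invented for illustration):

```python
# Mean review sentiment with a normal-approximation 95% CI, as in the
# figure above. Scores are made up for illustration.
from statistics import mean, stdev
from math import sqrt

def mean_ci95(scores):
    m, s, n = mean(scores), stdev(scores), len(scores)
    half_width = 1.96 * s / sqrt(n)
    return m, (m - half_width, m + half_width)

guest_to_host = [0.6, 0.7, 0.5, 0.8, 0.6]
m, (lo, hi) = mean_ci95(guest_to_host)
```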

As Internet-based hospitality shifts from social systems to the market, we hope that our paper can point to some of what is changing and some of what is lost. For example, our first result suggests that less wealthy participants may be cut out by market-based platforms. Our second theme suggests a shift toward less human-focused modes of interaction brought on by increased “marketization.” We see the third theme as providing somewhat of a silver lining, in that shifting power toward guests was seen by some of our participants as a positive change in terms of safety and trust: travelers in unfamiliar places are often vulnerable, and shifting power toward guests can be helpful.

Although our study is only of Couchsurfing and Airbnb, we believe that the shift away from social exchange and toward markets has broad implications across the sharing economy. We end our paper by speculating a little about the generalizability of our results. I have recently spoken at much more length about the underlying dynamics driving the shift we describe in  my recent LibrePlanet keynote address.

More details are available in our paper which we have made available as a preprint on our website. The final version is behind a paywall in the ACM digital library.

This blog post, and paper that it describes, is a collaborative project by Maximilian Klein, Jinhao Zhao, Jiajun Ni, Isaac Johnson, Benjamin Mako Hill, and Haiyi Zhu. Versions of this blog post were posted on several of our personal and institutional websites. Support came from GroupLens Research at the University of Minnesota and the Department of Communication at the University of Washington.

Planet DebianHolger Levsen: 20181008-lts-201809

My LTS work in September

In September I only managed to spend 2.5h working on jessie LTS on:

  • finishing work on patches for samba, but then failed to release the DLA for it until now. Expect an upload soon. Sorry for the delay, various RL issues took their toll.

CryptogramThe US National Cyber Strategy

Last month, the White House released the "National Cyber Strategy of the United States of America." I generally don't have much to say about these sorts of documents. They're filled with broad generalities. Who can argue with:

Defend the homeland by protecting networks, systems, functions, and data;

Promote American prosperity by nurturing a secure, thriving digital economy and fostering strong domestic innovation;

Preserve peace and security by strengthening the ability of the United States in concert with allies and partners ­ to deter and, if necessary, punish those who use cyber tools for malicious purposes; and

Expand American influence abroad to extend the key tenets of an open, interoperable, reliable, and secure Internet.

The devil is in the details, of course. And the strategy includes no details.

In a New York Times op-ed, Josephine Wolff argues that this new strategy, together with the more-detailed Department of Defense cyber strategy and the classified National Security Presidential Memorandum 13, represent a dangerous shift of US cybersecurity posture from defensive to offensive:

...the National Cyber Strategy represents an abrupt and reckless shift in how the United States government engages with adversaries online. Instead of continuing to focus on strengthening defensive technologies and minimizing the impact of security breaches, the Trump administration plans to ramp up offensive cyberoperations. The new goal: deter adversaries through pre-emptive cyberattacks and make other nations fear our retaliatory powers.


The Trump administration's shift to an offensive approach is designed to escalate cyber conflicts, and that escalation could be dangerous. Not only will it detract resources and attention from the more pressing issues of defense and risk management, but it will also encourage the government to act recklessly in directing cyberattacks at targets before they can be certain of who those targets are and what they are doing.


There is no evidence that pre-emptive cyberattacks will serve as effective deterrents to our adversaries in cyberspace. In fact, every time a country has initiated an unprompted cyberattack, it has invariably led to more conflict and has encouraged retaliatory breaches rather than deterring them. Nearly every major publicly known online intrusion that Russia or North Korea has perpetrated against the United States has had significant and unpleasant consequences.

Wolff is right; this is reckless. In Click Here to Kill Everybody, I argue for a "defense dominant" strategy: that while offense is essential for defense, when the two are in conflict, it should take a back seat to defense. It's more complicated than that, of course, and I devote a whole chapter to its implications. But as computers and the Internet become more critical to our lives and society, keeping them secure becomes more important than using them to attack others.

CryptogramDefeating the "Deal or No Deal" Arcade Game

Two teenagers figured out how to beat the "Deal or No Deal" arcade game by filming the computer animation and then slowing it down enough to determine where the big prize was hidden.

CryptogramConspiracy Theories around the "Presidential Alert"

Noted conspiracy theorist John McAfee tweeted:

The "Presidential alerts": they are capable of accessing the E911 chip in your phones -- giving them full access to your location, microphone, camera and every function of your phone. This not a rant, this is from me, still one of the leading cybersecurity experts. Wake up people!

This is, of course, ridiculous. I don't even know what an "E911 chip" is. And -- honestly -- if the NSA wanted in your phone, they would be a lot more subtle than this.

RT has picked up the story, though.

(If they just called it a "FEMA Alert," there would be a lot less stress about the whole thing.)

CryptogramChinese Supply Chain Hardware Attack

Bloomberg is reporting about a Chinese espionage operation involving inserting a tiny chip into computer products made in China.

I've written about this threat more generally (alternate link). Supply-chain security is an insurmountably hard problem. Our IT industry is inexorably international, and anyone involved in the process can subvert the security of the end product. No one wants to even think about a US-only anything; prices would multiply many times over.

We cannot trust anyone, yet we have no choice but to trust everyone. No one is ready for the costs that solving this would entail.

EDITED TO ADD: Apple, Amazon, and others are denying that this attack is real. Stay tuned for more information.

EDITED TO ADD (9/6): TheGrugq comments. Bottom line is that we still don't know. I think that precisely exemplifies the greater problem.

EDITED TO ADD (10/7): Both the US Department of Homeland Security and the UK National Cyber Security Centre claim to believe the tech companies. Bloomberg is standing by its story. Nicholas Weaver writes that the story is plausible.

Planet DebianMarkus Koschany: My Free Software Activities in September 2018

Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Yavor Doganov continued his heroics in September and completed the port to GTK 3 of teg, a risk-like game. (#907834) Then he went on to fix gnome-breakout.
  • I packaged a new upstream release of freesweep, a minesweeper game, which fixed some minor bugs but unfortunately not #907750.
  • I spent most of the time this month on packaging a newer upstream version of unknown-horizons, a strategy game similar to the old Anno games. After also upgrading the fife engine, fifechan and NMUing python-enet, the game is up-to-date again.
  • More new upstream versions this month: atomix, springlobby, pygame-sdl2, and renpy.
  • I updated widelands to fix an incomplete appdata file (#857644) and to make the desktop icon visible again.
  • I enabled gconf support in morris (#908611) again because gconf will be supported in Buster.
  • Drascula, a classic adventure game, refused to start because of changes to the ScummVM engine. It is working now. (#908864)
  • In other news I backported freeorion to Stretch and sponsored a new version of the runescape wrapper for Carlos Donizete Froes.

Debian Java

  • Only late in September I found the time to work on JavaFX but by then Emmanuel Bourg had already done most of the work and upgraded OpenJFX to version 11. We now have a couple of broken packages (again) because JavaFX is no longer tied to the JRE but is designed more like a library. Since most projects still cling to JavaFX 8 we have to fix several build systems by accommodating those new circumstances.  Surely there will be more to report next month.
  • A Ubuntu user reported that importing furniture libraries was no longer possible in sweethome3d (LP: #1773532) when it is run with OpenJDK 10. Although upstream is more interested in supporting Java 6, another user found a fix which I could apply too.
  • New upstream versions this month: jboss-modules, libtwelvemonkeys-java, robocode, apktool, activemq (RC #907688), cup and jflex. The cup/jflex update required a careful order of uploads because both packages depend on each other. After I confirmed that all reverse-dependencies worked as expected, both parsers are up-to-date again.
  • I submitted two point updates for dom4j and tomcat-native to fix several security issues in Stretch.


  • Firefox 60 landed in Stretch which broke all xul-* based browser plugins. I thought it made sense to backport at least two popular addons, ublock-origin and https-everywhere, to Stretch.
  • I also prepared another security update for discount (DSA-4293-1) and uploaded  libx11 to Stretch to fix three open CVE.

Debian LTS

This was my thirty-first month as a paid contributor and I have been paid to work 29.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 24.09.2018 until 30.09.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in dom4j, otrs2, strongswan, python2.7, udisks2, asterisk, php-horde, php-horde-core, php-horde-kronolith, binutils, jasperreports, monitoring-plugins, percona-xtrabackup, poppler and jekyll.
  • DLA-1499-1. Issued a security update for discount fixing 4 CVE.
  • DLA-1504-1. Issued a security update for ghostscript fixing 14 CVE.
  • DLA-1506-1. Announced a security update for intel-microcode.
  • DLA-1507-1. Issued a security update for libapache2-mod-perl2 fixing 1 CVE.
  • DLA-1510-1. Issued a security update for glusterfs fixing 11 CVE.
  • DLA-1511-1. Issued an update for reportbug.
  • DLA-1513-1. Issued a security update for openafs fixing 3 CVE.
  • DLA-1517-1. Issued a security update for dom4j fixing 1 CVE.
  • DLA-1523-1. Issued a security update for asterisk fixing 1 CVE.
  • DLA-1527-1 and DLA-1527-2. Issued a security update for ghostscript fixing 2 CVE and corrected an incomplete fix for CVE-2018-16543 later.
  • I reviewed and uploaded strongswan and otrs2 for Abhijith PA.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 7 "Wheezy". This was my fourth month and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 10.09.2018 until 16.09.2018 and I triaged CVE in samba, activemq, chromium-browser, curl, dom4j, ghostscript, firefox-esr, elfutils, gitolite, glib2.0, glusterfs, imagemagick, lcms2, lcms, jhead, libpodofo, libtasn1-3, mgetty, opensc, openafs, okular, php5, smarty3, radare, sympa, wireshark, zsh, zziplib and intel-microcode.
  • ELA-35-1. Issued a security update for samba fixing 1 CVE.
  • ELA-36-1. Issued a security update for curl fixing 1 CVE.
  • ELA-37-2. Issued a regression update for openssh.
  • ELA-39-1. Issued a security update for intel-microcode addressing 6 CVE.
  • ELA-42-1. Issued a security update for libapache2-mod-perl2 fixing 1 CVE.
  • ELA-45-1. Issued a security update for dom4j fixing 1 CVE.
  • I started to work on a security update for the Linux kernel which will be released shortly.

Thanks for reading and see you next time.

Worse Than Failure: CodeSOD: Tern The Bool Around

Some say that the only reason I like ternary code snippets is that it gives me an opportunity to make the title a “tern” pun.

They’re not wrong.

I’m actually a defender of ternaries. Just last week, I wrote this line of C++ code:

ControllerState response = allMotorsIdle() ? READY : NOT_READY;

That's a good use of ternaries, in my opinion. It's a clear translation: if all motors are idle, we're ready; otherwise we're not, so keep waiting. Simple, easy to read, and it turns what really is one idea (if we're idle, we're ready) into one line of code. Ternaries can make code more clear.

Which brings us to this anonymous submission.

disableEstimateSent: function () {
    let surveyCompleted = (modalControl.survey[0].Survey_Complete == undefined || modalControl.survey[0].Survey_Complete == false) ? false : true;
    return !surveyCompleted;
}

This is the perfect storm of bad choices. First, we have a long, complex expression in our ternary. I mean, not all that long, it’s only got two clauses, but boy howdy are we digging down the object graph. For the same object. Twice. An object which is either false or undefined; and in JavaScript, undefined is falsy, just like false. So if Survey_Complete is falsy, we store false in surveyCompleted… and then return !surveyCompleted.

Extra variables, double negatives, ugly ternaries. This is a work of art.

Our anonymous submitter of course went all Banksy and shredded this work of art and replaced it with a more prosaic return !modalControl.survey[0].Survey_Complete.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet Debian: Norbert Preining: TeX Live Database as Graph Database

For a presentation at the Neo4j User Meeting in Tokyo I have converted the TeX Live Database into a Graph Database and represented dependencies between all kind of packages as well as files and their respective packages as nodes and relations.

Update 20181010: I have worked out the first step mentioned in further work and got rid of uuids completely and use package names/revisions and file names as identifier. I also added a new node type for TLPDB and renamed the relation between packages and files from contains to includes. The former now refers to the relation between TLPDB and packages. The text and code has been updated to reflect this.

Before going into the details of how I represented the TeX Live Database tlpdb as a graph, let us recall a few concepts of how packages are managed and arranged in TeX Live. Each package in TeX Live has a category. The currently available categories are Package, ConTeXt, Collection, Scheme, and TLCore. They fall into four groups:

  • Basic macro packages These are the meat of TeX Live, the actual stuff our upstream authors are writing. Typically LaTeX or font packages, they are either of category Package or ConTeXt.
  • Packages for binaries These are packages that ship “binary” files – which means files that are installed into the bin directory and are executable. Some of these are actually scripts and not binaries, though.
  • Collections A Collection contains basic and binary packages, and might depend on other collections. We guarantee that the set of collections is a partition of the available files, which allows distributors like Debian etc to make sure that no file is included two times in different packages.
  • Schemata These are the top-level groups that are presented to the user during installation. They depend on collections and other packages, and try to provide a meaningful selection.

The TeX Live Database itself is modeled after the Debian package database, and contains stanzas for each package. A typical example for a package would be (slightly abbreviated):

category Package
revision 15878
catalogue one2many
shortdesc Generalising mathematical index sets
longdesc In the discrete branches of mathematics and the computer
longdesc one-line change.
docfiles size=98
 texmf-dist/doc/latex/12many/12many.pdf details="Package documentation"
 texmf-dist/doc/latex/12many/README details="Readme"
srcfiles size=6
runfiles size=1
catalogue-ctan /macros/latex/contrib/12many
catalogue-date 2016-06-24 19:18:15 +0200
catalogue-license lppl
catalogue-topics maths
catalogue-version 0.3

A typical example for a collection would be:

name collection-langjapanese
category Collection
revision 48752
shortdesc Japanese
longdesc Support for Japanese; additional packages in
longdesc collection-langcjk.
depend collection-langcjk
depend ascmac
depend babel-japanese

and a typical example of a schema would be:

name scheme-medium
category Scheme
revision 44177
shortdesc medium scheme (small + more packages and languages)
longdesc This is the medium TeX Live collection: it contains plain TeX,
longdesc LaTeX, many recommended packages, and support for most European
longdesc languages.
depend collection-basic
depend collection-binextra
depend collection-context
depend collection-fontsrecommended

In total, we are currently at the following values: 9 Schemata, 41 Collections, 6718 Packages (Package, TLCore, ConTeXt), and about 181839 files.

Representation in Neo4j

Representation as a graph was relatively straight-forward: we decided on separate nodes for each package and each file, and relations for dependency (depend in the above examples), inclusion (files being included in a package), and containment (a package being contained in a certain tlpdb revision).

We used a simple Perl script tl-dump-neo4j which uses the TeX Live provided Perl modules to read and parse the TeX Live Database and to generate CSV files for each node type and each relation type. These CSV files were then imported into a Neo4j database with neo4j-import. For each node type one CSV file was generated with three fields: a UUID consisting of the name and the revision separated by a colon, the name of the package, and the revision. Example of the file node-Package.csv containing the Packages:


For the files contained in the database I use the file name as identifier, thus the respective csv only contains one field, the file name (enclosed in quotes to make sure that spaces are not mistreated).

There is a node type TLPDB with the revision as its only identifier, carrying the version of the tlpdb used.

The three relations (depends, contains, and includes) then used the assigned UUIDs to define the relation: for packages it is the “name:revision”, for files the filename. The start of the edge-depends.csv file is:


Only for the includes relation we added an additional tag giving the type of file (run/bin/doc/src according to the group the file is in the tlpdb). The start of edge-includes.csv is given below:


The last relation is contains which sets up connections between tlpdb revisions and the contained packages. The start of edge-contains.csv is given below:


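Given such parsed stanzas, the CSV layout described above (the “name:revision” identifier plus name and revision for nodes, and identifier pairs for depends) can be sketched in Python. Header rows required by neo4j-import are omitted here, and the revision values in the example below are illustrative:

```python
import csv

def write_node_csv(packages, out):
    # One row per package: "name:revision" identifier, name, revision.
    w = csv.writer(out)
    for p in packages:
        w.writerow([f'{p["name"]}:{p["revision"]}', p["name"], p["revision"]])

def write_depends_csv(packages, by_name, out):
    # One row per depend relation; both endpoints use the "name:revision" id.
    w = csv.writer(out)
    for p in packages:
        for dep in p.get("depend", []):
            t = by_name[dep]
            w.writerow([f'{p["name"]}:{p["revision"]}',
                        f'{t["name"]}:{t["revision"]}'])
```

This mirrors the three-field node layout and the two-field depends layout described above; the includes relation would additionally carry the run/bin/doc/src type as an extra column.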
With this in place a simple call to neo4j-import produced a ready-to-go Neo4j Database:

$ ls
edge-contains.csv    node-ConTeXt.csv  node-TLCore.csv
edge-depends.csv     node-Files.csv    node-TLPDB.csv
edge-includes.csv    node-Package.csv
node-Collection.csv  node-Scheme.csv
$ neo4j-import --into ../graphdb \
   --nodes:TLPDB node-TLPDB.csv \
   --nodes:Collection node-Collection.csv \
   --nodes:ConTeXt node-ConTeXt.csv \
   --nodes:Files node-Files.csv \
   --nodes:Package node-Package.csv \
   --nodes:Scheme node-Scheme.csv \
   --nodes:TLCore node-TLCore.csv \
   --relationships:contains edge-contains.csv \
   --relationships:includes edge-includes.csv \
   --relationships:depends edge-depends.csv
IMPORT DONE in 2s 93ms. 
  168129 nodes
  172280 relationships
  175107 properties
Peak memory usage: 1.03 GB

Sample queries

Return all schemata:

match (s:Scheme) return s;

Return all dependencies from a schema to something other than a collection:

match p = (s:Scheme) -[:depends]-> (q)
  where NOT 'Collection' IN LABELS(q)
  return p;

Here we use LABELS to find all the labels of a node.

Check whether the same package is contained in two different collections:

match (c1:Collection) -[:depends]-> (p)
  <-[:depends]- (c2:Collection) return c1, c2, p;

Fortunately, only collections are targets of multiple depends, which is fine 😉

Search for cycles in the dependencies:

match p = (n)-[:depends*]-> (n) return p;

Here we use the * operator to search for arbitrarily long paths. Interestingly, we got one result, namely that ConTeXt depends on itself, which is not good in any case.
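Outside the database, the same cycle check can be run over a plain dependency mapping; a small Python sketch of the idea (my own illustration, depth-first search with an explicit path):

```python
def find_cycles(deps):
    """Return dependency cycles in a name -> [names] mapping."""
    cycles = []
    seen = set()

    def visit(node, path):
        if node in path:                       # back-edge: record the cycle
            cycles.append(path[path.index(node):] + [node])
            return
        if node in seen:
            return
        seen.add(node)
        for dep in deps.get(node, []):
            visit(dep, path + [node])

    for node in deps:
        visit(node, [])
    return cycles
```

A self-dependency like the ConTeXt one shows up as a two-element cycle ["context", "context"].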

Search for files that are included in multiple packages:

match (p1) -[:includes]-> (f)
  <- [:includes]- (p2) return p1, p2, f;

Fortunately here we didn't get any result. Anyway, this is checked every day with a simple grep/awk program 😉
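That daily grep/awk check amounts to inverting the package-to-files mapping; a Python sketch of the same idea (not the actual script):

```python
from collections import defaultdict

def duplicated_files(pkg_files):
    """Given a package -> [files] mapping, return files that are included
    in more than one package, together with the packages including them."""
    owners = defaultdict(list)
    for pkg, files in pkg_files.items():
        for f in files:
            owners[f].append(pkg)
    return {f: pkgs for f, pkgs in owners.items() if len(pkgs) > 1}
```

An empty result corresponds to the partition property the collections are supposed to guarantee.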

Show all the documentation files for one package:

match (p) -[:includes {type:'doc'}]-> (f)
  where p.name = "tlcockpit"
  return p,f;

Graph Algorithm with Neo4j

The Neo4j Team also provides a set of graph algorithms readily available by installing and activating a plugin. This plugin can be downloaded from this Neo4j Github Page. In my case this resulted in the download of graph-algorithms-algo-, which I put into the plugins folder of my Neo4j installation. On Debian this is /var/lib/neo4j/plugins/. To get it to actually run, one needs to whitelist the procedures by adding the following line to the Neo4j config file (on Debian /etc/neo4j/neo4j.conf): dbms.security.procedures.unrestricted=algo.*

After a restart of Neo4j one is ready to use all the algorithms provided in this jar.

First let us check the Google Page Rank (whatever it might mean for the current case):

CALL algo.pageRank.stream(null, 'depends', {iterations:20, dampingFactor:0.85})
  YIELD nodeId, score
  MATCH (node) WHERE id(node) = nodeId
  RETURN node.name AS page, score
  ORDER BY score DESC

which gives the following output (in table mode):

│"page"                        │"score"            │
│"context"                     │4.868265000000001  │
│"hyphen-base"                 │4.667172000000001  │
│"hyph-utf8"                   │4.0754105          │
│"kpathsea"                    │1.8529665          │
│"plain"                       │0.982524           │

In a similar vein is the Betweenness Centrality:

CALL algo.betweenness.stream(null, 'depends', {direction:'out'})
  YIELD nodeId, centrality
  MATCH (pkg) WHERE id(pkg) = nodeId
  RETURN pkg.name AS pkg, centrality
  ORDER BY centrality DESC;

which gives the following output:

│"pkg"                     │"centrality"       │
│"collection-basic"        │1675.4717032967033 │
│"collection-latexextra"   │1212.0             │
│"context"                 │947.3333333333334  │
│"collection-latex"        │744.8166666666666  │
│"collection-pictures"     │586.0              │

Finally let us look at the triangle computation:

CALL algo.triangleCount.stream(null, 'depends', {concurrency:4})
  YIELD nodeId, triangles, coefficient
  MATCH (p) WHERE id(p) = nodeId
  RETURN p.name AS name, triangles, coefficient
  ORDER BY triangles DESC

which yields the following output:

│"name"                   │"triangles"│"coefficient"          │
│"collection-basic"       │109        │0.042644757433489826   │
│"scheme-full"            │46         │0.05897435897435897    │
│"collection-latex"       │43         │0.04154589371980676    │
│"scheme-tetex"           │42         │0.022950819672131147   │
│"collection-context"     │39         │0.04318936877076412    │

Future Work

[DONE 20181010 - see above] During the presentation we got the suggestion to use hash values of the node content instead of arbitrarily computed uuids to allow for better upgrades/additions in case the values of nodes did remain the same.

Furthermore, it would be interesting to parse the full information of packages (including revision numbers, catalogue information etc), save them into the nodes, and regularly update the database to see the development of packages. To make this actually work out we need the first step of using hashes, though.


Considering that all of the above plus the actual presentation slides were written in less than one day, one can see that developing a graph database based on Neo4j and playing around with it is a rather trivial procedure. The difficult part is normally to find the "right" set of node types and relation types, as well as their attributes. In the case of the TeX Live database this was quite trivial, which allowed for an easy and direct representation in Neo4j.

We made the graph (read-only) available for experimentation at (with user/pass neo4j).

We hope that these simple examples of graphs help others to kick-start more interesting and deeper projects using the power of graphs.

Planet Debian: Lucas Nussbaum: Sending mail from mutt with queueing and multiple accounts/profiles?

I’m looking into upgrading my email setup. I use mutt, and need two features:

  • Local queueing of emails, so that I can write emails when offline, queue them, and send them later when I’m online.
  • Routing of emails through several remote SMTP servers, ideally depending on the From header, so that emails pass SPF/DKIM checks.

I currently use nullmailer, which does queueing just fine, but cannot apparently handle several remote SMTP servers.

There’s also msmtp, which can handle several “accounts” (remote SMTPs). But apparently not when queueing with msmtpq.

What are you using yourself?


Planet Linux Australia: sthbrx - a POWER technical blog: Open Source Firmware Conference 2018

I recently had the pleasure of attending the 2018 Open Source Firmware Conference in Erlangen, Germany. Compared to other more general conferences I've attended in the past, the laser focus of OSFC on firmware and especially firmware security was fascinating. Seeing developers from across the world coming together to discuss how they are improving their corner of the stack was great, and I've walked away with plenty of new knowledge and ideas (and several kilos of German food and drink..).

What was especially exciting though is that I had the chance to talk about my work on Petitboot and what's happened from the POWER8 launch until now. If you're interested in that, or seeing how I talk after 36 hours of travel, check it out here:

OSFC have made all the talks from the first two days available in a playlist on Youtube
If you're after a few suggestions, here are some, in no particular order:

Ryan O'Leary giving an update on Linuxboot - also known as NERF, Google's approach to a Linux bootloader all written in Go.

Subrata Banik talking about porting Coreboot on top of Intel's FSP

Ron Minnich describing his work with "rompayloads" on Coreboot

Vadim Bendebury describing Google's "Secure Microcontroller" Chip

Facebook presenting their use of Linuxboot and "systemboot"

And heaps more, check out the full playlist!

Planet Debian: Clint Adams: Downhill

“Now that I'm 50 and within range of medical disaster, any ideas for a comfortable suicide?” he said. “Leading candidates are sleeping pills or car exhaust. I tried to enlist my kids to do it but they can't, so I guess I have to do it myself if necessary.”

“A large plastic bag over the head supposedly puts you into a dreamy, pleasant stupor before it kills you,” she replied, “or any CO₂ replacement would work. Also an opioid overdose is probably nice.”

Posted on 2018-10-08
Tags: umismu

TED: Reboot: The talks of TED@BCG

CEO of BCG, Rich Lesser, welcomes the audience to TED@BCG, held October 3, 2018, at Princess of Wales Theatre in Toronto. (Photo: Ryan Lash / TED)

How do we manage the transformations that are radically altering our lives — all while making a positive impact on our well-being, productivity and the world? In a word: reboot.

For a seventh year, BCG has partnered with TED to bring experts in leadership, psychology, technology, sustainability and more to the stage to share ideas on rethinking our goals and redefining the operating systems we use to reach them. At this year’s TED@BCG — held on October 3, 2018, at the Princess of Wales Theater in Toronto — 18 creators, leaders and innovators invited us to imagine a bright future with a new definition of the bottom line.

After opening remarks from Rich Lesser, CEO of BCG, the talks of Session 1 kicked off.

Let’s stop trying to be good. “What if I told you that our attachment to being ‘good people’ is getting in the way of us being better people?” asks social psychologist Dolly Chugh, professor at NYU’s Stern School of Business. The human brain relies on shortcuts so we can cope with the millions of pieces of information bombarding us at any moment. That’s why we’re often able to get dressed or drive home without thinking about it — our brains are reserving our attention for the important stuff. In her research, Chugh has found the same cognitive efficiency occurs in our ethical behavior, where it shows up in the form of unconscious biases and conflicts of interest. And we’re so focused on appearing like good people — rather than actually being them — that we get defensive or aggressive when criticized for ethical missteps. As a result, we never change. “In every other part of our lives, we give ourselves room to grow — except in this one where it matters the most,” Chugh says. So, rather than striving to be good, let’s aim for “good-ish,” as she puts it. That means spotting our mistakes, owning them and, last but not least, learning from them.

You should take your technology out to coffee, says BCG’s Nadjia Yousif. She speaks at TED@BCG about how we can better embrace our tech — as colleagues. (Photo: Ryan Lash / TED)

Treat your technology like a colleague. “The critical skill in the 21st-century workplace is … to collaborate with the technologies that are becoming such a big and costly part of our daily working lives,” says technology advisor Nadjia Yousif. She’s seen countless companies invest millions in technology, only to ignore or disregard it. Why? Because the people using the technology are skeptical and even afraid of it. They don’t spend the time learning and training — and then they get frustrated and write it off. What if we approached new technology as if it were a new colleague? What if we treated it like a valued member of the team? People would want to get to know it better, spend time integrating it into the team and figure out the best ways to collaborate, Yousif says — and maybe even give feedback and make sure the tech is working well with everyone else. Yousif believes we can treat technology this way, and she encourages us to “share a bit of humanity” with our software, algorithms and robots. “By embracing the ideas that these machines are actually valuable colleagues, we as people will perform better … and be happier,” she says.

Confessions of a reformed micromanager. When Chieh Huang started the company Boxed out of his garage in 2013, there wasn’t much more to manage than himself and the many packages he sent. As his company expanded, his need to oversee the smallest of details increased — a habit that he’s since grown out of, but can still reference with humor and humility. “What is micromanaging? I posit that it’s actually taking great, wonderful, imaginative people … bringing them into an organization, and then crushing their souls by telling them which font size to use,” he jokes. He asks us to reflect on the times when we’re most tired at work. It probably wasn’t those late nights or challenging tasks, he says, but when someone was looking over your shoulder watching your every move. Thankfully, there’s a cure to this management madness, Huang says: trust. When we stop micromanaging the wonderfully creative people at our own companies, he says, innovation will flourish.

Dancing with digital titans. Tech giants from the US and China are taking over the world, says digital strategist François Candelon. Of the world’s top 20 internet companies, a full 100 percent of them are American or Chinese — like the US’s Alphabet Inc. and Amazon, and China’s Tencent and Alibaba. Europe and the rest of the world must find a way to catch up, Candelon believes, or they will face US-China economic dominance for decades to come. What are their options for creating a more balanced digital revolution? Candelon offers a solution: governments should tango with these digital titans. Instead of fearing their influence — as the EU has done by levying fines against Google, for instance — countries would be better off advocating for the creation of local digital jobs. Why would companies like Facebook or Baidu be willing to tango with governments? Because they can offer things like tax incentives and adapted regulations. Candelon points to “Digital India,” a partnership between Google and the government of India, as an example: one of the project’s initiatives is to train two million Indian developers in the latest technologies, helping Google develop its talent pipeline while cultivating India’s digital ecosystem. “Let’s urge our governments and the American and Chinese digital titans to invest enough brainpower and energy to imagine and implement win-win strategic partnerships,” Candelon says. The new digital world order depends on it.

Upcycling air pollution into ink. In 2012, a photo of an exhaust stain on a wall sparked a thought for engineer Anirudh Sharma: What if we could use air pollution as ink? A simple experiment with a candle and vegetable oil convinced Sharma that the idea was viable, leading him home to Bangalore to test how to collect the carbon-rich PM2.5 nanoparticles that would make up the ink. Sharma and his team at AIR INK created a device that could capture up to 95 percent of air pollution that passed through it; using it, 45 minutes of diesel car exhaust can become 30 milliliters of ink (or about 2 tablespoons). Artists worldwide embraced AIR INK, and this success brought surprising interest from the industrial world. Sharma realized that by incentivizing corporations to send their pollution to AIR INK, they could upcycle pollution usually headed for landfills into a productive tool. AIR INK won’t necessarily solve global pollution concerns, Sharma says, “but it does show what can be done if you look at problems a little differently.”

Leadership lessons for an uncertain world. Jim Whitehurst is a recovering know-it-all CEO. Kicking off Session 2, Whitehurst tells the story of how his work as the COO of Delta trained him to think that a good leader was someone who knew more than anyone else. But after becoming CEO of RedHat, an open-source software company, Whitehurst encountered a different kind of organization, one where open criticism of superiors — and not exactly following a boss’s orders — were normal. This experience yielded insights about success and leadership, as Whitehurst came to realize that being a good leader isn’t about control and compliance, it’s about creating the context for the best ideas to emerge out of your organization. “In a world where innovation wins and ambiguity is the only certainty, people don’t need to be controlled,” Whitehurst says. “They need to get comfortable with conflict. And leaders need to foment it.”

Elizabeth Lyle shares ideas on the future of leadership in the workplace at TED@BCG. (Photo: Ryan Lash / TED)

Why we need to coach people before they lead. The C-suites of corporate America are full of management coaches, yet top-tier execs are not the ones who really need the help, says Elizabeth Lyle, a principal in BCG’s Boston office. “Outdated leadership habits are forming right before our eyes among the middle managers who will one day take their place,” she says. While the uncertain future of work demands new ways of thinking, acting and interacting, tomorrow’s leaders aren’t given the autonomy or training they need to develop — and they don’t ask for it, lest they seem pushy and disagreeable. They also think that they’ll be able to change their behavior once they’ve earned the authority to do things their own way, Lyle says, but this rarely happens. By the time they’re in a high-stakes position, they tend to retreat to doing what their bosses did. The solution: senior leaders must present their direct reports with the opportunities to try new things, and reports should return that trust by approaching their work with thought and creativity. Lyle also suggests bringing in coaches to work in the same room with leaders and reports — like a couples therapist, they’d observe the pair’s communication and offer ideas for how to improve it.

A breakdown, and a reboot. Each of us feels the burden of daily repetitive actions on our bodies and psyches, whether we create them or they’re imposed by outside forces. Left unchecked, these actions can “turn into cages,” says Frank Müller-Pierstorff, Global Creative Director at BCG. In an electronic music performance, he uses soundscapes built out of dense, looped phrases to embody these “cages,” while dancer Carlotta Bettencourt attempts to keep up in an accompanying video — and ultimately shows us what might happen if we could only “reboot” under the weight of our stress.

WWMD? What would MacGyver do? That’s what Dara Dotz asks herself, whether she’s working to help build the first factory in space or aiding survivors of a recent catastrophic event. Much like the fictional genius/action hero, Dotz loves to use technology to solve real-life problems — but she believes our increasing reliance on tech is setting us up for major failure: Instead of making us superhuman, tech may instead be slowly killing our ability to be creative and think on our feet. If disaster strikes — natural or man-made — and our tech goes down, will we still have the ingenuity, resilience and grit to survive? With that concern in mind, Dotz cofounded a nonprofit, Field Ready, to support communities that experience disasters by creating life-saving supplies in the field from found materials and tools. With real-world examples from St. Thomas to Syria, Dotz demonstrates the importance of co-designing with communities to create specific solutions that fit the need — and to ensure that the communities can reproduce these solutions. “We aren’t going to be able to throw tech at every problem as efficiently or effectively as we would like — as time moves on, there are more disasters, more people and less resources,” she says. “Instead of focusing on the next blockchain or AI, perhaps the things we really need to focus on are the things that make us human.”

Rebooting how we work. What are we willing to give up to achieve a better way of working? For starters: the old way of doing things, says Senior Partner and Managing Director of BCG Netherlands, Martin Danoesastro. In a world that’s increasingly complex and fast-paced, we need a way of working that allows people to make faster decisions, eliminates bureaucracy and creates alignment around a single purpose. Danoesastro learned this firsthand by visiting and studying innovative and hugely profitable tech companies. He discovered the source of their success in small, autonomous teams that have the freedom to be creative and move fast. Danoesastro provides a few steps for companies that want to replicate this style: get rid of micro-managers, promote open and transparent communication throughout the organization, and ensure all employees take initiative. Changing deeply ingrained structures and processes is hard, and changing behavior is even harder, but it’s worth it. Ultimately, this model creates a more efficient workplace and sets the company up for a future in which they’ll be better prepared to respond to change.

The power of visual intelligence. Are you looking closely enough? Author Amy Herman thinks we should all increase our perceptual intelligence — according to Herman, taking a little more time to question and ponder when we’re looking at something can have lasting beneficial impact in our lives. Using a variety of fine art examples, Herman explains how to become a more intentional, insightful viewer by following the four A’s: assess the situation, analyze what you see, articulate your observations and act upon them. Herman has trained groups across a spectrum of occupations — from Navy SEALS to doctors to crime investigators — and has found that by examining art, we can develop a stronger ability to understand both the big picture and influential small details of any scene. By using visual art as a lens to look more carefully at what’s presented to us, Herman says, we’ll have the confidence to see our work and the world clearer than ever.

Fintech entrepreneur Viola Llewellyn shares her work pairing AI with local knowledge to create smarter products for the African market. (Photo: Ryan Lash / TED)

Culturally attuned microfinance for Africa. Financial institutions in Africa’s business sector don’t have the technology or tools to harness the continent’s potential for wealth, says fintech entrepreneur Viola Llewellyn, opening Session 3. The continent is made up of thousands of ethnic groups speaking more than 2,000 languages among them, rooted in a long, rich history of cultural diversity, tradition and wealth. “You need a deep understanding of nuance and history,” Llewellyn says, “and a respect for the elegance required to code and innovate [financial] products and services for the vast African market.” She cofounded Ovamba, a mobile technology company, to bridge the gap in knowledge between institutions and African entrepreneurs. Working with teams on the ground, Ovamba pairs human insights about local culture with AI to create risk models and algorithms, and ultimately product designs. Llewellyn highlights examples across sub-Saharan Africa that are successfully translating her vision into real-world profit. “In digitizing our future, we will preserve the beauty of our culture and unlock the code of our best wealth traits,” she says. “If we do this, Africans will become global citizens with less reliance on charity. Becoming global citizens gives us a seat at the table as equals.”

Globalization isn’t dead — it’s transforming into something new. All the way up to Davos, business leaders have proclaimed the death of globalization. But Arindam Bhattacharya thinks their obituary was published prematurely. Despite growing economic protectionism, and the declining influence of multilateral trade organizations, business is booming. Technology has allowed data-driven businesses like Netflix to reach their customers instantly and simultaneously — and as a result, Netflix revenues have grown more than five-fold. Netflix is one of a new breed of companies using cutting-edge technology to build “a radical new model of globalization.” And it’s not just data — soon, 3D printing will redefine our supply chains. Working with the manufacturer SpeedFactory, Adidas allows customers to choose designs online, have them printed at a nearby “mini-factory,” and delivered via drone in a matter of days, not weeks or months. Aided by local production, cross-border data flow could be worth $20 trillion by 2025 — more than every nation’s current exports combined. As society becomes “more nationalistic and less and less open,” Bhattacharya says, commerce is becoming more personalized and less tied to cross-border trade. These twin narratives are reinvigorating globalization.

Viruses that fight superbugs. Viruses have a bad reputation — but some might just be the weapon we need to help in the fight against superbugs, says biotech entrepreneur Alexander Belcredi. While many viruses do cause deadly diseases, others can actually help cure them, he says — and they’re called phages. More formally known as bacteriophages, these viruses hunt, infect and kill bacteria with deadly selectivity. Whereas antibiotics inhibit the growth of a broad range of bacteria — sometimes including good bacteria, like those found in the gut — phages target specific strains. Belcredi’s team has estimated that we have at least ten billion phages on each hand, infecting the bacteria that accumulate there. So, why is it likely you’ve never heard of phages? Although they were discovered in the early 20th century, they were largely forgotten in favor of transformative antibiotics like penicillin, which seemed for many decades like the solution to bacterial infections. Unfortunately, we were wrong, Belcredi says: multi-drug-resistant infections — also known as superbugs — have since developed and now overpower many of our current antibiotics. Fortunately, we are in a good place to develop powerful phage drugs, giving new hope in the fight against superbugs. So, the next time you think of a virus, try not to be too judgmental, Belcredi says. After all, a phage might one day save your life.

Madame Gandhi and Amber Galloway-Gallego perform “Top Knot Turn Up” and “Bad Habits” at TED@BCG. (Photo: Ryan Lash / TED)

How music brings us together. “Music is so much more than sound simply traveling through the ear,” says sign language interpreter Amber Galloway-Gallego, during the second musical interlude of the day. In a riveting performance, musician and activist Madame Gandhi plays two songs — her feminist anthems “Top Knot Turn Up” and “Bad Habits” — while Galloway-Gallego provides a spirited sign language interpretation.

Agreeing to disagree. Our public discourse is broken, says behavioral economist Julia Dhar, and the key to fixing it might come from an unexpected place: debate teams. In the current marketplace of ideas, Dhar says, contempt has replaced conversation: people attack each other’s identity instead of actually hashing out ideas. If we turn to the principles of debate, Dhar believes we can learn how to disagree productively — over family dinners, during company meetings and even in our national conversations. The first principle she mentions is rebuttal: “Debate requires that we engage with a conflicting idea directly, respectfully and face-to-face,” she says — and as research shows, this forces us to humanize the “other side.” Second, ideas are totally separate from the identity of the person advocating for them in debate tournaments. Dhar invites us to imagine if the US Congress considered a policy without knowing if it was Democrat or Republican, or if your company submitted and reviewed proposals anonymously. And third, debate lets us open ourselves up to the possibility of being wrong, an exercise that can actually make us better listeners and decision makers. “We should bring [debate] to our workplaces, our conferences and our city council meetings,” Dhar says — and begin to truly reshape the marketplace of ideas.

A better world through activist investment. Who’s working on today’s most pressing issues? Activist investors, says BCG’s Vinay Shandal, or as he calls them: “the modern-day OGs of Wall Street.” These investors — people like Carl Icahn, Dan Loeb and Paul Singer — have made an art of getting large corporations to make large-scale changes. And not just to make money. They’re also interested in helping the environment and society. “The good news and perhaps the saving grace for our collective future is that it’s more than just an act of good corporate citizenship,” Shandal says. “It’s good business.” Shandal shares examples of investors disrupting industries from retail to food service to private prisons and shows growing evidence of a clear correlation between good ESG (environmental, social and governance) investing and good financial performance. You don’t need to be a rich investor to make a difference, Shandal says. Every one of us can put pressure on our companies, including the ones that manage our money, to do the right thing. “It’s your money, it’s your pension fund, it’s your sovereign wealth fund. And it is your right to have your money managed in line with your values,” Shandal says. “So speak up … Investors will listen.”

Planet DebianNeil McGovern: GNOME ED Update – September

We’ve now moved my reporting to the board to a monthly basis, so this blog should get updated monthly too! So here’s what I’ve been up to in September.


Recruitment continues for our four positions that we announced earlier this year, but I’m pleased to say we’re in the final stages for these. For those interested, the process went a little bit like this:

  • Applicants sent in a CV and cover letter
  • If they were suitable for the position on a quick read of the CV and letter, they got a short questionnaire asking for more details, such as “What do you know about the GNOME Foundation?”
  • Those with interesting answers get sent to a first interview, which is mostly technical
  • Then, those who are still in the process are invited to a second interview, which is competency-based
  • At the end of all this, we hope to make an offer to the best candidate!

End of year

For those who don’t know, the Foundation’s financial year runs from the start of October to the end of September. This means we have quite a bit of work to do to:

  1. Finalise the accounts for last year and submit our tax returns
  2. Make a new budget for the forthcoming year

Work has already begun on this, and I hope to finalise the new budget with the board at the Foundation Hackfest being held next week.

Libre Application Summit

LAS was held in Denver, Colorado, and I attended. There were 20 talks and three BoF
sessions held, as well as a number of social events. From looking around, there were probably around 60-70 people, including representatives from KDE and Elementary. It was particularly pleasing to see a number of students from the local university attend and present a lightning talk.

I also had meetings with System76 and Private Internet Access, as well as a couple of local companies.

Speaking of System76, we also had a nice tour of their new factory. I knew they were taking manufacturing in-house, but I didn’t realise the extent of this process. It’s not just assembly, but taking raw sheet metal, bending it into the right shape and painting it!

My meetings with PIA were also interesting – I got to see the new VPN client that they have, which I’m assured will be free software when released. There were a couple of issues I could see about how to integrate that with GNOME, and we had a good session running through these.

Other conferences coming up

In October, I’m hoping to attend Sustain Summit 2018, in London, followed by Freenode.Live in Bristol, UK. I’ll be speaking at the latter, which is in November. Then, after a couple of days at home, GNOME is going to SeaGL! Meet me and Rosanna in Seattle at the GNOME booth!

Friends of GNOME

Another thing that happened was fixing the Friends of GNOME signup page. For some reason, unknown to us, when you submitted the form to PayPal, it redirected to the home page rather than the payment page. This didn’t happen if you selected “EUR” as the payment method, or if you selected “EUR” and then “USD” before submitting. After lots of head scratching (an analysis of the POST data showed that it was /identical/ in each case) I changed the POST to a GET, and it suddenly started working again. Confusion all around, but it should now be working again.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #180

Here’s what happened in the Reproducible Builds effort between Sunday September 30 and Saturday October 6 2018:

Packages reviewed and fixed, and bugs filed

Test framework development

There were a huge number of updates this month by Holger Levsen to the Jenkins-based testing framework that powers our continuous tests, including:

In addition, Alexander Couzens added a comment regarding OpenWrt/LEDE which was subsequently amended by Holger.


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, heinrich5991, Holger Levsen, Marek Marczykowski-Górecki, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianHolger Levsen: 20180908-lts-201809

My LTS work in September

In September I only managed to spend 2.5h working on jessie LTS on:

  • finishing work on patches for samba, but then failed to release the DLA for it until now. Expect an upload soon. Sorry for the delay, various RL issues took their toll.

Planet Linux AustraliaLev Lafayette: Not The Best Customer Service (

You would think with a website like you would be sitting on a gold mine of opportunity. It would take real effort not to turn such a domain advantage into a real advantage, to become the country's specialist and expert provider of laptops. But alas, some effort is required in this regard and it involves what, in my considered opinion, is not doing the right thing. I leave you, gentle reader, to form your own opinion on the matter from the facts provided.

In mid-August 2018 I purchased a laptop from said provider. I didn't require anything fancy, but it did need to be light and small. The Lenovo Yoga 710-11ISK for $699 seemed to fit the bill. The dispatch notice was sent on August 14, and on August 21st I received the item and noticed that there were a few things wrong. Firstly, the processor was nowhere near as powerful as advertised (and no surprise there - they were advertising the burst speed of a water-cooled processor, not an air-cooled small laptop). Further, the system came with only half of the advertised 8GB of RAM.

With the discrepancy pointed out, they offered what I considered a paltry sum of $100 - which would be quite insufficient for the loss of performance, and it was not the kind of system that could be upgraded with ease. Remarkably they made the claim "We would offer to swap over, however, it's expensive to ship back and forth and we don't have another in stock at this time". I asked that if this was the case, why were they still advertising the supposedly absent model on their website; at the time of writing (October 8), it is apparently still available. I pointed out that their own terms and conditions stated: "A refund, repair, exchange or credit is available if on arrival the goods are advised faulty or the model or the specifications are incorrect", which was certainly the case here.

Receiving no reply several days later, I had to contact them again. The screen on the system had completely ceased to function. I demanded that they refund the cost of the laptop plus postage, as per their own terms and conditions: the system was faulty and the specifications were incorrect. They offered to replace the machine. I told them I preferred a refund, as I now had reasonable doubts about their quality control, as per Victorian consumer law.

I sent the laptop, express post with signature, and waited. A week later I had to contact them again and provided the Australia Post tracking record to show that it had been delivered (but not collected). It was at that point that, instead of providing a refund as I had requested, they sent a second laptop, completely contrary to my wishes. They responded that they had "replaced machine with original spec that u ordered. Like new condition" and that "We are obliged under consumer law to provide a refund within 30 days of purchase" (any delays were due to their inaction). At that point a case was opened with the Commonwealth Bank (the laptop was paid for via credit card), and with Consumer Affairs Victoria.

But it gets better. They sent the wrong laptop again, this time with a completely different processor, and significantly heavier and larger. It was pointed out to them that they had sent the wrong machine, twice, the second time contrary to my requests. It was pointed out to them that all they had to do was provide the refund I had requested for the machine and my postage costs. It was pointed out that it was not my fault that they sent the wrong machine, and that this was their responsibility. It was pointed out that it was not my fault that they sent a second, wrong, machine, contrary to my request, and that this, again, was their responsibility. Indeed, they could benefit by having someone look at their business processes and quality assurance, because there have been many years of this retailer showing less than optimal customer service.

At this point, they buckled and agreed to provide a full refund if I sent the second laptop back - which I have done and will update this 'blog post as the story unfolds.

Now some of you gentle readers may think that surely it couldn't have been that bad, and that surely there's another side to this story. So it is in the public interest, and in the principle of disclosure and transparency, that I provide a full set of the correspondence as an attached text file. You can make up your own mind.

Worse Than FailureA Floating Date

Enterprise integration is its own torturous brand of software development. Imagine all the pain of inheriting someone else's code, except now that code is proprietary, you can't modify it, it's poorly documented, and it exposes an API that might solve somebody's problem, but none of the problems you have. And did I say poorly documented? I meant "the documentation is completely inaccurate and it's possible that this was intentional".

Michael was working on getting SAP integrated to their existing legacy systems. This meant huge piles of bulk data loading, which wasn't so bad- they had a third party module which promised to glue all this stuff together. And in early testing phases, everything went perfectly smooth.

Of course, this was a massive enterprise integration project for a massive company. That guaranteed a few problems that were unavoidable. First, there were little teams within business units who weren't using the documented processes in the first place, but had their own home-grown process, usually implemented in an Excel file on a network drive, to do their work. Tracking these down, prying the Excel sheet out of their hands, and then dealing with the fallout of "corporate coming in and changing our processes for no reason" extended the project timeline.

Well, it extended how much time the project actually needed, which brings us to the second guaranteed problem: the timeline was set based on what management wanted to have happen, not based on what was actually possible or practical. No one on the technical side of things was consulted to give an estimate about required effort. A go-live date of October 8th was set, and everything was going to happen on October 8th- or else.

The project was, against all odds, on track to hit the ridiculous target. Until it went into UAT- and that's when Michael started catching issues from users. Dates were shifting. In the source system, the date might be November 21st, but in SAP it was November 20th. The 23rd turned into the 24th. The 25th also turned into the 24th.

Michael was under a time crunch, and trapped between a rock (the obtuse legacy system), a hard place (SAP), and a hydraulic press (the third-party data import module). There was a pattern to the errors, though, and that pattern pointed to a rounding error.

"Wait, a rounding error?" Michael wondered aloud. Now, they did use numbers to represent dates. The "Japanese" notation, which allowed them to store "November 21st, 2018" as 20181121. That's a super common approach to encoding a date as a 32-bit integer. As integers, of course, there was no rounding. They were integers on the legacy side, they were integers on the SAP side- but what about in the middle? What was the third party import module doing?

As a test, Michael whipped up a little two-line program to test:

float _date = Integer.parseInt("20181121");
System.out.println((int)_date); //Outputs: 20181120

Of course. This is standard IEEE floating-point behavior: a 32-bit float has only a 24-bit significand, so it cannot represent every integer above 16,777,216, and YYYYMMDD dates are well past that. This hadn't been happening in early testing because the test data safely avoided dates/numbers "large" enough. It was only when they moved into UAT and started using real data that the bug became apparent. For some reason, the data import module was passing integer data straight through floats, probably out of a misguided attempt to be "generic".
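The effect is easy to reproduce outside Java. Python's floats are 64-bit, but the standard struct module can round-trip a value through a single-precision float, which is enough to show the exact off-by-a-day shifts the users reported (a sketch, not the vendor's actual code):

```python
import struct

def through_float32(n):
    """Round-trip an integer through a single-precision (32-bit) IEEE float."""
    return int(struct.unpack('f', struct.pack('f', float(n)))[0])

# Above 2**24 only every second integer is representable as a 32-bit float,
# and values exactly halfway round to the even neighbour.
print(through_float32(20181121))  # 20181120 - the 21st becomes the 20th
print(through_float32(20181123))  # 20181124 - the 23rd becomes the 24th
print(through_float32(20181125))  # 20181124 - and so does the 25th
```

This reproduces all three shifts from the bug report in a few lines, which is roughly the test Michael ran.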

Michael raised the issue with the vendor, suggested that the vendor should check for casts to float, and pointed out that he was under an extreme time crunch. The vendor, to their credit, tracked down the bug and had a patched version to Michael within two days.

Working in the enterprise space, Michael has seen too many applications which store currency values as floats, leading to all sorts of accounting-related messes. This, however, is the only time he's seen that happen with dates.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Planet DebianPetter Reinholdtsen: Fetching trusted timestamps using the rfc3161ng python module

I have earlier covered the basics of trusted timestamping using the 'openssl ts' client. See my blog posts from 2014, 2016 and 2017 for those stories. But sometimes I want to integrate the timestamping into other code, and recently I needed to integrate it into Python. After searching a bit, I found the rfc3161 library, which seemed like a good fit, but I soon discovered it only worked with python version 2, and I needed something that works with python version 3. Luckily I next came across the rfc3161ng library, a fork of the original rfc3161 library. Not only does it work with python 3, it has fixed a few of the bugs in the original library, and it has an active maintainer. I decided to wrap it up and make it available in Debian, and a few days ago it entered Debian unstable and testing.

Using the library is fairly straightforward. The only slightly problematic step is fetching the required certificates to verify the timestamp. For some services it is straightforward, while for others I have not yet figured out how to do it. Here is a small standalone code example based on one of the integration tests in the library code:



"""Python 3 script demonstrating how to use the rfc3161ng module to
get trusted timestamps.

The license of this code is the same as the license of the rfc3161ng
library, ie MIT/BSD.
"""

import os
import pyasn1.codec.der.encoder
import rfc3161ng
import subprocess
import tempfile
import urllib.request

def store(f, data):
    f.write(data)
    f.flush()

def fetch(url, f=None):
    response = urllib.request.urlopen(url)
    data = response.read()
    if f:
        store(f, data)
    return data

def main():
    with tempfile.NamedTemporaryFile() as cert_f,\
         tempfile.NamedTemporaryFile() as ca_f,\
         tempfile.NamedTemporaryFile() as msg_f,\
         tempfile.NamedTemporaryFile() as tsr_f:

        # First fetch certificates used by service
        certificate_data = fetch('', cert_f)
        ca_data_data = fetch('', ca_f)

        # Then timestamp the message
        timestamper = \
            rfc3161ng.RemoteTimestamper('', certificate=certificate_data)
        data = b"Python forever!\n"
        tsr = timestamper(data=data, return_tsr=True)

        # Finally, convert message and response to something 'openssl ts' can verify
        store(msg_f, data)
        store(tsr_f, pyasn1.codec.der.encoder.encode(tsr))
        args = ["openssl", "ts", "-verify",
                "-data", msg_f.name,
                "-in", tsr_f.name,
                "-CAfile", ca_f.name,
                "-untrusted", cert_f.name]
        subprocess.check_call(args)

if '__main__' == __name__:
    main()
The code fetches the required certificates, stores them as temporary files, timestamps a simple message, stores the message and timestamp to disk, and asks 'openssl ts' to verify the timestamp. A timestamp is around 1.5 kiB in size, and should be fairly easy to store for future use.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.


Planet DebianDima Kogan: Generating manpages from python and argparse

I find the python ecosystem deeply frustrating. On some level, they fixed some issues in previous languages. But at the same time, they chose to completely flout long-standing conventions, and have rewritten the world in ways that are different for no good reason. And when you rewrite the world, you always end up rewriting only the parts you care about, so your new implementation lacks pieces that other people find very important.

Today's annoyance: manpages. I have some tools written in python that I'm going to distribute, and since this isn't intended to be user-hostile, I want to ship manpages. The documentation already exists in the form of docstrings and argparse option descriptions, so I don't want to write the manpages; they should be generated for me. I looked around and I can't find anything anywhere. There're some hokey hacks people have come up with to do some of this, but I don't see any sort of "standard" way to do this at all. Even when I reduce the requirements to almost nothing, I still can't find anything. Please tell me if I missed something.

Anyway, I came up with yet another hokey hack. It works for me. Sample project:

This has a python tool called frobnicate, and a script to generate its manpage as a .pod. The Makefile has rules to make this .pod and to convert it into a manpage. It works by running frobnicate --help, and parsing that output.

The --help message is set up to be maximally useful by including the main docstring into the message. This is good not just for the manpages, but to make an informative --help. And this has a nice consequence in that the manpage generator only needs to look at the --help output.

It is assumed that the main docstring (and thus the --help message) is formatted like a manpage would be, beginning with a synopsis. This isn't usually done in python, but it should be; they just like being contrarian for no good reason.
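As a minimal sketch of that convention (the tool name, sections and options here are illustrative, not the actual frobnicate source), the trick is to pass the manpage-style docstring to argparse with a raw formatter so its layout survives into the --help output:

```python
import argparse

# Hypothetical manpage-style docstring, beginning with a synopsis
DOC = """\
frobnicate - process some inputs

SYNOPSIS

  frobnicate [--verbose] input [input ...]

DESCRIPTION

  Frobnicates each input in turn.
"""

parser = argparse.ArgumentParser(
    description=DOC,
    # The default formatter re-wraps the text; the raw one preserves layout
    formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument('--verbose', action='store_true',
                    help='print more diagnostics')
parser.add_argument('input', nargs='+', help='inputs to process')

# This text is all a manpage generator needs to parse
print(parser.format_help())
```

With the sections preserved verbatim, a parser of the --help output can split them back out into manpage sections.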

With those assumptions I can parse the --help output, and produce a reasonable manpage. Converted to html, it looks like this. Not the most exciting thing in the world, but that's the point.

This could all be cleaned up, and made less brittle. That would be great. In the meantime, it solves my use case. I'm releasing this into the public domain. Please use it and hack it up. If you fix it and give the changes back, I wouldn't complain. And if there're better ways to have done this, somebody please tell me.


A few people reached out to me with suggestions of tools they have found and/or used for this purpose. A survey:


This lives here and here. I actually found this thing in my search, but it didn't work at all, and I didn't want to go down the rabbit hole of debugging it. Well, I debugged it now and know what it needed. Issues:

  • First off, there're 3 binaries that could be executed:
    • argparse-manpage
    • wrap
    • bin/argparse-manpage

    It appears that only the first one is meant to be functional. The rest throw errors that require the user to debug this thing

  • To make it more exciting, bin/argparse-manpage doesn't work out of the box on Debian-based boxes: it begins with

    Which isn't something that exists there. Changing this to


    makes it actually runnable. Once you know that this is the thing that needs running, of course.

  • All right, now that it can run, we need to figure out more of this. The "right" invocation for the example project is this:
    $ argparse-manpage/bin/argparse-manpage --pyfile python-argparse-generate-manpages-example/frobnicate  --function parse_args > frobnicator.1

    Unsurprisingly it doesn't work

  • argparse-manpage wants a function that returns an ArgumentParser object. This went against what the example program did: return the already-parsed options.
  • Once this is done, it still doesn't work. Apparently it attempts to actually run the program, and the program barfs that it wasn't given enough arguments. So for manpage-creation purposes I disable the actual option parsing, and make the program do nothing
  • And it still doesn't work. Apparently the function that parses the arguments doesn't see the argparse import when argparse-manpage works with it (even though it works fine when you just run the program). Moving that import into the function makes it work finally. A patch to the test program combining all of the workarounds together:
diff --git a/frobnicate b/frobnicate
index 89da764..475ac63 100755
--- a/frobnicate
+++ b/frobnicate
@@ -16,6 +16,7 @@ import argparse

 def parse_args():
+    import argparse
     parser = \
         argparse.ArgumentParser(description = __doc__,
@@ -29,8 +30,8 @@ def parse_args():
                         help='''Inputs to process''')

-    return parser.parse_args()
+    return parser

-args = parse_args()
+#args = parse_args().parse_args()

Now that we actually get output, let's look at it. Converted to html, looks like this. Some chunks are missing because I didn't pass them on the commandline (author, email, etc). But it also generated information about the arguments only. I.e. the description in the main docstring is missing even though argparse was given it, and it's reported with --help. And the synopsis section contains the short options instead of an example, like it should.


This is in Debian in the help2man package. It works out of the box at least. Invocation:

help2man python-argparse-generate-manpages-example/frobnicate --no-discard-stderr > frobnicator.1

In html looks like this. Better. At least the description made it in. The usage message is mistakenly in the NAME section, and the SYNOPSIS section is missing with the synopsis ending up in the DESCRIPTION section. The tool was not given enough information to do this correctly. It could potentially do something POD-like where the user is responsible for actually writing out all the sections, and help2man would just pick out and reformat the option-parsing stuff. This would be very heuristic, and impossible to do "right".


There's apparently some way to do this with sphinx. I generally avoid the python-specific reimaginings of the world, so I haven't touched those. Besides, the "python" part is a detail in this project. The sphinx thing is here. And apparently you can invoke it like this:

python3 -m sphinx -b man ...


Not sure what a better solution to all this would be. Maybe some sort of docstring2man tool that works like pod2man combined with docopt instead of argparse.


Cory Doctorow“Radicalized” will be my next book!

I’ve just closed a new book deal: Tor Books will publish “Radicalized,” which tells four stories of hope, conflict, technology and justice in the modern world and near future, in March 2019; along with the book deal is a major audiobook deal with Macmillan Audio and a screen deal with Topic Studios (a sister company to The Intercept) for one of the tales, “Unauthorized Bread.”

I’ll have lots more to say about this in the upcoming months! I just finished the last of the stories, “Masque of the Red Death” and I spent the week at the recording studio helping oversee a brilliant actor‘s recording of the audio for “Unauthorized Bread.” Things are happening!

Cory Doctorow closed a six-figure agreement with Tor’s Patrick Nielsen Hayden for four new novellas. Russell Galen at Scovil Galen Ghosh Literary Agency, who represented Doctorow, said the novellas will be published as a single print volume by Tor under “the overall title of Radicalized” and individually, in audio, by Macmillan Audio. Slated for a March 2019 release, the novellas will, Galen said, “provide a unique take on some of the most urgent and painful issues of our time.” The first novella in the collection, “Unauthorized Bread,” has been optioned for film by Topic Studios.

Tor Nabs Quad by Doctorow [Publishers Weekly]

Planet DebianDirk Eddelbuettel: RcppCCTZ 0.2.4

A new release 0.2.4 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time, i.e. human-readable dates and times, and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least two others do—but they decided in their infinite wisdom to copy the sources yet again into their packages. Sigh.

This version updates to the current upstream, makes the internal tests a bit more rigorous (and skips them on the OS we shall not name as it does not seem to have proper zoneinfo available or installed). One function was properly vectorised in a clean PR, and a spurious #include was removed.

Changes in version 0.2.4 (2018-10-06)

  • An unused main() in src/ was #ifdef'ed away to please another compiler/OS combination.

  • The tzDiff function now supports a vector argument (#24).

  • An unnecessary #include was removed (#25).

  • Some tests are not conditioning on Solaris to not fail there (#26).

  • The CCTZ code was updated to the newest upstream version (#27).

  • Unit tests now use the RUnit package replacing a simpler tests script.

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMichal Čihař: Weblate 3.2

Weblate 3.2 has been released today. It is the fiftieth release of Weblate, and also the release with the most issues fixed on GitHub. The most important change is in the background: the introduction of Celery to process background tasks. The biggest user-visible change is the extended translation memory.

Full list of changes:

  • Add install_addon management command for automated addon installation.
  • Allow more fine grained ratelimit settings.
  • Added support for export and import of Excel files.
  • Improve component cleanup in case of multiple component discovery addons.
  • Rewritten Microsoft Terminology machine translation backend.
  • Weblate now uses Celery to offload some processing.
  • Improved search capabilities and added regular expression search.
  • Added support for Youdao Zhiyun API machine translation.
  • Added support for Baidu API machine translation.
  • Integrated maintenance and cleanup tasks using Celery.
  • Improved performance of loading translations by almost 25%.
  • Removed support for merging headers on upload.
  • Removed support for custom commit messages.
  • Configurable editing mode (zen/full).
  • Added support for error reporting to Sentry.
  • Added support for automated daily update of repositories.
  • Added support for creating projects and components by users.
  • Built-in translation memory now automatically stores completed translations.
  • Users and projects can import their existing translation memories.
  • Better management of related strings for screenshots.
  • Added support for checking Java MessageFormat.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on its website; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Sam VargheseThe village experience

India may be a world power in some respects today, but the majority of its citizens still live in the villages that make up some 75% of the country. Despite the growth of industry, agriculture is still India’s mainstay when it comes to occupation.

Few city-bred kids opt to go and work in villages unless they are forced to. I opted to do so back in 1980, giving up a short stint as a journalist and taking up a job as a rural development extension officer with a Bangalore-based company known as Myrada. (It was originally known as Mysore Resettlement and Development Agency, a name that it had due to being originally set up to resettle Tibetans who had fled the Chinese invasion in 1959.)

By the time I joined Myrada in April 1980, the company had a number of projects in operation. The modus operandi was to do a project report for a certain area which had development potential, approach a foreign funding agency and get the necessary money to implement the project.

I was initially sent to a couple of projects to see what the job involved, the first being a project in the Nilumbur district of Kerala. This is a predominantly Muslim area in the state which is one of those with the highest literacy in the country.

My brief was to observe and submit a report to the Myrada head office when I finished my month-long stint in Nilumbur. The project officer was a man named Jacob (I forget his first name) and his deputy was a man named Mohan Thazhathu, a person to whom I took an instant dislike due to his smarmy manner and use of high-faluting language that meant nothing.

The pair had a nice set-up and were having a ball spending project money on their personal needs. For instance, whenever they went to another part of the state, they would travel by the office jeep. On the return trip, they would visit all their relatives and collect what produce they could for their own consumption. The cost of the journey went on the project’s books.

There were several suspect initiatives which had been provided funding by Jacob and Thazhathu, many of which failed. It looked very much like any decision on funding an initiative was dependent on which of their staff was proposing it, not the viability of the initiative.

My report made mention of all this and the head of Myrada, an Anglo-Indian named Bill Davinson, was not happy about what was going on. He went down there and made some noise and, in turn, both Jacob and Thazhathu were annoyed at what I had done. (Thazhathu managed to later manoeuvre his way into the Bangalore office and tried to queer the pitch for me.)

After Nilumbur, I was sent to Huttur in the state of Karnataka. The project officer there, one Shetty, was rather an obnoxious character who resented the fact that the head office had sent someone whom he thought was meant to spy on what he was doing. It did not help that the head office had not informed him that I would be spending a month at his project. Each of these project officers was busy with empire-building and did not like people from the head office visiting.

The Huttur project was very close to a Tibetan settlement in a place called Odeyarpalaya. It was interesting to see the extent to which the Tibetans had developed their settlement; unlike the local villagers, they were open to the idea of having hybrid cattle and genetically modified seeds and thus their animals and crops yielded much more, leading to greater prosperity.

I had not been long at Huttur when Davinson turned up there. He was on his way to another project in a place known as Talavadi, a terribly under-developed place on the border between the states of Karnataka and Tamil Nadu. Given its location, Talavadi had been neglected by the governments of both states.

Myrada had a project there, spread over a vast area, with it being divided into sectors: Western, Eastern and Central. I was asked to replace the person in charge of the Western Sector, one Raj Aiyer, who was leaving the organisation and returning to Bangalore. There were 55 villages which I had to look after, along with two local staff, neither of whom was very happy that they had not been given the job.

For the time I was there — which ultimately turned out to be about eight months in all — I lived in a village known as Panakahalli, in a small house. One couldn’t spend much money as there was no entertainment. My food came from the Catholic Church which was right in front of the building in which I lived; there were nuns living there who had taken a vow of silence and they did the cooking for the two priests and also for the one outsider – me.

I would go there in the morning and evening for my meals; lunch was in one village or another, where I was visiting and trying to set up little local projects that could be carried out to build relationships with the people. We had more meetings than I care to think about and carried out all kinds of meaningless surveys. The project had plenty of funds — 5.5 million rupees to be spent over three years — but the head, one S. Rajkumar, was frugal in the extreme, though a man of very great integrity.

It took time to get anything going due to the bureaucracy and also the hassles involved with getting from A to B. The three sector extension officers — I was in charge of the western sector — had motorcycles for getting around and there was one jeep at the project head office in the central sector. If your bike broke down, then you were stuck. There was a patchy bus service but one could not depend on it to visit villages, which was the primary job that we had.

Given the remoteness of this area, a provision for six motorcycles had been included in the project report. But three of these vehicles were being used by staff in Bangalore. Exactly why I could not figure out, because a lot of productive time was lost when bikes broke down in the project area itself. When I raised it during one of the weekly staff meetings at which the agriculture officer from Bangalore, one Alva, was present, he was not very pleased. Soon the word got around that there was an upstart in Talavadi who would not hesitate to speak out.

On one trip to Bangalore, I dropped in at the head office and found Thazhathu there. He was all oiliness as usual and offered me the use of his motorcycle — it was one of the three spares that had been asked for by our project — while I was in town. We had a short chat one evening, and he told me that Davinson was thinking of putting me in charge of a project in Kodaikanal, a hill station in Tamil Nadu, where Myrada had started a project to help refugees from Sri Lanka who were taking shelter from the ethnic problems in that country.

A few months later, Rajkumar decided to revive the co-operatives that had lain dormant in the Talavadi area for a long time. The idea was to give loans to the locals through the co-operatives. For this, all the locals had to come together and vote. This was a success, though the villagers who turned up on the day were under the impression that the moment they affixed their thumb prints on the resolution to restart the co-ops — most could not read or write — they would be given loans.

Soon after this — in October — Rajkumar went off to Bangalore for meetings at the Myrada head office and left me in charge. Everyone had gathered in the eastern sector for the co-op vote and the understanding was that we would be there until Rajkumar returned. There was prohibition in Tamil Nadu at the time so any time someone felt the need to drink, they would head over the border — which wasn’t very far away — and slake their thirst.

The offices for the eastern sector were being built at this time and there were three people from a private company who were handling this, all from a Bangalore-based company named Trinity Constructions. Rudy Gonsalves, Raymond Tellis and Joe van Ross were very nice folk, but all loved a drink. As did the eastern sector extension officer, Baldwin Bose Sigamani.

While we were all there, one night Baldwin, Joe and Raymond went out for a drink. The first I knew of it was when some locals brought me news that a fight was taking place in a shop just across the border and that they were involved. I took the project jeep and went looking for them. When the trio returned, they got into a fight with the workers who were employed by Gonsalves. It turned into a major brawl.

When Rajkumar returned, he was not happy about what had happened. But things got worse: someone at head office got wind of things, and Davinson landed in our midst a few days later. During a staff meeting, he told me that as I had been in charge, I was responsible for all that had happened and would be sacked. He did not anticipate the reaction from the others – all handed in their resignations.

Faced with this situation, Davinson had second thoughts, and changed his mind about firing me. But from that point on, I was a marked man as he was unhappy about having to reverse his decision.

In November, Thazhathu came down to Talavadi and picked a fight with me; he had not forgotten my reports which had got him into a spot of bother earlier in the year. By this time I had had enough, and was longing to return to Bangalore and journalism. So I told Rajkumar that I would leave in December.

When I got home a few days after Christmas, I found that my previous employer had sent a letter home asking if I could come and meet him. I did so and he asked me to come back and work for him as a journalist. I have stayed in that profession to this day.

That was how my village work ended. In the few months I was there, I managed to put together a project to provide drinking water for a village down the road from where I lived. Some of the people in that village, Singanapuram, wept when I told them that I was leaving.

For me, the enduring image is that of a man named Kariappa, who came to me after I had been given a farewell by the people and slipped 20 rupees into my hand. It was a sum he could ill-afford to give away, but he wanted to do something to show his gratitude. He had tears in his eyes as he turned away.

Planet DebianNorbert Preining: TLCockpit v1.0

Today I released v1.0 of TLCockpit, the GUI front-end for the TeX Live Manager tlmgr.

If you are looking for a general introduction to TLCockpit, please see the blog introducing it. Here I only want to introduce the changes made since the last announcement here:

  • copyable information: The package information window didn’t allow copying text from it; this has been fixed as far as possible.
  • placeholders: Add placeholders to tables when they are empty.
  • Cygwin support: Thanks to lots of debugging and support from Ken Brown, we now support running on Cygwin.
  • Java version checks: Java versions prior to 8 are not supported and the program bails out. For Java releases later than version 8 we give a warning, since in most cases the program will not run due to ScalaFX incompatibilities.
The new release is available from CTAN and will soon be available via tlmgr update. As usual, please use the issue page of the GitHub project to report problems.


Planet DebianVishal Gupta: DebDialer : Handling phone numbers on Linux Desktops | GSoC 2018

This summer I had the chance to contribute to Debian as a part of GSoC. I built a desktop application, debdialer, for handling tel: URLs (and phone numbers in general) on the Linux Desktop. It is written in Python 3.5.2 and uses PyQt4 to display a popup window. Alternatively, there is also a no-gui option that uses dmenu for input and the terminal for output. There is also a modified apk of KDE-Connect to link debdialer with the user’s Android Phone. The pop-up window has numeric and delete buttons, so the user can either use the GUI or keyboard to modify numbers.

DebDialer popup window


(Screenshots and how-to)
  1. Adds contact using .vcf file (Add vcard to Contacts)
  2. Adds number in dialer as contact (Add to Contacts)
  3. Sending dialer number to Android phone (DIAL ON ANDROID PHONE)
  4. Parsing numbers from file (Open File)
  5. Automatic formatting of numbers and setting of details


Installing with pip installs the python package but does not set up the desktop file. Hence, the following script needs to be run.

# Optional dependencies. At least one of them is required.
sudo apt install dmenu
sudo apt install python3-pyqt4

curl -L -s | bash
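The script's main job is installing a desktop entry that claims the tel: scheme, the same x-scheme-handler mechanism browsers use for magnet links. A minimal sketch of such an entry (the Exec command and names here are illustrative, not the exact file debdialer ships):

```ini
[Desktop Entry]
Type=Application
Name=DebDialer
Comment=Handle tel: links and phone numbers
# Illustrative command; the packaged entry may differ
Exec=debdialer %u
# Claiming the scheme lets Firefox/Chromium offer this app for tel: links
MimeType=x-scheme-handler/tel;
NoDisplay=true
```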


CryptogramFriday Squid Blogging: Watch Squid Change Colors

This is an amazing short video of a squid -- I don't know the species -- changing its color instantly.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianJohn Goerzen: The Python Unicode Mess

Unicode has solved a lot of problems. Anyone that remembers the mess of ISO-8859-* vs. CP437 (and of course it’s even worse for non-Western languages) can attest to that. And of course, these days they’re doing the useful work of…. codifying emojis.

Emojis aside, things aren’t all so easy. Today’s cause of pain: Python 3. So much pain.

Python decided to fully integrate Unicode into the language. Nice idea, right?

But here come the problems. And they are numerous.

gpodder, for instance, frequently exits with tracebacks due to Python errors converting podcast titles with smartquotes into ASCII. Then you have the case where the pexpect docs say to use logfile = sys.stdout to show the interaction with the virtual terminal. Only that causes an error these days.

But processing of filenames takes the cake. I was recently dealing with data from 20 years ago, before UTF-8 was a filename standard. These filenames are still valid on Unix. tar unpacks them, and they work fine. But you start getting encoding errors from Python trying to do things like store filenames in strings. For a Python program to properly support all valid Unix filenames, it must use “bytes” instead of strings, which has all sorts of annoying implications. What’s the chances that all Python programs do this correctly? Yeah. Not high, I bet.
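The difference is easy to see with a filename that is legal on Unix but not valid UTF-8: listing the directory with a bytes path sidesteps decoding entirely. A small sketch (Python 3, Unix):

```python
import os
import tempfile

# A filename that is perfectly legal on Unix but is not valid UTF-8
name = b'caf\xe9.txt'  # Latin-1 era bytes
d = tempfile.mkdtemp()
with open(os.path.join(d.encode(), name), 'wb'):
    pass

# A bytes path in, bytes names out -- no decoding, so no UnicodeDecodeError
assert name in os.listdir(d.encode())

# os.fsdecode/os.fsencode round-trip such names using surrogateescape
assert os.fsencode(os.fsdecode(name)) == name
```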

I recently was processing data generated by mtree, which uses octal escapes for special characters in filenames. I thought this should be easy in Python, eh?

That second link had a mention of an undocumented function, codecs.escape_decode, which does it right. I finally had to do this:

    if line.startswith(b'#'):
        continue  # skip mtree comment lines
    fields = line.split()
    filename = codecs.escape_decode(fields[0])[0]
    filetype = getfield(b"type", fields[1:])
    if filetype == b"file":

And, whatever you do, don’t accidentally write if filetype == "file" — that will silently always evaluate to False, because "file" tests different than b"file". Not that I, uhm, wrote that and didn’t notice it at first…
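Both traps are easy to demonstrate: the undocumented codecs.escape_decode really does undo mtree's octal escapes, and bytes never compare equal to str, even for plain ASCII:

```python
import codecs

# mtree writes a space in a filename as the octal escape \040
raw = b'My\\040File'
decoded = codecs.escape_decode(raw)[0]
assert decoded == b'My File'

# bytes and str never compare equal -- the silent b"file" vs "file" trap
assert (b"file" == "file") is False
```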

So if you want to actually handle Unix filenames properly in Python, you:

  • Must have a processing path that fully avoids Python strings.
  • Must use sys.{stdin,stdout}.buffer instead of just sys.stdin/stdout
  • Must supply filenames as bytes to various functions. See PEP 471 for this comment: “Like the other functions in the os module, scandir() accepts either a bytes or str object for the path parameter, and returns the and DirEntry.path attributes with the same type as path. However, it is strongly recommended to use the str type, as this ensures cross-platform support for Unicode filenames. (On Windows, bytes filenames have been deprecated since Python 3.3).” So if you want to be cross-platform, it’s even worse, because you can’t use str on Unix nor bytes on Windows.
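The second point in practice: decoding such a name for print() raises, while the binary buffer underneath stdout accepts any bytes:

```python
import sys

name = b'caf\xe9.txt'  # legal Unix filename, invalid UTF-8

# The str route blows up...
try:
    name.decode('utf-8')
    raised = False
except UnicodeDecodeError:
    raised = True
assert raised

# ...while the binary layer writes the raw bytes untouched
sys.stdout.buffer.write(name + b'\n')
```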

Update: Would you like to receive filenames on the command line? I’ll hand you this fine mess. And the environment? It’s not even clear.

Krebs on SecuritySupply Chain Security is the Whole Enchilada, But Who’s Willing to Pay for It?

From time to time, there emerge cybersecurity stories of such potential impact that they have the effect of making all other security concerns seem minuscule and trifling by comparison. Yesterday was one of those times. Bloomberg Businessweek on Thursday published a bombshell investigation alleging that Chinese cyber spies had used a U.S.-based tech firm to secretly embed tiny computer chips into electronic devices purchased and used by almost 30 different companies. There aren’t any corroborating accounts of this scoop so far, but it is both fascinating and terrifying to look at why threats to the global technology supply chain can be so difficult to detect, verify and counter.

In the context of computer and Internet security, supply chain security refers to the challenge of validating that a given piece of electronics — and by extension the software that powers those computing parts — does not include any extraneous or fraudulent components beyond what was specified by the company that paid for the production of said item.

In a nutshell, the Bloomberg story claims that San Jose, Calif. based tech giant Supermicro was somehow caught up in a plan to quietly insert a rice-sized computer chip on the circuit boards that get put into a variety of servers and electronic components purchased by major vendors, allegedly including Amazon and Apple. The chips were alleged to have spied on users of the devices and sent unspecified data back to the Chinese military.

It’s critical to note up top that Amazon, Apple and Supermicro have categorically denied most of the claims in the Bloomberg piece. That is, their positions refuting core components of the story would appear to leave little wiggle room for future backtracking on those statements. Amazon also penned a blog post that more emphatically stated their objections to the Bloomberg piece.

Nevertheless, Bloomberg reporters write that “the companies’ denials are countered by six current and former senior national security officials, who—in conversations that began during the Obama administration and continued under the Trump administration—detailed the discovery of the chips and the government’s investigation.”

The story continues:

Today, Supermicro sells more server motherboards than almost anyone else. It also dominates the $1 billion market for boards used in special-purpose computers, from MRI machines to weapons systems. Its motherboards can be found in made-to-order server setups at banks, hedge funds, cloud computing providers, and web-hosting services, among other places. Supermicro has assembly facilities in California, the Netherlands, and Taiwan, but its motherboards—its core product—are nearly all manufactured by contractors in China.

Many readers have asked for my take on this piece. I heard similar allegations earlier this year about Supermicro and tried mightily to verify them but could not. That in itself should be zero gauge of the story’s potential merit. After all, I am just one guy, whereas this is the type of scoop that usually takes entire portions of a newsroom to research, report and vet. By Bloomberg’s own account, the story took more than a year to report and write, and cites 17 anonymous sources as confirming the activity.

Most of what I have to share here is based on conversations with some clueful people over the years who would probably find themselves confined to a tiny, windowless room for an extended period if their names or quotes ever showed up in a story like this, so I will tread carefully around this subject.

The U.S. Government isn’t eager to admit it, but there has long been an unofficial inventory of tech components and vendors that are forbidden to buy from if you’re in charge of procuring products or services on behalf of the U.S. Government. Call it the “brown list,” “black list,” “entity list” or what have you, but it’s basically an indelible index of companies that are on the permanent Shit List of Uncle Sam for having been caught pulling some kind of supply chain shenanigans.

More than a decade ago when I was a reporter with The Washington Post, I heard from an extremely well-placed source that one Chinese tech company had made it onto Uncle Sam’s entity list because they sold a custom hardware component for many Internet-enabled printers that secretly made a copy of every document or image sent to the printer and forwarded that to a server allegedly controlled by hackers aligned with the Chinese government.

That example gives a whole new meaning to the term “supply chain,” doesn’t it? If Bloomberg’s reporting is accurate, that’s more or less what we’re dealing with here in Supermicro as well.

But here’s the thing: Even if you identify which technology vendors are guilty of supply-chain hacks, it can be difficult to enforce their banishment from the procurement chain. One reason is that it is often tough to tell from the brand name of a given gizmo who actually makes all the multifarious components that go into any one electronic device sold today.

Take, for instance, the problem right now with insecure Internet of Things (IoT) devices — cheapo security cameras, Internet routers and digital video recorders — sold at places like Amazon and Walmart. Many of these IoT devices have become a major security problem because they are massively insecure by default and difficult if not also impractical to secure after they are sold and put into use.

For every company in China that produces these IoT devices, there are dozens of “white label” firms that market and/or sell the core electronic components as their own. So while security researchers might identify a set of security holes in IoT products made by one company whose products are white labeled by others, actually informing consumers about which third-party products include those vulnerabilities can be extremely challenging. In some cases, a technology vendor responsible for some part of this mess may simply go out of business or close its doors and re-emerge under different names and managers.

Mind you, there is no indication anyone is purposefully engineering so many of these IoT products to be insecure; a more likely explanation is that building in more security tends to make devices considerably more expensive and slower to market. In many cases, their insecurity stems from a combination of factors: They ship with every imaginable feature turned on by default; they bundle outdated software and firmware components; and their default settings are difficult or impossible for users to change.

We don’t often hear about intentional efforts to subvert the security of the technology supply chain simply because these incidents tend to get quickly classified by the military when they are discovered. But the U.S. Congress has held multiple hearings about supply chain security challenges, and the U.S. government has taken steps on several occasions to block Chinese tech companies from doing business with the federal government and/or U.S.-based firms.

Most recently, the Pentagon banned the sale of Chinese-made ZTE and Huawei phones on military bases, according to a Defense Department directive that cites security risks posed by the devices. The U.S. Department of Commerce also has instituted a seven-year export restriction for ZTE, resulting in a ban on U.S. component makers selling to ZTE.

Still, the issue here isn’t that we can’t trust technology products made in China. Indeed there are numerous examples of other countries — including the United States and its allies — slipping their own “backdoors” into hardware and software products.

Like it or not, the vast majority of electronics are made in China, and this is unlikely to change anytime soon. The central issue is that we don’t have any other choice right now. The reason is that by nearly all accounts it would be punishingly expensive to replicate that manufacturing process here in the United States.

Even if the U.S. government and Silicon Valley somehow mustered the funding and political will to do that, insisting that products sold to U.S. consumers or the U.S. government be made only with components made here in the U.S.A. would massively drive up the cost of all forms of technology. Consumers would almost certainly balk at buying these way more expensive devices. Years of experience has shown that consumers aren’t interested in paying a huge premium for security when a comparable product with the features they want is available much more cheaply.

Indeed, noted security expert Bruce Schneier calls supply-chain security “an insurmountably hard problem.”

“Our IT industry is inexorably international, and anyone involved in the process can subvert the security of the end product,” Schneier wrote in an opinion piece published earlier this year in The Washington Post. “No one wants to even think about a US-only anything; prices would multiply many times over. We cannot trust anyone, yet we have no choice but to trust everyone. No one is ready for the costs that solving this would entail.”

The Bloomberg piece also addresses this elephant in the room:

“The problem under discussion wasn’t just technological. It spoke to decisions made decades ago to send advanced production work to Southeast Asia. In the intervening years, low-cost Chinese manufacturing had come to underpin the business models of many of America’s largest technology companies. Early on, Apple, for instance, made many of its most sophisticated electronics domestically. Then in 1992, it closed a state-of-the-art plant for motherboard and computer assembly in Fremont, Calif., and sent much of that work overseas.

Over the decades, the security of the supply chain became an article of faith despite repeated warnings by Western officials. A belief formed that China was unlikely to jeopardize its position as workshop to the world by letting its spies meddle in its factories. That left the decision about where to build commercial systems resting largely on where capacity was greatest and cheapest. “You end up with a classic Satan’s bargain,” one former U.S. official says. “You can have less supply than you want and guarantee it’s secure, or you can have the supply you need, but there will be risk. Every organization has accepted the second proposition.”

Another huge challenge of securing the technology supply chain is that it’s quite time consuming and expensive to detect when products may have been intentionally compromised during some part of the manufacturing process. Your typical motherboard of the kind produced by a company like Supermicro can include hundreds of chips, but it only takes one hinky chip to subvert the security of the entire product.

Also, most of the U.S. government’s efforts to police the global technology supply chain seem to be focused on preventing counterfeits — not finding secretly added spying components.

Finally, it’s not clear that private industry is up to the job, either. At least not yet.

“In the three years since the briefing in McLean, no commercially viable way to detect attacks like the one on Supermicro’s motherboards has emerged—or has looked likely to emerge,” the Bloomberg story concludes. “Few companies have the resources of Apple and Amazon, and it took some luck even for them to spot the problem. ‘This stuff is at the cutting edge of the cutting edge, and there is no easy technological solution,’ one of the people present in McLean says. ‘You have to invest in things that the world wants. You cannot invest in things that the world is not ready to accept yet.'”

For my part, I try not to spin my wheels worrying about things I can’t change, and the supply chain challenges definitely fit into that category. I’ll have some more thoughts on the supply chain problem and what we can do about it in an interview to be published next week.

But for the time being, there are some things worth thinking about that can help mitigate the threat from stealthy supply chain hacks. Writing for this week’s newsletter put out by the SANS Institute, a security training company based in Bethesda, Md., editorial board member William Hugh Murray has a few provocative thoughts:

  1. Abandon the password for all but trivial applications. Steve Jobs and the ubiquitous mobile computer have lowered the cost and improved the convenience of strong authentication enough to overcome all arguments against it.
  2. Abandon the flat network. Secure and trusted communication now trump ease of any-to-any communication.
  3. Move traffic monitoring from encouraged to essential.
  4. Establish and maintain end-to-end encryption for all applications. Think TLS, VPNs, VLANs and physically segmented networks. Software Defined Networks put this within the budget of most enterprises.
  5. Abandon the convenient but dangerously permissive default access control rule of “read/write/execute” in favor of restrictive “read/execute-only” or even better, “Least privilege.” Least privilege is expensive to administer but it is effective. Our current strategy of “ship low-quality early/patch late” is proving to be ineffective and more expensive in maintenance and breaches than we could ever have imagined.

CryptogramHelen Nissenbaum on Data Privacy and Consent

This is a fantastic Q&A with Cornell Tech Professor Helen Nissenbaum on data privacy and why it's wrong to focus on consent.

I'm not going to pull a quote, because you should read the whole thing.

CryptogramDetecting Credit Card Skimmers

Interesting research paper: "Fear the Reaper: Characterization and Fast Detection of Card Skimmers":

Abstract: Payment card fraud results in billions of dollars in losses annually. Adversaries increasingly acquire card data using skimmers, which are attached to legitimate payment devices including point of sale terminals, gas pumps, and ATMs. Detecting such devices can be difficult, and while many experts offer advice in doing so, there exists no large-scale characterization of skimmer technology to support such defenses. In this paper, we perform the first such study based on skimmers recovered by the NYPD's Financial Crimes Task Force over a 16 month period. After systematizing these devices, we develop the Skim Reaper, a detector which takes advantage of the physical properties and constraints necessary for many skimmers to steal card data. Our analysis shows the Skim Reaper effectively detects 100% of devices supplied by the NYPD. In so doing, we provide the first robust and portable mechanism for detecting card skimmers.

Boing Boing post.

Worse Than FailureError'd: Let's Hope it's Only a Test

"When the notification system about the broken NYC MTA is broken, does that make the MTA meta-broken?" writes T.S.


"I must be 'Y' years old by now, right?" writes Louis I.


Josh W. wrote, "You'd think that Fifth Third Bank, which handles money, would be able to handle numbers correctly."


"I received this sextortion email last night, but I'm guessing the spammer hopes I have my own spinning software in order to decipher it!" Nigel writes.


Romaji A. wrote, "Maybe label printers get bored of printing bar codes and it makes them go a little...crazy?"


"The Qantas in flight app is proud to announce that your plane's warp drives are fully activated," writes Tony B.


[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


TEDSociety 5.0: Talks from TED and Samsung

Carmel Coscia, vice president of B2B marketing for Samsung Electronics America, welcomes the audience to TEDSalon: Society 5.0, held at Samsung’s 837 Space in New York, September 26, 2018. (Photo: Ryan Lash / TED)

We live in an interconnected world where boundaries between physical and digital spaces are blurring. We can no longer think about innovation in isolation, but must consider how emerging technologies — like artificial intelligence, augmented reality, the Internet of Things, 5G networks, robotics and the decentralized web — will combine to create (we hope!) a super-smart society.

At TEDSalon: Society 5.0, presented by TED and Samsung, seven leaders and visionaries explored the new era of interconnectivity and how it will reshape our world.

Do you know how your data is being used? We tap on apps and devices all day long, not quite grasping that our usage is based on a “power imbalance,” says Finn Lützow-Holm Myrstad, director of digital policy at the Norwegian Consumer Council. Most of us automatically click “yes” to terms and conditions without realizing we have agreed to let companies collect our personal information and use it on a scale we could never imagine, he explains. To demonstrate, Myrstad introduces Cayla, a Bluetooth-connected doll. According to Cayla’s terms, its manufacturer can use the recordings of children and relatives who play with the doll for advertising, and any information it gathers can be shared with third parties. Myrstad and his team also looked at the terms for a dating app, finding that users had unwittingly forked over their entire dating history — photos, chats and interactions — to the app creator forever. After the Council’s investigations, Cayla was pulled from retailers and the app changed its policies, but as Myrstad points out, “Organizations such as mine … can’t be everywhere, nor can consumers fix this on their own.” Correcting the situation requires ongoing vigilance and intention. Companies must prioritize trust, and governments should constantly update and enforce rules. For the rest of us, he says: “Be the voice that constantly reminds the world that technology will only truly benefit society if it respects basic rights.”

Aruna Srinivasan, executive director for the mobile communication trade group GSMA, believes the Internet of Things will improve our quality of life — from tackling pollution to optimizing food production. She speaks at TEDSalon: Society 5.0. (Photo: Ryan Lash / TED)

How the Internet of Things is solving real problems. You’re surrounded by things connected to the internet — from cars and smart elevators to parking meters and industrial machines used for manufacturing. How can we use the data created by all of these connected devices to make the world safer and healthier? Aruna Srinivasan, executive director at the mobile communication trade group GSMA, shows how the Internet of Things (IoT) is helping to solve two pressing issues: pollution and food production. Using small IoT-connected sensors on garbage trucks in London, Srinivasan and her team created a detailed map showing pollution hotspots and the times of day when pollution was worst. Now, the data is helping the city introduce new traffic patterns, like one-way streets, and create bicycle paths outside of the most highly polluted areas. In the countryside, IoT-enabled sensors are being used to measure soil moisture, pH and other crop conditions in real time. Srinivasan and her team are working with China Agricultural University, China Mobile and Rothamsted Research to use the information gathered by these sensors to improve the harvest of grapes and wheat. The goal: help farmers be more precise, increasing food production while preventing things like water scarcity. “The magic of the IoT comes from the health and security it can provide us,” Srinivasan says. “The Internet of Things is going to transform our world and change our lives for the better.”

Web builder Tamas Kocsis is developing his own internet: a decentralized network powered and secured by the people. He speaks at TEDSalon: Society 5.0. (Photo: Ryan Lash / TED)

Internet by the people, for the people. Web builder Tamas Kocsis is worried about the future of the internet. In its current form, he says, the internet is trending toward centralization: large corporations are in control of our digital privacy and access to information. What’s more, these gatekeepers are vulnerable to attacks and surveillance, and they make online censorship easier. In China, for instance, where the government tightly controls its internet, web users are prohibited from criticizing the government or talking about protests. And the recent passage of EU copyright directive Article 13, which calls for some platforms to filter user-generated content, could limit our freedom to openly blog, discuss, share and link to content. In 2015, Kocsis began to counteract this centralization process by developing an alternative, decentralized network called ZeroNet. Instead of relying on centralized hosting companies, ZeroNet — which is powered by free and open-source software — allows users to help host websites by directly downloading them onto their own servers. The whole thing is secured by public key cryptography, ensuring no one can edit the websites but their owners — and protecting them from being taken down by one central source. In 2017, China began making moves to block Kocsis’s network, but that hasn’t deterred him, he says: “Building a decentralized network means creating a safe harbor, a space where the rules are not written by political parties and big corporations, but by the people.”

The augmented reality revolution. Entrepreneur Brian Mullins believes augmented reality (AR) is a more important technology than the internet — and even the printing press — because of the opportunities it offers for revolutionizing how we work and learn. At a gas turbine power plant in 2017, Mullins saw that when AR programs replaced traditional training measures, workers slashed their training and work time from 15.5 hours to an average of 50 minutes. Mullins predicts AR will bring a cognitive literacy to the world, helping us transition to new careers and workplaces and facilitating breakthroughs in the arts and sciences. Ultimately, Mullins says, AR won’t just change how we work — it’ll change the fundamentals of how we live.

MAI LAN rocks the stage with a performance of two songs, “Autopilote” and “Pumper,” at TEDSalon: Society 5.0. (Photo: Ryan Lash / TED)

A genre-bending performance. During a musical interlude, French-Vietnamese artist MAI LAN holds the audience rapt with a performance of “Autopilote” and “Pumper.” Alternating between French and English lyrics, lead singer Mai-Lan Chapiron sings over diffuse electronic beats and circular synths, bringing her cool charisma to the stage.

Researcher Kate Darling asks: What can our interactions with robots teach us about what it means to be human? She speaks at TEDSalon: Society 5.0. (Photo: Ryan Lash / TED)

Robotic reflections of our humanity. We’re far from developing robots that feel emotions, but we already feel for them, says researcher Kate Darling — and an instinct like that can have consequences. We’re biologically hardwired to project intent and life onto any movement that seems autonomous to us, which sometimes makes it difficult to treat machines (like a Roomba) any differently from the way we treat our own pets. But this emotional connection to robots, while illogical, could prove useful in better understanding ourselves. “My question for the coming era of human-robot interaction is not: ‘Do we empathize with robots?'” Darling says. “It’s: ‘Can robots change people’s empathy?'”

Humans belong in the digital future. Author, documentarian and technologist Douglas Rushkoff isn’t giving up on humans just yet. He believes humans deserve a place in the digital future, but he worries that the future has become “something we bet on in a zero-sum, winner-takes-all competition,” instead of something we work together to create. Humans, it sometimes seems to him, are no longer valued for their creativity but for their data; as he frames it, we’ve been conditioned to see humanity as the problem and technology as the solution. Instead, he urges us to focus on making technology work for us and our future, not the other way around. Believing in the potential and value of humans isn’t about rejecting technology, he says — it’s about bringing key values of our pre-digital world into the future with us. “Join Team Human. Find the others,” Rushkoff says. “Together let’s make the future that we always wanted.”

Worse Than FailureCodeSOD: Break Out of your Parents

When I first glanced at this submission from Thomas, I almost just scrolled right by. “Oh, it’s just another case where they put the same code in both branches of the conditional,” I said. Then I looked again.

if (obj.success) {
    for (i = 0; i < parent.length; i++) {
        try {
            parent[i]['compositionExportResultMessage'](obj.success, obj.response, 'info');
            break; //#BZ7350
        } catch (e) { }
    }
} else {
    for (i = 0; i < parent.length; i++) {
        try {
            parent[i]['compositionExportResultMessage'](obj.success, obj.response, 'error');
            break; //#BZ7350
        } catch (e) { }
    }
}

First, I want to give a little shout out to my “favorite” kind of ticket management: attaching a ticket number to a comment in your code. I can’t be certain, but I assume that’s what //#BZ7350 is doing in there, anyway. I’ve had the misfortune of working in places that required that, and explaining, “I can just put it in my source control comments,” couldn’t shift company policy.

Now, this is a case where nearly identical code is executed in either branch. The key difference is whether you pass info or error up to your parent.

Which… parent is apparently an array, which means it should at least be called parents, but either way that ends up being a weird choice. Sure, there are data structures where children can have multiple parents, but how often do you see them? Especially in web programming, which this appears to be, which mostly uses trees.

But fine, a child might have multiple parents. Those parents may or may not implement a compositionExportResultMessage, and if they do, it may or may not throw an exception when called. We want hopefully one parent to handle it, so take a close look at the loop.

We try to call compositionExportResultMessage on our parent. If it’s successful, we break. If it fails, we throw the exception away and try with the next parent in our array. Repeat until every parent has failed, or one has succeeded.
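Had the author computed the severity string first, the duplicated branches would collapse into one loop. A minimal sketch (keeping the original's parents-as-array design; the notifyParents wrapper is hypothetical, not from the submission):

```javascript
// One loop instead of two: the only difference between the branches
// was the severity string, so compute it up front.
function notifyParents(parents, obj) {
  const level = obj.success ? 'info' : 'error';
  for (let i = 0; i < parents.length; i++) {
    try {
      parents[i].compositionExportResultMessage(obj.success, obj.response, level);
      return true; // first parent that handles it wins
    } catch (e) {
      // this parent couldn't handle it; try the next one
    }
  }
  return false; // no parent handled the message
}
```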

Thomas didn’t provide much context here, but I’m not sure how much we actually need. Something is really wrong in this code base, and based on how ticket #BZ7350 was closed, I don’t think it’s gonna get any better.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!


Planet Linux AustraliaGary Pendergast: WordPress 5.0 Needs You!

Yesterday, we started the WordPress 5.0 release cycle with an announcement post.

It’s a very exciting time to be involved in WordPress, and if you want to help make it the best, now’s an excellent opportunity to jump right in.

A critical goal of this release cycle is transparency.

As a member of the WordPress 5.0 leadership team, the best way for me to do my job is to get feedback from the wider WordPress community as early, and as quickly as possible. I think I speak for everyone on the leadership team when I say that we all feel the same on this. We want everyone to be able to participate, which will require some cooperation from everyone in the wider WordPress community.

The release post was published as soon as it was written: we wanted to get it out quickly, so everyone could be aware of what’s going on. Publishing quickly does mean that we’re still writing the more detailed posts about scope, and timeline, and processes. Instead of publishing a completed plan all at once, we intentionally want to include everyone from the start, and evolve plans as we get feedback.

With no other context, the WordPress 5.0 timeline of “release candidate in about a month” would be very short, which is why we’ve waited until Gutenberg had proved itself before setting a timeline. As we mentioned in the post, WordPress 5.0 will be “WordPress 4.9.8 + Gutenberg”. The Gutenberg plugin is running on nearly 500k sites, and WordPress 4.9.8 is running on millions of sites. For comparison, it’s considered a well tested major version if we see 20k installs before the final release date. Gutenberg is a bigger change than we’ve done in the past, so should be held to a higher standard, and I think we can agree that 500k sites is a pretty good test base: it arguably meets, or even exceeds that standard.

We can have a release candidate ready in a month.

The Gutenberg core team are currently focussed on finishing off the last few features. The Gutenberg plugin has evolved exceedingly quickly thanks to their work, it’s moved so much faster than anything we’ve done in WordPress previously. As we transition to bug fixing, you should expect to see the same rapid improvement.

The block editor’s backwards compatibility with the classic editor is important, of course, and the Classic Editor plugin is a part of that: if you have a site that doesn’t yet work with the block editor, please go ahead and install the plugin. I’d be happy to see the Classic Editor plugin getting 10 million or more installs, if people need it. That would both show a clear need for the classic interface to be maintained for a long time, and because it’s the official WordPress plugin for doing it, we can ensure that it’s maintained for as long as it’s needed. This isn’t a new scenario to the WordPress core team, we’ve been backporting security fixes to WordPress 3.7 for years. We’re never going to leave site owners out in the cold there, and exactly the same attitude applies to the Classic Editor plugin.

The broader Gutenberg project is a massive change, and WordPress is a big ship to turn.

It’s going to take years to make this transition, and it’s okay if WordPress 5.0 isn’t everything for everyone. There’ll be a WordPress 5.1, and 5.2, and 5.3, and so on; the block editor will continue to evolve to work for more and more people.

My role in WordPress 5.0 is to “generally shepherd the merge”. I’ve built or guided some of the most complex changes we’ve made in Core in recent years, and they’ve all been successful. I don’t intend to change that record: WordPress 5.0 will only be released when I’m as confident in it as I was for all of those previous projects.

Right now, I’m asking everyone in the WordPress community for a little bit of trust, that we’re all working with the best interests of WordPress at heart. I’m also asking for a little bit of patience, we’re only human, we can only type so fast, and we do need to sleep every now and then. 😉

WordPress 5.0 isn’t the finish line, it’s the starter pistol.

This is a marathon, not a sprint, and the goal is to set WordPress up for the next 15 years of evolution. This can only happen one step at a time though, and the best way to get there will be by working together. We can have disagreements, we can have different priorities, and we can still come together to create the future of WordPress.

CryptogramThe Effects of GDPR's 72-Hour Notification Rule

The EU's GDPR regulation requires companies to report a breach within 72 hours. Alex Stamos, former Facebook CISO now at Stanford University, points out how this can be a problem:

Interesting impact of the GDPR 72-hour deadline: companies announcing breaches before investigations are complete.

1) Announce & cop to max possible impacted users.
2) Everybody is confused on actual impact, lots of rumors.
3) A month later truth is included in official filing.

Last week's Facebook hack is his example.

The Twitter conversation continues as various people try to figure out if the European law allows a delay in order to work with law enforcement to catch the hackers, or if a company can report the breach privately with some assurance that it won't accidentally leak to the public.

The other interesting impact is the foreclosing of any possible coordination with law enforcement. I once ran response for a breach of a financial institution, which wasn't disclosed for months as the company was working with the USSS to lure the attackers into a trap. It worked.


The assumption that anything you share with an EU DPA stays confidential in the current media environment has been disproven by my personal experience.

This is a perennial problem: we can get information quickly, or we can get accurate information. It's hard to get both at the same time.

CryptogramTerahertz Millimeter-Wave Scanners

Interesting article on terahertz millimeter-wave scanners and their uses to detect terrorist bombers.

The heart of the device is a block of electronics about the size of a 1990s tower personal computer. It comes housed in a musician's black case, akin to the one Spinal Tap might use on tour. At the front: a large, square white plate, the terahertz camera and, just above it, an ordinary closed-circuit television (CCTV) camera. Mounted on a shelf inside the case is a laptop that displays the CCTV image and the blobby terahertz image side by side.

An operator compares the two images as people flow past, looking for unexplained dark areas that could represent firearms or suicide vests. Most images that might be mistaken for a weapon -- backpacks or a big patch of sweat on the back of a person's shirt -- are easily evaluated by observing the terahertz image alongside an unaltered video picture of the passenger.

It is up to the operator -- in LA's case, presumably a transport police officer -- to query people when dark areas on the terahertz image suggest concealed large weapons or suicide vests. The device cannot see inside bodies, backpacks or shoes. "If you look at previous incidents on public transit systems, this technology would have detected those," Sotero says, noting LA Metro worked "closely" with the TSA for over a year to test this and other technologies. "It definitely has the backing of TSA."

How the technology works in practice depends heavily on the operator's training. According to Evans, "A lot of tradecraft goes into understanding where the threat item is likely to be on the body." He sees the crucial role played by the operator as giving back control to security guards and allowing them to use their common sense.

I am quoted in the article as being skeptical of the technology, particularly in how it's deployed.

CryptogramSophisticated Voice Phishing Scams

Brian Krebs is reporting on some new and sophisticated phishing scams over the telephone.

I second his advice: "never give out any information about yourself in response to an unsolicited phone call." Always call them back, and not using the number offered to you by the caller. Always.

EDITED TO ADD: In 2009, I wrote:

When I was growing up, children were commonly taught: "don't talk to strangers." Strangers might be bad, we were told, so it's prudent to steer clear of them.

And yet most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.

These two pieces of advice may seem to contradict each other, but they don't. The difference is that in the second instance, the child is choosing which stranger to talk to. Given that the overwhelming majority of people will help, the child is likely to get help if he chooses a random stranger. But if a stranger comes up to a child and talks to him or her, it's not a random choice. It's more likely, although still unlikely, that the stranger is up to no good.

That advice is generalizable to this instance as well. The problem isn't that someone claiming to be from your bank is asking for personal information. The problem is that they contacted you first.

Where else does this advice hold true?

Worse Than FailureBlind Leading the Blind

Corporate Standards. You know, all those rules created over time by bureaucrats who think that they're making things better by mandating consistency. The ones that force you to take time to change an otherwise properly-functioning system to comply with rules that don't really apply in the context of the application, but need to be blindly followed anyway. Here are a couple of good examples.

Honda vfr750r

Kevin L. worked on an application that provides driving directions via device-hosted map application. The device was designed to be bolted to the handlebars of a motorcycle. Based upon your destination and current coordinates, it would display your location and the marked route, noting things like distance to destination, turns, traffic circles and exit ramps. A great deal of effort was put into the visual design, because even though the device *could* provide audio feedback, on a motorcycle, it was impossible to hear.

One day, his boss, John, called him into a meeting. "I was just read the riot-act by HR. It seems that our application doesn't comply with corporate Accessibility Standards, specifically the standard regarding Braille Literature In Need of Description. You need to add screenreader support to the motorcycle map application. I estimate that it will take a few months of effort. We don't really have the time to spare, but we have to do it!"

Kevin thought about it for a bit and asked his boss if the company really wanted him to spend time to create functionality to provide verbal driving directions for blind motorcycle drivers.

That head-desk moment you're imagining really happened.

Of course, common sense had no bearing on the outcome, and poor Kevin had to do the work anyway.

Someday, self-driving cars will be commonplace, and no one will need directions, audible or otherwise. For now, though, Kevin at least knows that all the visually impaired motorcycle drivers can get to where they're going.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!


Krebs on SecurityWhen Security Researchers Pose as Cybercrooks, Who Can Tell the Difference?

A ridiculous number of companies are exposing some or all of their proprietary and customer data by putting it in the cloud without any kind of authentication needed to read, alter or destroy it. When cybercriminals are the first to discover these missteps, usually the outcome is a demand for money in return for the stolen data. But when these screw-ups are unearthed by security professionals seeking to make a name for themselves, the resulting publicity often can leave the breached organization wishing they’d instead been quietly extorted by anonymous crooks.

Last week, I was on a train from New York to Washington, D.C. when I received a phone call from Vinny Troia, a security researcher who runs a startup in Missouri called NightLion Security. Troia had discovered that All American Entertainment, a speaker bureau which represents a number of celebrities who also can be hired to do public speaking, had exposed thousands of speaking contracts via an unsecured Amazon cloud instance.

The contracts laid out how much each speaker makes per event, details about their travel arrangements, and any requirements or obligations stated in advance by both parties to the contract. No secret access or password was needed to view the documents.

It was a juicy find to be sure: I can now tell you how much Oprah makes per event (it’s a lot). Ditto for Gwyneth Paltrow, Olivia Newton John, Michael J. Fox and a host of others. But I’m not going to do that.

Firstly, it’s nobody’s business what they make. More to the point, All American also is my speaker bureau, and included in the cache of documents the company exposed in the cloud were some of my speaking contracts. In fact, when Troia called about his find, I was on my way home from one such engagement.

I quickly informed my contact at All American and asked them to let me know the moment they confirmed the data was removed from the Internet. While awaiting that confirmation, my pent-up frustration seeped into a tweet that seemed to touch a raw nerve among others in the security industry.

The same day I alerted them, All American took down its bucket of unsecured speaker contract data, and apologized profusely for the oversight (although I have yet to hear a good explanation as to why this data needed to be stored in the cloud to begin with).

This was hardly the first time Troia had alerted me about a huge cache of important or sensitive data that companies have left exposed online. On Monday, TechCrunch broke the story about a “breach” at Apollo, a sales engagement startup boasting a database of more than 200 million contact records. Calling it a breach seems a bit of a stretch; it probably would be more accurate to describe the incident as a data leak.

Just like my speaker bureau, Apollo had simply put all this data up on an Amazon server that anyone on the Internet could access without providing a password. And Troia was again the one who figured out that the data had been leaked by Apollo — the result of an intensive, months-long process that took some extremely interesting twists and turns.
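Part of why such exposures are found so easily: an S3 bucket that permits anonymous listing answers a plain unauthenticated GET with XML enumerating its keys. The helper below is a hypothetical sketch showing only the response-parsing half (the bucket name and keys are invented):

```javascript
// A bucket allowing anonymous listing returns a ListBucketResult XML
// document; any <Key> elements in it mean the contents are world-readable.
function extractKeys(listingXml) {
  const keys = [];
  const re = /<Key>([^<]+)<\/Key>/g;
  let m;
  while ((m = re.exec(listingXml)) !== null) keys.push(m[1]);
  return keys;
}

// Abridged sample of the response shape from a public bucket:
const sample = `<?xml version="1.0"?>
<ListBucketResult>
  <Name>example-bucket</Name>
  <Contents><Key>contracts/speaker-001.pdf</Key></Contents>
  <Contents><Key>contracts/speaker-002.pdf</Key></Contents>
</ListBucketResult>`;

console.log(extractKeys(sample));
```

Scanning for buckets that answer this way requires no credentials at all, which is precisely the problem.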

That journey — which I will endeavor to describe here — offered some uncomfortable insights into how organizations frequently learn about data leaks these days, and indeed whether they derive any lasting security lessons from the experience at all. It also gave me a new appreciation for how difficult it can be for organizations that screw up this way to tell the difference between a security researcher and a bad guy.


I began hearing from Troia almost daily beginning in mid-2017. At the time, he was on something of a personal mission to discover the real-life identity behind The Dark Overlord (TDO), the pseudonym used by an individual or group of criminals who have been extorting dozens of companies — particularly healthcare providers — after hacking into their systems and stealing sensitive data.

The Dark Overlord’s method was roughly the same in each attack. Gain access to sensitive data (often by purchasing access through crimeware-as-a-service offerings), and send a long, rambling ransom note to the victim organization demanding tens of thousands of dollars in Bitcoin for the safe return of said data.

Victims were typically told that if they refused to pay, the stolen data would be sold to cybercriminals lurking on Dark Web forums. Worse yet, TDO also promised to make sure the news media knew that victim organizations were more interested in keeping the breach private than in securing the privacy of their customers or patients.

In fact, the apparent ringleader of TDO reached out to KrebsOnSecurity in May 2016 with a remarkable offer. Using the nickname “Arnie,” the public voice of TDO said he was offering exclusive access to news about their latest extortion targets.

Snippets from a long email conversation in May 2016 with a hacker who introduced himself as Adam but would later share his nickname as “Arnie” and disclose that he was a member of The Dark Overlord. In this conversation, he is offering to sell access to scoops about data breaches that he caused.

Arnie claimed he was an administrator or key member on several top Dark Web forums, and provided a handful of convincing clues to back up his claim. He told me he had real-time access to dozens of healthcare organizations they’d hacked into, and that each one which refused to give in to TDO’s extortion demands could turn into a juicy scoop for KrebsOnSecurity.

Arnie said he was coming to me first with the offer, but that he was planning to approach other journalists and news outlets if I declined. I balked after discovering that Arnie wasn’t offering this access for free: He wanted 10 bitcoin in exchange for exclusivity (at the time, his asking price was roughly equivalent to USD $5,000).

Perhaps other news outlets are accustomed to paying for scoops, but that is not something I would ever consider. And in any case the whole thing was starting to smell like a shakedown or scam. I declined the offer. It’s possible other news outlets or journalists did not; I will not speculate on this matter further, other than to say readers can draw their own conclusions based on the timeline and the public record.


Fast-forward to September 2017, and Troia was contacting me almost daily to share tidbits of research into email addresses, phone numbers and other bits of data apparently tied to TDO’s communications with victims and their various identities on Dark Web forums.

His research was exhaustive and occasionally impressive, and for a while I caught the TDO bug and became engaged in a concurrent effort to learn the identities of the TDO members. For better or worse, the results of that research will have to wait for another story and another time.

At one point, Troia told me he’d gained acceptance on the Dark Web forum Kickass, using the hacker nickname “Soundcard“. He said he believed a presence on all of the forums TDO was active on was necessary for figuring out once and for all who was behind this brazen and very busy extortion group.

Here is a screen shot Troia shared with me of Soundcard’s posting there, which concerned a July 2018 forum discussion thread about a data leak of 340 million records from Florida-based marketing firm Exactis. As detailed in June 2018, Troia had discovered this huge cache of data unprotected and sitting wide open on a cloud server, and ultimately traced it back to Exactis.

Vinny Troia, a.k.a. “Soundcard” on the Dark Web forum Kickass.

After several weeks of comparing notes about TDO with Troia, I learned that he was telling random people that we were “working together,” and that he was throwing my name around to various security industry sources and friends as a way of gaining access to new sources of data.

I respectfully told Troia that this was not okay — that I never told people about our private conversations (or indeed that we spoke at all) — and I asked him to stop doing that. He apologized, said he didn’t understand he’d overstepped certain boundaries, and that it would never happen again.

But it would. Multiple times. Here’s one time that really stood out for me. Earlier this summer, Troia sent me a link to a database of truly staggering size — nearly 10 terabytes of data — that someone had left open to anyone via a cloud instance. Again, no authentication or password was needed to access the information.

At first glance, it appeared to be LinkedIn profile data. Working off that assumption, I began a hard target search of the database for specific LinkedIn profiles of important people. I first used the Web to locate the public LinkedIn profile pages for nearly all of the CEOs of the world’s top 20 largest companies, and then searched those profile names in the database that Troia had discovered.
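A "hard target search" like that amounts to streaming a huge dump and filtering on a handful of names. A hypothetical sketch (the record schema and names here are invented; the post doesn't describe the dump's actual format):

```javascript
// Scan a newline-delimited JSON dump for records matching target names.
function findProfiles(lines, targetNames) {
  const targets = new Set(targetNames.map((n) => n.toLowerCase()));
  const hits = [];
  for (const line of lines) {
    let rec;
    try {
      rec = JSON.parse(line);
    } catch (e) {
      continue; // skip malformed rows -- common in multi-terabyte dumps
    }
    if (rec && rec.name && targets.has(rec.name.toLowerCase())) hits.push(rec);
  }
  return hits;
}

// Tiny stand-in for the dump (in reality you'd stream it line by line):
const dump = [
  '{"name":"Alice Example","email":"alice@example.com"}',
  'garbage line',
  '{"name":"Bob Bigshot","phone":"+1-555-0100"}',
];
console.log(findProfiles(dump, ['Bob Bigshot']).map((r) => r.name));
```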

Suddenly, I had the cell phone numbers, addresses, email addresses and other contact data for some of the most powerful people in the world. Immediately, I reached out to contacts at LinkedIn and Microsoft (which bought LinkedIn in 2016) and arranged a call to discuss the findings.

LinkedIn’s security team told me the data I was looking at was in fact an amalgamation of information scraped from LinkedIn and dozens of public sources, and being sold by the same firm that was doing the scraping and profile collating. LinkedIn declined to name that company, and it has not yet responded to follow-up questions about whether the company it was referring to was Apollo.

Sure enough, a closer inspection of the database revealed the presence of other public data sources, including startup web site AngelList, Facebook, Salesforce, Twitter, and Yelp, among others.

Several other trusted sources I approached with samples of data spliced from the nearly 10 TB trove of data Troia found in the cloud said they believed LinkedIn’s explanation, and that the data appeared to have been scraped off the public Internet from a variety of sources and combined into a single database.

I told Troia it didn’t look like the data came exclusively from LinkedIn, or at least wasn’t stolen from them, and that all indications suggested it was a collection of data scraped from public profiles. He seemed unconvinced.

Several days after my second call with LinkedIn’s security team — around Aug. 15 — I was made aware of a sales posting on the Kickass crime forum by someone selling what they claimed was “all of the LinkedIN user-base.” The ad, a blurry, partial screenshot of which can be seen below, was posted by the Kickass user Soundcard. The text of the sales thread was as follows:

Soundcard offering to sell what he claimed was all of LinkedIn’s user data, on the Dark Web forum Kickass.

“KA users –

I present you with exclusive opportunity to purchase all (yes ALL) of the LinkedIN user-base for the low low price of 2 BTC.

I found a database server with all LinkedIN users. All of user’s personal information is included in this database (including private email and phone number NOT listed on public profile). No passwords, sorry.

Size: 2.1TB.

user count: 212 million

Why so large for 212 million users? See the sample data per record. There is lot of marketing and CRM data as well. I sell original data only. no editz.

Here is index of server. The LinkedIN users spread across people and contacts indexes. Sale includes both of those indexes.

Questions, comments, purchase? DM me, or message me – soundcard@exploit[.]im

The “sample data” included in the sales thread was from my records in this huge database, although Soundcard said he had sanitized certain data elements from this snippet. He explained his reasoning for that in a short Q&A from his sales thread:

Question 1: Why you sanitize Brian Krebs’ information in sample?

Answer 1: Because nothing in life free. This only to show i have data.

I soon confronted Troia not only for offering to sell leaked data on the Dark Web, but also for once again throwing my name around in his various activities — despite past assurances that he would not. Also, his actions had boxed me into a corner: Any plans I had to credit him in a story for eventually helping to determine the source of the leaked data (which we now know to be Apollo) became more complicated without also explaining his Dark Web alter ego as Soundcard, and I am not in the habit of omitting such important details from stories.

Troia assured me that he never had any intention of selling the data, and that the whole thing had been a ruse to help smoke out some of the suspected TDO members.

For its part, LinkedIn’s security team was not amused, and published a short post to its media page denying that the company had suffered a security breach.

“We want our members to know that a recent claim of a LinkedIn data breach is not accurate,” the company wrote. “Our investigation into this claim found that a third-party sales intelligence company that is not associated with LinkedIn was compromised and exposed a large set of data aggregated from a number of social networks, websites, and the company’s own customers. It also included a limited set of publicly available data about LinkedIn members, such as profile URL, industry and number of connections. This was not a breach of LinkedIn.”

It is quite a fine line to walk when self-styled security researchers mimic cyber criminals in the name of making things more secure. On the one hand, reaching out to companies that are inadvertently exposing sensitive data and getting them to secure it or pull it offline altogether is a worthwhile and often thankless effort, and clearly many organizations still need a lot of help in this regard.

On the other hand, most organizations that fit this description simply lack the security maturity to tell the difference between someone trying to make the Internet a safer place and someone trying to sell them a product or service.

As a result, victim organizations tend to react with deep suspicion or even hostility to legitimate researchers and security journalists who alert them about a data breach or leak. And stunts like the ones described above tend to have the effect of deepening that suspicion, and sowing fear, uncertainty and doubt about the security industry as a whole.

Sociological ImagesWho Gets to Change the Subject?

Everyone has been talking about last week’s Senate testimony from Christine Blasey Ford and Supreme Court nominee Brett Kavanaugh. Amid the social media chatter, I was struck by this infographic from an article at Vox:

Commentators have noted the emotional contrast between Ford and Kavanaugh’s testimony and observed that Kavanaugh’s anger is a strategic move in a culture that is used to discouraging emotional expression from men and judging it harshly from women. Alongside the anger, this chart also shows us a gendered pattern in who gets to change the topic of conversation—or disregard it altogether.

Sociologists use conversation analysis to study how social forces shape our small, everyday interactions. One example is “uptalk,” a gendered pattern of pitched-up speech that conveys different meanings when men and women use it. Are men more likely to change the subject or ignore the topic of conversation? Two experimental conversation studies from American Sociological Review shed light on what could be happening here and show a way forward.

In a 1994 study that put men and women into different leadership roles, Cathryn Johnson found that participants’ status had a stronger effect on their speech patterns, while gender was more closely associated with nonverbal interactions. In a second study from 2001, Dina G. Okamoto and Lynn Smith-Lovin looked directly at changing the topic of conversation and did not find strong differences across the gender of participants. However, they did find an effect where men following male speakers were less likely to change the topic, concluding “men, as high-status actors, can more legitimately evaluate the contributions of others and, in particular, can more readily dismiss the contributions of women” (Pp. 867).

Photo Credit: Sharon Mollerus, Flickr CC

The important takeaway here is not that gender “doesn’t matter” in everyday conversation. It is that gender can have indirect influences on who carries social status into a conversation, and we can balance that influence by paying attention to who has the authority to speak and when. By consciously changing status dynamics — possibly by changing who is in the room or by calling out rule-breaking behavior — we can work to fix imbalances in who has to have the tough conversations.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


CryptogramFacebook Is Using Your Two-Factor Authentication Phone Number to Target Advertising

From Kashmir Hill:

Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn't hand over at all, but that was collected from other people's contact books, a hidden layer of details Facebook has about you that I've come to call "shadow contact information." I managed to place an ad in front of Alan Mislove by targeting his shadow profile. This means that the junk email address that you hand over for discounts or for shady online shopping is likely associated with your account and being used to target you with ads.

Here's the research paper. Hill again:

They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user's account, that phone number became targetable by an advertiser within a couple of weeks. So users who want their accounts to be more secure are forced to make a privacy trade-off and allow advertisers to more easily find them on the social network.

Worse Than FailureCodeSOD: An Error on Logging

The beauty of a good logging system is that it allows you to spam logging messages all through your code, but then set the logging level at runtime, so that you have fine grained control over how much logging there is. You can turn the dial from, “things are running smooth in production, so be quiet,” to “WTF THINGS ARE ON FIRE GODS HELP US WHAT IS GOING ON CAN I LAUNCH A DEBUGGER ON THE PRODUCTION ENVIRONMENT PLEASE GOD”.

You might write something like this, for example:

LOG.error("Error generating file {}", getFileName(), e);

This leverages the logging framework- if error logging is enabled, the message gets logged, otherwise the message is dropped. The string is autoformatted, replacing the {} with the results of getFileName().

That’s the code Graham wrote. Graham replaced this code, from another programmer who maybe didn’t fully grasp what they were doing:

if (LOG.isErrorEnabled()) {
    LOG.error(String.format("Generating " + getFileName() + " :%s", e));
}

There’s a pile of poorly understood things happening here. As stated, LOG.error already does nothing when error logging is disabled, which makes the isErrorEnabled guard redundant. LOG.error can also handle string formatting on its own, but that feature isn’t used here; instead the code demonstrates a complete misunderstanding of what String.format does, gluing the file name on with string concatenation before the format string is ever applied, and leaving only the trailing %s for format to fill in.
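To make the contrast concrete, here is a small, self-contained sketch. The substitute method is a hypothetical stand-in for the logging framework’s {} handling (a framework like SLF4J does this internally), and getFileName and the exception are invented for illustration:

```java
// Illustrative only: "substitute" imitates how a logging framework
// fills {} placeholders; it is not part of any real logging API.
public class LogFormatDemo {
    // Replace each {} in the template with the next argument, in order.
    static String substitute(String template, Object... args) {
        StringBuilder sb = new StringBuilder();
        int i = 0, from = 0, at;
        while ((at = template.indexOf("{}", from)) >= 0 && i < args.length) {
            sb.append(template, from, at).append(args[i++]);
            from = at + 2;
        }
        return sb.append(template.substring(from)).toString();
    }

    static String getFileName() { return "report.csv"; }

    public static void main(String[] args) {
        Exception e = new RuntimeException("disk full");

        // Parameterized style: the template is a constant; the framework
        // fills in the blanks only when the message is actually logged.
        System.out.println(substitute("Error generating file {}", getFileName()));
        // -> Error generating file report.csv

        // The replaced code's style: concatenation builds most of the
        // message up front, leaving String.format only the trailing %s.
        System.out.println(String.format("Generating " + getFileName() + " :%s", e));
        // -> Generating report.csv :java.lang.RuntimeException: disk full
    }
}
```

With the framework doing the substitution, the original one-liner needs no isErrorEnabled guard and no String.format at all.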

Then Graham searched through that other programmer’s commits, only to find this basic pattern copy/pasted in everywhere, usually with inconsistent indentation. At least it was actual logging, and not a pile of System.out.printlns.

As soon as that thought popped into his head, Graham searched again. There were a lot of System.out.printlns too.


Planet Linux AustraliaBen Martin: CNC made close up lens filter holder

Close up filters attach to the end of a camera lens and allow you to take photos closer to the subject than you normally would have been able to do. This is very handy for electronics and other work, as you can get clear images of circuit boards and other small detail. I recently got a collection of 3 such filters which didn't come with any sort of real holder; the container they shipped in was not really designed for longer term use.

The above is the starting design for a filter holder, cut in layers from walnut and stacked together to create the enclosure. The inside is shown below: the outer diameter can hold the 80mm black ring, and the inner circles are 70mm and are there to keep the filters from touching each other. Close up filters can be quite fish eyed looking, with a substantial curve to the lens on the filter, so a gap is needed to keep each filter away from the next one. A little felt is used to cushion each filter from the walnut itself, which adds roughly 1.5mm to the design so the felt layers all have space to live as well.

The bottom has little feet which extend slightly beyond the tangent of the circle so they both make good contact with the ground and there is no rocking. Using two very cheap hinges works well in this design to try to minimize the sideways movement (slop) in the hinges themselves. A small leather strap will finish the enclosure off allowing it to be secured closed.

It is wonderful to be able to turn something like this around. I can only imagine what the world looks like from the perspective of somebody who is used to machining with 5 axis CNC.


Cory DoctorowTalking about Ron Howard’s Haunted Mansion album with the Comedy on Vinyl podcast

It’s been two years since I last sat down with Jason Klamm for his Comedy on Vinyl podcast (we were discussing Allan Sherman’s My Son, The Nut); we were past due for a rematch.

Jason asked me to come on one more time (MP3) to discuss the Disneyland Little Long Playing Record The Story and Song of the Haunted Mansion, which features the voice talents of a young Ron Howard (!!).

As always, we ranged far and wide, discussing narrative and non-narrative artforms, the history of Disney and Disneyland, and my own personal relationship with the Haunted Mansion.

TEDWe the Future: Talks from TED, Skoll Foundation and United Nations Foundation

Bruno Giussani (left) and Chris Anderson co-host “We the Future,” a day of talks presented by TED, the Skoll Foundation and the United Nations Foundation, at the TED World Theater in New York City, September 25, 2018. (Photo: Ryan Lash / TED)

We live in contentious times. Yet behind the dismaying headlines and social-media-fueled quarrels, people around the world — millions of them — are working unrelentingly to solve problems big and small, dreaming up new ways to expand the possible and build a better world.

At “We the Future,” a day of talks at the TED World Theater presented in collaboration with the Skoll Foundation and the United Nations Foundation, 13 speakers and two performers explored some of our most difficult collective challenges — as well as emerging solutions and strategies for building bridges and dialogue.

Updates on the Sustainable Development Goals. Are we delivering on the promises of the Sustainable Development Goals (SDGs), the collection of 17 global goals set by the United Nations General Assembly in 2015, which promised to improve the lives of billions with no one left behind? Using the Social Progress Index, a measure of the quality of life in countries throughout the world, economist Michael Green shares a fresh analysis of where we are today in relationship to the goals — and some new thinking on what we need to do differently to achieve them. While we’ve seen progress in some parts of the world on goals related to hunger and healthy living, the world is projected to fall short of achieving the ambitious targets set by the SDGs for 2030, according to Green’s analysis. If current trends keep up — especially the declines we’re seeing in things like personal rights and inclusiveness across the world — we actually won’t hit the 2030 targets until 2094. So what can we do about this? Two things, says Green: We need to call out rich countries that are falling short, and we need to look further into the data and find opportunities to progress faster. Because progress is happening, and we’re tantalizingly close to a world where nobody dies of things like hunger and malaria. “If we can focus our efforts, mobilize the resources, galvanize the political will,” Green says, “that step change is possible.”

Sustainability expert Johan Rockström debuts the Earth-3 model, a new way to track both the Sustainable Development Goals and the health of the planet at the same time. He speaks at “We the Future.” (Photo: Ryan Lash / TED)

A quest for planetary balance. In 2015, we saw two fantastic global breakthroughs for humanity, says sustainability expert Johan Rockström — the SDGs and the Paris Agreement. But are the two compatible, and can they be pursued at the same time? Rockström suggests there are inherent contradictions between the two that could lead to irreversible planetary instability. Along with a team of scientists, he created a way to combine the SDGs within the nine planetary boundaries (things like ocean acidification and ozone depletion); it’s a completely new model of possibility — the Earth-3 model — to track trends and simulate future change. Right now, we’re not delivering on our promises to future generations, he says, but the window of success is still open. “We need some radical thinking,” Rockström says. “We can build a safe and just world: we just have to really, really get on with it.”

Henrietta Fore, executive director of UNICEF, is spearheading a new global initiative, Generation Unlimited, which aims to ensure every young person is in school, training or employment by 2030. She speaks at “We the Future.” (Photo: Ryan Lash / TED)

A plan to empower Generation Unlimited. There are 1.8 billion young people between the ages of 10 and 24 in the world, one of the largest cohorts in human history. Meeting their needs is a big challenge — but it’s also a big opportunity, says the executive director of UNICEF, Henrietta Fore. Among the challenges facing this generation are a lack of access to education and job opportunities, exposure to violence and, for young girls, the threats of discrimination, child marriage and early pregnancy. To begin addressing these issues, Fore is spearheading UNICEF’s new initiative, Generation Unlimited, which aims to ensure every young person is in school, learning, training or employment by 2030. She talks about a program in Argentina that connects rural students in remote areas with secondary school teachers, both in person and online; an initiative in South Africa called Techno Girls that gives young women from disadvantaged backgrounds job-shadowing opportunities in the STEM fields; and, in Bangladesh, training for tens of thousands of young people in trades like carpentry, motorcycle repair and mobile-phone servicing. The next step? To take these ideas and scale them up, which is why UNICEF is casting a wide net — asking individuals, communities, governments, businesses, nonprofits and beyond to find a way to help out. “A massive generation of young people is about to inherit our world,” Fore says, “and it’s our duty to leave a legacy of hope for them — but also with them.”

Improving higher education in Africa. There’s a teaching and learning crisis unfolding across Africa, says Patrick Awuah, founder and president of Ashesi University. Though the continent has scaled up access to higher education, there’s been no improvement in quality or effectiveness of that education. “The way we teach is wrong for today. It is even more wrong for tomorrow, given the challenges before us,” Awuah says. So how can we change higher education for the better? Awuah suggests establishing multidisciplinary curricula that emphasize critical thinking and ethics, while also allowing for in-depth expertise. He also suggests collaboration between universities in Africa — and tapping into online learning programs. “A productive workforce, living in societies managed by ethical and effective leaders, would be good not only for Africa but for the world,” Awuah says.

Ayọ (right) and Marvin Dolly fill the theater with a mix of reggae, R&B and folk sounds at “We the Future.” (Photo: Ryan Lash / TED)

Songs of hardship and joy. During two musical interludes, singer-songwriter Ayọ and guitarist Marvin Dolly fill the TED World Theater with the soulful, eclectic strumming of four songs — “Boom Boom,” “What’s This All About,” “Life Is Real” and “Help Is Coming” — blending reggae, R&B and folk sounds.

If every life counts, then count every life. To some, numbers are boring. But data advocate Claire Melamed says numbers are, in fact, “an issue of power and of justice.” The lives and death of millions of people worldwide happen outside the official record, Melamed says, and this lack of information leads to big problems. Without death records, for instance, it’s nearly impossible to detect epidemics until it’s too late. If we are to save lives in disease-prone regions, we must know where and when to deliver medicine — and how much. Today, technology enables us to inexpensively gather reliable data, but tech isn’t a cure-all: governments may try to keep oppressed or underserved populations invisible, or the people themselves may not trust the authorities collecting the data. But data custodians can fix this problem by building organizations, institutions and communities that can build trust. “If every life counts, we should count every life,” Melamed says.

How will the US respond to the rise of China? To Harvard University political scientist Graham Allison, recent skirmishes between the US and China over trade and defense are yet another chapter unfolding in a centuries-long pattern. He’s coined the term “Thucydides’ Trap” to describe it — as he puts it, the Trap “is the dangerous dynamic that occurs when a rising power threatens to displace a ruling power.” Thucydides is viewed by many as the father of history; he chronicled the Peloponnesian Wars between a rising Athens and a ruling Sparta in the 5th century BCE (non-spoiler alert: Sparta won, but at a high price). Allison and colleagues reviewed the last 500 years and found Thucydides’ Trap 16 times — and 12 of them ended in war. Turning to present day, he notes that while the 20th century was dominated by the US, China has risen far and fast in the 21st. By 2024, for instance, China’s GDP is expected to be one-and-a-half times greater than America’s. What’s more, both countries are led by men who are determined to be on top. “Are Americans and Chinese going to let the forces of history draw us into a war that would be catastrophic to both?” Allison asks. To avoid it, he calls for “a combination of imagination, common sense and courage” to come up with solutions — referencing the Marshall Plan, the World Bank and United Nations as fresh approaches toward prosperity and peace that arose after the ravages of war. After the talk, TED curator Bruno Giussani asks Allison if he has any creative ideas to sidestep the Trap. “A long peace,” Allison says, turning again to Athens and Sparta for inspiration: during their wars, the two agreed at one point to a 30-year peace, a pause in their conflict so each could tend to their domestic affairs.

Can we ever hope to reverse climate change? Researcher and strategist Chad Frischmann introduces the idea of “drawdown” — the point at which we remove more greenhouse gases from the atmosphere than we put in — as our only hope of averting climate disaster. At his think tank, he’s working to identify strategies to achieve drawdown, like increased use of renewable energy, better family planning and the intelligent disposal of HFC refrigerants, among others. But the things that will make the biggest impact, he says, are changes to food production and agriculture. The decisions we make every day about the food we grow, buy and eat are perhaps the most important contributions we could make to reversing global warming. Another focus area: better land management and rejuvenating forests and wetlands, which would expand and create carbon sinks that sequester carbon. When we move to fix global warming, we will “shift the way we do business from a system that is inherently exploitative and extractive to a ‘new normal’ that is by nature restorative and regenerative,” Frischmann says.

The end of energy poverty. Nearly two billion people worldwide lack access to modern financial services like credit cards and bank accounts — making it difficult to do things like start a new business, build a nest egg, or make a home improvement like adding solar panels. Entrepreneur Lesley Marincola is working on this issue with Angaza, a company that helps people avoid the steep upfront costs of buying a solar-power system, instead allowing them to pay it off over time. With metering technology embedded in the product, Angaza uses alternative credit scoring methods to determine a borrower’s risk level. The combination of metering technology and an alternative method of assessing credit brings purchasing power to unbanked people. “To effectively tackle poverty at a global scale, we must not solely focus on increasing the amount of money that people earn,” Marincola says. “We must also increase or expand the power of their income through access to savings and credit.”

Anushka Ratnayake displays one of the scratch-off cards that her company, MyAgro, is using to help farmers in Africa break cycles of poverty and enter the cycle of investment and growth. She speaks at “We the Future.” (Photo: Ryan Lash / TED)

An innovative way to help rural farmers save. While working for a microfinance company in Kenya, Anushka Ratnayake realized something big: small-scale farmers were constantly being offered loans … when what they really wanted was a safe place to save money. Collecting and storing small deposits from farmers was too difficult and expensive for banks, and research from the University of California, Berkeley shows that only 14–21 percent of farmers accept credit offers. Ratnayake found a simpler solution — using scratch-off cards that act as a layaway system. MyAgro, a nonprofit social enterprise that Ratnayake founded and leads, helps farmers save money for seeds. Farmers buy myAgro scratch cards from local stores, depositing their money into a layaway account by texting in the card’s scratch-off code. After a few months of buying the cards and saving little by little, myAgro delivers the fertilizer, seed and training they’ve paid for, directly to their farms. Following a wildly successful pilot program in Mali, MyAgro has expanded to Senegal and Tanzania and now serves more than 50,000 farmers. On this plan, rural farmers can break cycles of poverty, Ratnayake says, and instead, enter the cycle of investment and growth.

Durable housing for a resilient future. Around the world, natural disasters destroy thousands of lives and erase decades of economic gains each year. These outcomes are undeniably devastating and completely preventable, says mason Elizabeth Hausler — and substandard housing is to blame. It’s estimated that one-third of the world will be living in insufficiently constructed buildings by 2030; Hausler hopes to cut those projections with a building revolution. She shares six straightforward principles to approach the problem of substandard housing: teach people how to build, use local architecture, give homeowners power, provide access to financing, prevent disasters and use technology to scale. “It’s time we treat unsafe housing as the global epidemic that it is,” Hausler says. “It’s time to strengthen every building just like we would vaccinate every child in a public health emergency.”

A daring idea to reduce income inequality. Every newborn should enter the world with at least $25,000 in the bank. That is the basic premise of a “baby trust,” an idea conceived by economists Darrick Hamilton of The New School and William Darity of Duke University. Since 1980, inequality has been on the rise worldwide, and Hamilton says it will keep growing due to this simple fact: “It is wealth that begets more wealth.” Policymakers and the public have fallen for a few appealing but inaccurate narratives about wealth creation — that grit, education or a booming economy can move people up the ladder — and we’ve disparaged the poor for not using these forces to rise, Hamilton says. Instead, what if we gave a boost up the ladder? A baby trust would give an infant money at birth — anywhere from $500 for those born into the richest families to $60,000 for the poorest, with an average endowment of $25,000. The accounts would be managed by the government, at a guaranteed interest rate of 2 percent a year. When a child reaches adulthood, they could withdraw it for an “asset-producing activity,” such as going to college, buying a home or starting a business. If we were to implement it in the US today, a baby trust program would cost around $100 billion a year; that’s only 2 percent of annual federal expenditures and a fraction of the $500 billion that the government now spends on subsidies and credits that favor the wealthy, Hamilton says. “Inequality is primarily a structural problem, not a behavioral one,” he says, so it needs to be attacked with solutions that will change the existing structures of wealth.

Nothing about us, without us. In 2013, activist Sana Mustafa and her family were forcibly evacuated from their homes and lives as a result of the Syrian civil war. While adjusting to her new reality as a refugee, and beginning to advocate for refugee rights, Mustafa found that events aimed at finding solutions weren’t including the refugees in the conversation. Alongside a group of others who had to flee their homes because of war and disaster, Mustafa founded The Network for Refugee Voices (TNRV), an initiative that amplifies the voices of refugees in policy dialogues. TNRV has worked with the United Nations High Commissioner for Refugees and other organizations to ensure that refugees are represented in important conversations about them. Including refugees in the planning process is a win-win, Mustafa says, creating more effective relief programs and giving refugees a say in shaping their lives.

Former member of Danish Parliament Özlem Cekic has a novel prescription for fighting prejudice: take your haters out for coffee. She speaks at “We the Future.” (Photo: Ryan Lash / TED)

Conversations with people who send hate mail. Özlem Cekic‘s email inbox has been full of hate mail and personal abuse for years. She began receiving the derogatory messages in 2007, soon after she won a seat in the Danish Parliament — becoming one of the first women with a minority background to do so. At first she just deleted the emails, dismissing them as the work of the ignorant or fanatic. The situation escalated in 2010 when a neo-Nazi began to harass Cekic and her family, prompting a friend to make an unexpected suggestion: reach out to the hate mail writers and invite them out to coffee. This was the beginning of what Cekic calls “dialogue coffee”: face-to-face meetings where she sits down with people who have sent hate mail, in an effort to understand the source of their hatred. Cekic has had hundreds of encounters since 2010 — always in the writer’s home, and she always brings food — and has made some important realizations along the way. Cekic now recognizes that people of all political convictions can be caught demonizing those with different views. And she has a challenge for us all: before the end of the year, reach out to someone you demonize — who you disagree with politically or think you won’t have anything in common with — and invite them out to coffee. Don’t give up if the person refuses at first, she says: sometimes it has taken nearly a year for her to arrange a meeting. “Trenches have been dug between people, yes,” Cekic says. “But we all have the ability to build the bridges that cross the trenches.”

Krebs on SecurityVoice Phishing Scams Are Getting More Clever

Most of us have been trained to be wary of clicking on links and attachments that arrive in emails unexpected, but it’s easy to forget scam artists are constantly dreaming up innovations that put a new shine on old-fashioned telephone-based phishing scams. Think you’re too smart to fall for one? Think again: Even technology experts are getting taken in by some of the more recent schemes (or very nearly).

Matt Haughey is the creator of the community Weblog MetaFilter and a writer at Slack. Haughey banks at a small Portland credit union, and last week he got a call on his mobile phone from an 800-number that matched the number his credit union uses.

Actually, he got three calls from the same number in rapid succession. He ignored the first two, letting them both go to voicemail. But he picked up on the third call, thinking it must be something urgent and important. After all, his credit union had rarely ever called him.

Haughey said he was greeted by a female voice who explained that the credit union had blocked two phony-looking charges in Ohio made to his debit/ATM card. She then read him the last four digits of the card that was currently in his wallet. It checked out.

Haughey told the lady that he would need a replacement card immediately because he was about to travel out of state to California. Without missing a beat, the caller said he could keep his card and that the credit union would simply block any future charges that weren’t made in either Oregon or California.

This struck Haughey as a bit off. Why would the bank say they were freezing his card but then say he could keep it open for his upcoming trip? It was the first time the voice inside his head spoke up and said, “Something isn’t right, Matt.” But he figured the customer service person at the credit union was just trying to be helpful: she was doing him a favor.

The caller then read his entire home address to double check that it was the correct destination to send a new card at the conclusion of his trip. Then the caller said she needed to verify his mother’s maiden name. The voice in his head protested again, but then again, banks had asked for this in the past. He provided it.

Next she asked him to verify the three-digit security code printed on the back of his card. Once more, the voice of caution in his brain was silenced: He’d given this code out previously in the few times he’d used his card to pay for something over the phone.

Then she asked him for his current card PIN, just so she could apply that same PIN to the new card being mailed out, she assured him. Ding, ding, ding went the alarm bells in his head. Haughey hesitated, then asked the lady to repeat the question. When she did, he gave her the PIN, and she assured him she’d make sure his existing PIN also served as the PIN for his new card.

Haughey said after hanging up he felt fairly certain the entire transaction was legitimate, although the part about her requesting the PIN kept nagging at him.

“I balked at challenging her because everything lined up,” he said in an interview with KrebsOnSecurity. “But when I hung up the phone and told a friend about it, he was like, ‘Oh man, you just got scammed, there’s no way that’s real.'”

Now more concerned, Haughey visited his credit union to make sure his travel arrangements were set. When he began telling the bank employee what had transpired, he could tell by the look on her face that his friend was right.

A review of his account showed that there were indeed two fraudulent charges on his account from earlier that day totaling $3,400, but neither charge was from Ohio. Rather, someone used a counterfeit copy of his debit card to spend more than $2,900 at a Kroger near Atlanta, and to withdraw almost $500 from an ATM in the same area. After the unauthorized charges, he had just $300 remaining in his account.

“People I’ve talked to about this say there’s no way they’d fall for that, but when someone from a trustworthy number calls, says they’re from your small town bank, and sounds incredibly professional, you’d fall for it, too,” Haughey said.

Fraudsters can use a variety of open-source and free tools to fake or “spoof” the number displayed as the caller ID, lending legitimacy to phone phishing schemes. Often, just sprinkling in a little foreknowledge of the target’s personal details — SSNs, dates of birth, addresses and other information that can be purchased for a nominal fee from any one of several underground sites that sell such data — adds enough detail to the call to make it seem legitimate.


Cabel Sasser is founder of a Mac and iOS software company called Panic Inc. Sasser said he almost got scammed recently after receiving a call that appeared to be the same number as the one displayed on the back of his Wells Fargo ATM card.

“I answered, and a Fraud Department agent said my ATM card has just been used at a Target in Minnesota, was I on vacation?” Sasser recalled in a tweet about the experience.

What Sasser didn’t mention in his tweet was that his corporate debit card had just been hit with two instances of fraud: Someone had charged $10,000 worth of metal air ducts to his card. When he disputed the charge, his bank sent a replacement card.

“I used the new card at maybe four places and immediately another fraud charge popped up for like $20,000 in custom bathtubs,” Sasser recalled in an interview with KrebsOnSecurity. “The morning this scam call came in I was spending time trying to figure out who might have lost our card data and was already in that frame of mind when I got the call about fraud on my card.”

And so the card-replacement dance began.

“Is the card in your possession?” the caller asked. It was. The agent then asked him to read the three-digit CVV code printed on the back of his card.

After verifying the CVV, the agent offered to expedite a replacement, Sasser said. “First he had to read some disclosures. Then he asked me to key in a new PIN. I picked a random PIN and entered it. Verified it again. Then he asked me to key in my current PIN.”

That made Sasser pause. Wouldn’t an actual representative from Wells Fargo’s fraud division already have access to his current PIN?

“It’s just to confirm the change,” the caller told him. “I can’t see what you enter.”

“But…you’re the bank,” he countered. “You have my PIN, and you can see what I enter…”

The caller had a snappy reply for this retort as well.

“Only the IVR [interactive voice response] system can see it,” the caller assured him. “Hey, if it helps, I have all of your account info up…to confirm, the last four digits of your Social Security number are XXXX, right?”

Sure enough, that was correct. But something still seemed off. At this point, Sasser said he told the agent he would call back by dialing the number printed on his ATM card — the same number his mobile phone was already displaying as the source of the call. After doing just that, the representative who answered said there had been no such fraud detected on his account.

“I was just four key presses away from having all my cash drained by someone at an ATM,” Sasser recalled. A visit to the local Wells Fargo branch before his trip confirmed that he’d dodged a bullet.

“The Wells person was super surprised that I bailed out when I did, and said most people are 100 percent taken by this scam,” Sasser said.


In Sasser’s case, the scammer was a live person, but some equally convincing voice phishing schemes — sometimes called “vishing” — use a combination of humans and automation. Consider the following vishing attempt, reported to KrebsOnSecurity in August by “Curt,” a longtime reader from Canada.

“I’m both a TD customer and Rogers phone subscriber and just experienced what I consider a very convincing and/or elaborate social engineering/vishing attempt,” Curt wrote. “At 7:46pm I received a call from (647-475-1636) purporting to be from Credit Alert (on behalf of TD Canada Trust) offering me a free 30-day trial for a credit monitoring service.”

The caller said her name was Jen Hansen, and began the call with what Curt described as “over-the-top courtesy.”

“It sounded like a very well-scripted Customer Service call, where they seem to be trying so hard to please that it seems disingenuous,” Curt recalled. “But honestly it still sounded very much like a real person, not like a text-to-speech voice, which sounds robotic. This sounded VERY natural.”

Ms. Hansen proceeded to tell Curt that TD Bank was offering a credit monitoring service free for one month, and that he could cancel at any time. To enroll, he only needed to confirm his home mailing address.

“I’m mega paranoid (I read daily) and asked her to tell me what address I had on their file, knowing full well my home address can be found in a variety of ways,” Curt wrote in an email to this author. “She said, ‘One moment while I access that information.'”

After a short pause, a new voice came on the line.

“And here’s where I realized I was finally talking to a real human — a female with a slight French accent — who read me my correct address,” Curt recalled.

After another pause, Ms. Hansen’s voice came back on the line. While she was explaining that part of the package included free antivirus and anti-keylogging software, Curt asked her if he could opt-in to receive his credit reports while opting-out of installing the software.

“I’m sorry, can you repeat that?” the voice identifying itself as Ms. Hansen replied. Curt repeated himself. After another, “I’m sorry, can you repeat that,” Curt asked Ms. Hansen where she was from.

The voice confirmed what was indicated by the number displayed on his caller ID: that she was calling from Barrie, Ontario. Trying to throw the robot voice further off-script, Curt asked what the weather was like in Barrie, Ontario. Another long pause. The voice continued describing the offered service.

“I asked again about the weather, and she said, ‘I’m sorry, I don’t have that information. Would you like me to transfer you to someone that does?’ I said yes and again the real person with a French accent started speaking, ignoring my question about the weather and saying that if I’d like to continue with the offer I needed to provide my date of birth. This is when I hung up and immediately called TD Bank.” The bank assured him that no one from TD had called.


And then there are the fully-automated voice phishing scams, which can be equally convincing. Last week I heard from “Jon,” a cybersecurity professional with more than 30 years of experience under his belt (Jon asked to leave his last name out of this story).

Answering a call on his mobile device from a phone number in Missouri, Jon was greeted with the familiar four-note AT&T jingle, followed by a recorded voice saying AT&T was calling to prevent his phone service from being suspended for non-payment.

“It then prompted me to enter my security PIN to be connected to a billing department representative,” Jon said. “My number was originally an AT&T number (it reports as Cingular Wireless) but I have been on T-Mobile for several years, so clearly a scam if I had any doubt. However, I suspect that the average Joe would fall for it.”


Just as you would never give out personal information if asked to do so via email, never give out any information about yourself in response to an unsolicited phone call.

Like email scams, phone phishing usually invokes an element of urgency in a bid to get people to let their guard down. If a call has you worried that there might be something wrong and you wish to call them back, don’t call the number offered to you by the caller. If you want to reach your bank, call the number on the back of your card. If it’s another company you do business with, go to the company’s site and look up their main customer support number.

Unfortunately, this may take a little work. It’s not just banks and phone companies that are being impersonated by fraudsters. Reports on social media suggest many consumers also are receiving voice phishing scams that spoof customer support numbers at Apple, Amazon and other big-name tech companies. In many cases, the scammers are polluting top search engine results with phony 800-numbers for customer support lines that lead directly to fraudsters.

These days, scam calls happen on my mobile so often that I almost never answer my phone unless it appears to come from someone in my contacts list. The Federal Trade Commission’s do-not-call list does not appear to have done anything to block scam callers, and the major wireless carriers seem to be pretty useless in blocking incessant robocalls, even when the scammers are impersonating the carriers themselves, as in Jon’s case above.

I suspect people my age (mid-40s) and younger also generally let most unrecognized calls go to voicemail. It seems to be a very different reality for folks from an older generation, many of whom still primarily call friends and family using land lines, and who will always answer a ringing phone whenever it is humanly possible to do so.

It’s a good idea to advise your loved ones to ignore calls unless they appear to come from a friend or family member, and to just hang up the moment the caller starts asking for personal information.

CryptogramMore on the Five Eyes Statement on Encryption and Backdoors

Earlier this month, I wrote about a statement by the Five Eyes countries about encryption and back doors. (Short summary: they like them.) One of the weird things about the statement is that it was clearly written from a law-enforcement perspective, though we normally think of the Five Eyes as a consortium of intelligence agencies.

Susan Landau examines the details of the statement, explains what's going on, and why the statement is a lot less than what it might seem.

Worse Than FailureCodeSOD: Pointed Array Access

I've spent the past week doing a lot of embedded programming, which for me has mostly meant handling full-duplex communication between twenty devices on the same serial bus. It also means receiving raw bytes and doing the memcpy(&myMessageStructVariable, buffer, sizeof(MessageStruct)) dance. Yes, that's not the best way, and it certainly isn't how I'd build it if I didn't have full control over both ends of the network.

Of course, even with that, serial networks can have some noise and errors. That means sometimes I get a packet that isn't the right size, and memcpy will happily read past the end of the buffer, because my const uint8_t * buffer pointer is just a pointer, after all. It's on me to access memory safely. Errors result when I'm incautious.

Which brings us to Krzysztof's submission. This code is deep inside of a device that's been on the market for twenty years, and has undergone many revisions, both hardware and software, over that time.

uint32_t tempHistVal1;
uint32_t tempHistVal2;
uint32_t tempHistVal3;
...
uint32_t tempHistVal20;

uint32_t get_avg_temp(){
    uint32_t res=0;
    uint32_t *ptr=&tempHistVal1;
    for(uint32_t i=0;i<20;i++)
        res+=ptr[i];
    return res/20;
}

After the first few lines, you'll probably think to yourself, "that should probably be an array, shouldn't it?" Of course it should. The programmer who wrote this agrees with you.

The line uint32_t *ptr=&tempHistVal1 creates a pointer to the first variable. In C, the line between "pointer" and "array" is fuzzy, which means you can use the [] operator to "index" a pointer. So, the line res+=ptr[i] adds the i-th 32-bit integer starting at the address ptr points to.

Now it's likely that tempHistVal1 and tempHistVal2 are contiguous in memory. It's the most obvious way for the compiler to handle those variables. But there's no guarantee that's the case. The C specification guarantees that arrays represent contiguous blocks of memory, but not variables.

Krzysztof suggested that they change this to an array, but was shot down. "We don't change code that works!" management said. Krzysztof is left to pray to the gods of compilers, hardware platforms and memory alignment that these variables keep getting compiled in order, with no gaps.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Google AdsenseLooking back, looking forward...

Earlier this year, AdSense celebrated 15 years of partnering with digital content creators like you. A lot has changed since we started – new technology, new challenges, and new opportunities all driven by constantly changing user needs. A few things have endured though – the world has an insatiable appetite for great content and publishers like you remain the beating heart of the open web. Sharing in this mission with you, helping to create millions of sustainable content-first businesses on the web, keeps us going.

While we look back fondly on the last 15 years, your stories inspire us to keep looking forward. There’s lots more to do to support you and set AdSense and our ecosystem up for success for the next 15 years. With that in mind, we’d like to share a preview of how we plan to help you grow and play our part in making advertising work for everyone.

First up, smarter sizing, better ad placements and new formats powered by Google’s machine learning technology. We know you want to spend more time writing content and serving your users than managing ad tags and settings. We want that too. We’ve been investing heavily in understanding the best ways to increase user interest in ads, including when and what type of ads to show, while ensuring they complement your content and respect the experience of visitors to your site. Alongside these new capabilities, we’ll bring you the controls, reporting, and transparency you expect from AdSense to ensure we’re constantly meeting your needs along the way.

Secondly, we know that creating great content can take time, but making that content profitable shouldn’t. We’re hard at work on a number of new assistive features that give you more insight into your performance when you’re ready to take action, a clearer understanding of how you’re doing relative to your peers and the industry, and improved navigation to help you get it all done faster. We want to save you time so you can focus on the things that matter most to your business, like creating great content for your audiences.

Thirdly, AdSense and Google are committed to advertising that works for everyone and playing our part to ensure the ads ecosystem supports the diverse needs of publishers, advertisers and consumers. We’ll continue to meet and exceed important industry standards on ads quality, including the Better Ads Standards, to make sure that great publishers who make engaging content are rewarded, while advertisers can continue to spend with confidence.

As we launch into the next 15 years and with it deliver these new features, we may need your help to keep your account up to date. If we do, you’ll hear from us by email over the coming months. Don’t forget to make sure we have the correct email address for you and that your preferences are up to date.

We’re just as excited about working with you now as we were when we launched 15 years ago. Thank you for being on the journey with us so far – here’s to another 15 years of partnership and shared success!

Posted by:
Matthew Conroy - Senior Product Manager


Don MartiNotes on "turn off your ad blocker" messages

At least three kinds of software can be detected as "an ad blocker" in JavaScript.

full-service blockers, the best known of which is uBlock Origin. These tools block both invisible trackers and obvious ads, with no paid whitelisting program.

privacy tools, such as Disconnect (list-based protection) and Privacy Badger (behavior-based protection), that block some ads as a side effect. This is a small category now compared to ad blocking in general, but is likely to grow as browsers get better at privacy protection, and try new performance features to improve user experience.

deceptive blockers, which are either actual malware or operate a paid whitelisting scheme. The best-known paid whitelisting scheme is Acceptable Ads from Adblock Plus, which is disclosed to any user who is willing to scroll down and click on the gray-on-white text on the Adblock Plus site, but not anywhere along the way of the default extension install process.

So any ad blocker detector is going to be hitting at least three different kinds of tools and possibly six different groups of users.

  • People who chose and installed a full-service blocker

  • People who chose to protect their privacy but did not specifically choose to block ads

  • People who may have chosen their browser for its general privacy policies, but got upgraded to a specific feature they're not aware of

  • People who chose to block ads but got a blocker with paid whitelisting by mistake

  • People who chose to "install an ad blocker" because it got recommended to them as the magic tool that fixes everything wrong with the Internet

  • People who are deliberately participating in paid whitelisting. (Do these people exist?)

Sometimes you need to match the message to the audience. Because sites can use tools such as Aloodo to get a better picture of what kind of protection, or non-protection, is actually in play in a given session, we can try a variety of approaches.

  • Is silent reinsertion appropriate when the ad is delivered in a way that respects the user's personal information, and the user has only chosen a privacy tool but not an ad blocker?

  • When the user is participating in paid whitelisting, can a trustworthy site do better with an appeal based on disclosing the deception involved?

  • For which categories of users are the conventional, reciprocity-based appeals appropriate?

  • Where is it appropriate to take no action in a user session, but to report to a browser developer that a privacy feature is breaking some legit data collection or advertising?

Newsonomics: The Washington Post’s ambitions for Arc have grown — to a Bezosian scale

Open Access: The timely publishing solution truly serving scientists, science and the public

Watch out, algorithms: Julia Angwin and Jeff Larson unveil The Markup, their plan for investigating tech’s societal impacts

How to buy into journalism’s blockchain future (in only 44 steps)

How Ad Industry Destroys Brand Value

WTF is pubvendors.json?

Nucleus Claims It Now Has Throw Weight to Outperform Platforms on Ads

Instagram's new TV service recommended videos of potential child abuse

Joey Hess: censored Amazon review of Sandisk Ultra 32GB Micro SDHC Card

Apple's best product is now privacy

Storage access policy: Block cookies from trackers

Rondam RamblingsThis is what a precious snowflake looks like

Fred Guttenberg, father of one of the Parkland shooting victims, has done a pretty good job of dressing down Brett Kavanaugh for complaining that his family is "totally and permanently destroyed" by the sexual assault allegations leveled against him by Dr. Christine Blasey Ford.  But I don't think he went nearly far enough, so I'm going to pile on: Judge Kavanaugh, you have no clue what having


TEDBronwyn King leads global pledge for tobacco-free finance, and more TED news

The TED community has been making headlines — here are a few highlights.

Tobacco-free finance initiative launched at the UN. Oncologist and Tobacco Free Portfolios CEO Bronwyn King has made it her mission to detangle the worlds of finance and tobacco — and ensure that no one will ever accidentally invest in a tobacco company again. Together with the French and Australian governments, and a number of finance firms, King introduced The Tobacco-Free Finance Pledge at the United Nations during General Assembly week. The aim of the measure is to decrease the toll of tobacco-related deaths, which now stands at 7 million annually. More than 120 banks, companies, organizations and groups representing US$6.82 trillion have joined the launch as founding signatories and supporters. (Watch King’s TED Talk.)

The Museum of Broken Windows. Artists Dread Scott and Hank Willis Thomas are featured in a new pop-up show grappling with the dangerous impact of “broken windows” policing strategies, which target and criminalize low-income communities of color. The exhibition, which is hosted by the New York Civil Liberties Union, explores the disproportionate and inequitable system of policing in the United States with work by 30 artists from across the country. Scott’s piece for the showcase is a flag that reads, “A man was lynched by police yesterday.” Compelled by the police killing of Walter Scott, Scott revamped a NAACP flag from the 1920s and ‘30s for the piece. Thomas’ contribution to the exhibition are poems, letters and notes from incarcerated people titled “Writings on the Wall.” The exhibition is open through September 30 in Manhattan. (Watch Scott’s TED Talk and Thomas’ TED Talk.)

The future of at-home health care. Technologist Dina Katabi spoke at MIT Technology Review’s EmTech conference about Emerald, the healthcare technology she’s working on to revolutionize the way we gather data on patients at home. Using a low-power wireless connection, Katabi’s device, which she developed with a team at MIT, can monitor patient vital signs without any wearables — and even through walls — by tracking the electromagnetic field surrounding the human body, which shifts every time we move. “The future should be that the healthcare comes to the patient in their homes,” Katabi said, “as opposed to the patient going to the doctor or the clinic.” Some 200 people have already installed the system, and several leading biotech companies are studying the technology for future applications. (Watch Katabi’s TED Talk.)

Does New York City have a gut biome? In collaboration with Elizabeth Hénaff, The Living Collective and the Evan Eisman Company, algoworld expert and technologist Kevin Slavin has debuted an art installation featuring samples of New York City microorganisms titled “Subculture: Microbial Metrics and the Multi-Species City.” Weaving together biology, data analytics and design, the exhibit urges us to reconsider our relationship with bacteria and redefine how we interact with the diversity of life in urban spaces. Hosted at Storefront for Art and Architecture, the project uses genetic sequencing devices installed in the front of the gallery space to collect, extract and analyze microbial life. The gallery will be divided into three spaces: an introduction area, an in-house laboratory and a mapping area that will visualize the data gathered in real time. The exhibit is open through January 2019. (Watch Slavin’s TED Talk.)

Harald WelteFernvale Kits - Lack of Interest - Discount

Back in December 2014 at 31C3, bunnie and xobs presented about their exciting Fernvale project, how they reverse engineered parts of the MT6260 ARM SoC, which also happens to contain a Mediatek GSM baseband.

Thousands (at least hundreds) of people have seen that talk live. To date, 2506 people (or AIs?) have watched the recordings on youtube, 4859 more people on

Given that Fernvale was the closest you could get to having a hackable baseband processor / phone chip, I expected at least as much interest into this project as we received four years earlier with OsmocomBB.

As a result, in early 2015, sysmocom decided to order 50 units of Fernvale DVT2 evaluation kits from bunnie, and to offer them in the sysmocom webshop to ensure the wider community would be able to get the boards they need for research into widely available, inexpensive 2G baseband chips.

This decision was made purely for the perceived benefit of the community: Make an exciting project available for anyone. With that kind of complexity and component density, it's unlikely anyone would ever solder a board themselves. So somebody has to build some and make it available. The mark-up sysmocom put on top of bunnie's manufacturing cost was super minimal, only covering customs/import/shipping fees to Germany, as well as minimal overhead for packing/picking and accounting.

Now it's almost four years after bunnie + xobs' presentation, and of those 50 Fernvale boards, we still have 34 (!) units in stock. That means, only 16 people on this planet ever had an interest in playing with what at the time I thought was one of the most exciting pieces of equipment to play with.

So we lost somewhere on the order of close to 3600 EUR in dead inventory, for something that never was supposed to be a business anyway. That sucks, but I still think it was worth it.

In order to minimize the losses, sysmocom has now discounted the boards and reduced the price from EUR 110 to EUR 58.82 (excluding VAT). I have very limited hope that this will increase the amount of interest in this project, but well, you got to try :)

In case you're thinking "oh, let's wait some more time, until they hand them out for free", let me tell you: If money is the issue that prevents you from playing with a Fernvale, then please contact me with the details about what you'd want to do with it, and we can see about providing them for free or at substantially reduced cost.

In the worst case, it was ~ 3600 EUR we could have invested in implementing more Osmocom software, which is sad. But would I do it again if I saw a very exciting project? Definitely!

The lesson learned here is probably that even a technically very exciting project backed by world-renowned hackers like bunnie doesn't mean that anyone will actually ever do anything with it, unless they get everything handed on a silver plate, i.e. all the software/reversing work is already done for them by others. And that actually makes me much more sad than the loss of those ~ 3600 EUR in sysmocom's balance sheet.

I also feel even more sorry for bunnie + xobs. They've invested time, money and passion into a project that nobody really seemed to want to get involved in and/or take further. ("nobody" is meant figuratively. I know there were/are some enthusiasts who did pick it up. I'm talking about the big picture). My condolences to bunnie + xobs!

CryptogramFriday Squid Blogging: Squid Protein Used in Variable Thermal Conductivity Material

This is really neat.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Rondam RamblingsAn open letter to Governor Jerry Brown re: net neutrality

Dear Governor Brown: When I went to work for NASA as an AI researcher in 1988 there was no World Wide Web.  The first web browser, Netscape Navigator, was still three years in the future.  There was no Amazon, no Facebook, no Wikipedia.  If you wanted to look something up, you consulted your home encyclopedia (if you were fortunate enough to have one), or you went to the library, or you did

Krebs on SecurityFacebook Security Bug Affects 90M Users

Facebook said today some 90 million of its users may get forcibly logged out of their accounts after the company fixed a rather glaring security vulnerability in its Web site that may have let attackers hijack user profiles.

In a short blog post published this afternoon, Facebook said hackers have been exploiting a vulnerability in Facebook’s site code that impacted a feature called “View As,” which lets users see how their profile appears to other people.

“This allowed them to steal Facebook access tokens which they could then use to take over people’s accounts,” Facebook wrote. “Access tokens are the equivalent of digital keys that keep people logged in to Facebook so they don’t need to re-enter their password every time they use the app.”

Facebook said it was removing the insecure “View As” feature, and resetting the access tokens of 50 million accounts that the company said it knows were affected, as well as the tokens for another 40 million users that may have been impacted over the past year.

The company said it was just beginning its investigation, and that it doesn’t yet know some basic facts about the incident, such as whether these accounts were misused, if any private information was accessed, or who might be responsible for these attacks.

Although Facebook didn’t mention this in their post, one other major unanswered question about this incident is whether the access tokens could have let attackers interactively log in to third-party sites as the user. Tens of thousands of Web sites let users log in using nothing more than their Facebook profile credentials. If users have previously logged in at third-party sites using their Facebook profile, there’s a good chance the attackers could have had access to those third-party sites as well.

I have asked for clarification from Facebook on this point and will update this post when and if I receive a response. However, I would have expected Facebook to mention this as a mitigating factor if authorized logins at third-party sites were not impacted.

Update: 4:46 p.m. ET: A Facebook spokesperson confirmed that while it was technically possible that an attacker could have abused this bug to target third-party apps and sites that use Facebook logins, the company doesn’t have any evidence so far that this has happened.

“We have invalidated data access for third-party apps for the affected individuals,” the spokesperson said, referring to the 90 million accounts that were forcibly logged out today and presented with a notification about the incident at the top of their feed.

Original story:
Facebook says there is no need for users to reset their passwords as a result of this breach, although that is certainly an option.

More importantly, it’s a good idea for all Facebook users to review their login activity. This page should let you view which devices are logged in to your account and approximately where in the world those devices are at the moment. That page also has an option to force a simultaneous logout of all devices connected to your account.

CryptogramMajor Tech Companies Finally Endorse Federal Privacy Regulation

The major tech companies, scared that states like California might impose actual privacy regulations, have now decided that they can better lobby the federal government for much weaker national legislation that will preempt any stricter state measures.

I'm sure they'll still do all they can to weaken the California law, but they know they'll do better at the national level.

Worse Than FailureError'd: Full Price not Allowed

"When registering for KubeCon and CloudNativeCon, it's like they're saying: Pay full price? Oh no, we insist you use a discount code. No really. It's mandatory," writes Andy B.


Henry S. wrote, "I think this message should perhaps read Luxury Service Unavailable."


"At first glance, you may read the instruction to be 'check your dog file', but that is presently not the case," writes Daryl D.


Rich P. wrote, "Lite-On (a LED manufacturer located in Taiwan) seems to have given up on differentiating the countries across the Pacific..."


"Sorry glassdoor, I would be happy to leave a salary report for undefined, but my current contracts with null and NaN forbid me from doing so," writes Jeffrey King.


"You know, although my name is Bruce, my friends all call me undefined," Bruce R. wrote.



Planet Linux AustraliaOpenSTEM: Helping Migrants to Australia

The end of the school year is fast approaching with the third term either over or about to end and the start of the fourth term looming ahead. There never seems to be enough time in the last term with making sure students have met all their learning outcomes for the year and with final […]

Planet Linux AustraliaDavid Rowe: Codec 2 2200 Candidate D

Every time I start working on Deep Learning and Codec 2 I get side tracked! This time, I started developing a reference codec that could be used to explore machine learning; however, the reference codec was sounding pretty good early in its development, so I have pushed it through to a fully quantised state. For lack of a better name it’s called candidate D, as that’s where I am up to in a series of codec prototypes.

The previous Codec 2 2200 post described candidate C. That also evolved from a “quick effort” to develop a reference codec to explore my deep learning ideas.

Learning about Vector Quantisation

This time, I explored Vector Quantisation (VQ) of spectral magnitude samples. I feel my VQ skills are weak, so I did a bit of reading. I really enjoy learning, especially in areas I have been fooling around in for a while but never really understood. It’s a special feeling when the theory clicks into place with the practical.

So I have these vectors of K=40 spectral magnitude samples that I want to quantise. Forty dimensions is a bit much to handle, so to get a feel for the data I started out by plotting smaller 2 and 3 dimensional slices. Here are 2D and 3D scatter plots of adjacent samples in the vector:

The data is highly correlated, almost a straight line relationship. An example of a 2-bit, 2D vector quantiser for this data might be the points (0,0) (20,20) (30,30) (40,40). Consider representing the same data with two 1D (scalar) quantisers over the same 2 bit range (0,20,30,40). This would take 4 bits in total, and be wasteful as it would represent points that would never occur, such as (60,0).
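The 2-bit example above can be sketched in a few lines of Python (a toy illustration, not part of Codec 2; the codebook points are the ones from the text):

```python
import itertools

# 2-bit, 2D codebook from the text: 4 points along the correlated diagonal.
vq_codebook = [(0, 0), (20, 20), (30, 30), (40, 40)]

# Two independent 2-bit scalar quantisers over the same levels.
scalar_levels = [0, 20, 30, 40]

def nearest(point, codebook):
    """Return the codebook entry with minimum squared error to point."""
    return min(codebook, key=lambda c: sum((p - q) ** 2 for p, q in zip(point, c)))

# The 2D VQ spends 2 bits, and every entry lies on the diagonal the data follows.
print(nearest((21, 19), vq_codebook))           # -> (20, 20)

# The scalar product quantiser spends 4 bits (2 per dimension), and its
# codebook includes points like (40, 0) that never occur in the data.
product_codebook = list(itertools.product(scalar_levels, repeat=2))
print(len(vq_codebook), len(product_codebook))  # -> 4 16
```

Of the 16 points the pair of scalar quantisers can represent, only the 4 diagonal ones are ever needed, which is the wasted-bits argument in concrete form.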

[1] helped me understand the relationship between covariance and VQ, using 2D vectors. For Candidate D I extended this to K=40 dimensions, the number of samples I am using for the spectral magnitudes. Then [2] (a thirty year old paper!) showed me how the DCT relates to vector quantisation and the eigenvector/eigenvalue rotations described in [1]. I vaguely remember snoring my way through eigen-thingies at math lectures in University!

My VQ work to date has used minimum Mean Square Error (MSE) to train and match vectors. I have been uncomfortable with MSE matching for a while, as I have observed poor choices in matching vectors to speech. For example if the target vector falls off sharply at high frequencies (say a LPF at 3500 Hz), the VQ will try to select a vector that matches that fall off, and ignore smaller, more perceptually important features like formants.

VQs are often trained to minimise the average error. They tend to cluster VQ points closer to those samples that are more likely to occur. However I have found that beneath a certain threshold, we can’t hear the quantisation error. In Codec 2 it’s hard to hear any distortion when spectral magnitudes are quantised to 6 dB steps. This suggests that we are wasting bits with fine quantiser steps, and there may be better ways to design VQs, for example a uniform grid of points that covers a few standard deviations of data on the scatter plots above.

I like the idea of uniform quantisation across vector dimensions and the concepts I learnt during this work allowed me to do just that. The DCT effectively lets me use scalar quantisation of each vector element, so I can easily choose any quantiser shape I like.
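As a rough sketch of the idea (pure Python, not the Octave implementation; the 6 dB step and K=40 come from the text, everything else is illustrative):

```python
import math

def dct(x):
    """Orthonormal DCT-II of a real vector (what SciPy calls norm='ortho')."""
    K = len(x)
    out = []
    for k in range(K):
        s = sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * K)) for n in range(K))
        scale = math.sqrt(1 / K) if k == 0 else math.sqrt(2 / K)
        out.append(scale * s)
    return out

def quantise(coeffs, step=6.0):
    """Uniform scalar quantisation of each DCT coefficient, e.g. 6 dB steps."""
    return [step * round(c / step) for c in coeffs]

# A flat 40-sample magnitude vector concentrates all of its energy in the DC
# term, so after the DCT every other quantised coefficient is zero.
mags_db = [30.0] * 40
q = quantise(dct(mags_db))
```

Because the DCT decorrelates the vector, quantising each coefficient independently with a simple uniform step no longer wastes bits on combinations that never occur, which is what made independent scalar quantisation of the raw samples wasteful.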

Spectral Quantiser

Candidate D uses a similar design and bit allocation to Candidate C, with K=40 resampling of the spectral magnitudes to help preserve narrow high frequency formants that are present for low pitch speakers like hts1a. The DCT of the rate K vectors is computed, and quantised using a Huffman code.

There are not enough bits to quantise all of the coefficients, so we stop when we run out of bits, typically after 15 or 20 (out of a total of 40) DCT coefficients. On each frame the algorithm tries direct or differential quantisation, and chooses the method with the lowest error.
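A minimal sketch of the direct-versus-differential selection might look like this (hypothetical Python, not the Octave code; a real encoder would also weigh the Huffman bit cost of each choice):

```python
def quantise_vec(v, step=6.0):
    """Uniform scalar quantisation of each element."""
    return [step * round(x / step) for x in v]

def mse(a, b):
    """Mean square error between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def encode_frame(coeffs, prev_coeffs, step=6.0):
    """Try direct and differential quantisation; keep whichever has lower error."""
    direct = quantise_vec(coeffs, step)
    delta = quantise_vec([c - p for c, p in zip(coeffs, prev_coeffs)], step)
    differential = [p + d for p, d in zip(prev_coeffs, delta)]
    if mse(coeffs, direct) <= mse(coeffs, differential):
        return "direct", direct
    return "differential", differential
```

Differential quantisation wins when the spectrum changes slowly between frames, since the small differences land closer to the quantiser grid (and, in the real codec, code cheaply under the Huffman table).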


I have a couple of small databases that I use for listening tests (about 15 samples in total). I feel Candidate D is better than Codec 2 1300, and also Codec 2 2400 for most (but not all) samples.

In particular, Candidate D handles samples with lots of low frequency energy better, e.g. cq_ref and kristoff in the table below.

Sample       1300     2400     2200 D
cq_ref       Listen   Listen   Listen
kristoff     Listen   Listen   Listen
me           Listen   Listen   Listen
vk5local_1   Listen   Listen   Listen
ebs          Listen   Listen   Listen

For a high quality FreeDV mode I want to improve speech quality over FreeDV 1600 (which uses Codec 2 1300 plus some FEC bits), and provide better robustness to different speakers and recording conditions. As you can hear – there is a significant jump in quality between the 1300 bit/s codec and candidate D. Implemented as a FreeDV mode, it would compare well with SSB at high SNRs.

Next Steps

There are many aspects of Candidate D that could be explored:

  • Wideband audio, like the work from last year.
  • Back to my original aim of exploring deep learning with Codec 2.
  • Computing the DCT coefficients from the rate L (time varying) magnitude samples.
  • Better time/freq quantisation using a 2D DCT rather than the simple difference in time scheme used for Candidate D.
  • Porting to C and developing a real time FreeDV 2200 mode.

The current candidate D 2200 codec is implemented in Octave, so porting to C is required before it is usable for real world applications, plus some more C to integrate with FreeDV.

If anyone would like to help, please let me know. It’s fairly straightforward C coding; I have already done the DSP. You’ll learn a lot, and be part of the open source future of digital radio.

Reading Further

[1] A geometric interpretation of the covariance matrix, really helped me understand what was going on with VQ in 2 dimensions, which can then be extended to larger dimensions.

[2] Vector Quantization in Speech Coding, Makhoul et al.

[3] Codec 2 Wideband, previous DCT based Codec 2 work.


Planet Linux AustraliaJames Morris: 2018 Linux Security Summit North America: Wrapup

The 2018 Linux Security Summit North America (LSS-NA) was held last month in Vancouver, BC.

Attendance continued to grow this year, with a record of 220+ attendees.  Our room was upgraded as a result, with spectacular views.

LSS-NA 2018 Vancouver BC


We also had many great proposals and the schedule ended up being a very tight fit.  We’ve asked for an extra day for LSS-NA next year — here’s hoping.

Slides of all presentations are available here:

Videos may be found in this youtube playlist.

Once again, as is typical, the conference was focused around development, somewhat uniquely in the world of security conferences.  It’s interesting to see more attention seemingly being paid to the lower parts of the stack: secure booting, firmware, and hardware roots of trust, as well as the continued efforts in hardening the kernel.

LWN provided some excellent coverage of LSS-NA:

Paul Moore has a brief writeup here.

Thanks to everyone involved in the event for 2018: the speakers, attendees, the program committee, the sponsors, and the organizing team at the Linux Foundation.  LSS-NA would not be possible without all of you!

Krebs on SecuritySecret Service Warns of Surge in ATM ‘Wiretapping’ Attacks

The U.S. Secret Service is warning financial institutions about a recent uptick in a form of ATM skimming that involves cutting cupcake-sized holes in a cash machine and then using a combination of magnets and medical devices to siphon customer account data directly from the card reader inside the ATM.

According to a non-public alert distributed to banks this week and shared with KrebsOnSecurity by a financial industry source, the Secret Service has received multiple reports about a complex form of skimming that often takes thieves days to implement.

This type of attack, sometimes called ATM “wiretapping” or “eavesdropping,” starts when thieves use a drill to make a relatively large hole in the front of a cash machine. The hole is then concealed by a metal faceplate, or perhaps a decal featuring the bank’s logo or boilerplate instructions on how to use the ATM.

A thin metal faceplate is often used to conceal the hole drilled into the front of the ATM. The PIN pad shield pictured here is equipped with a hidden spy camera.

Skimmer thieves will fish the card skimming device through the hole and attach it to the internal card reader via a magnet.

Thieves often use a magnet to secure their card skimmer in place above the ATM’s internal card reader. Image: U.S. Secret Service.

Very often the fraudsters will be assisted in the skimmer installation by an endoscope, a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body. By connecting a USB-based endoscope to his smart phone, the intruder can then peek inside the ATM and ensure that his skimmer is correctly attached to the card reader.

The Secret Service says once the skimmer is in place and the hole patched by a metal plate or plastic decal, the skimmer thieves often will wait a day or so to attach the pinhole camera. “The delay is believed to take place to ensure that vibrations from the drilling didn’t trigger an alarm from anti-skimming technology,” the alert reads.

When the suspect is satisfied that his drilling and mucking around inside the cash machine hasn’t set off any internal alarms, he returns to finish the job by retrofitting the ATM with a hidden camera. Often this is a false fascia directly in front of or above the PIN pad, recording each victim entering his or her PIN in a time-stamped video.

In other cases, the thieves may replace the PIN pad security shield on the ATM with a replica that includes a hidden pinhole camera, tucking the camera components behind the cut hole and fishing the camera wiring and battery through the hole drilled in the front of the machine.

The image on the left shows the spy camera guts and battery hidden behind the hole (this view is from the inside of the ATM, and the card reader is on the left). The image on the right shows a counterfeit PIN pad shield equipped with a hidden camera that is wired to the taped components pictured in the left image.

It’s difficult to cite all of the Secret Service’s report without giving thieves a precise blueprint on how to conduct these attacks. But I will say that several sources who spend a great deal of time monitoring cybercrime forums and communications have recently shared multiple how-to documents apparently making the rounds that lay out in painstaking detail how to execute these wiretapping attacks. So that knowledge is definitely being shared more widely in the criminal community now.

Overall, it’s getting tougher to spot ATM skimming devices, many of which are designed to be embedded inside various ATM components (e.g., insert skimmers). It’s best to focus instead on protecting your own physical security while at the cash machine. If you visit an ATM that looks strange, tampered with, or out of place, try to find another machine. Use only ATMs in public, well-lit areas, and avoid those in secluded spots.

Most importantly, cover the PIN pad with your hand when entering your PIN: That way, even if the thieves somehow skim your card, there is less chance that they will be able to snag your PIN as well. You’d be amazed at how many people fail to take this basic precaution.

Sure, there is still a chance that thieves could use a PIN-pad overlay device to capture your PIN, but in my experience these are far less common than hidden cameras (and quite a bit more costly for thieves who aren’t making their own skimmers). Done properly, covering the PIN pad with your hand could even block hidden cameras like those embedded in the phony PIN pad security shield pictured above.

Fascinated by all things skimmer-related? Check out my series All About Skimmers for more images, videos and ingenious skimmer scams.

LongNowA Journey to Siberia in Search of Woolly Mammoths

Harvard geneticist George Church, who is leading efforts to de-extinct the woolly mammoth, explores a cave in Siberia. Photo by Brendan Hall.

There will be three long flights across 15 time zones before I sleep in a bed, and we still won’t be there. Our destination is vastly closer to where we start than the path we have to take.

Under the auspices of the documentary being made about Long Now co-founder Stewart Brand by Structure Films, we are heading to Pleistocene Park in Siberia, where we will be filming Stewart and a team of scientists while they visit one of the first places on Earth that is being readied for the de-extinction and re-introduction of the woolly mammoth.

Our circuitous route from San Francisco to Siberia.

After driving to the San Francisco airport with Stewart, we met up with Long Now board member Kevin Kelly, checked in our baggage, and rendezvoused with Jason Sussberg from the film team in the terminal. We immediately saw that our first flight was delayed an hour. That should have been okay, given we had a planned three hours in New York to get to our next flight and a longer layover in Moscow before departing for Yakutsk. We quickly learned, however, that the delay might stretch to five hours due to a storm in the Northeast. Other members of our expedition had already had their flights cancelled, and had to take a train and a car through the storm to JFK from Boston. We could feel the trip unraveling even before it began.

But moments later they updated us that the storm was moving out, and our flight would be boarding shortly. Hopefully this would be the only snafu in our long chain of travel. We made it to New York and began checking in for our next flight, where we got our first taste of Russian bureaucracy at the Aeroflot ticket counter. Even though we had sent our passports to the embassy a month before, and had all the visas affixed, it took at least 30 minutes of mysterious typing to check us in and give us horrible middle seats.

Geneticist George Church travels very light.

Twelve hours later, bleary-eyed and stumbling through the Moscow airport, we were finally able to meet up with the rest of the group. David Alvarado, the other half of Structure Films, Gerry Ohrstrom, the executive producer, and Brendan Hall, who would be operating the drones and still cameras, rounded out the film team. On the science side were the eminent geneticist George Church from Harvard, who is doing the primary work on de-extincting the mammoth; Eriona Hysolli, one of his postdoctoral researchers, who would be collecting mammoth tissue on this trip; Raja Dhir, a young biotech entrepreneur and protege of Church focused on bacteria; and Anya Bernstein, a Moscow-born Harvard professor of anthropology specializing in Russian futurism.

Jason Sussberg, David Alvarado, Gerry Ohrstrom, George Church, Stewart Brand, Alexander Rose, Eriona Hysolli, Raja Dhir, Kevin Kelly, Anya Bernstein. Photo by Brendan Hall.

After a few hours of chatting and recharging devices we were back on an Aeroflot plane bound for Yakutsk, the capital of the Sakha Republic. Kevin Kelly pointed out the marked change in the people on this flight. While the people on our last flight had the fair complexions and chiseled features I normally associated with Russians, the people on this flight looked to have heritage from both Mongolian and Eskimo cultures. The vastness of Russia rolled by for hours under the plane. We traversed 6 time zones and had only crossed about two thirds of Russia. Our destination, the Sakha Republic or Yakutia, is an autonomous cultural region of Russia that is nearly the size of India and boasts the lowest temperatures ever recorded in the Northern hemisphere.

Landing in Yakutsk.

After landing I was stunned to see that the bags we checked in San Francisco with another airline actually rolled off the conveyor… all except for one. Apparently the case with all the sound equipment had become separated from us at some point, and Aeroflot did not know where it was. This was a major setback for the filming team, as it was going to be very difficult for the equipment to catch up with us on the remainder of the trip — if it was ever found. Nonetheless, we headed out to our hotel, the Azimut Polar Star, complete with a stuffed mammoth in the lobby, and had a delicious dinner of Georgian cuisine.

I woke up in dazed disbelief that we had to go to the airport yet again in the morning to board a plane for another 4 hour flight. On the Russian-made twin propeller plane we found they had removed many of the seats and replaced them with cargo bound for Cherskiy, our destination on the northern coast. Our group certainly stood out on this flight. As we flew north crossing the Arctic Circle and east across two more time zones, there was no sign of habitation. Almost everyone in Yakutia lives in a few major cities, leaving the 1.2 million square mile area — larger than Argentina — almost completely untouched. As we approached Cherskiy we could see the mighty Kolyma River beneath us, infamous as the region of the Russian Gulag labor camps in the 01930s-50s, and the watercourse we would be spending the next 10 days on.

The Top of the World

The moment the plane door opened after landing on the dirt strip, two Russian soldiers stepped in blocking our exit.

“Passports,” they said, just to our group.

We produced our documents.

“All with Zimov?”

We nodded.

They checked our names against a list and gave our passports back. After walking by scores of derelict planes, we met Nikita Zimov in the parking lot. With him was Luke Griswold-Tergis, a filmmaker who has spent the last 6 seasons with the Zimovs making a documentary about them and the Pleistocene Park. Generously, Luke was going to be able to loan the film crew some sound equipment while they hoped for their equipment to come in on the next flight — at best two days out.

Some of the many planes that never left Cherskiy Airport

The Cherskiy logistical hub was one of over a hundred polar outposts during the Soviet times. When the Soviet Union collapsed, so did the support and resources, leaving just a few places like this to scrape by. The 20,000 souls in Cherskiy at the height of the Cold War have dwindled to fewer than 2,000, and most of those are native people from local tribes. The city is replete with abandoned buildings and infrastructure, and if there were ever any paved streets, they are now long gone. We bounced through the two or three blocks that make up the town in a bit of a blur, and were taken down to the water just off one end of the air strip where a barge was waiting. Nikita was boisterous and cheerful, and rattled off facts about his hometown as we peppered him with questions. We loaded all the gear onto the barge, were joined by Nikita’s wife Nastia and two of their young daughters, and were soon underway on an amazingly warm and buggy evening up the river.

Sergey Zimov, Nikita’s father, arrived by speedboat after we loaded gear into the cozy bunk rooms. Pleistocene Park is Sergey’s brainchild, and he has been working on it with his family and a small staff for over 30 years. He has piercing eyes and a kind of mythical presence that is both calming and commands your attention. Nikita explained that we would be cruising up river for about 20 hours to get to our destination, an eroding mud bank called Duvanii Yar (roughly translated as “windy cliff”) where we would search for mammoth bones. We sipped vodka and settled in for a long ride.

Sergey Zimov

Almost every meal on board included moose meat. Lucky for us it was delicious, as were all of the meals. But we quickly learned the limitations of living above the Arctic Circle at a remote outpost. If it cannot survive 10 months of winter, grows in the ground, or won’t keep indefinitely, it is a delicacy that has to be flown in. Everything else comes by barge in the summer, or is hunted and foraged locally — like moose. There are a few greenhouses in town whose soil beds are heated by the local coal-fired steam plant, where a few precious vegetables can be grown during the months when there is light. Everything else has a cripplingly high cost of shipping. Something like a potato or chicken meat could be the most expensive thing on your plate.

The sun was nearly always setting, and would barely dip below the horizon from midnight to 3am. It made for gorgeous “golden hour” lighting, and our trip up-river was glassy and calm in a way I had never experienced on the water. Even understanding that a body of water this large could be a river seemed to defy my definition of the term. The Kolyma is one of the last major wild rivers in the world. While it has a few interventions at its headwaters, the vast majority of it is untouched by civilization, and here, where it meets the Siberian Sea, it is wide enough that you can barely see both shores.

Duvanii Yar

In the slowly developing dawn, the muddy cliff of Duvanii Yar emerged on our port side. From a distance it did not stand out in any way, but this place is famous in the world of mammoth hunters. A mammoth tusk sells for $40,000 and is the last legal source of wild ivory on the planet. Locals scour these shores after large storms, or when the water is lowest, revealing new bones. The windfall of a mammoth tusk find is like winning the lottery.

We take small boats to the shore and Sergey explains that the water is too high to find much today, but reaches down by his feet amidst the driftwood and pulls up a bone proclaiming, “Buffalo, 35,000 years old.” We soon come to realize that we are standing in a place that used to be a wildly dense ecosystem. In Sergey’s Pleistocene Manifesto he writes:

During the ice age, Northern Siberia accumulated a thick layer of loess sediments. These are the soils of the mammoth steppe. By counting bones in these frozen sediments, it is possible to accurately estimate the density of animals in this ecosystem. On each square kilometer of pasture lived 1 mammoth, 5 bison, 8 horses, 15 reindeer. Additionally, more rare musk ox, elks, wooly rhinoceros, saiga, snow sheep, and moose were present. Wolves, cave lions and wolverines occupied the landscape as predators. In total, over 10 tons of animals lived on each square kilometer of pasture — hundreds of times higher than modern animal densities in the mossy northern landscape.

This immense density of life has stacked up in the permafrost layers at a scale that is hard to comprehend until you learn how to see the bones.

Duvanii coastline

The difficulty in finding bones was not that there were so few, but that they were mixed with the driftwood that looked nearly identical. Every piece of worn wood looks like a bone at first. Those that have a reddish tint often are. One of my first finds was a 40,000 year-old jaw from some sort of grazer.

One of the first ancient bones I found

Once your eyes learn the subtle differences to look for, bones are everywhere. Over the course of the morning, scores of specimens were found: Lots of buffalo, deer and other more common pleistocene grazers, and even a few mammoth bones, including a nice large shoulder bone. I asked Sergey if bones were found in these densities everywhere, or if this place was some sort of graveyard that accumulated remains. “Like this everywhere, but here, the river digs them up for you.”

Some of the bones found at Duvanii

Another feature of this location that was exposed by the river erosion was the tundra itself, and the ice wedge structures that litter the underground landscape. Hiking up from the river through gorgeous wildflowers and swarming mosquitoes, you can look up at the natural ground level before it was cut by the river. I had heard about permafrost and understood it to be permanently frozen ground, but I had never understood that it was filled with varying structures like this. The dirty ice melting in front of us was tens of thousands of years old, and filled with smelly organic compounds and gases being released into the atmosphere.

Tundra ice wedges exposed by river erosion

The visit to Duvanii made the idea of the “mammoth steppe” extremely real. There was something elemental about being surrounded by Pleistocene bones in a melting tundra landscape. You could close your eyes and feel how rich this ecosystem was before it was hunted into extinction.

Our trip down river went faster as we traveled with the flow of the water. We stopped for the night at the tiny home of a few fishermen who live along the banks of one of the smaller tributaries. They brought dried fish aboard, and one of them turned out to be quite a singer, regaling us all with Russian folk songs. After each song, Nikita translated the plot of suffering and injustice. This guy was basically the Siberian Johnny Cash.

The North-East Scientific Station (NESS)

Returning to Cherskiy we lugged our gear up to the The North-East Scientific Station (NESS) run by the Zimovs as a kind of HQ, bunkhouse and science facility. We joined a group of Russian geologists who were there studying the tundra both in and around Pleistocene Park. The station itself is a repurposed TV Satellite dish facility left over from the Soviet era. The dish is no longer functional but it lends a bit of gravitas to the location and is easy to pick out as you approach from the land or river.

For the next week this would be our home base as we struck out for the Pleistocene Park, ice caves, and other scientific outposts. It turned out to be good timing to be returning to land. Our moment of warm Siberian summer was beginning to come to an end.

The Park

The next day we were heading to our primary destination — Pleistocene Park. We had been learning about the theory of the park over the last few days and it at least sounded simple: Bring the grazing species back to Siberia, clear the trees and brush so that the grasses could come back, expose the soil to the cold air, and increase its reflectance (albedo). This would help keep the tundra frozen, and all the greenhouse gases like CO2 and methane in the ground. One of the sticking points in scaling this plan, however, is the part where you clear the trees and brush over millions of square miles. The Zimovs can do it in the few square miles of the park, but not all of Siberia, much less Northern Europe, Alaska, Canada and Greenland. This is where the mammoth comes in. It is believed that mammoths would keep the small trees and brush in check, leaving the majority of the land as fertile grasses for grazing, and, most importantly for climate change, flat expanses for bright white fields of ice and snow reflecting sunlight.

Everyone bundled up for the one hour speedboat ride to the park. The little open aluminum boats were great workhorses on this trip, but for each excursion we made, at least one had an issue that required a bit of repair. The Zimovs and Luke fixed each problem deftly, and only once did a boat get stranded after dark, requiring someone to go back out for them.

That time they trusted me to drive the boat! In the few months when the water is liquid, pretty much all travel is on the water. The rest of the year it is all done on frozen rivers by snowcat and snowmobile.

Arriving at the park, we were met by two of the employees that live there year round in a house built on top of two shipping containers (the area floods and they have already had to go in and out of the second story windows a few times). Right at the headquarters we were able to see a baby moose they were nursing to release soon, a recently introduced herd of muskox, and a lone buffalo.

Park headquarters

The buffalo was actually a developing story while we were there. The Zimovs have been trying to import 10–20 more buffalo for months and have them all ready to go out of Alaska, but the air transport companies keep backing out at the last minute. Bringing animals in from around Russia is tricky due to the distances and lack of roads, but bringing wild animals in from other countries is proving to be quite difficult logistically.

Nikita walked us around the park and showed us an area where they have cleared the scrub forest mechanically (with a bulldozer), as well as areas where they were draining some of the ponds into the river, leaving fertile grassy pastures. Both of these methods are being tested to generate the desired grasslands that the mammoths and grazers used to create and maintain. As we toured around, we were stalked by a few curious reindeer while we swatted at the mosquitos.

I should take a moment to talk about the bugs. On any day above freezing we were generally inundated by mosquitos. We were told that this was not bad, and that we “should have seen it a couple weeks ago.” Indeed none of our hosts even zipped up their bug shirts, but us visitors were covering as much of our bodies as possible with netting and DEET.

Nikita Zimov and Stewart Brand. Photo by Brendan Hall.

One thing that we all noticed about this area was the profound lack of birds in the air, or fish jumping in the water. I am not sure what the mosquitoes feed on when we are not there, but it was clear the fish were not preying on them. Perhaps it was just the time of year, or the behavior is just different there, but I have never visited a wilderness as devoid of visible fish and birds as this place.

After a lunch in the cozy bunkhouse structure we got in the boats to go and see if we could find one of the herds of wild Yakutian horses that were in the park. We located them fairly quickly and I think they were one of the most majestic species that they have on the property. They let us get pretty close as they are clearly not afraid of humans.

Muskox, Yakutian horses, reindeer, and the buffalo we saw in the park.

We returned to the park headquarters for our last stop of the day — the ice cave. In this area it is normal to dig ice caves to act as a year round freezer. But at Pleistocene Park, researchers wanted a window into how the soils and ice wedge structures were faring underground, and dug an elaborate multi-level set of tunnels that access hundreds of lateral feet and at least 50 feet down. They use these tunnels to study the ongoing effects of the changes on the surface as well as soil chemistry. After several hours in these caves standing on solid ice, we were ready to head back.

Inside the Ice Cave. Photo by Brendan Hall.

We got back in our boats to take a long cold ride in the rain back to Cherskiy. Again one of the boats broke down and had to be retrieved, but we all made it home safely. Every one of us walked away in awe of how much work is being done in this far corner of the world, as well as amazed to see what a shoestring budget it is being done with. This is possibly one of the most important climate experiments underway, and it is almost completely unfunded and unnoticed. We all agreed there should be experiments like this going on in multiple places and informing each other. The world does not just need one Pleistocene Park, it needs a network of parks in Alaska, Canada, Norway, Sweden, Finland, and across all of northern Russia.

Mammoth Tissue

Many more days were spent in and around Cherskiy, returning to the park, capturing methane from lakes, visiting scientific outposts, and an evening in the recently built Russian Banya (steam bath). But the final stop for the trip was a visit to the Mammoth Museum in Yakutsk on our way home. A few of us had ventured on ahead to spend a couple days in Moscow, but George Church and Eriona Hysolli were able to acquire some small tissue samples that would allow their work to continue in identifying the genetic differences between mammoths and modern Asian elephants.

Eriona Hysolli of the Church Lab taking a tissue sample from the trunk of a frozen mammoth.

Once these two genomes are properly compared, George Church and his lab hope to be able to use modern genetic techniques, and likely some yet to be invented gestation techniques, to be able to bring the mammoth back. The most obvious place for its reintroduction is a place like Pleistocene Park. Mammoths will hopefully once again be roaming the steppe, and keeping the tundra safely frozen, after being absent for nearly 10,000 years.

Alexander Rose — Executive Director — Long Now

Learn More

CryptogramCounting People Through a Wall with WiFi

Interesting research:

In the team's experiments, one WiFi transmitter and one WiFi receiver are behind walls, outside a room in which a number of people are present. The room can get very crowded with as many as 20 people zigzagging each other. The transmitter sends a wireless signal whose received signal strength (RSSI) is measured by the receiver. Using only such received signal power measurements, the receiver estimates how many people are inside the room — an estimate that closely matches the actual number. It is noteworthy that the researchers do not do any prior measurements or calibration in the area of interest; their approach has only a very short calibration phase that need not be done in the same area.

Academic paper.

Worse Than FailureCodeSOD: Off by Dumb Error

“We’re bringing on my nephew, he’s super smart with computers, so you make sure he is successful!”

That was the long and short of how Reagan got introduced to the new hire, Dewey. Dewey’s keyboard only really needed three keys: CTRL, C, and V. They couldn’t write a line of code to save their life. Once, when trying to fumble through a FizzBuzz as a simple practice exercise, Dewey took to Google to find a solution. Because Dewey couldn’t quite understand how Google worked, instead of copy/pasting out of StackOverflow, they went to r/ProgrammerHumor and copied code out of a meme image instead.

Reagan couldn’t even just try and shove Dewey off on a hunt for a left-handed packet shifter in the supply closet, because Dewey’s patron was watching source control, and wanted to see Dewey’s brilliant commits showing up. Even if Reagan didn’t give Dewey any tasks, Dewey’s uncle did.

That’s how Dewey got stumped trying to fetch data from a database. They simply needed to read one column and present it as a series of HTML list items, using PHP.

This was their approach.

$sql = "SELECT information FROM table";
//yes, that is actually what Dewey named things in the DB
$result = $conn->query($sql);
$list = $result->fetch_assoc();
$i = 1;
$run = true;
while ( $list == true && $run != false ) {
  if ( $list[$i] <= count($list) ) {
    echo '<li>' . $list[$i] . '</li>';
  } else {
    $last = array_pop(array_reverse($list));
    echo '<li>' . $last . '</li>';
    $run = false;
  }
}
Presumably, this is one of the cases where Dewey didn’t copy and paste code, because I don’t think anyone could come up with code like that on purpose.

The fundamental misunderstanding of loops, lists, conditions, arrays, and databases is just stunning. Somehow, Dewey couldn’t grasp that arrays started at zero, but blundered into a solution where they could reverse and pop the array instead.
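For contrast, the task itself is nearly a one-liner in any language: loop over the rows and wrap each value in a list item. A sketch in JavaScript, with a hypothetical rows array standing in for the database result:

```javascript
// Hypothetical rows, standing in for repeated fetch_assoc() calls.
const rows = [
  { information: 'alpha' },
  { information: 'beta' },
  { information: 'gamma' },
];

// One <li> per row: no counters, flags, reversals, or pops required.
const listItems = rows.map(row => `<li>${row.information}</li>`).join('');
console.log(listItems); // <li>alpha</li><li>beta</li><li>gamma</li>
```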

Needless to say, Dewey never actually had any code get past the review stage. Shortly after this, Dewey got quietly shuffled to some other part of the organization, and Reagan never heard from them again.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet Linux AustraliaOpenSTEM: Our interwoven ancestry

In 2008 a new group of human ancestors – the Denisovans, were defined on the basis of a single finger knuckle (phalanx) bone discovered in Denisova cave in the Altai mountains of Siberia. A molar tooth, found at Denisova cave earlier (in 2000) was determined to be of the same group. Since then extensive work […]


Worse Than FailureCodeSOD: Ten Times as Unique

James works with a financial services company. As part of their security model, they send out verification codes for certain account operations, and these have to be unique.

So you know what happens. Someone wrote their own random string generator, then wrapped it up into a for loop and calls it until they get a random string which is unique:

private string GetUniqueVerificationCode()
{
    // Generate a new code up to 10 times and check for uniqueness - if it's unique jump out
    // IRL this should only hit once, it;s a random 25 char string ffs but you can never be too careful :)
    for (var tries = 0; tries < 10; tries++)
    {
        var code = RandomStringGenerator.GetRandomAlphanumericString(50);
        if (!this.userVerificationCodeRepository.CodeExists(code))
            return code;
    }
    throw new Exception("Unable to generate unique verification code.");
}

It’s the details, here. According to the comment, we expect 25 characters, but according to the call, it looks like it’s actually 50- GetRandomAlphanumericString(50). If, after ten tries, there isn’t a unique and random code, give up and chuck an exception- an untyped exception, making it essentially impossible to catch and respond to in a useful way.

As the comment points out- the odds of a collision are exceedingly small- at least depending on how the “random alphanumeric string” is generated. Even with case insensitive “alphanumerics”, there are quadrillions of possible strings at twenty five characters. If it’s actually fifty, well, it’s a lot.
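To put rough numbers on it: 25 case-insensitive alphanumerics (36 symbols) give 36^25 possible strings, and 50 mixed-case characters give 62^50. A quick BigInt check of the magnitudes, as a back-of-the-envelope sketch:

```javascript
// Keyspace sizes for the two lengths mentioned in the comment and the call.
const caseInsensitive25 = 36n ** 25n; // a-z, 0-9; 25 characters
const mixedCase50 = 62n ** 50n;       // a-z, A-Z, 0-9; 50 characters

console.log(caseInsensitive25.toString().length); // 39 digits (~8.1e38)
console.log(mixedCase50.toString().length);       // 90 digits (~4.2e89)
```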

Now, sure, maybe there’s a bias in the random generation, making collisions more likely, but that’s why we try and design our applications to avoid generating the random numbers ourselves.

James pointed out that this was silly, but the original developer misunderstood, and thought the for loop was the silly part, so now the code looks like this:

private string GetUniqueVerificationCode()
{
    var code = RandomStringGenerator.GetRandomAlphanumericString(50);
    while (this.userVerificationCodeRepository.CodeExists(code))
    {
        code = RandomStringGenerator.GetRandomAlphanumericString(50);
    }
    return code;
}

Might as well have gone all the way to a do...while. The best part is that regardless of which version of the code you use, since it’s part of a multiuser web application, there’s a race condition- the same code could be generated twice before being logged as an existing code in the database. That’s arguably more likely, depending on how the random generation is implemented.
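The standard way to close that race is to let the database arbitrate: put a unique index on the code column, insert first, and retry only on a constraint violation. A sketch with a hypothetical synchronous repository (the method name and error code are illustrative, not a real driver API):

```javascript
// Insert-then-retry: the unique index, not the application, decides uniqueness.
// repo.insert() is assumed to throw an error with code 'UNIQUE_VIOLATION'
// when the unique constraint on the code column is violated.
function createVerificationCode(repo, generate) {
  for (let tries = 0; tries < 10; tries++) {
    const code = generate();
    try {
      repo.insert(code); // backed by a unique index on the code column
      return code;       // insert succeeded, so the code is provably unique
    } catch (err) {
      if (err.code !== 'UNIQUE_VIOLATION') throw err; // a real error, not a collision
      // collision: fall through and generate a fresh code
    }
  }
  throw new Error('Unable to generate unique verification code.');
}
```

The window between "check" and "insert" disappears because there is no separate check; the insert is the check.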

Based on a little googling, I suspect that the GetRandomAlphanumericString was copy-pasted from StackOverflow, and I’m gonna bet it wasn’t one of the solutions that used a cryptographic source.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.


TEDHumanizing our future: A night of talks from TED and Verizon

Hosts Bryn Freedman, left, and Kelly Stoetzel open the “Humanizing Our Future” salon, presented by Verizon at the TED World Theater, September 20, 2018, in New York. Photo: Ryan Lash / TED

There are moments when the world begins to shift beneath our feet. Sometimes slowly, sometimes dramatically. Now more than ever we are living and working in an era of exponential technological advancement. How we address rapid change, what collaborative relationships we create, how we find our humanity — all this will determine the future we step into.

For the first time, TED has partnered with Verizon for a salon focused on building that future. In a night of talks at the TED World Theater in New York City — hosted by TED curators Bryn Freedman and Kelly Stoetzel — six speakers and one performer shared fresh thinking on healing our hospital system, empowering rural women, creating a safer internet, harnessing intergenerational wisdom and much more.

How intergenerational wisdom helps companies thrive. In 2013, Chip Conley, who built a multi-decade career running boutique hotels, was brought into Airbnb to be the mentor of CEO Brian Chesky. Conley was 52 (and thus 21 years older than Chesky) and he wondered what, if anything, he could offer these digital natives. But he realized he could become what he calls a “Modern Elder,” someone with the “ability to use timeless wisdom and apply it to modern-day problems.” For instance, he shares with the younger employees the people skills he gained over decades, while they teach him about technology. Nearly 40 percent of Americans have a boss who is younger than them — and when people of all ages exchange knowledge and learn from each other, good things happen. “This is the new sharing economy,” Conley says.

Can hospitals heal our environmental illness? “It’s not possible to have healthy people on a sick planet,” says healthcare change agent Gary Cohen. Working in healthcare for 30 years, Cohen has seen firsthand the pollution created by hospitals in the United States — if American hospitals were a country, he says, they would have more greenhouse gas emissions than the entire United Kingdom. Cohen suggests that it’s time for hospitals to go beyond medical practice and become centers of holistic community healing. What could that look like? Investment in sustainable, renewable energy and transportation, green and affordable housing, and partnership with schools to pool local food resources. “Transform hospitals from being cathedrals of chronic disease to beacons of community wellness,” Cohen says.

Meagan Fallone works on an education program that’s teaching thousands of rural, illiterate women to create solar power systems — and improve their communities and lives along the way. She speaks at the “Humanizing Our Future” salon. (Photo: Ryan Lash / TED)

Empowering rural women through solar-powered education. The innovators best prepared to cope with the issues of the future won’t be found in Silicon Valley or at an Ivy League school, says Barefoot College CEO Meagan Fallone. Instead, they’ll be found among the impoverished women of the Global South. Fallone works on groundbreaking programs at Barefoot College, a social work and research center, helping illiterate women break cycles of poverty through solar power education and training. Nearly 3,000 women have completed Barefoot College’s six-month business and solar engineering curriculum, and their skills have brought solar light to more than one million people. Following the success of the solar education program, and at the request of graduates, Barefoot College developed a follow-up program called “Enriche,” which offers a holistic understanding of enterprise skills, digital literacy, human rights and more. By democratizing and demystifying technology and education, Fallone says, we can empower illiterate women with the skills to become leaders and entrepreneurs — and make real change in their communities.

It’s fine to enjoy a dystopian movie, says Rima Qureshi — but when we’re building our real future, dystopia is a choice. She speaks at the “Humanizing Our Future” salon. (Photo: Ryan Lash / TED)

Dystopia is a choice. From The Matrix to Black Mirror, many of us crave science-fiction tales of rogue technologies: robots that will take our jobs, enslave us, destroy us or pit us against one another. Is our dread of dystopia a self-fulfilling prophecy? Rima Qureshi offers a warning — and some hopeful advice to remind us that dystopia is a choice. Our love for dystopia courts actual disaster through “target fixation”: the phenomenon where a driver or a pilot panics when a hazard looms, and thus becomes more likely to actually strike it. Although we should always keep cyber threats in our peripheral vision, Qureshi says, we should remain focused on the technologies that will help us: virtual classrooms, drones that race into burning buildings to find survivors, or VR that allows doctors to perform surgery remotely. We should not assume the future will be terrible (though we can still enjoy the next apocalyptic movie about how technology will destroy us all).

Ever played a djembe? The audience at the “Humanizing Our Future” salon got to try their skills on this traditional drum, led by motivator Doug Manuel, at the TED World Theater. (Photo: Ryan Lash / TED)

How drums build community. In 1995, entrepreneur Doug Manuel made a trip to West Africa and fell in love — with a drum. That drum is called the djembe, a rope-tuned instrument played with the hands; it’s one of the world’s oldest forms of communication. “With its more than 300 different traditional rhythms, it’s accompanied every aspect of life — from initiations to celebrations and even sowing the seeds for an abundant harvest,” Manuel says. Since his life-changing trip, Manuel has used the djembe to develop team-building programs and build bridges between Africa and the West. In a live demo of his work, Manuel invites the audience to try their hands at the djembe during two upbeat drum lessons. Backed by two professional drummers, Manuel teaches a few beats — and shows how the djembe can still bring people together around a collective rhythm.

Healing the pain of racial division. During the Civil Rights era, Ruby Sales joined a group of freedom fighters in Alabama, where she met Jonathan Daniels, a fellow student. The two became friends, and in 1965 they were jailed during a labor demonstration, ostensibly to save them from vigilantes. After six days in jail, the sheriff released the activists — but shortly after, they were attacked by a man with a shotgun. Daniels pulled Sales out of the way, and he was killed by the blast. In this moment, Sales witnessed “both love and hate coming from two very different white men that represented the best and the worst of white America.” Traumatized, she was stricken silent for six months. Fifty years later, our nation is still mired in what Sales calls a “culture of whiteness”: “a systemic and organized set of beliefs … [that] maintain a hierarchical power structure based on skin color.” To battle this culture, Sales calls for each of us to embrace our multi-ethnic identities and stories. Collectively shared, these stories can relieve racial tension and, with the help of connective technology, expand our vistas beyond our segregated daily lives.

Bryn Freedman, left, interviews technologist Fadi Chehadé at the “Humanizing Our Future” salon in New York. (Photo: Ryan Lash / TED)

What the internet is missing right now. Technology architect Fadi Chehadé helped set up the infrastructure that makes the internet work — basic things like the domain name system and IP address standards. Today as an advisory board member with the World Economic Forum’s Center for the Fourth Industrial Revolution and a member of the UN Secretary-General’s High-Level Panel on Digital Cooperation, Chehadé is focused on finding ways for society to benefit from technology and on strengthening international cooperation in the digital space. In a crisp conversation with Bryn Freedman, curator of the TED Institute, Chehadé discusses the need for norms on issues like privacy and security, the ongoing war between the West and China over artificial intelligence, how tech companies can become stewards of the power they have to shape lives and economies, and what everyday citizens can do to claim power on the internet. “My biggest hope is that we will each become stewards of this new digital world,” Chehadé says.

Sociological ImagesThe Tennis Dress Code Racket

Many tennis clubs today uphold an all-white dress code. But does this homage to tradition come with the racism and sexism of the past? Wimbledon’s achromatic clothing policy hearkens back to the Victorian era, when donning colorless attire was regarded as a necessary measure to combat the indecency of sweat stains, particularly for women. Of course, back then, women customarily played tennis in full-length skirts and men in long cotton pants — also for propriety’s sake.  

Serena Williams at the French Open, 2018 Anne White at Wimbledon, 1985

But today, not all tennis clubs insist on all-white. While Wimbledon is known for having the strictest dress standards (even Anne White’s catsuit pictured above got banned there in 1985), the other grand slams, including the French Open (along with the U.S. Open and the Australian Open), have recently become venues for athletes to showcase custom fashions in dramatic colors and patterns. Since the advent of color TV, athletes have used their clothing to express their personality and distinguish themselves from their competitors.

For instance, Serena Williams wore a black Nike catsuit to this year’s French Open. Her catsuit, a full-body compression garment, not only made her feel like a “superhero,” but also functioned to prevent blood clots, a health issue she’s dealt with frequently and which contributed to complications with the birth of her daughter. On Instagram, she dedicated it to “all the moms out there who had a tough recovery from pregnancy.”

Despite this supposed freedom, Williams’ catsuit drew the ire of the French Tennis Federation. Its president, Bernard Giudicelli, said in an interview with Tennis Magazine that “[Catsuits] will no longer be accepted.” The FTF will be asking designers to give them an advance look at designs for players and will “impose certain limits.” His rationale? “I think that sometimes we’ve gone too far,” and “One must respect the game and the place.”

The new policy and the coded language Giudicelli used to justify it have been called out as both racist and sexist. By characterizing Williams’s catsuit as a failure to “respect the game,” the FTF echoes other professional sporting associations who have criticized Black football players kneeling during the anthem and Black or Latino baseball players’ celebrating home runs. Moreover, the criticism of Williams’ form-fitting clothing and the reactionary new dress code it spawned are merely the latest in a series of critiques of Williams’ physique.

Sociologist Pierre Bourdieu explains in his “Program for a Sociology of Sport” that practices like the policing of athletes’ apparel are a way for the tennis elite to separate themselves from other players and preserve a hierarchy of social status. This became necessary as the sport, derived from royal tennis and known as the “Sport of Kings,” experienced a huge increase in popularity since the 1960s. Bourdieu describes how this expansion resulted in a variety of ways to play tennis, some more distinctive than others:

…under the same name, one finds ways of playing that are as different as cross-country skiing, mountain touring, and downhill skiing are in their own domain. For example, the tennis of small municipal clubs, played in jeans and Adidas on hard surfaces, has very little in common with the tennis in white outfits and pleated skirts which was the rule some 20 years ago and still endures in select clubs. (One would also find a world of differences at the level of the style of the players, in their relation to competition and to training, etc.)

In reanimating the dress code, FTF officials are engaging in boundary work to preserve the status of a certain kind of tennis — and, by extension, a certain kind of tennis player — at the top of the hierarchy. In so doing, it is limiting the expression of a sports icon who redefines beauty and femininity and perhaps elite tennis itself.

Amy August is a doctoral candidate in Sociology at the University of Minnesota. Her research focuses on education, family, culture, and sport. Her dissertation work uses qualitative methods to compare the forms of social capital recognized and rewarded by teachers and coaches in school and sports. Amy holds a BA in English Literature from the University of Illinois at Chicago, a MA in Teaching from Dominican University, and a MA in Comparative Human Development from the University of Chicago.


CryptogramEvidence for the Security of PKCS #1 Digital Signatures

This is interesting research: "On the Security of the PKCS#1 v1.5 Signature Scheme":

Abstract: The RSA PKCS#1 v1.5 signature algorithm is the most widely used digital signature scheme in practice. Its two main strengths are its extreme simplicity, which makes it very easy to implement, and that verification of signatures is significantly faster than for DSA or ECDSA. Despite the huge practical importance of RSA PKCS#1 v1.5 signatures, providing formal evidence for their security based on plausible cryptographic hardness assumptions has turned out to be very difficult. Therefore the most recent version of PKCS#1 (RFC 8017) even recommends a replacement, the more complex and less efficient scheme RSA-PSS, as it is provably secure and therefore considered more robust. The main obstacle is that RSA PKCS#1 v1.5 signatures use a deterministic padding scheme, which makes standard proof techniques not applicable.

We introduce a new technique that enables the first security proof for RSA-PKCS#1 v1.5 signatures. We prove full existential unforgeability against adaptive chosen-message attacks (EUF-CMA) under the standard RSA assumption. Furthermore, we give a tight proof under the Phi-Hiding assumption. These proofs are in the random oracle model and the parameters deviate slightly from the standard use, because we require a larger output length of the hash function. However, we also show how RSA-PKCS#1 v1.5 signatures can be instantiated in practice such that our security proofs apply.

In order to draw a more complete picture of the precise security of RSA PKCS#1 v1.5 signatures, we also give security proofs in the standard model, but with respect to weaker attacker models (key-only attacks) and based on known complexity assumptions. The main conclusion of our work is that from a provable security perspective RSA PKCS#1 v1.5 can be safely used, if the output length of the hash function is chosen appropriately.

I don't think the protocol is "provably secure," meaning that it cannot have any vulnerabilities. What this paper demonstrates is that there are no vulnerabilities under the model of the proof. And, more importantly, that PKCS #1 v1.5 is as secure as any of its successors like RSA-PSS and RSA Full-Domain.

Worse Than FailureCodeSOD: The UI Annoyance

Daniel has a bit of a story. The story starts many months ago, on the very first day of the month.

Angular 1.x has something called a filter as a key concept. This is a delightfully misleading name, as it's more meant to be used as a formatting function, but because it takes any arbitrary input and converts it to any arbitrary output, people did use it to filter, which had all sorts of delightful performance problems in practice.
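In practice, a filter is just a registered factory that returns a formatting function. A minimal sketch in plain JavaScript, with the Angular registration shown only in a comment (the filter name here is made up for illustration):

```javascript
// The filter is just a factory returning a formatting function.
function shoutFilterFactory() {
  return function(input) {
    // Guard against null/undefined, then format.
    return (input || '').toUpperCase() + '!';
  };
}

// In Angular 1.x this would be registered on a module as:
//   app.filter('shout', shoutFilterFactory);
// and used in a template as {{ name | shout }}.
const shout = shoutFilterFactory();
console.log(shout('hello')); // HELLO!
```

Because the function accepts and returns arbitrary values, nothing stops anyone from using it to filter arrays in an ng-repeat, which is where the performance trouble comes from.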

Well, Daniel found this perfectly sensible formatting filter. It's well documented. It's also wrong.

/**
 * Given a timestamp in the format "2018-06-22T14:55:44+00:00", this filter
 * returns a date in human-readable format following our style guide.
 * Assuming the browser's timezone is EDT, the filter applied to the above string
 * would return "Jun 22, 2018 10:55 AM".
 * When applicable, this filter returns "today at" or "yesterday at" as date abbreviations in lowercase.
 * These can be capitalized using the "capitalize" filter above directly in an HTML file.
 */
ourApp.filter('ourTimestamp', ['$filter', function($filter) {
  return function(timestamp) {
    // Guard statement for when timestamp is null, undefined or empty string.
    if (!timestamp) {
      return '';
    }

    let TODAY = new Date();
    let TODAY_YEAR = TODAY.getFullYear();
    let TODAY_MONTH = TODAY.getMonth();
    let TODAY_DAY = TODAY.getDate();
    let TIMESTAMP_FORMAT = 'MMM d, y h:mm a';
    let TIME_FORMAT = 'h:mm a';

    let originalTimestampDate = new Date(timestamp);
    let year = originalTimestampDate.getFullYear();
    let month = originalTimestampDate.getMonth();
    let day = originalTimestampDate.getDate();

    let dateAbbreviation = null;
    if (year === TODAY_YEAR && month === TODAY_MONTH && day === TODAY_DAY) {
      dateAbbreviation = 'today at ';
    } else if (year === TODAY_YEAR && month === TODAY_MONTH && day === (TODAY_DAY - 1)) {
      dateAbbreviation = 'yesterday at ';
    }

    if (dateAbbreviation) {
      return dateAbbreviation + $filter('date')(timestamp, TIME_FORMAT);
    } else {
      return $filter('date')(timestamp, TIMESTAMP_FORMAT);
    }
  };
}]);

This code, like so much bad code, touches dates. This time, its goal is to output a more friendly date- like, if an event happened today, it simply says, "today at" or if it happened yesterday, it says "yesterday at". On the first day of the month, this fails to output "yesterday at". The bug is simple to spot:

if (year === TODAY_YEAR && month === TODAY_MONTH && day === (TODAY_DAY - 1)) {
  dateAbbreviation = 'yesterday at ';
}

On September first, this only outputs "yesterday at" for September zeroth, not August 31st.
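The usual fix exploits the fact that the JavaScript Date constructor normalizes out-of-range day values across month and year boundaries, so "yesterday" can be computed directly instead of by subtracting from the day component:

```javascript
// Day 0 of September is August 31: the constructor rolls the date over.
const today = new Date(2018, 8, 1); // Sep 1, 2018 (months are 0-based)
const yesterday = new Date(today.getFullYear(), today.getMonth(), today.getDate() - 1);

console.log(yesterday.getMonth()); // 7 (August)
console.log(yesterday.getDate());  // 31
```

Comparing the timestamp's components against this rolled-over date handles month and year boundaries for free.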

Now, that's a simple brainfart bug, and it could be fixed quite easily, and there are many libraries which could be used. But Daniel ran a git blame to see who on the development team was responsible... only to find that it was nobody on the development team.

It probably isn't much of a shock to learn that this particular application has lots of little UI annoyances. There's a product backlog a mile long with all sorts of little things that could be better, but can be lived with, for now. Because it's a mile of things that can be lived with, they keep getting pushed behind things that are more serious, necessary, or just have someone screaming more loudly about them.

Sprint after sprint, the little UI annoyances keep sitting on the backlog. There's always another problem, another fire to put out, another new feature which needs to be there. The CTO kept trying to raise the priority of the little annoyances, and the line kept getting jumped. So the CTO just took matters into their own hands and put this patch into the codebase, and pushed through to release. As the CTO, they bypassed all the regular sign-off procedures. "The test suite passes, what could be wrong?"

Of course, it has its own little UI annoyance, in that it misbehaves on the first day of the month. The test suite, on the other hand, assumes that the code will run as intended. And the test suite actually uses the current date (and calculates yesterday using date arithmetic). Which means on the first day of the month, the test fails, breaking the build.

Unfortunately for Daniel and the CTO, this bug ended up on the backlog. Since it only impacts developers one day a month, and since it's pretty much invisible to the users, it's got a very low priority. It might get fixed, someday.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


Rondam RamblingsThe last word (I hope!) on Fitch's paradox

I was really hoping to leave Fitch's paradox in the rear view mirror, but like a moth to the flame (or perhaps a better metaphor would be like an alcoholic to the bottle) I find myself compelled to make one more observation. First a quick review for those of you who haven't been following along: Fitch's paradox is a formal proof that starts with some mostly innocuous-seeming assumptions and

Rondam RamblingsThe staggering hypocrisy of Brett Kavanaugh and his supporters

I stole the title of this entry from this op ed in The Washington Post, which is worth reading.  It contrasts Brett Kavanaugh's indignation at being asked questions about his personal life with his shameless willingness to ask deeply personal questions of Bill Clinton when the shoe was on the other foot. But the hypocrisy goes well beyond Kavanaugh.  There is so much of it that it is hard to

Krebs on SecurityBeware of Hurricane Florence Relief Scams

If you’re thinking of donating money to help victims of Hurricane Florence, please do your research on the charitable entity before giving: A slew of new domains apparently related to Hurricane Florence relief efforts are now accepting donations on behalf of victims without much accountability for how the money will be spent.

For the past two weeks, KrebsOnSecurity has been monitoring dozens of new domain name registrations that include the terms “hurricane” and/or “florence” and some word related to support (e.g., “relief,” “assistance,” etc.). Most of these domains have remained parked or dormant since their creation earlier this month; however, several of them became active only in the past few days, directing visitors to donate money through private PayPal accounts without providing any information about who is running the site or what will be done with donated funds.

The landing page for hurricaneflorencerelieffund-dot-com also is the landing page for at least 4 other Hurricane Florence donation sites that use the same anonymous PayPal address.

Among the earliest of these is hurricaneflorencerelieffund-dot-com, registered anonymously via GoDaddy on Sept. 13, 2018. Donations sent through the site’s PayPal page go to an email address tied to the PayPal account on the site (info@hurricaneflorencerelieffund-dot-com); emails to that address did not elicit a response.

Sometime in the past few days, several other Florence-related domains that were previously parked at GoDaddy began redirecting to this domain, including hurricanflorence-dot-org (note the missing “e”); florencedisaster-dot-org; florencefunds-dot-com; and hurricaneflorencedonation-dot-com. All of these domains include the phone number 833-FLO-FUND, which rings to an automated system that ultimately asks the caller to leave a message. There is no information provided about the organization or individual running the sites.
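Lookalike registrations such as the missing-“e” variant above can be caught mechanically. As a hedged sketch (the one-edit threshold and reference list are illustrative assumptions, not any registrar's actual process), a Levenshtein edit-distance check flags names that sit one character away from a known domain:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def lookalikes(candidate, known, max_dist=1):
    """Return known names within max_dist edits of candidate (excluding exact matches)."""
    return [k for k in known if 0 < edit_distance(candidate, k) <= max_dist]
```

For instance, `lookalikes("hurricanflorence", ["hurricaneflorence"])` reports the near miss, since dropping one “e” is a single edit.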

The domain hurricaneflorencedisasterfund-dot-com has a slightly different look and feel, invokes the name of the Red Cross and also includes the 833-FLO-FUND number. Likewise, it accepts PayPal donations tied to the same email address mentioned above. It claims “80% of all donations go directly to FIRST RESPONDERS in North & South Carolina!” although it provides no clear way to verify that claim.

Hurricaneflorencedisasterfund-dot-com is one of several domains anonymously accepting PayPal donations, purportedly on behalf of Hurricane Florence victims.

The domain hurricaneflorencerelief-dot-fund, registered on Sept. 11, also accepts PayPal donations with minimal information about who might benefit from monies given. The site links to Facebook, Twitter and other social network accounts set up with the same name, although none of them appear to have any meaningful content. The email address tied to that PayPal account did not respond to requests for comment.

The domain theflorencefund-dot-com until recently also accepted PayPal donations and had an associated Twitter account (now deleted), but that domain recently changed its homepage to include the message, “Due to the change in Florence’s path, we’re suspending our efforts.”

Here is a Google spreadsheet that tracks some of the domains I’ve been monitoring, including notations about whether the domains are active and if they point to sites that ask for donations. I’ll update this sheet as the days go by; if anyone has any updates to add, please drop a comment below. All of the domains mentioned above have been reported to the Justice Department’s National Center for Disaster Fraud, which accepts tips at

Let me be clear: Just because a site is listed here doesn’t mean it’s a scam (or that it will be). Some of these sites may have been set up by well-intentioned people; others appear to have been established by legitimate aid groups who are pooling their resources to assist local victims.

For example, several of these domains redirect to the site of a legitimate nonprofit religious group based in North Carolina, which accepts donations through several domains using an inline donation service from a maker of “church management software.”

Another domain in this spreadsheet accepts donations on its site via a third-party fundraising network. The site belongs to a legitimate 501(c)(3) Muslim faith-based nonprofit in Raleigh, N.C., that is collecting money for Hurricane Florence victims.

If you’re familiar with these charities, great. Otherwise, it’s a good idea to research the charitable group before giving them money to help victims.

As The New York Times noted on Sept. 15, one way to do that is through Charity Navigator, which grades established charities on transparency and financial health, and has compiled a list of those active in the recovery from Florence. Other sites like GuideStar, the Better Business Bureau’s Wise Giving Alliance and Charity Watch perform similar reviews. You can find more details about how those sites work here.

Finally, remember that phishers and malware purveyors love to seize on the latest disasters to further their schemes. Never click on links or attachments in emails or social media messages that you weren’t expecting.

Planet Linux Australia: Gary Pendergast: Straight White Guy Discovers Diversity and Inclusion Problem in Open Source

This is a bit of a strange post for me to write; it’s a topic I’m quite inexperienced in. I’ll warn you straight up: there’s going to be a lot of talking about my thought processes, going off on tangents, and a bit of over-explaining myself for good measure. Think of it as something like high school math, where you had to “show your work”, demonstrating how you arrived at the answer. 20 years later, it turns out there really is a practical use for high school math. 😉

I’m Gary. I come from a middle-class, white, Australian family. My parents both worked, but also had the time to encourage me to do well in school. By doing well in school, I was able to get into a good university, and I could support myself on a part-time job because I only had to pay my rent and bar tab. There I met many friends, who’ve helped me along the way. From that, I’ve worked a series of well paid tech jobs, allowing me to have savings, and travel, and live in a comfortable house in the suburbs.

I’ve learned that it’s important for me to acknowledge the privileges that helped me get here. As a “straight white male”, I recognise that a few of my privileges gave me a significant boost that many people aren’t afforded. This is backed up by the data, too. Men are paid more than women. White women are paid more than black women. LGBT people are more likely to suffer workplace bullying. The list goes on and on.

Some of you may’ve heard the term “privilege” before, and found it off-putting. If that’s you, here’s an interesting analogy; take a moment to read it (and if the title bugs you, please ignore it for a moment, we’ll get to that), then come back.

Welcome back! So, are you a straight white male? Did that post title make you feel a bit uncomfortable at being stereotyped? That’s okay, I had a very similar reaction when I first came across the “straight white male” stereotype. I worked hard to get to where I am, and having that trivialised as something I only got because of how I was born hurts. The thing is, this is something that many people who aren’t “straight white males” experience all the time. I have a huge amount of respect for people who have to deal with that on a daily basis, but are still able to be absolute bosses at their job.

My message to my dudes here is: don’t sweat it. A little bit of a joke at your expense is okay, and I find it helps me see things from another person’s perspective, in what can be a light-hearted, friendly manner.

Diversity Makes WordPress Better

My job is to build WordPress, which is used by just shy of a third of the internet. That’s a lot of different people, building a lot of different sites, for a lot of different purposes. I can draw on my experiences to imagine all of those use cases, but ultimately, this is a place where my privilege limits me. Every time I’ve worked on a more diverse team, however, I’m exposed to a wider array of experiences, which makes the things we build together better.

Of course, I’m not even close to being the first person to recognise how diversity can improve WordPress, and I have to acknowledge the efforts of many folks across the community. The WordPress Community team are doing wonderful work helping folks gain confidence with speaking at WordPress events. WordCamps have had a Code of Conduct for some time, and the Community team are working on creating a Code of Conduct for the entire WordPress project. The Design team have built up excellent processes and resources to help folks get up to speed with helping design WordPress. The Core Development team run regular meetings for new developers to learn how to write code for WordPress.

We Can Do Better. I Can Do Better.

As much as I’d love it to be, the WordPress community isn’t perfect. We have our share of problems, and while I do believe that everyone in our community is fundamentally good, we don’t always do our best. Sometimes we’re not as welcoming, or considerate, as we could be. Sometimes we don’t take the time to consider the perspectives of others. Sometimes it’s just a bunch of tech-dude-bros beating their chests. 🙃

Nobody wins when we’re coming from a place of inequality.

So, this post is one of my first steps in recognising there’s a real problem, and learning about how I can help make things better. I’m not claiming to know the answers, I barely know where to start. But I’m hoping that my voice, added to the many that have come before me, and the countless that will come after, will help bring about the changes we need.