Planet Russell


Worse Than Failure: Editor's Soapbox: The Billable Hour

For every line of code that ends up in software the general public sees or interacts with, for every line in your Snapchats and Battlezone: Call of Honor Duty Warfare, there are a thousand lines of code written to handle a supply chain processing rule that only applies to one warehouse on alternating Thursdays, but changes next month thanks to a union negotiation. Or it’s a software package that keeps track of every scale owned by a company and reminds people to calibrate them. Or a data-pump that pulls records out of one off-the-shelf silo and pushes them into another.

That’s the “iceberg” of software development. In terms of sheer quantity, most software is written below the waterline, deep in the bowels of companies that don’t sell software, but need it anyway. That’s the world of internal software development.

And internal software development, the world of the in-house software shop, has a problem. Well, it has lots of problems, but we’re going to focus on one today: internal billing and the billable hour.

At a first pass, internal billing makes sense. If you are a widget manufacturer, then you want the entire company aligned with that goal. If you buy raw materials, those raw materials are going into widgets. If you pay laborers, their labor should be going into making widgets. If you buy capital assets, they should be things like widget-stamping machines.

[Image: a person using a computer and a calculator at the same time for some insane, stock-photo-related reason]

But you can’t just build widgets. Your workers need to organize via email. The new Widget-Stamper 9000 needs network connectivity, so you need network drops and routers and switches, which in turn need regular maintenance. This is, in pointy-haired-boss speak, “overhead”. You need it, but it doesn’t directly make widgets. It’s a cost center.

So what do large companies do? Well, they take all those “non-productive” activities and shuffle them off into their own departments, usually in a “corporate SBU”. Then all the departments doing “real” work get a budget to spend on those “cost centers”. And thus, internal billing is born.

Each employee needs an email account. Let’s assign that a cost, a rough—sometimes very rough—estimate of the total cost of managing the email account. Our corporate IT department will charge $20/yr per employee to cover the storage, configuration, management, and helpdesk support associated with their email account—and so on through the list of IT-related goods and services. Now the idea is that individual departments know their IT needs better than anyone else. By putting them in control of their IT budgets, they can spend their money wisely.

If you’re a widget-making company, you view software and IT support as an overhead cost, and recognize that you only have the capacity to pursue a certain number of IT projects, this makes perfect sense. Budgets and billing create a supply/demand relationship, and they give corporate the ability to cut overhead costs by controlling budgets. (Of course, this is all founded on the faulty assumption that in-house software development is simply overhead, but let’s set that aside for now.)

The problems start when internal billing meets software development, usually through the interface of the “billable hour”. The combination of these factors creates a situation where people who are ostensibly co-workers are locked into a toxic client/vendor relationship. The IT department is usually in a disadvantageous negotiating position, often competing against external vendors for a business department’s IT budget. Treating corporate IT as the preferred vendor isn’t all sunshine and roses for the business, either. There are definitely cases where external vendors are better suited to solve certain problems.

Putting IT resources on a billable hours system introduces a slew of bizarre side effects. For one thing, hours have to be tracked. That overhead might be relatively small, but it’s a cost. “Idling” becomes a serious concern. If developers aren’t assigned to billable projects, the IT department as a whole starts looking like it’s being non-productive. Practices like refactoring have to be carefully concealed, because business units aren’t going to pay for that.

Spending more billable hours on a project than estimated throws budgets out of whack. This forces developers into “adaptive strategies”. For example: padding estimates. If you can get an extremely padded estimate, or can get a long-running project into a steady-state where no one’s looking too closely at the charges, you can treat these as “banks”. A project starts running over your estimate? Start charging that work against a project that has some spare time.

Of course, that makes it impossible to know how much time was actually spent on a project, so forget about using that for process improvement later. It also makes every project more expensive, driving up the costs of internal development. This drives business users to seek external solutions, spending their IT budget outside of the company, or worse: to find workarounds. Workarounds like maybe just building a big complicated Excel spreadsheet with macros in it.

This isn’t restricted to hourly charges, either. I saw one organization that had a charge-back rate of $10,000/yr for a SQL Server database. That wasn’t licensing or hardware; that was just to create a new database on an existing instance of SQL Server. The result? Pretty much no business unit had a working test environment, and they’d often stack four or five different applications into the same database. Rarely, they’d use schemas to organize their tables, but usually you’d have tables like: Users, Users_1, UsersNew, UsersWidgetsoft, ___Users.

Forget about upgrades, even if they’re required. Short of making organization-wide modernization a capital project, no department or business unit is going to blow their IT budget on upgrading software that already works. For example, Microsoft still supports the VB6 runtime, but hasn’t supported the VB6 development environment since 2008. So, when the users say, “We need to add $X to the application,” IT has to respond, “We can’t add $X unless we do a complete rewrite, because we can’t support it in the state it’s in.” Either the business ends up doing without the feature or they make it a demand: “We need $X and we need it without a complete rewrite.” Then your developers end up trying to breathe life into a Windows 2000 VM without connecting it to the network in hopes that they can get something to build.

Billable hours turn work into a clock-punching exercise. Billing time is how you’re judged, and whether or not that time is spent effectively becomes less relevant. Often, by the end of the week, employees are looking for ways to fill up their hours. This should be easy, but I’ve watched developers agonize over how much they’re going to lie to make their timesheets look good and hit their “85% billable” targets. It gets especially bizarre because you don’t assign your own tasks, yet 85% of your time must be billable, so you stretch out the billable tasks you have been assigned to hit your targets, turning five-minute jobs into ten-hour slogs.

We could go on dissecting the problems with billable hours, and these problems exist even when we assume that you can just view your in-house software as a cost center. Some of these problems can get managed around, but the one that can’t is this harsh reality: software isn’t a cost center.

I’ve heard a million variations on the phrase, “we make widgets, not software!” Twenty years ago, perhaps even ten years ago, this may have been true. Today, if you are running a business of any scale, it simply is not. It’s trite to say, but these days, every business is an IT business.

One project I worked on was little more than a datapump application with a twist: the data source was a flow meter attached to a pipe delivering raw materials to a manufacturing process. The driver for reading the data was an out-of-date mess, and so I basically had to roll my own. The result was that, as raw material flowed through the pipe, the ERP system was updated in real-ish time with that material consumption, allowing up-to-the-minute forecasts of consumption, output, and loss.

How valuable was that? It’s not simply an efficiency gain; having that sort of data introduces new ways of managing the production process. From giving management a better picture of the actual state of the process, to helping supply chain plan out a just-in-time inventory strategy, this sort of thing could have a huge effect on the way the business works. That wasn’t a skunkworks idea; that wasn’t IT just going off and doing its own thing. That was a real hook for business process improvement.

Smart companies are starting to figure this out. I’ve been doing some work for a financial services company that just hired a new CTO, and he’s turned the old “We make $X, not software” line around, starting his tenure by saying, “We are a software company that provides financial services.” Instead of viewing IT as a sink, he’s pushing the company to view IT as a tool for opening up new markets and new business models.

So, yes, IT is a cost of doing business. You’ll need certain IT services no matter what, often fulfilled with off-the-shelf solutions, but configured and modeled to your needs. IT can also be a cost savings. Automation can free up employees to focus on value-added tasks.

But the one that’s underestimated in a lot of companies is IT’s ability to create value-added situations. If you make widgets, sure, it’s unlikely that your software is going to directly change the process of making widgets, so it’s unlikely that your software is itself technically “value added”. But a just-in-time supply chain system is more than just a cost savings or an efficiency booster. It can completely change how you manage your production process.

By placing the wall of billable hours between IT and the business, you’re discouraging the business from leveraging IT. So here are a few ways that corporations and management could possibly fix this problem.

First, as much as possible, integrate the IT staff into the business-unit staff. This might mean moving some corporate IT functionality out into individual departments or business units (if they’re large enough to support it), or dedicating corporate staff to a relationship with specific business units. Turn IT workers into a steady, flat cost, not a per-hour cost. Business-unit management can then set priorities and decide how to spend this limited resource to get new software developed.

If an organization absolutely must use internal billing to set priorities and control demand for IT resources, move as much work as possible into fixed-rate, flat-fee type operations. If a business unit requests a new piece of software, build a fixed-bid cost for that project, not an hourly cost.

While a “20% time” approach, where employees are allowed to spend 20% of their time on their own projects, doesn’t work in these kinds of environments, an organizational variation where some IT budget is used on speculative projects that simply might not work—a carefully managed skunkworks approach—can yield great benefits. It’s also an opportunity to keep your internal IT staff’s skills up to date. When you’re simply clocking billable hours, it’s hard to do any self-training, and unless your organization really invests in a training program, it’s easy to ossify. This can also involve real training, not “I sent my drones to a class, why don’t they know $X by now?” but actual hands-on experimentation, the only way to actually learn new IT skills.

All in all: billable hours are poison. It doesn’t matter that they’re a standard practice, they drag your IT department down and make your entire organization less effective. If you’re in a position to put a stop to it, I’m asking you, stop this. If you can’t stop it, find someone who can. Corporate IT is one of the most important yet under-prioritized sectors of our industry, and we need to keep it effective.


Planet Debian: Lucas Nussbaum: Implementing “right to disconnect” by delaying outgoing email?

France passed a law about the “right to disconnect” (more info here or here). The idea of not sending professional emails when people are not supposed to read them, in order to protect their private lives, is a pretty good one, especially when hierarchy is involved. However, I tend to do email at random times, and I would rather continue doing that, but delay the actual sending of the email to an appropriate time (e.g., when I write email in the evening, it would actually be sent the following morning at 9am).

I wonder how I could make this fit into my email workflow. I write email using mutt on my laptop, then push it locally to nullmailer, which then relays it, over an SSH tunnel, to a remote server (running Exim4).

Of course the fallback solution would be to use mutt’s postponing feature. Or to draft the email in a text editor. But that’s not really nice, because it requires going back to the email at the appropriate time. I would like a solution where I would write the email, add a header (or maybe manually add a Date: header — in all cases that header should reflect the time the mail was sent, not the time it was written), send the email, and have nullmailer or the remote server queue it until the appropriate time is reached (e.g., delaying while “current_time < Date header in email”). I don’t want to do that for all emails: e.g. personal emails can go out immediately.

Any ideas on how to implement that? I’m attached to mutt and relaying using SSH, but not attached to nullmailer or exim4. Ideally the delaying would happen on my remote server, so that my laptop doesn’t need to be online at the appropriate time.
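One possible shape for the server-side piece is sketched below: hold a message until the moment in its Date: header has passed, then hand it to the local sendmail. This is not how nullmailer or Exim4 would actually implement it (a real queue would use at(1) or an MTA retry rule rather than sleeping, and the sendmail path is an assumption), but it shows the "delay while current_time < Date header" logic:

```python
import email
import email.utils
import subprocess
import time


def seconds_until(date_header, now=None):
    """Seconds to wait until the Date: header's moment; 0.0 if already due."""
    when = email.utils.parsedate_to_datetime(date_header)
    now = time.time() if now is None else now
    return max(0.0, when.timestamp() - now)


def relay_later(raw_message):
    """Wait for the message's Date:, then hand it to the system sendmail.

    Sleeping keeps the sketch short; a real setup would queue via at(1)
    or an MTA retry rule so the process need not stay alive.
    """
    msg = email.message_from_bytes(raw_message)
    if msg["Date"]:
        time.sleep(seconds_until(msg["Date"]))
    subprocess.run(["/usr/sbin/sendmail", "-t"], input=raw_message, check=True)
```

A filter like this would sit on the remote server, so the laptop need not be online when the delay expires; personal mail would simply bypass it.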

Update: mutt does not allow setting the Date: field manually (if you enable the edit_headers option and edit it by hand, its value gets overwritten). I have not found the relevant code yet, but that behaviour is mentioned in that bug.

Update 2: ah, it’s this code in sendlib.c (and there’s no way to configure that behaviour):

 /* mutt_write_rfc822_header() only writes out a Date: header with
 * mode == 0, i.e. _not_ postponment; so write out one ourself */
 if (post)
   fprintf (msg->fp, "%s", mutt_make_date (buf, sizeof (buf)));

Planet Debian: Gunnar Wolf: Spam: Tactics, strategy, and angry bears

I know spam is spam is spam, and I know trying to figure out any logic underneath it is a lost cause. However... I am curious.

Many spam subjects are seemingly random, designed to convey whatever "information" they contain and fool spam filters. I understand that.

Many spam subjects are time-related. For example, in recent months there has been a surge of spam mentioning Donald Trump. I am thankful: it is very easy to filter out, even before it reaches SpamAssassin.

Of course, spam will find thousands of ways to talk about sex; cialis/viagra sellers, escort services, and a long list of WTF.

However... Tactical flashlights. Bright enough to blind a bear.


I mean... Truly. Really. WTF‽‽

What does that mean? Why is that even a topic? Who is interested in anything like that? How often does the average person go camping in the woods? Why do we need to worry about stupid bears attacking us? Why would a bear attack me?

The list of WTF questions could go on forever. What am I missing? What does "tactical flashlight" mean that I just fail to grasp? Has this appeared in your spam?

Planet Linux Australia: OpenSTEM: This Week in HASS – term 1, week 4

This week the Understanding Our World program for primary schools has younger students looking at time passing, both in their own lives and as marked by others, including the seasons recognised by different Aboriginal groups. Older students are looking at how Aboriginal people interacted with the Australian environment, as it changed at the end of the Ice Age, and how they learnt to manage the environment and codified that knowledge into their lore.

Foundation to Year 3

This week our standalone Foundation classes (Unit F.1) are thinking about what they were like as babies. They are comparing photographs or drawings of themselves as babies with how they are now. This is a great week to involve family members and carers in class discussions, if appropriate. Students in multi-age classes and Years 1 to 3 (Units F.5, 1.1, 2.1 and 3.1) are examining how weather and seasons change throughout the year, and comparing our system of seasons with those used by different groups of Aboriginal people in different parts of Australia. Students can compare these seasons to the weather where they live and think about how they would divide the year into seasons that work where they live. Students can also discuss changes in weather over time with older members of the community.

Years 3 to 6

Older students, having followed the ancestors of Aboriginal people all the way to Australia, are now examining how the climate changed in Australia after the Ice Age, and how this affected Aboriginal people. They learn how Aboriginal people adapted to their changing environment and learned to manage it in a sustainable way. This vitally important knowledge about how to live with, and manage, the Australian environment, was codified into Aboriginal lore and custom and handed down in stories and laws, from generation to generation. Students start to examine the idea of Country/Place, in this context.


TED: Watch tomorrow: What’s next for democracy? A live Facebook chat

Tomorrow, join us on Facebook Live for another episode of TED Dialogues, our response to current events, adding insight, context and nuance to the conversations we’re having right now. The conversation runs Thursday, February 23, 2017, from 1–2pm on TED’s Facebook page.

Our speakers are two historians who will try to help us make sense of what’s going on in Washington: Rick Perlstein, journalist and expert on the history of conservatism in the US, will moderate a conversation with Yale history professor Timothy Snyder. Snyder’s book On Tyranny: Twenty Lessons from the Twentieth Century will be published next week. There will be an opportunity for questions from the Facebook Live audience!


TED: TED Dialogues: An urgent response to a dangerously divisive time

These are astonishing days. Amid rapid-fire policy changes, America has grown even more divided; similar divisions are spreading across the world. Vitriolic rhetoric roars from all sides, and battle lines are hardening.

We aren’t listening to one another.

Is there space left for dialogue? For reason? For thoughtful persuasion?

We’re determined to find that space. This goes to the core of TED’s mission. We’re therefore launching a series of public events, TED Dialogues, that will focus on the burning questions of the moment; questions about security and fear, democracy and demagoguery, neo-nationalism and neo-globalism.

We’re not looking for more angry soundbites. We’re looking to pull the camera back a little and get a clearer understanding of what’s going on, where we are, how we got here—and how we must move forward. Our speakers will be invited to give short, powerful talks, followed by probing questions from both live and virtual audiences — including you. We will search for voices from many parts of the political landscape. And we will focus on the ideas that can best shed light, bring hope and inspire courageous action.

Events will be held at our theater in New York and streamed freely online.

The first event was held Wednesday, February 15, featuring the acclaimed historian and author Yuval Noah Harari. Watch it here.

Next up: A conversation with Timothy Snyder and Rick Perlstein, Thursday, February 23, 1pm, streamed live on Facebook Live

Further dates: Wednesday, March 1, 1pm, on Facebook Live
Wednesday, March 8, 1pm, on Facebook Live

If you have specific speakers to recommend or topics to suggest, please submit them here. Please sign up for email notification of these events. Or join our Facebook community.

The need for ideas—and listening—has never mattered more.

Planet Debian: Neil McGovern: A new journey – GNOME Foundation Executive Director

For those who haven’t heard, I’ve been appointed as the new Executive Director of the GNOME Foundation, and I started last week, on the 15th of February.

It’s been an interesting week so far, mainly meeting lots of people and trying to get up to speed with what looks like an enormous job! However, I’m thoroughly excited by the opportunity and am very grateful for everyone’s warm words of welcome so far.

One of the main things I’m here to do is to try and help. GNOME is strong because of its community. It’s because of all of you that GNOME can produce world leading technologies and a desktop that is intuitive, clean and functional. So, if you’re stuck with something, or if there’s a way that either myself or the Foundation can help, then please speak up!

Additionally, I intend to make this blog a much more frequently updated one – letting people know what I’m doing, and highlighting cool things that are happening around the project. In that vein, this week I’ve also started contacting all our fantastic Advisory Board members. I’m also looking at finding sponsors for GUADEC and GNOME.Asia, so if you know of anyone, let me know! I also booked my travel to the GTK+ hackfest and to LibrePlanet – if you’re going to either of those, make sure you come and introduce yourself :)

Finally, a small advertisement for Friends of GNOME. Your generosity really does help the Foundation support development of GNOME. Join up today!

Planet Debian: Lisandro Damián Nicanor Pérez Meyer: Developing an nRF51822-based embedded device with Qt Creator and Debian

I'm currently developing an nRF51822-based embedded device. Being one of the Qt/Qt Creator maintainers in Debian, I would of course try to use it for the development. It turns out it works pretty well... with some caveats.

There are already two quite interesting blog posts about using Qt Creator on Mac and on Windows, so I will not repeat the basics, as they are covered there. Both use qbs, but I managed to use CMake.

Instead, I'll add some tips on the issues I needed to solve to make this happen on current Debian Sid.

  • The required toolchain is already in Debian, just install binutils-arm-none-eabi, gcc-arm-none-eabi and gdb-arm-none-eabi.
  • You will not find arm-none-eabi-gdb-py in the gdb-arm-none-eabi package. Fear not: the provided gdb binary is compiled with Python support, so it will work.
  • To enable proper debugging be sure to follow this flag setup. If you are using CMake like in this example be sure to modify CMake/toolchain_gcc.cmake as necessary.
  • In Qt Creator you might find that, while trying to run or debug your app, you are greeted with a message box that says "Cannot debug: Local executable is not set." Just go to Projects → Run and change "Run configuration" until you get a valid path (i.e., a path to the .elf or .out file) in the "Executable" field.
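As a rough illustration of the toolchain changes mentioned above, the Debian-specific part of CMake/toolchain_gcc.cmake might look something like this (the paths are where the Debian packages install the tools; the debug flags are an assumption, following the flag setup linked above):

```cmake
# Sketch only: point CMake/toolchain_gcc.cmake at the Debian toolchain.
set(CMAKE_SYSTEM_NAME Generic)
set(CMAKE_SYSTEM_PROCESSOR ARM)
set(CMAKE_C_COMPILER /usr/bin/arm-none-eabi-gcc)
set(CMAKE_CXX_COMPILER /usr/bin/arm-none-eabi-g++)
set(CMAKE_OBJCOPY /usr/bin/arm-none-eabi-objcopy)
# Assumed debug flags: keep optimisation low so gdb can step sensibly.
set(CMAKE_C_FLAGS_DEBUG "-Og -g3" CACHE STRING "" FORCE)
```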


Planet Debian: Enrico Zini: staticsite news: GitHub mode and post series

GitHub mode

Tobias Gruetzmacher implemented GitHub mode for staticsite.

Although GitHub now has a similar site rendering mode, it doesn't give you a live preview: if you run ssite serve on a GitHub project, you will get a live preview of the project documentation.

Post series

I have added support for post series, that allow you to easily interlink posts with previous/next links.

You can see it in action on links and on An Italian song a day, an ongoing series that currently posts a link to an Italian song each day.
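The general idea behind such interlinking can be sketched in a few lines (this is not staticsite's actual implementation, and the dict shape is hypothetical): walk an ordered list of posts and attach previous/next references.

```python
def link_series(posts):
    """Attach prev/next titles to an ordered list of post dicts.

    The {"title": ...} shape is illustrative only; staticsite's real
    data model differs.
    """
    for i, post in enumerate(posts):
        post["prev"] = posts[i - 1]["title"] if i > 0 else None
        post["next"] = posts[i + 1]["title"] if i < len(posts) - 1 else None
    return posts
```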

Cryptogram: NSA Using Cyberattack for Defense

These days, it's rare that we learn something new from the Snowden documents. But Ben Buchanan found something interesting. The NSA penetrates enemy networks in order to enhance our defensive capabilities.

The data the NSA collected by penetrating BYZANTINE CANDOR's networks had concrete forward-looking defensive value. It included information on the adversary's "future targets," including "bios of senior White House officials, [cleared defense contractor] employees, [United States government] employees" and more. It also included access to the "source code and [the] new tools" the Chinese used to conduct operations. The computers penetrated by the NSA also revealed information about the exploits in use. In effect, the intelligence gained from the operation, once given to network defenders and fed into automated systems, was enough to guide and enhance the United States' defensive efforts.

This case alludes to important themes in network defense. It shows the persistence of talented adversaries, the creativity of clever defenders, the challenge of getting actionable intelligence on the threat, and the need for network architecture and defenders capable of acting on that information. But it also highlights an important point that is too often overlooked: not every intrusion is in service of offensive aims. There are genuinely defensive reasons for a nation to launch intrusions against another nation's networks.


Other Snowden files show what the NSA can do when it gathers this data, describing an interrelated and complex set of United States programs to collect intelligence and use it to better protect its networks. The NSA's internal documents call this "foreign intelligence in support of dynamic defense." The gathered information can "tip" malicious code the NSA has placed on servers and computers around the world. Based on this tip, one of the NSA's nodes can act on the information, "inject[ing a] response onto the Internet towards [the] target." There are a variety of responses that the NSA can inject, including resetting connections, delivering malicious code, and redirecting internet traffic.

Similarly, if the NSA can learn about the adversary's "tools and tradecraft" early enough, it can develop and deploy "tailored countermeasures" to blunt the intended effect. The NSA can then try to discern the intent of the adversary and use its countermeasure to mitigate the attempted intrusion. The signals intelligence agency feeds information about the incoming threat to an automated system deployed on networks that the NSA protects. This system has a number of capabilities, including blocking the incoming traffic outright, sending unexpected responses back to the adversary, slowing the traffic down, and "permitting the activity to appear [to the adversary] to complete without disclosing that it did not reach [or] affect the intended target."

These defensive capabilities appear to be actively in use by the United States against a wide range of threats. NSA documents indicate that the agency uses the system to block twenty-eight major categories of threats as of 2011. This includes action against significant adversaries, such as China, as well as against non-state actors. Documents provide a number of success stories. These include the thwarting of a BYZANTINE HADES intrusion attempt that targeted four high-ranking American military leaders, including the Chief of Naval Operations and the Chairman of the Joint Chiefs of Staff; the NSA's network defenders saw the attempt coming and successfully prevented any negative effects. The files also include examples of successful defense against Anonymous and against several other code-named entities.

I recommend Buchanan's book: The Cybersecurity Dilemma: Hacking, Trust and Fear Between Nations.

Planet Linux Australia: Julien Goodwin: Making a USB powered soldering iron that doesn't suck

Today's evil project was inspired by a suggestion after my talk on USB-C & USB-PD at this year's Open Hardware miniconf.

Using a knock-off Hakko driver and handpiece I've created what may be the first USB powered soldering iron that doesn't suck (ok, it's not a great iron, but at least it has sufficient power to be usable).

Building this was actually trivial: I just wired the 20V output of one of my USB-C ThinkPad boards to a generic Hakko driver board. The loss of power from using 20V rather than 24V is noticeable, but for small work this would be fine (I solder in either the work lab or my home lab, both of which have very nice soldering stations, so I don't actually expect to ever use this).

If you were to turn this into a real product you could in fact do much better by handling both power negotiation and temperature control in a single micro. The driver could then use a boost converter instead of just a FET, controlling the power draw by controlling the output voltage, and simply disabling the regulator to turn off the heater. By chance, the heater resistance of the Hakko 907 clone handpieces is such that, combined with the USB-PD power rules, you'd always be boost converting, never needing to reduce the voltage.

With such a driver you could run this from anything, starting with a 5V USB-C phone charger or battery (15W for the nicer ones), through 9V at up to 3A off some laptop chargers (~27W), all the way to 20V@5A for those who need an extremely high-power iron. 60W, which happens to be the standard power level of many good irons (such as the Hakko FX-888D), is at 20V@3A also a common limit for chargers (and for many cables: only fixed cables, or those specially marked with an ID chip, can go all the way to 5A). As higher-power USB-C batteries start becoming available for laptops, this becomes a real option for on-the-go use.
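To make the arithmetic above concrete, here is a small sanity check of the voltage/current combinations mentioned. The profile list is illustrative only; a real USB-PD source advertises its own subset of offers.

```python
# Illustrative (volts, amps) offers matching the combinations discussed above.
PD_PROFILES = [(5, 3), (9, 3), (15, 3), (20, 3), (20, 5)]


def max_power(profiles):
    """Highest wattage available from a list of (volts, amps) offers."""
    return max(v * a for v, a in profiles)


def profiles_for(watts, profiles=PD_PROFILES):
    """Offers able to power a load of the given wattage."""
    return [(v, a) for v, a in profiles if v * a >= watts]
```

For a 60W iron, only the 20V offers qualify, which matches the post's point that 20V@3A is the interesting threshold.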

Here's a photo of it running from a Chromebook Pixel charger:

Sociological Images: Where do LGBT People in the U.S. Live?

I love gender and sexual demography.  It’s incredibly important work.  Understanding the size and movements of gender and sexual minority populations can help assess what kinds of resources different groups might require and where those resources would be best spent, among other things.  Gary J. Gates and Frank Newport initially published results from a then-new Gallup question on gender/sexual identity in 2012-2013 (here).  At the time, 3.4% of Americans identified as either lesbian, gay, bisexual, or transgender.  It’s a big deal – particularly as “identity” is likely a conservative measure when it comes to assessing the size of the population of LGBT persons.  After I read the report, I was critical of one element of the reporting: Gates and Newport reported proportions of LGBT persons by state.  As data visualizations go, I felt the decision concealed more than it revealed.

From 2015-2016, Gallup collected a second round of data. These new data allowed Gates to make some really amazing observations about shifts in the proportion of the U.S. population identifying themselves as LGBT.  It’s a population that is, quite literally, on the move.  I posted on this latter report here.  The shifts are astonishing – particularly given the short period of time between waves of data collection.  But, again, data on where LGBT people are living was reported by state.  I suspect that much of this has to do with sample size or perhaps an inability to tie respondents to counties or anything beyond state and time zone.  But, I still think displaying the information in this way is misleading.  Here’s the map Gallup produced associated with the most recent report:

During the 2012-2013 data collection, Hawaii led U.S. states with the highest proportions of LGBT identifying persons (with 5.1% identifying as LGBT)–if we exclude Washington D.C. (with 10% identifying as LGBT).  By 2016, Vermont led U.S. states with 5.3%; Hawaii dropped to 3.8%.  Regardless of state rank, however, in both reports, the states are all neatly arranged with small incremental increases in the proportions of LGBT identifying persons, with one anomaly–Washington D.C.  Of course, D.C. is not an anomaly; it’s just not a state. And comparing Washington D.C. with other states is about as meaningful as examining crime rate by European nation and including Vatican City.  In both examples, one of these things is not like the others in a meaningful sense.

In my initial post, I suggested that the data would be much more meaningfully displayed in a different way.  The reason D.C. is an outlier is that a good deal of research suggests that gender and sexual minorities are more populous in cities; they’re more likely to live in urban areas.  Look at the 2015-2016 state-level data on proportion of LGBT people by the percentage of the state population living in urban areas (using 2010 Census data).  The color coding reflects Census regions (click to enlarge).

Vermont is still a state worth mentioning in the report as it bucks the trend in an impressive way (as do Maine and New Hampshire).  But I’d bet you a pint of Cherry Garcia and a Magic Hat #9 that this has more to do with Burlington than with thriving communities of LGBT folks in towns like Middlesex, Maidstone, or Sutton.

I recognize that the survey might not have a sufficient sample to enable them to say anything more specific (the 2015-2016 sample is just shy of 500,000).  But, sometimes data visualizations obscure more than they reveal.  And this feels like a case of that to me.  In my initial post, I compared using state-level data here with maps of the U.S. after a presidential election.  While the maps clearly delineate which candidate walked away with the electoral votes, they tell us nothing of how close it was in each state, nor do they provide information about whether all parts of the state voted for the same candidates or were regionally divided.  In most recent elections, traditional electoral maps might leave you wondering how a Democrat ever gets elected with the sea of red blanketing much of the nation’s interior.  But, if you’ve ever seen a map showing you data by county, you realize there’s a lot of blue in that red as well–those are the cities, the urban areas of the nation.  Look at the results of the 2016 election by county (produced by physicist Mark Newman – here).  On the left, you see county-level voting data, rather than simply seeing whether a state “went red” or “went blue.”  On the right, Newman uses a cartogram to alter the size of each county relative to its population density.  It paints a bit of a different picture, and to some, it probably makes that state-level data seem a whole lot less meaningful.

Maps from Mark Newman’s website:

The more recent report also uses that state-level data to examine shifts in LGBT identification within Census regions as well.  Perhaps not surprisingly, there are more people identifying as LGBT everywhere in the U.S. today than there were 5 years ago (at least when we ask them on surveys).  But rates of identification are growing faster in some regions (like the Pacific, Middle Atlantic, and West Central) than others (like New England).  Gates notes that while this might lead some to conclude that LGBT people are migrating to different regions, the data don’t suggest that LGBT people are necessarily doing so at higher rates than other groups.

The recent shifts are largely produced by young people, Millennials in the Gallup sample.  And those shifts are more pronounced in those same states most likely to go blue in elections.  As Gates put it, “State-level rankings by the portion of adults identifying as LGBT clearly relate to the regional differences in LGBT social acceptance, which tend to be higher in the East and West and lower in the South and Midwest. Nevada is the only state in the top 10 that doesn’t have a coastal border. States ranked in the bottom 10 are dominated by those in the Midwest and South” (here).

When we compare waves of data collection, we can see lots of shifts in the LGBT-identifying population by state (see below; click to enlarge).  While the general trend was for states to have increasing proportions of people claiming LGBT identities in 2015-2016, a collection of states do not follow that trend.  And this struck me as an issue that ought to provoke some level of concern.  Look at Hawaii, Rhode Island, and South Dakota, for example.  These are among the biggest shifts of any state, and they are all against the liberalizing trend Gates describes.

Presentation of data is important.  And while the report might help you realize, if you’re LGBT, that you might enjoy living in Vermont or Hawaii more than Idaho or Alabama if living around others who share your gender or sexual identity is important to you, that’s a fact that probably wouldn’t surprise many.  I’d rather see maps illustrating proportions of LGBT persons by population density rather than by state.  I don’t think we’d be shocked by those results either.  But it seems like it would provide a much better picture of the shifts documented by the report than state-level data allow.

Tristan Bridges, PhD is a professor at The College at Brockport, SUNY. He is the co-editor of Exploring Masculinities: Identity, Inequality, Continuity, and Change with C.J. Pascoe and studies gender and sexual identity and inequality. You can follow him on Twitter here. Tristan also blogs regularly at Inequality by (Interior) Design.


Worse Than FailureCodeSOD: Well Padded

We don’t receive many CSS-based submissions, possibly because CSS is already pretty horrible. There are real-world, practical things that you simply need to take a hacky, awkward approach with.

Matthew found this code, however, which isn’t a clever hack to work around a limitation, and instead just leaves me scratching my head.

    @media (min-width: $screen-lg-min) {
      padding: 1rem 1rem 9999px;
      margin-bottom: -9999px;
    }

I liked this approach so much I went ahead and used it on this article. It's a brillant way to control page layout.


Planet DebianJonathan Dowland: Hans Rosling and Steve Hewlett

I've begun to listen to BBC Radio 4's "More or Less" podcast. They recently had an episode covering the life and work of Hans Rosling, the inspirational Swedish statistician, who has sadly died of pancreatic cancer. It was very moving. Some of Professor Rosling's videos are available to view online. I've heard that they are very much worth watching.

Over the last few months I have also been listening to regular updates by BBC broadcaster Steve Hewlett on his own journey as a cancer sufferer. These were remarkably frank discussions of the ins and outs of his diagnosis, treatment, and the practical consequences on his everyday life. I was very sad to tune in on Monday evening and hear a series of repeated clips from his previous appearances on the PM show, as the implications were clear. And indeed, Steve Hewlett died from oesophageal cancer on Monday. Here's an obituary in the Guardian.

Planet DebianJunichi Uekawa: Trying to use Termux on chromebook.

Trying to use Termux on a Chromebook. I am exclusively using a Chromebook for my client-side work. Android apps work on this device, and so does Termux. I was pondering how to make things more useful, like using Download directory integration and Chrome apps, but hadn't quite got things set up. Then I noticed that it's possible to use sshd on Termux. It only accepts public key authentication, but that's enough for me. I can now use my SecureShell Chrome app to connect and get things working. Android apps don't support all the keybinds, but SecureShell does, which improves my life a bit.
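For reference, the Termux side of such a setup is only a few commands. This is a sketch assuming the stock openssh package; note that Termux's sshd listens on port 8022 rather than 22, and the key filename below is a placeholder.

```shell
# Inside the Termux app: install and start the SSH daemon.
pkg install openssh
sshd                      # listens on port 8022; password logins are disabled

# Authorize the client's public key (e.g. one exported from SecureShell).
mkdir -p ~/.ssh
cat id_rsa.pub >> ~/.ssh/authorized_keys

# From the SecureShell app (or any ssh client), connect on the non-default port:
#   ssh -p 8022 localhost
```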

Planet DebianJoey Hess: early spring

Sun is setting after 7 (in the JEST TZ); it's early spring. Batteries are generally staying above 11 volts, so it's time to work on the porch (on warmer days), running the inverter and spinning up disc drives that have been mostly off since fall. Back to leaving the router on overnight so my laptop can sync up before I wake up.

Not enough power yet to run electric lights all evening, and there's still a risk of a cloudy week interrupting the climb back up to plentiful power. It's happened to me a couple times before.

Also, turned out that both of my laptop DC-DC power supplies developed partial shorts in their cords around the same time. So at first I thought it was some problem with the batteries or laptop, but eventually figured it out and got them replaced. (This may have contributed to the cliff earlier; seemed to be worst when house voltage was low.)

Soon, 6 months of more power than I can use..

Previously: battery bank refresh late summer the cliff


Planet DebianShirish Agarwal: The Indian elections hungama


Before I start, I would like to point out #855549 . This is a normal/wishlist bug I have filed against apt, the command-line package manager. I sincerely believe that a history command to know what packages were installed, which were upgraded, and which were purged should be easily accessible and easily understood, and if the output looks pretty, so much the better. Of particular interest to me is having a list of new packages I have installed in the last couple of years after jessie became the stable release. It would probably make for some interesting reading. I dunno how much effort it would be to code something like that, but if it works, it would be the greatest. Apt would have finally arrived. Not that it’s a bad tool, it’s just that it would then make for a heck of a useful tool.
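In the meantime, something close to that wished-for history view can be scraped from /var/log/apt/history.log. Here is a rough Python sketch; the exact set of fields varies between apt versions, so treat the format handling as an assumption rather than a spec:

```python
import re

def parse_apt_history(text):
    """Extract (action, [package, ...]) pairs from apt history.log text."""
    entries = []
    for line in text.splitlines():
        m = re.match(r'(Install|Upgrade|Remove|Purge): (.+)', line)
        if m:
            action, rest = m.groups()
            # Entries look like "pkg:arch (version), pkg:arch (version)";
            # keep only the bare package name.
            pkgs = [item.split(' ')[0].split(':')[0]
                    for item in rest.split('), ')]
            entries.append((action, pkgs))
    return entries

sample = """\
Start-Date: 2017-02-22  09:00:01
Commandline: apt-get install vim
Install: vim:amd64 (2:7.4.488-7), vim-runtime:all (2:7.4.488-7)
End-Date: 2017-02-22  09:00:05
"""
```

Running parse_apt_history(sample) yields [('Install', ['vim', 'vim-runtime'])]; pointing it at the real log (plus its logrotated predecessors) gives roughly the package history the bug report asks for.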

Coming back to the topic at hand: for the last couple of weeks we haven’t had water, or rather water pressure. A water crisis has hit Pune every year since 2014 with no end in sight. This has been reported in the newspapers again and again, but it seems to have fallen on deaf ears. The end result is that I have to bring buckets of water from around 50-odd metres away.

It’s not a big thing; it’s not like some women in villages in Rajasthan who have to walk between 200 metres and 5-odd kilometres to get potable water, or Darfur, Western Sudan, where women are often kidnapped and sold as sexual slaves when they go to fetch water. The situation in Darfur has been shown quite vividly in Darfur is Dying . It is possible that I may have mentioned Darfur before. While the game is unfortunately a Flash web resource, the most disturbing part is that it is extremely depressing: there is a no-win scenario.

So knowing and seeing both those scenarios, I can’t complain about 50 metres. BUT….but… when you extrapolate the same data over some more or less 3.3-3.4 million citizens, 3.1 million during 2011 census with a conservative 2.3-2.4 percent population growth rate according to

Fortunately or unfortunately, Pune Municipal Corporation elections were held today. Fortunately or unfortunately, this time all the political parties fielded mostly unknown faces in these elections. For example, I belong to ward 14, which is spread over quite a bit of area and has around 10k registered voters.

Now the unfortunate part of having new faces in elections is that you don’t know anything about them. Apart from the affidavits filed, the only things I come to know are whether there are criminal cases filed against them and what they have declared as their wealth.

While I am, and should be, thankful to ADR, which is the force behind having this collated data made public, there is a lot of untold story about the political push-back from all the major national and regional political parties when even this bit of information was to be made public. It took the better part of a decade for such information to come into the public domain.

But my purpose of getting clean air and a 24×7 water supply to each household seems a very distant dream. I tried to connect with the corporators about a week before the contest, and almost all of the lower party functionaries hid behind their political parties’ manifestos, stating they would do their best, without any viable plan.

For those not knowing, India has been blessed with 6-odd national parties and about 36-odd regional parties, and every election some 20-25 new parties try their luck.

The problem is we, the public, don’t trust them or their manifestos. First of all, the political parties themselves engage in mud-slinging as to who’s copying whom with the manifesto. Even if a political party wins the elections, there is no *real* pressure for them to follow their own manifesto. This has been going on for many a year. Of course, we citizens are also to blame, as most citizens for one reason or another choose to remain aloof of the process. I scanned/leafed through all the manifestos and all of them have the vague wording ‘we will make Pune tanker-free’ without any implementation details. While I was unable to meet the soon-to-be corporators, I did manage to meet a few of the assistants, but all the meetings were entirely fruitless.

Diagram of Rain Water Harvesting

I asked why the city can’t follow the Chennai model. Chennai, not so long ago, was in the same place Pune is now, especially in relation to water. What happened next, in 2001, has been beautifully chronicled in the Hindustan Times . What has not been shared in that story is that the idea was actually fielded by one of the Chennai Mayor’s assistants, an IAS officer whose name I have forgotten. Thankfully, her advice/idea was taken to heart by the political establishment and they drove RWH.

Asking why we can’t do something similar in Pune, I heard all kinds of excuses, the worst and most used being ‘Marathas can never unite’, which I think is pure bullshit. For people unfamiliar with the term, the Marathas were a warrior clan in Shivaji’s army. Shivaji, the king of the Marathas, was an expert tactician and master of guerilla warfare. It is due to the valor of the Marathas that we still have the Maratha Light Infantry, a proud member of the Indian army.

Why I said bullshit is that the composition of people living in Maharashtra has changed over the decades. While at one time both the Brahmins and the Marathas had considerable political and population numbers, that has changed drastically. Maharashtra, and more pointedly Mumbai, Pune and Nagpur, have become immigrant centres. Just a decade back, Shiv Sena, an ultra right-wing political party, used to play the Maratha card at each and every election and heckle people coming from Uttar Pradesh and Bihar; this has been documented as the 2008 attacks on immigrants. Nine years later we see Shiv Sena trying to field its candidates in Uttar Pradesh, so obviously they cannot use the same tactics they could at one point in time.

One more reason I call it bullshit: it’s a very lame excuse. When the Prime Minister of the country calls for demonetization, which affects 1.25 billion people, people die, people stand in queues, and it remains largely peaceful, I do not see people resisting if they bring a good scheme. I almost forgot: as an added sweetener, the Chennai municipality said that if you do RWH and show photos and certificates of the job, you won’t have to pay as much property tax as you otherwise would; that also boosted people’s participation.

And that is not the only solution. One more solution has been outlined in ‘Aaj Bhi Khade hain talaab’, written by the recently deceased Gandhian environmental activist Anupam Mishra. His book can be downloaded for free at the India Water Portal . Unfortunately, the book doesn’t have a good English translation to date. Interestingly, all of his content is licensed under the public domain (CC-0), so people can continue to enjoy and learn from his life’s work.

Another lesson or understanding could be taken from Israel, the home of modern micro-drip irrigation for crops. One of the things on my bucket list is to visit Israel and, if possible, learn how they went from a water-deficient country to a water-surplus one.

India labor

Which brings me to my second conundrum: most people believe that it’s the Government’s job to provide jobs to its people. India has been experiencing jobless growth for around a decade now, since the 2008 meltdown. While India was lucky to escape the worst of that, most of its trading partners weren’t, which slowed down international trade, which in turn slowed down the creation of new enterprises. Laws such as the Bankruptcy Law and the upcoming Goods and Services Tax are attempts to address this. Like everybody else, I am a bit excited and a bit apprehensive about how the actual implementation will take place.


Even international businesses have been found wanting. The latest examples are Uber and Ola; there have been protests against the two cab/taxi aggregators operating in India. For the millions of students coming out of schools and universities, there simply aren’t enough jobs for them, nor are most (okay, 50%) of them qualified for the jobs, and these 50 percent are also untrainable, so what to do?

In reality, this is what keeps me awake at night. India is sitting on this ticking bomb-shell. It really is a miracle that the youth have not rebelled yet.

While all the conditions, proposals and counter-proposals have been shared before, I wanted/needed to highlight them. While the issues seem to be local, I would assert that they are glocal in nature. I’m sure the questions we are facing have affected other developing and, to some extent, even developed countries. I look forward to learning what I can from them.

Update – 23/02/17 – I had wanted to share a bit about Debian’s voting system, but that got derailed. Hence, in order not to derail this post further, I’ll just point towards the 2015 platforms where 3 people vied for the DPL post. I *think* I shared about the DPL voting process earlier, but if not, I will do so in detail in some future blog post.

Filed under: Miscellaneous Tagged: #Anupam Mishra, #Bankruptcy law, #Chennai model, #clean air, #clean water, #elections, #GST, #immigrant, #immigrants, #Maratha, #Maratha Light Infantry, #migration, #national parties, #Political party manifesto, #regional parties, #ride-sharing, #water availability, Rain Water Harvesting

Planet DebianSteinar H. Gunderson: 8-bit Y'CbCr ought to be enough for anyone?

If you take a random computer today, it's pretty much a given that it runs a 24-bit mode (8 bits of each of R, G and B); as we moved from palettized displays at some point during the 90s, we quickly went past 15- and 16-bit and settled on 24-bit. The reasons are simple; 8 bits per channel is easy to work with on CPUs, and it's on the verge of what human vision can distinguish, at least if you add some dither. As we've been slowly taking the CPU off the pixel path and replacing it with GPUs (which have specialized hardware for more kinds of pixel formats), changing formats has become easier, and there's some push to 10-bit (30-bit) “deep color” for photo pros, but largely, 8-bit per channel is where we are.

Yet, I'm now spending time adding 10-bit input (and eventually also 10-bit output) to Nageru. Why? The reason is simple: Y'CbCr.

Video traditionally isn't done in RGB, but in Y'CbCr; that is, a black-and-white signal (Y) and then two color-difference signals (Cb and Cr, roughly “additional blueness” and “additional redness”, respectively). We started doing this because it was convenient in analog TV (if you separate the two, black-and-white TVs can just ignore the color signal), but we kept doing it because it's very nice for reducing bandwidth: Human vision is much less sensitive to color than to brightness, so we can transfer the color channels in lower resolution and get away with it. (Also, a typical Bayer sensor can't deliver full color resolution anyway.) So most cameras and video codecs work in Y'CbCr, not RGB.

Let's look at the implications of using 8-bit Y'CbCr, using a highly simplified model for, well, simplicity. Let's define Y = 1/3 (R + G + B), Cr = R - Y and Cb = B - Y. (The reverse transformation becomes R = Y + Cr, B = Y + Cb and G = 3Y - R - B.)

This means that an RGB color such as pure gray ([127, 127, 127]) becomes [127, 0, 0]. All is good, and Y can go from 0 to 255, just like R, G and B can. But the chroma channels can go negative: a pure red ([255, 0, 0]) becomes [85, -85, 170], and a pure blue ([0, 0, 255]) becomes correspondingly [85, 170, -85]. A pure yellow ([255, 255, 0]) becomes [170, -170, 85], for instance. So we need to squeeze values from -170 to +170 into an 8-bit range, losing accuracy.

Even worse, there are valid Y'CbCr triplets that don't correspond to meaningful RGB colors at all. For instance, Y'CbCr [255, 0, 170] would be RGB [425, 85, 255]; R is out of range! And Y'CbCr [85, 170, 0] would be RGB [85, -85, 255], that is, negative green.
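The simplified model is small enough to check directly in code. This sketch implements only the toy formulas above (triplets ordered [Y, Cb, Cr]), not the real BT.601 matrices:

```python
def rgb_to_ycbcr(r, g, b):
    # Toy model from the text: Y = (R + G + B) / 3, Cb = B - Y, Cr = R - Y.
    y = (r + g + b) / 3.0
    return (y, b - y, r - y)

def ycbcr_to_rgb(y, cb, cr):
    # Inverse: R = Y + Cr, B = Y + Cb, G = 3Y - R - B.
    r, b = y + cr, y + cb
    return (r, 3 * y - r - b, b)

def is_legal(y, cb, cr):
    # A code word is "legal" if it maps back into the RGB cube [0, 255]^3.
    return all(0 <= c <= 255 for c in ycbcr_to_rgb(y, cb, cr))
```

Sampling the Y'CbCr cube with is_legal reproduces the shrinking effect being described: large swaths of code words map outside RGB.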

This isn't a problem for compression, as we can just avoid using those illegal “colors” with no loss of efficiency. But it means that the conversion in itself causes a loss; actually, if you do the maths on the real formulas (using the BT.601 standard), it turns out only 17% of the 24-bit Y'CbCr code words are valid!

In other words, we lose about two and a half bits of data, and our 24 bits of accuracy have been reduced to 21.5. Or, to put it another way, 8-bit Y'CbCr is roughly equivalent to 7-bit RGB.
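The "two and a half bits" figure follows directly from that 17%:

```python
import math

# Share of 24-bit BT.601 code words that map to legal RGB (figure from the text).
valid_fraction = 0.17
bits_lost = -math.log2(valid_fraction)   # about 2.56 bits
effective_bits = 24 - bits_lost          # about 21.4 bits, i.e. roughly 7 bits per channel
```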

Thus, pretty much all professional video uses 10-bit Y'CbCr. It's much more annoying to deal with (especially when you've got subsampling!), but if you're using SDI, there's not even any 8-bit version defined, so if you insist on 8-bit, you're taking data you're getting on the wire (whether you want it or not) and throwing 20% of it away. UHDTV standards (using HEVC) are also simply not defined for 8-bit; it's 10- and 12-bit only, even on the codec level. Part of this is because UHDTV also supports HDR, so you have a wider RGB range than usual to begin with, and 8-bit would cause excessive banding.

Using it on the codec level makes a lot of sense for another reason, namely that you reduce internal roundoff errors during processing by a lot; errors equal noise, and noise is bad for compression. I've seen numbers of 15% lower bitrate for H.264 at the same quality, although you also have to take into account that the encoder needs more CPU power, which you could otherwise have used for a higher preset in 8-bit. I don't know how the tradeoff here works out, and you also have to take into account decoder support for 10-bit, especially when it comes to hardware. (When it comes to HEVC, Intel didn't get full fixed-function 10-bit support before Kaby Lake!)

So indeed, 10-bit Y'CbCr makes sense even for quite normal video. It isn't a no-brainer to turn it on, though—even though Nageru uses a compute shader to convert the 4:2:2 10-bit Y'CbCr to something the GPU can sample from quickly (i.e., the CPU doesn't need to touch it), and all internal processing is in 16-bit floating point anyway, it still takes a nonzero amount of time to convert compared to just blasting through 8-bit, so my ultraportable probably won't make it anymore. (A discrete GPU has no issues at all, of course. My laptop converts a 720p frame in about 1.4 ms, FWIW.) But it's worth considering when you want to squeeze even more quality out of the system.

And of course, there's still 10-bit output support to be written...

Planet DebianReproducible builds folks: Reproducible Builds: week 95 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday February 12 and Saturday February 18 2017:

Upcoming Events

The Reproducible Build Zoo will be presented by Vagrant Cascadian at the Embedded Linux Conference in Portland, Oregon, February 22nd.

Introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, March 5th.

Toolchain development and fixes

Ximin Luo posted a preliminary spec for BUILD_PATH_PREFIX_MAP, bringing together work and research from previous weeks.

Ximin refactored and consolidated much of our existing documentation on both SOURCE_DATE_EPOCH and BUILD_PATH_PREFIX_MAP into one unified page, Standard Environment Variables, with extended discussion on related solutions and how these all fit into people's ideas of what reproducible builds should look like in the long term. The specific pages for each variable still remain, at Timestamps Proposal and Build Path Proposal, only without content that was previously duplicated on both pages.
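For readers unfamiliar with the first of those variables: SOURCE_DATE_EPOCH gives build tools an upper bound for any timestamp they embed in their output. A minimal sketch of the clamping behaviour the spec describes (the function name is mine, not from any particular tool):

```python
import os

def clamped_timestamp(mtime):
    """Timestamp a build tool should embed for a file with the given mtime:
    never newer than SOURCE_DATE_EPOCH, when that variable is set."""
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    if sde is None:
        return mtime
    return min(mtime, int(sde))
```

Because every tool in the chain clamps to the same epoch, two builds of the same source tree embed identical timestamps regardless of when they actually ran.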

Ximin filed #855282 against devscripts for debsign(1) to support buildinfo files, and wrote an initial series of patches for it with some further additions from Guillem Jover.

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Reviews of unreproducible packages

35 package reviews have been added, 1 has been updated and 17 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been added:

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris Lamb (2)

diffoscope development

diffoscope 77 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

  • Chris Lamb:
    • Some fixes to tests and testing config
    • Don't track archive directory locations, a better fix for CVE-2017-0359.
    • Add --exclude option. Closes: #854783
  • Mattia Rizzolo:
    • Add my key to debian/upstream/signing-key.asc
    • Add CVE-2017-0359 to the changelog of v76
  • Ximin Luo:
    • When extracting archives, try to keep directory sizes small

strip-nondeterminism development

strip-nondeterminism 0.031-1 was uploaded to unstable by Chris Lamb. It included contributions from:

  • Chris Lamb:
    • Make the tests less brittle, by not testing for stat(2) blksize and blocks. #854937

strip-nondeterminism 0.031-1~bpo8+1 was uploaded to jessie-backports by Mattia.

  • Vagrant Cascadian and Holger Levsen set up two new armhf nodes, p64b and p64c running on pine64 boards with an arm64 kernel and armhf userland. This introduces kernel variations to armhf. New setup & maintenance jobs were set up too, plus 6 new builder jobs for armhf.


This week's edition was written by Ximin Luo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Krebs on SecurityHow to Bury a Major Breach Notification

Amid the hustle and bustle of the RSA Security Conference in San Francisco last week, researchers at RSA released a startling report that received very little press coverage relative to its overall importance. The report detailed a malware campaign that piggybacked on a popular piece of software used by system administrators at some of the nation’s largest companies. Incredibly, the report did not name the affected software, and the vendor in question has apparently chosen to bury its breach disclosure. This post is an attempt to remedy that.

The RSA report detailed the threat from a malware operation the company dubbed “Kingslayer.” According to RSA, the attackers compromised the Web site of a company that sells software to help Windows system administrators better parse and understand Windows event logs. RSA said the site hosting the event log management software was only compromised for two weeks — from April 9, 2015 to April 25, 2015 — but that the intrusion was likely far more severe than the short duration of the intrusion suggests.

That’s because in addition to compromising the download page for this software package, the attackers also hacked the company’s software update server, meaning any company that already had the software installed prior to the site compromise would likely have automatically downloaded the compromised version when the software regularly checked for available updates (as it was designed to do).

Image: RSA


RSA said that in April 2016 it “sinkholed” or took control over the Web site that the malware used as a control server — oraclesoft[dot]net — and from there they were able to see indicators of which organizations might still be running the backdoored software. According to RSA, the victims included five major defense contractors; four major telecommunications providers; 10+ western military organizations; more than two dozen Fortune 500 companies; 24 banks and financial institutions; and at least 45 higher educational institutions.

RSA declined to name the software vendor whose site was compromised, but said the company issued a security notification on its Web site on June 30, 2016 and updated the notice on July 17, 2016 at RSA’s request following findings from further investigation into a defense contractor’s network. RSA also noted that the victim software firm had a domain name ending in “.net,” and that the product in question was installed as a Windows installer package file (.msi).

Using that information, it wasn’t super difficult to find the product in question. An Internet search for the terms “event log security notification april 2015” turns up a breach notification from June 30, 2016 about a software package called EVlog, produced by an Altair Technologies Ltd. in Mississauga, Ontario. The timeline mentioned in the breach notification exactly matches the timeline laid out in the RSA report.

As far as breach disclosures go, this one is about the lamest I’ve ever seen given the sheer number of companies that Altair Technologies lists on its site as subscribers to Eventid, an online service tied to EVlog. I could not locate a single link to this advisory anywhere on the company’s site, nor could I find evidence that Altair Technologies had made any effort via social media or elsewhere to call attention to the security advisory; it is simply buried in the site. A screenshot of the original, much shorter, version of that notice is here.

Just some of the customers of Eventid.


Perhaps the company emailed its subscribers about the breach, but that seems doubtful. The owner of Altair Technologies, a programmer named Adrian Grigorof, did not respond to multiple requests for comment.

“This attack is unique in that it appears to have specifically targeted Windows system administrators of large and, perhaps, sensitive organizations,” RSA said in its report. “These organizations appeared on a list of customers still displayed on the formerly subverted software vendor’s Web site. This is likely not coincidence, but unfortunately, nearly two years after the Kingslayer campaign was initiated, we still do not know how many of the customers listed on the website may have been breached, or possibly are still compromised by the Kingslayer perpetrators.”

It’s perhaps worth noting that this isn’t the only software package sold by Altair Technologies. An analysis of the Eventid site shows that it is hosted on a server along with three other domains, one of them a vanity domain of the software developer. The other two domains correspond to different software products sold by Altair.

The fact that those software titles appear to have been sold and downloadable from the same server (going back as far as 2010) suggests that those products may have been similarly compromised. However, I could find no breach notification mentioning those products. Here is a list of companies that Altair says are customers of Firegen; they include 3M, DirecTV, Dole Food Company, EDS, FedEx, Ingram Micro, Northrop Grumman, Symantec and the U.S. Marshals Service.

RSA calls these types of intrusions “supply chain attacks,” in that they provide one compromise vector to multiple targets. It’s not difficult to see from the customer lists of the software titles mentioned above why an attacker might salivate over the idea of hacking an entire suite of software designed for corporate system administrators.

“Supply chain exploitation attacks, by their very nature, are stealthy and have the potential to provide the attacker access to their targets for a much longer period than malware delivered by other common means, by evading traditional network analysis and detection tools,” wrote RSA’s Kent Backman and Kevin Stear. “Software supply chain attacks offer considerable ‘bang for the buck’ against otherwise hardened targets. In the case of Kingslayer, this especially rings true because the specific system-administrator-related systems most likely to be infected offer the ideal beachhead and operational staging environment for system exploitation of a large enterprise.”

A copy of the RSA report is available here (PDF).

Update, 3:35 p.m. ET: I first contacted Altair Technologies’ Grigorof on Feb. 9. I heard back from him today, post-publication. Here is his statement:

“Rest assured that the EvLog incident has been reviewed by a high-level security research company and the relevant information circulated to the interested parties, including antivirus companies. We are under an NDA regarding their internal research though the attack has already been categorized as a supply chain attack.”

“The notification that you’ve seen was based on their recommendations and they had our full cooperation on tracking down the perpetrators. It’s obviously not as spectacular as a high visibility, major company breach and surely there wasn’t anything in the news – we are not that famous.”

“I’m sure a DDoS against our site would remain unnoticed while the attack against your blog site made headlines all over the world. We also don’t expect that a large organization would use EvLog to monitor their servers – it is a very simple tool. We identified the problem within a couple of weeks (vs months or years that it takes for a typical breach) and imposed several layers of extra security in order to prevent this type of problem.”

“To answer your direct question about notifications, we don’t keep track on who downloads and tries this software, therefore there is no master list of users to notify. Any anonymous user can download it and install it. I’m not sure what you mean by ‘you still haven’t disclosed this breach’ – it is obviously disclosed and the notification is on our website. The notification is quite explicit in my opinion – the user is warned that even if EvLog is removed, there may still be other malware that used EvLog as a bridgehead.”

My take on this statement? I find it to be wholly incongruent. Altair Technologies obviously went to great lengths to publish who its major customers were on the same sites it was using to sell the software in question. Now the owner says he has no idea who uses the software. That he would say it was never intended to be used in major organizations seems odd in contrast.

Finally, publishing a statement somewhere in the thick of your site and not calling attention to it on any other portion of your site isn’t exactly a disclosure. If nobody knows that there’s a breach notice there to find, how should they be expected to find it? Answer: They’re not, because it wasn’t intended for that purpose. This statement hasn’t convinced me to the contrary.

Update, 11:13 p.m. ET: Altair Technologies now has a link to the breach notification on the homepage for Evlog:

Worse Than FailureBlind Obedience

Murray F. took a position as a Highly Paid Consultant at a large firm that had rules for everything. One of the more prescient rules specified that, for purposes of budgeting, consultants were only allowed to bill for 8 hours of work per day, no exceptions. The other interesting rule was that only certain employees were allowed to connect to the VPN to work from home; consultants had to physically be in the office.

The project to which Murray was assigned had an international staff of more than 100 developers; about 35 of them were located locally. All of the local development staff were HPCs.

With that much staff, as you would expect, there was a substantial MS Project plan detailing units of work at all levels, and assorted roll-ups into the master time line.

A soccer ref holding up a red card

The managers that had created this plan took all sorts of things into account. For example, if you attended three hours of meetings two days a week, then you only had 34 hours available for work; if you had to leave early one day to pick up your kid, it set those hours aside as non-work, and so on. The level of detail even took into account the time it takes to mentally put down one complex task and pick up another one. It was awful to look at but it was reasonably accurate.


Weather forecasters are wrong as often as they are right. However, the spiraling pin-wheel of snowstorms was getting bigger and barreling down on the local office, and was so imminent that even the forecasters were issuing absolute warnings. Not "It looks like we might get six inches"; but more along the lines of "Get groceries and plan to be shut in for a while".

The storm hit at night and by first light, anyone who looked out the window immediately realized that the forecasters were right and that they weren't going anywhere. In an attempt to be good team players, the consultants called their managers, pointed out that they were snowed in and unable to travel, and given the special circumstances, could they use the VPN and work from home?

The managers all responded that the rules were very specific and that the consultants could only work from the office. Since the consultants were powerless to do anything about the weather or the mountain of snow that had to be shoveled, they took snow days and no work was done.

That's 35 consultants for 2 days or 70 days of (loaded) work, or about 2 ½ months of work that vaporized. Needless to say, this turned the otherwise green time line quite red.

The managers called a meeting to discuss how to make up the time. Their first suggestion was that the consultants put in more time, to which they responded that the rules specify that they cannot bill more than 8 hours each day. The managers then asked the consultants if they would work without pay - to get it done. Wisely, the consultants said that they were required to play by the rules set forth by the company, and could not falsify the billing sheets with the wrong number of hours worked.

The sponsoring agencies of the consultants all agreed on that one (free labor means no commissions on said labor).

This went back and forth for a while until it came time for scheduled demos. Only the work was about ten person-weeks behind schedule and the features to be demo'd had not yet been built.

At this point, the senior people who could not see their expected features in action had no choice but to address the snow delay. After much discussion, they decreed that the budgets had to be adhered to (i.e., billing was limited to 8 hours per day), but the line development managers could hire additional consultants to make up the missed work. The managers got to work adjusting the master project plan.

The existing consultants pointed out that it would take a substantial amount of time to find new consultants, get computers, set up development environments, do general on-boarding and get new developers up to speed; and that it didn't make sense to hire new developers for something like this.

It was decreed that rules had to be followed, and it didn't matter if it wasn't cost efficient to follow those rules.

So they spent about a month interviewing (a new project task for existing senior consultants and managers), bringing new consultants on board (getting them equipment, access, etc. - a new project task for managers), and giving them architecture and code walk-throughs (a new project task for existing senior consultants). This necessitated increasing the expense to the project to cover all the additional overhead.

All to save a few bucks in additional billing by already-trained-and-equipped developers, which would have been completely unnecessary if they had just let them work from home in the first place.

But hey, those were the rules.


Planet DebianJonathan Dowland: Blinkstick and Doom

I recently implemented VGA "porch" flashing support in Chocolate Doom.

Since I'd spent some time playing with a blinkstick on my NAS, I couldn't resist trying it out with Chocolate Doom too. The result:

Planet DebianArturo Borrero González: About process limits, round 2


I was wrong. After the other blog post About process limits, some people contacted me with additional data and information. I also continued to investigate the issue myself, so I have new facts.

I read again the source code of the slapd daemon and the picture seems clearer now.

A new message appeared in the log files:

Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024
Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024
Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024
Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024
Feb 20 06:26:03 slapd[18506]: daemon: 1025 beyond descriptor table size 1024

This message is clearly produced by the daemon itself, and searching for the string leads to this source code, in servers/slapd/daemon.c:

sfd = SLAP_SOCKNEW( s );

/* make sure descriptor number isn't too great */
if ( sfd >= dtblsize ) {
	Debug( LDAP_DEBUG_ANY,
		"daemon: %ld beyond descriptor table size %ld\n",
		(long) sfd, (long) dtblsize, 0 );

	return 0;
}

In that same file, dtblsize is set to:

#ifdef HAVE_SYSCONF
        dtblsize = sysconf( _SC_OPEN_MAX );
#elif defined(HAVE_GETDTABLESIZE)
        dtblsize = getdtablesize();
#else
        dtblsize = FD_SETSIZE;
#endif

If you keep pulling the thread, the first two options read the value from the system limits (ultimately via getrlimit()), and the last one uses a fixed value of 4096 (set at build time).

It turns out that this routine slapd_daemon_init() is called once, at daemon startup (see main() function at servers/slapd/main.c). So the daemon is limiting itself to the limit imposed by the system at daemon startup time.

That means that our previous limits settings at runtime were not being read by the slapd daemon.

Let’s go back to the previous approach of establishing the process limits by setting them on the user. The common method is to call ulimit in the init.d script (or systemd service file). One of my concerns with this approach was that slapd runs as a different user, usually openldap.

Again, reading the source code:

if( check == CHECK_NONE && slapd_daemon_init( urls ) != 0 ) {
	rc = 1;
	goto stop;
}

#if defined(HAVE_CHROOT)
	if ( sandbox ) {
		if ( chdir( sandbox ) ) {
			rc = 1;
			goto stop;
		}
		if ( chroot( sandbox ) ) {
			rc = 1;
			goto stop;
		}
	}
#endif

#if defined(HAVE_SETUID) && defined(HAVE_SETGID)
	if ( username != NULL || groupname != NULL ) {
		slap_init_user( username, groupname );
	}
#endif

So, the slapd daemon first reads the limits and then changes user to openldap (via the slap_init_user() function).

We can then assume that if we set the limits for the root user, calling ulimit in the init.d script, the slapd daemon will actually inherit them.

This is what is originally suggested in debian bug #660917. Let’s use this solution for now.
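A sketch of what that looks like in the init script (the limit value and daemon arguments here are illustrative, not the actual Debian packaging):

```sh
# init.d fragment (sketch) -- raise the fd limit for root before slapd
# starts. slapd reads the limit once at startup, before slap_init_user()
# switches to the openldap user, so the higher limit is inherited.
ulimit -n 8192
start-stop-daemon --start --quiet \
    --pidfile /var/run/slapd/slapd.pid \
    --exec /usr/sbin/slapd -- -u openldap -g openldap
```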

Many thanks to John Hughes for the clarifications via email.

Planet Linux AustraliaLinux Australia News: $AUD 35k available in 2017 Grants Program

Linux Australia is delighted to announce the availability of $AUD 35,000
for open source, open data, open government, open education, open
hardware and open culture projects, as part of the organisation’s
commitment to free and open source systems and communities in the region.

This year, we have deliberately weighted some areas in which we strongly
welcome grant applications.

More information is available at:

Please do share this with colleagues who may find it of interest, and
feel free to contact the Linux Australia Council if you would like a
private discussion.

With kind regards,

Kathy Reid

President, Linux Australia


Planet DebianPetter Reinholdtsen: Detect OOXML files with undefined behaviour?

I just noticed that the new Norwegian proposal for archiving rules in the government lists ECMA-376 / ISO/IEC 29500 (aka OOXML) as a valid format to put in long term storage. Luckily such files will only be accepted based on pre-approval from the National Archive. Allowing OOXML files to be used for long term storage might seem like a good idea as long as we forget that there are plenty of ways for a "valid" OOXML document to have content with no defined interpretation in the standard, which leads to a question and an idea.

Is there any tool to detect if an OOXML document depends on such undefined behaviour? It would be useful for the National Archive (and anyone else interested in verifying that a document is well defined) to have such a tool available when considering whether to approve the use of OOXML. I'm aware of the officeotron OOXML validator, but do not know how complete it is, nor whether it will report use of undefined behaviour. Are there other similar tools available? Please send me an email if you know of any such tool.

Planet DebianRitesh Raj Sarraf: Setting up appliances - the new way

I own a Fitbit Surge. But Fitbit chose to remain exclusive in terms of interoperability, which means that to make any sense of the data the watch gathers, you need to stick with what Fitbit mandates. Fair enough in today's trends. It is also part of their business model to restrict useful aspects of the report to Premium Membership. Again, fair enough in today's business trends.

But a nice human chose to write a bridge: a tool to extract Fitbit data and feed it into Google Fit. The project is written in Python, so you can get it to work on most common computer platforms. I never bothered to package this tool for Debian, because I was never sure when I'd throw away the Fitbit. But until that happens, I decided to use the tool to sync my data to Google Fit. Which led me to requirements.txt

This project's requirements.txt lists versioned module dependencies, and many of those modules in Debian were either older or newer than what was mentioned in the requirements. To get the tool working, I installed it the pip way. Three months later, something broke and I needed to revisit the installed modules. At that point, I realized that there's no such thing as: pip upgrade
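For what it's worth, checking a pinned requirements.txt against what's actually installed only takes a few lines of Python (a hypothetical helper, not part of the project):

```python
from importlib.metadata import version, PackageNotFoundError

def check_requirements(lines):
    """Return (name, wanted, installed) for every pin that doesn't match.

    Only exact pins ("name==x.y.z") are considered; comments, blank
    lines and unpinned entries are skipped. 'installed' is None when
    the module isn't present at all.
    """
    mismatches = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, wanted = (part.strip() for part in line.split("==", 1))
        try:
            installed = version(name)
        except PackageNotFoundError:
            installed = None
        if installed != wanted:
            mismatches.append((name, wanted, installed))
    return mismatches
```

Feed it the lines of requirements.txt and it tells you exactly which pins drifted.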

That further led me to dig into why anyone wouldn't add something so simple, because today, in the days of pip, snap, flatpak and Docker, distributions are predicted to go obsolete and irrelevant. Users should get the SOURCES directly from the developers. But just looking at the date the bug was filed killed my enthusiasm.

So, without packaging for Debian, and without installing through pip, I was happy that my init has the ability to create confined and containerized environments, something that I could use to get the job done.


rrs@chutzpah:~$ sudo machinectl login fitbit
[sudo] password for rrs:
Connected to machine fitbit. Press ^] three times within 1s to exit session.

Debian GNU/Linux 9 fitbit pts/0

fitbit login: root
Last login: Fri Feb 17 12:44:25 IST 2017 on pts/1

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@fitbit:~# tail -n 25 /var/tmp/lxc/fitbit-google.log
synced calories - 1440 data points

------------------------------   2017-02-19  -------------------------
synced steps - 1440 data points
synced distance - 1440 data points
synced heart_rate - 38215 data points
synced weight - 0 logs
synced body_fat - 0 logs
synced calories - 1440 data points

------------------------------   2017-02-20  -------------------------
synced steps - 1270 data points
synced distance - 1270 data points
synced heart_rate - 32547 data points
synced weight - 0 logs
synced body_fat - 0 logs
synced calories - 1271 data points

Synced 7 exercises between : 2017-02-15 -- 2017-02-20

                                     Like it ?
star the repository :






Planet DebianHolger Levsen: How to use .ics files like it's 1997

$ sudo apt install khal
Unpacking khal (0.8.4-3) ...
$ (echo 1;echo 0;echo y;echo 0; echo y; echo n; echo y; echo y)  | khal configure
Do you want to write the config to /home/user/.config/khal/khal.conf? (Choosing `No` will abort) [y/N]: Successfully wrote configuration to /home/user/.config/khal/khal.conf
$ wget
HTTP request sent, awaiting response... 200 OK
Length: 6120 (6.0K) [text/plain]
Saving to: ‘until-dc17.ics’
$ khal import --batch -a private until-dc17.ics
$ khal agenda --days 14
16:30-17:30: DebConf Weekly Meeting ⟳

16:30-17:30: DebConf Weekly Meeting ⟳

khal is available in stretch and newer and is probably best run from cron piping into '/usr/bin/mail' :-) Thanks to Gunnar Wolf for figuring it all out.
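A plausible crontab entry for that (the schedule and address are made up):

```sh
# Mail the day's agenda every morning at 07:00 (hypothetical address)
0 7 * * * khal agenda --days 1 | mail -s "Agenda" you@example.org
```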

Planet DebianJonathan Dowland: Blinkenlights, part 3

red blinkenlights!


Part three of a series. part 1, part 2.

One morning last week I woke up to find the LED on my NAS a solid red. I've never been happier to have something fail.

I'd set up my backup jobs to fire off a systemd unit on failure


This is a generator-service, which is used to fire off an email to me when something goes wrong. I followed these instructions on the Arch wiki to set it up. Once I got the blinkstick, I added an additional command to that service to light up the LED:

ExecStart=-/usr/local/bin/blinkstick --index 1 --limit 50 --set-color red
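The wiring between a backup job and the notifier looks roughly like this (unit and template names are illustrative, following the Arch wiki pattern):

```ini
# backup.service (sketch): on any failure, systemd activates the
# notifier template instance for this unit, which emails me and
# lights the blinkstick.
[Unit]
Description=Nightly backup job
OnFailure=failure-notify@%n.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/run-backup
```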

The actual failure was a simple thing to fix. But I never did get the email.

On further investigation, there are problems with using exim and systemd in Debian at the moment: it's possible for the exim4 daemon to exit and for systemd not to know that this is a failure, thus, the mail spool never gets processed. This should probably be fixed by the exim4 package providing a proper systemd service unit.

Planet DebianJonathan Dowland: Blinkenlights, part 2

Part two of a series. part 1, part 3.

To start configuring my NAS to use the new blinkenlights, I thought I'd start with a really easy job: I plug in my iPod, a script runs to back it up, then the iPod gets unmounted. It's one of the simpler jobs to start with because the iPod is a simple block device and there's no encryption in play. For now, I'm also going to assume the LED is going to be used exclusively for this job. In the future I will want many independent jobs to use the LED to signal things, and figuring out how that will work is going to be much harder.

I'll skip over the journey and go straight to the working solution. I have a systemd job that is used to invoke a sync from the iPod as follows:

ExecStart=/bin/mount /media/ipod
ExecStart=/usr/local/bin/blinkstick --index 1 --limit 10 --set-color 33c280
ExecStart=/usr/bin/rsync ...
ExecStop=/bin/umount /media/ipod
ExecStop=/usr/local/bin/blinkstick --index 1 --limit 10 --set-color green



/media/ipod is a classic mount configured in /etc/fstab. I've done this rather than use the newer systemd .mount units which sadly don't give you enough hooks for running things after unmount or in the failure case. This feels quite unnatural, much more "systemdy" would be to Requires= the mount unit, but I couldn't figure out an easy way to set the LED to green after the unmount. I'm sure it's possible, but convoluted.

The first blinkstick command sets the LED to a colour to indicate "in progress". I explored some of the blinkstick tool's options for a fading or throbbing colour but they didn't work very well. I'll take another look in the future. After the LED is set, the backup job itself runs. The last blinkstick command, which is only run if the previous umount has succeeded, sets the LED to indicate "safe to unplug".

The WantedBy here instructs systemd that when the iPod device-unit is activated, it should activate my backup service. I can refer to the iPod device-unit using a name based on the partition's UUID; this is not the canonical device name that you see if you run systemctl, but it's much shorter and crucially it's stable: the canonical name depends on exactly where you plugged it in and what other devices might have been connected at the same time.
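Pulled together, the unit might look something like this (the UUID, rsync paths and unit name are placeholders, not my real ones):

```ini
# ipod-backup.service (sketch): sync the iPod when its device-unit
# appears; light the LED while working, green when safe to unplug.
[Unit]
Description=Sync the iPod when it appears
OnFailure=blinkstick-fail.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/mount /media/ipod
ExecStart=/usr/local/bin/blinkstick --index 1 --limit 10 --set-color 33c280
ExecStart=/usr/bin/rsync -a /media/ipod/ /srv/backup/ipod/
ExecStop=/bin/umount /media/ipod
ExecStop=/usr/local/bin/blinkstick --index 1 --limit 10 --set-color green

[Install]
WantedBy=dev-disk-by\x2duuid-0000\x2d0001.device
```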

If something fails, a second unit blinkstick-fail.service gets activated. This is very short:

ExecStart=/usr/local/bin/blinkstick --index 1 --limit 50 --set-color red

This simply sets the LED to be red.

Again it's a bit awkward that in 2 cases I'm setting the LED with a simple Exec but in the third I have to activate a separate systemd service: this seems to be the nature of the beast. At least when I come to look at concurrent jobs all interacting with the LED, the failure case should be simple: red trumps any other activity, user must go and check what's up.

Planet DebianJonathan Dowland: Blinkenlights!



Part one of a series. part 2, part 3.

Late last year, I was pondering how one might add a status indicator to a headless machine like my NAS to indicate things like failed jobs.

After a brief run through of some options (a USB-based custom device; a device pretending to be a keyboard attached to a PS/2 port; commandeering the HD activity LED; commandeering the PC speaker wire) I decided that I didn't have the time to learn the kind of skills needed to build something at that level and opted to buy a pre-assembled programmable USB thing instead, called the BlinkStick.

Little did I realise that my friend Jonathan McDowell thought that this was an interesting challenge and actually managed to design, code and build something! Here's his blog post outlining his solution and here's his code on github (or canonically)

Even though I've bought the blinkstick, given Jonathan's efforts (and the bill of materials) I'm going to have to try to assemble this for myself and give it a go. I've also managed to borrow an Arduino book from a colleague at work.

Either way, I still have some work to do on the software/configuration side to light the LEDs up at the right time and colour based on the jobs running on the NAS and their state.

Planet Linux AustraliaBinh Nguyen: Life in India, Prophets/Pre-Cogs/Stargate Program 7, and More

On India: - your life is very much dependent on how you were born, how much you money you have, etc... In spite of being a capitalist, democracy it still bears aspects of being stuck with a caste/feudal/colonial system. Electrical power stability issues still. Ovens generally lacking. Pollution and traffic problems no matter where you live in India. They have the same number of hours in each

CryptogramGerman Government Classifies Doll as Illegal Spyware

This is interesting:

The My Friend Cayla doll, which is manufactured by the US company Genesis Toys and distributed in Europe by Guildford-based Vivid Toy Group, allows children to access the internet via speech recognition software, and to control the toy via an app.

But Germany's Federal Network Agency announced this week that it classified Cayla as an "illegal espionage apparatus". As a result, retailers and owners could face fines if they continue to stock it or fail to permanently disable the doll's wireless connection.

Under German law it is illegal to manufacture, sell or possess surveillance devices disguised as another object.

Another article.

Sociological ImagesWhat can the history of divorce tell us about the future of marriage?

A different version of this post was originally published at Timeline.

To get some perspective on the long term trend in divorce, we need to check some common assumptions. Most importantly, we have to shake the idea that the trend is just moving in one direction, tracking a predictable course from “olden days” to “nowadays.”

It’s so common to think of society developing in one direction over time that people rarely realize they are doing it. Regardless of political persuasion, people tend to collapse history into then versus now whether they’re using specific dates and facts or just imagining the sweep of history.

In reality, sometimes it’s true and sometimes it’s not true that society has a direction of change over a long time period. Some social trends are pretty clear, such as population growth, longevity, wealth, or the expansion of education. But when you look more closely, and narrow the focus to the last century or so, it turns out that even the trends that are following some path of progress aren’t moving linearly, and the fluctuations can be the big story.

Demography provides many such examples. For example, although it’s certainly true that Americans have fewer children now than they did a century ago, the Baby Boom – that huge spike in birth rates from 1946 to 1964 – was such a massive disruption that in some ways it is the big story of the century. Divorce is another.

The most popular false assumption about divorce – sort of like crime or child abuse – is that it’s always getting worse (which isn’t true of crime or child abuse, either). In the broadest sense, yes, there is more divorce nowadays than there was in the olden days, but the trend is complicated and has probably reversed.

It turns out, however, that the story of divorce rates is ridiculously complicated. For one thing, there is no central data source that simply counts all divorces. The National Center for Health Statistics used to collect divorce counts from the states, but now six states don’t feel like cooperating anymore, including, unbelievably, California. Even where divorces are counted, key information may not be available, such as the people’s age or how long they were married (or, now that there is gay divorce, their genders). Fortunately, the Census Bureau (for now) does a giant sample survey, the American Community Survey, which gives us great data on divorce patterns, but they only started collecting that information in 2008.

The way demographers ask the question is also different from what the public wants to know. The typical concerned citizen (or honeymooner) wants to know: what are the odds that I (or someone else getting married today) will end up divorced? Science can guess, but it’s impossible to give a definitive answer, because we can’t actually predict human behavior. Still, we can help.

The short answer is that divorce is more common than it was 75 years ago, but less common than it was at the peak in 1979. Here’s the trend in what we call the “refined” divorce rate – the number of divorces each year for every thousand married women in the country:

The figure uses the federal tally from states from 1940 to 1997, leaves out the period when there was no national collection, and then picks up again when the American Community Survey started asking about divorce.

So the long term upward trend is complicated by a huge spike from soldiers returning home at the end of World War II (a divorce boom, to go with the Baby Boom), a steep increase in the sixties and seventies, and then a downward glide to the present.

How is it possible that divorce has been declining for more than three decades? Part of it is a function of the aging population. As demographers Sheela Kennedy and Steven Ruggles have argued, old people divorce less, and the married population is older now than it was in 1979, because the giant Baby Boom is now mostly in its sixties and people are getting married at older ages. This is tricky, though, because although older people still divorce less, the divorce rates for older people (50+) have doubled in the last two decades. Baby Boomers especially like to get divorced and remarried once their kids are out of the house.

But there is a real divorce decline, too, and this is promising about the future, because it’s concentrated among young people – their chances of divorcing have fallen over the last decade. So, although in my own research I’ve estimated that 53% of couples marrying today will get divorced, that is probably skewed by all the older people still pulling up the rates. Typical Americans getting married in their late 20s today probably have a less than even chance of getting divorced. The divorce rate will probably keep falling.

Rather than a conservative turn toward family values, I think this represents an improving quality of marriages. When marriage is voluntary – when people really choose to get married instead of simply marching into it under pressure to conform – one hopes they would be making better choices, and the data support that. Further, as marriage has become more rare, it has also become more select. Despite more than a decade of futile marriage promotion efforts by the federal government, marriage is still moving up the income scale. The people getting married today are more privileged than they used to be: more highly educated (both partners), and more stably situated. All that bodes well for the survival of their marriages, but doesn’t help the people left out of the institution. If less divorce just means only perfect couples are getting married, that’s merely another indicator of rising inequality.

Putting this trend back in that long term context, we should also ask whether falling divorce rates – which run counter to the common assumption that everything modern in family life is about the destruction of the nuclear family – are always a good thing. Most people getting married would like to think they’ll stay together for the long haul. But what is the right amount of divorce for a society to have? It seems like an odd question, but divorce really isn’t like crime or child abuse. You want some divorces, because otherwise it means people are stuck in bad marriages. If you have no divorce that means even abusive marriages can’t break up. If you have a moderate amount, it means pretty bad marriages can break up but people don’t treat it lightly. And if you have tons of divorce it means people are just dropping each other willy-nilly. When you put it that way, moderate sounds best. No one has been able to put numbers to those levels, but it’s still good to ask. Even as we shouldn’t assume families are always falling apart more than they used to, we should consider the pros and cons of divorce, rather than insisting more is always worse.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park. He writes the blog Family Inequality and is the author of The Family: Diversity, Inequality, and Social Change. You can follow him on Twitter or Facebook.


Worse Than FailureCodeSOD: A Date With a Parser

PastorGL inherited some front-end code. This front-end code only talks to a single, in-house developed back-end. Unfortunately, that single backend wasn’t developed with any sort of consistency in mind. So, for example, depending on the end-point, sometimes you need to pass fields back and forth as productID, sometimes it’s id, productId, or even _id.

Annoying, but even worse is dealing with the dreaded date datatype. JSON, of course, doesn’t have a concept of date datatypes, which leaves the web-service developer needing to make a choice about how to pass the date back. As a Unix timestamp? As a string? What kind of string? With no consistency on their web-service design, the date could be passed back and forth in a number of formats.

Now, if you’re familiar with the JavaScript Date datatype, you’d know that it can take most date formats as an input and convert them into a Date object, which gives you all the lovely convenience methods you might need. So, if for example, you wanted to convert a date string into a Unix timestamp, you might do something like this:

    var d = new Date(someDataThatProbablyIsADateStringButCouldAlsoBeANumber); //Could also use Date.parse
    return d.getTime();

That would cover 99% of cases, but PastorGL’s co-worker didn’t want to cover just those cases, and they certainly didn’t want to try and build any sort of consistency into the web service. Not only that, since they knew that the web service was inconsistent, they even protected against date formats that it doesn’t currently send back, just in case it starts doing so in the future. This is their solution:

/**
 * Converts a string into seconds since epoch, e.g. tp.str2timestamp('December 12, 2015 04:00:00');
 * @param str The string to convert to seconds since epoch
 * @returns {*}
 */
function str2timestamp(str) {
    if (typeof(str) == "undefined" || str.length == 0) {
        return;
    }

    if (typeof(str) != "string") {
        str = "" + str;
    }

    str = str.trim();

    // already a unix timestamp
    if (str.match(/^[0-9]{0,10}$/)) {
        // TODO: fix this before january 19th, 2038 at 03:14:08 UTC
        return parseInt(str);
    }

    // most likely a javascript Date timestamp that is in milliseconds
    if (str.match(/^[0-9]{13,}$/)) {
        return parseInt(str) / 1000;
    }

    var ts = Date.parse(str);
    if (ts) {
        return ts / 1000;
    }

    // fix 00:XX AM|PM & 00:XX:XX AM|PM
    str = str.replace(/00:([0-9]{2}(:[0-9]{2})?\s*[AP]M)/i, "12:$1").replace(/([0-9]{2})([AP|M])/i, "$1 $2");

    // remove any "on, at, @, or -" from the date
    str = str.replace(/\s*(at|@|\-|on|\|)\s*/gi, " ");

    str = str.replace(/\s*(mon(day)?|tue(s?day)?|wed(nesday)?|thu((rs)?day)?|fri(day)?|sat(urday)?|sun(day)?)\s*/gi, "");

    str = str.replace(/([0-9]{1,2})(st|nd|rd|th)/, "$1");

    // replace bad time zone
    if (str.match(/\s+ET$/)) {
        if (d.getTimezoneOffset() == 240) {
            str = str.replace(/\s+ET$/, " EDT");
        } else {
            str = str.replace(/\s+ET$/, " EST");
        }
    }

    str = str.trim();

    var ts;

    ts = Date.parse(str);
    if (ts) {
        return ts / 1000;
    }

    // jan/3/2001
    if (m = str.match(/!^([a-z]+)[-/ ]([0-9]+)[-/ ]([0-9]+)(.*)$!i/)) {
        str = m[2] + " " + m[1] + " " + m[3] + m[4];
    } else if (m = str.match(/!^([0-9]+)[-/ ]([a-z]+)[-/ ]([0-9]+)(.*)$!i/)) {
        // 3/jan/2008
        str = m[1] + " " + m[2] + " " + m[3] + m[4];
    }

    ts = Date.parse(str);
    if (ts) {
        return ts / 1000;
    }
}
I particularly like that, if it’s not a string already, they turn it into one, because regexes are the best way to count the digits in a string, apparently.
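
For contrast, here is a minimal normalization sketch of my own (not from the submission), under the assumption that the back end only ever sends Unix seconds, millisecond timestamps, or strings that Date.parse understands:

```javascript
// Hypothetical simplification, not the original author's code. Assumes
// inputs are Unix seconds, millisecond timestamps, or parseable strings.
function toUnixSeconds(value) {
    var n = Number(value);
    if (isFinite(n) && String(value).trim() !== "") {
        // Numeric input: treat anything >= 1e12 as milliseconds.
        return Math.floor(n >= 1e12 ? n / 1000 : n);
    }
    var ms = Date.parse(value);
    if (isNaN(ms)) {
        throw new Error("Unparseable date: " + value);
    }
    return Math.floor(ms / 1000);
}
```

Comparing magnitudes does the seconds-versus-milliseconds disambiguation that the original attempts with digit-counting regexes, and everything else is left to Date.parse.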

In the end, is TRWTF this code, or the inconsistent representations of the web service endpoints?

The Mexican girl from the taco commercial who says, 'Why not both?'

Planet DebianRuss Allbery: Haul via parents

My parents were cleaning out a bunch of books they didn't want, so I grabbed some of the ones that looked interesting. A rather wide variety of random stuff. Also, a few more snap purchases on the Kindle even though I've not been actually finishing books recently. (I do have two finished and waiting for me to write reviews, at least.) Who knows when, if ever, I'll read these.

Mark Ames — Going Postal (nonfiction)
Catherine Asaro — The Misted Cliffs (sff)
Ambrose Bierce — The Complete Short Stories of Ambrose Bierce (collection)
E. William Brown — Perilous Waif (sff)
Joseph Campbell — The Hero with a Thousand Faces (nonfiction)
Jacqueline Carey — Miranda and Caliban (sff)
Noam Chomsky — 9-11 (nonfiction)
Noam Chomsky — The Common Good (nonfiction)
Robert X. Cringely — Accidental Empires (nonfiction)
Neil Gaiman — American Gods (sff)
Neil Gaiman — Norse Mythology (sff)
Stephen Gillet — World Building (nonfiction)
Donald Harstad — Eleven Days (mystery)
Donald Harstad — Known Dead (mystery)
Donald Harstad — The Big Thaw (mystery)
James Hilton — Lost Horizon (mainstream)
Spencer Johnson — The Precious Present (nonfiction)
Michael Lerner — The Politics of Meaning (nonfiction)
C.S. Lewis — The Joyful Christian (nonfiction)
Grigori Medvedev — The Truth about Chernobyl (nonfiction)
Tom Nadeu — Seven Lean Years (nonfiction)
Barack Obama — The Audacity of Hope (nonfiction)
Ed Regis — Great Mambo Chicken and the Transhuman Condition (nonfiction)
Fred Saberhagen — Berserker: Blue Death (sff)
Al Sarrantonio (ed.) — Redshift (sff anthology)
John Scalzi — Fuzzy Nation (sff)
John Scalzi — The End of All Things (sff)
Kristine Smith — Rules of Conflict (sff)
Henry David Thoreau — Civil Disobedience and Other Essays (nonfiction)
Alan W. Watts — The Book (nonfiction)
Peter Whybrow — A Mood Apart (nonfiction)

I've already read (and reviewed) American Gods, but didn't own a copy of it, and that seemed like a good book to have a copy of.

The Carey and Brown were snap purchases, and I picked up a couple more Scalzi books in a recent sale.

Planet DebianNorbert Preining: Ryu Murakami – Tokyo Decadence

The other Murakami, Ryu Murakami (村上 龍), is hard to compare to the more famous Haruki. His collection of stories reflects the dark sides of Tokyo, far removed from the happy world of AKB48 and the like. Criminals, prostitutes, depression, loss. A bleak image of a bleak society.

This collection of short stories is a systematic deconstruction of happiness, love, everything we believe makes our lives worthwhile. The protagonists are idealistic students losing their faith, office ladies gone astray, drunkards, movie directors, the usual mixture. But the topic remains constant – the unfulfilled search for happiness and love.

I felt I was beginning to understand what happiness is about. It isn’t about guzzling ten or twenty energy drinks a day, barreling down the highway for hours at a time, turning over your paycheck to your wife without even opening the envelope, and trying to force your family to respect you. Happiness is based on secrets and lies. – Ryu Murakami, It all started just about a year and a half ago

A deep pessimistic undertone echoes through these stories, and the atmosphere and writing are reminiscent of Charles Bukowski. This pessimism resonates with the melancholy of the stories’ running theme, Cuban music. Murakami was active in disseminating Cuban music in Japan, which included founding his own label. Javier Olmo’s pieces are often the connecting parts, as well as lending the short stories their titles: Historia de un amor, Se fué.

The belief – that what’s missing now used to be available to us – is just an illusion, if you ask me. But the social pressure of “You’ve got everything you need, what’s your problem?” is more powerful than you might ever think, and it’s hard to defend yourself against it. In this country it’s taboo even to think about looking for something more in life. – Ryu Murakami, Historia de un amor

It is interesting to see that, on the surface, the women in the stories are the broken characters, leading feminists to incredible rants about the book; see the rant^Wreview by Blake Fraina at Goodreads:

I’ll start by saying that, as a feminist, I’m deeply suspicious of male writers who obsess over the sex lives of women and, further, have the audacity to write from a female viewpoint…
…female characters are pretty much all pathetic victims of the male characters…
I wish there was absolutely no market for stuff like this and I particularly discourage women readers from buying it… – Blake Fraina, Goodreads review

At first sight it might look as if the female characters are pretty much all pathetic victims of the male characters, but in fact it is the other way round: the desperate characters, the slaves of their own desperation, are the men in these stories, not the women. The situation mirrors Hitomi Kanehara’s Snakes and Earrings, where at first sight the tattooist and the outlaw friends are the broken characters, but the really cracked one is the sweet Tokyo girly.

Male-female relationships are always in transition. If there’s no forward progress, things tend to slip backwards. – Ryu Murakami, Se fué

Final verdict: great reading, hard to put down, very much readable and enjoyable, if one is in the mood for dark and depressing stories. And last but not least, don’t trust feminist book reviews.

Planet Linux AustraliaSridhar Dhanapalan: New LinkedIn Interface Delete Your Data? Here’s How to Bring it Back.

Over the past few years it has seemed like LinkedIn were positioning themselves to take over your professional address book. Through offering CRM-like features, users were able to see a summary of their recent communications with each connection as well as being able to add their own notes and categorise their connections with tags. It appeared to be a reasonable strategy for the company, and many users took the opportunity to store valuable business information straight onto their connections.

Then at the start of 2017 LinkedIn decided to progressively foist a new user experience upon its users, and features like these disappeared overnight in favour of a more ‘modern’ interface. People who grew to depend on this integration were in for a rude shock — all of a sudden it was missing. Did LinkedIn delete the information? There was no prior warning given, and I still haven’t seen any acknowledgement or explanation (let alone an apology) from LinkedIn/Microsoft for the inconvenience/damage caused.

If anything, this reveals the risks in entrusting your career/business to a proprietary cloud service. Particularly with free/freemium (as in cost) services, the vendor is more likely to change things on a whim or move that functionality to a paid tier.

It’s another reason why I’ve long been an advocate for open standards and free and open source software.

Fortunately there’s a way to export all of your data from LinkedIn. This is what we’ll use to get back your tags and notes. These instructions are relevant for the new interface. Go to your account settings and in the first section (“Basics”) you should see an option called “Getting an archive of your data”.

LinkedIn: Getting an archive of your data

Click on Request Archive and you’ll receive an e-mail when it’s available for download. Extract the resulting zip file and look for a file called Contacts.csv. You can open it in a text editor, or better yet a spreadsheet like LibreOffice Calc or Excel.

In my copy, my notes and tags were in columns D and E respectively. If you have many, it may be a lot of work to manually integrate them back into your address book. I’d love suggestions on how to automate this. Since I use Gmail, I’m currently looking into Google’s address book import/export format, which is CSV based.
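
As a rough starting point for automating that, here is a sketch of my own (untested against a real export; the header row, name columns and the D/E positions for notes and tags are assumptions, and the naive comma split ignores quoted fields):

```javascript
// Rough sketch, not an official LinkedIn or Google tool. Assumes no quoted
// fields or embedded commas, name in the first two columns, and notes and
// tags in columns D and E (indexes 3 and 4), as described above.
function extractNotesAndTags(csvText) {
    var result = {};
    var lines = csvText.trim().split("\n");
    for (var i = 1; i < lines.length; i++) { // skip the header row
        var cols = lines[i].split(",");
        var name = cols[0] + " " + cols[1];  // assumed: first and last name
        result[name] = { notes: cols[3] || "", tags: cols[4] || "" };
    }
    return result;
}
```

From there, each entry could be matched by name and appended to the Notes column of a Google Contacts CSV before re-import. A real script should use a proper CSV parser to handle quoting.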

As long as Microsoft/LinkedIn provide a full export feature, this is a good way to maintain ownership of your data. It’s good practice to take an export every now and then to give yourself some peace-of-mind and avoid vendor lock-in.

This article has also been published on LinkedIn.


Planet DebianGregor Herrmann: RC bugs 2016/52-2017/07

debian is in deep freeze for the upcoming stretch release. still, I haven't dived into fixing "general" release-critical bugs yet; so far I mostly kept to working on bugs in the debian perl group:

  • #834912 – src:libfile-tee-perl: "libfile-tee-perl: FTBFS randomly (Failed 1/2 test programs)"
    add patch from ntyni (pkg-perl)
  • #845167 – src:lemonldap-ng: "lemonldap-ng: FTBFS randomly (failing tests)"
    upload package prepared by xavier with disabled tests (pkg-perl)
  • #849362 – libstring-diff-perl: "libstring-diff-perl: FTBFS: test failures with new libyaml-perl"
    add patch from ntyni (pkg-perl)
  • #851033 – src:jabref: "jabref: FTBFS: Could not find org.postgresql:postgresql:9.4.1210."
    update maven.rules
  • #851347 – libjson-validator-perl: "libjson-validator-perl: uses deprecated Mojo::Util::slurp, makes libswagger2-perl FTBFS"
    upload new upstream release (pkg-perl)
  • #852853 – src:libwww-curl-perl: "libwww-curl-perl: FTBFS (Cannot find curl.h)"
    add patch for multiarch curl (pkg-perl)
  • #852879 – src:license-reconcile: "license-reconcile: FTBFS: dh_auto_test: perl Build test --verbose 1 returned exit code 255"
    update tests (pkg-perl)
  • #852889 – src:liblatex-driver-perl: "liblatex-driver-perl: FTBFS: Test failures"
    add missing build dependency (pkg-perl)
  • #854859 – lemonldap-ng-doc: "lemonldap-ng-doc: unhandled symlink to directory conversion: /usr/share/doc/lemonldap-ng-doc/pages/documentation/current"
    help with dpkg-maintscript-helper, upload on xavier's behalf (pkg-perl)

thanks to the release team for pro-actively unblocking the packages with fixes which were uploaded after the beginning of the freeze!

Krebs on SecurityFebruary Updates from Adobe, Microsoft

A handful of readers have inquired as to the whereabouts of Microsoft‘s usual monthly patches for Windows and related software. Microsoft opted to delay releasing any updates until next month, even though there is a zero-day vulnerability in Windows going around. However, Adobe did push out updates this week as per usual to fix critical issues in its Flash Player software.

In a brief statement this week, Microsoft said it “discovered a last minute issue that could impact some customers” that was not resolved in time for Patch Tuesday, which normally falls on the second Tuesday of each month. In an update to that advisory posted on Wednesday, Microsoft said it would deliver February’s batch of patches as part of the next regularly-scheduled Patch Tuesday, which falls on March 14, 2017.

On Feb. 2, the CERT Coordination Center at Carnegie Mellon University warned that an unpatched bug in a core file-sharing component of Windows (SMB) could let attackers crash Windows 8.1 and Windows 10 systems, as well as server equivalents of those platforms. CERT warned that exploit code for the flaw was already available online.

The updates from Adobe fix at least 13 vulnerabilities in versions of Flash Player for Windows, Mac, ChromeOS and Linux systems. Adobe said it is not aware of any exploits in the wild for any of the 13 flaws fixed in this update.

The latest update brings Flash to v. The update is rated “critical” for all OSes except Linux; critical flaws can be exploited to compromise a vulnerable system through no action on the part of the user, aside from perhaps browsing to a malicious or hacked Web site.

Flash has long been a risky program to leave plugged into the browser. If you have Flash installed, you should update, hobble or remove Flash as soon as possible. To see which version of Flash your browser may have installed, check out this page.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. An extremely powerful and buggy program that binds itself to the browser, Flash is a favorite target of attackers and malware. For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.

If you choose to keep and update Flash, please do it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, although users may need to manually check for updates and/or restart the browser to get the latest version. When in doubt, click the vertical three-dot icon to the right of the URL bar, select “Help,” then “About Chrome”: if there is an update available, Chrome should install it then.

Planet Linux AustraliaLev Lafayette: HPC/Cloud Hybrids for Efficient Resource Allocation and Throughput

HPC systems running massively parallel jobs need a fairly static software operating environment running on bare metal hardware, a high speed interconnect to reach their full potential, and offer linear performance scaling for cleverly designed applications. Cloud computing, on the other hand, offers flexible virtual environments and can be used for pleasingly parallel workloads.

Planet Linux AustraliaSridhar Dhanapalan: CareerNexus: a new way to find work

The start-up that I have co-founded, CareerNexus, is looking for job seekers to take part in a product test and market experiment. If you, or someone you know, wants to know more and potentially take part, message me.

If we can help just a fraction of those people who have difficulty finding work through traditional means — people returning from parental leave, people looking for roles after being made redundant, mature workers, even some highly skilled professionals — we’ll be doing something great.

As an alternate means of finding work, it need not replace any mechanisms that you may already be engaged in. In other words, there is nothing for you to lose and hopefully much for you to gain.

Planet Linux AustraliaSridhar Dhanapalan: Published in Engineers Without Borders Magazine

Engineers Without Borders asked me to write something for their Humanitarian Engineering magazine about One Laptop per Child. Here is what I wrote.

The school bell rings, and the children filter into the classroom. Each is holding an XO – their own personal learning device.

Students from Doomadgee often use their XOs for outdoor education. The sunlight-readable screen combined with the built-in camera allows for hands-on exploration of their environment.

This is no ordinary classroom. As if by magic, the green and white XOs automatically see each other as soon as they are started up, allowing children to easily share information and collaborate on activities together. The kids converse on how they can achieve the tasks at hand. One girl is writing a story on her XO, and simultaneously on the same screen she can see the same story being changed by a boy across the room. Another group of children are competing in a game that involves maths questions.

Children in Kiwirrkurra, WA, collaborate on an activity with help from teachers.

Through the XO, the learning in this classroom has taken on a peer-to-peer character. By making learning more fun and engaging, children are better equipped to discover and pursue their interests. Through collaboration and connectivity, they can exchange knowledge with their peers and with the world. In the 21st century, textbooks should be digital and interactive. They should be up-to-date and locally relevant. They should be accessible and portable.

Of course, the teacher’s role remains vital, and it has evolved into that of a facilitator in this knowledge network. She is better placed to provide more individual pathways for learning. Indeed the teacher is a learner as well, as the children quickly adapt to the new technology and learn skills that they can teach back.

A teacher in Jigalong, WA, guides a workgroup of children in their class.

Helping to keep the classroom session smoothly humming along are children who have proven themselves to be proficient with assisting their classmates and fixing problems (including repairing hardware). These kids have taken part in training programmes that award them for their skills around the XO. In the process, they are learning important life skills around problem solving and teamwork.

Dozens of students in Doomadgee State School are proficient in fixing XO hardware.

This is all part of the One Education experience, an initiative from One Laptop per Child (OLPC) Australia. This educational programme provides a holistic educational scaffolding around the XO, the laptop developed by the One Laptop per Child Association that has its roots in the internationally-acclaimed MIT Media Lab in the USA.

The XO was born from a desire to empower each and every child in the world with their own personal learning device. Purpose-built for young children and using solid open source software, the XO provides an ideal platform for classroom learning. Designed for outdoors, with a rugged design and a high-resolution sunlight-readable screen, education is no longer confined to a classroom or even to the school grounds. Learning time needn’t stop with the school bell – many children are taking their XOs home. Also important is the affordability and full repairability of the devices, making them cost-effective compared with non-durable and ephemeral items such as stationery, textbooks and other printed materials. There are over 3 million XOs in distribution, and in some countries (such as Uruguay) every child owns one.

A One Education classroom in Kenya.

One Education’s mission is to provide educational opportunities to every child, no matter how remote or disadvantaged. The digital divide is a learning divide. This can be conquered through a combination of modern technology, training and support, provided in a manner that empowers local schools and communities. The story told above is already happening in many classrooms around the country and the world.

A One Education classroom in northern Thailand.

With teacher training often being the Achilles’ heel of technology programmes in the field of education, One Education focuses only on teachers who have proven their interest and aptitude through the completion of a training course. Only then are they eligible to receive XOs (with an allocation of spare parts) into their classroom. Certified teachers are eligible for ongoing support from OLPC Australia, and can acquire more hardware and parts as required.

As a not-for-profit, OLPC Australia works with sponsors to heavily subsidise the costs of the One Education programme for low socio-economic status schools. In this manner, the already impressive total cost of ownership can be brought down even further.

High levels of teacher turnover are commonplace in remote Australian schools. By providing courses online, training can be scalable and cost-effective. Local teachers can even undergo further training to gain official trainer status themselves. Some schools have turned this into a business – sending their teacher-trainers out to train teachers in other schools.

Students in Geeveston in Tasmania celebrate their attainment of XO-champion status, recognising their proficiency in using the XO and their helpfulness in the classroom.

With backing from the United Nations Development Programme, OLPC are tackling the Millennium Development Goals by focusing on Goal 2 (Achieve Universal Primary Education). The intertwined nature of the goals means that progress made towards this goal in turn assists the others. For example, education on health can lead to better hygiene and lower infant mortality. A better educated population is better empowered to help themselves, rather than being dependent on hand-outs. For people who cannot attend a classroom (perhaps because of remoteness, ethnicity or gender), the XO provides an alternative. OLPC’s focus on young children means that children are becoming engaged in their most formative years. The XO has been built with a minimal environmental footprint, and can be run off-grid using alternate power sources such as solar panels.

One Education is a young initiative, formed based on experiences learnt from technology deployments in Australia and other countries. Nevertheless, results in some schools have been staggering. Within one year of XOs arriving in Doomadgee State School in northern Queensland, the percentage of Year 3 pupils meeting national literacy standards leapt from 31% to 95%.

A girl at Doomadgee State School very carefully removes the screen from an XO.

2013 will see a rapid expansion of the programme. With $11.7m in federal government funding, 50,000 XOs will be distributed as part of One Education. These schools will be receiving the new XO Duo (AKA XO-4 Touch), a new XO model developed jointly with the OLPC Association. This version adds a touch-screen user experience while maintaining the successful laptop form factor. The screen can swivel and fold backwards over the keyboard, converting the laptop into a tablet. This design was chosen in response to feedback from educators that a hardware keyboard is preferred to a touch-screen for entering large amounts of information. As before, the screen is fully sunlight-readable. Performance and battery life have improved significantly, and it is fully repairable as before.

As One Education expands, there are growing demands on OLPC Australia to improve the offering. Being a holistic project, there are plenty of ways in which we could use help, including in education, technology and logistics. We welcome you to join us in our quest to provide educational opportunities to the world’s children.

Planet DebianSteve Kemp: Apologies for the blog-churn.

I've been tweaking my blog a little over the past few days, getting ready for a new release of the chronicle blog compiler (github).

During the course of that I rewrote all the posts to have 100% lower-case file-paths. Redirection pages have been auto-generated for each page which was previously mixed-case, but unfortunately that meant the RSS feed updated unnecessarily:

  • If it used to contain:
  • It would have been updated to contain

That triggered a lot of spamming, as the URLs would have shown up as being new/unread/distinct.
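
To illustrate the mechanism (my own sketch – chronicle itself is written in Perl, and this is not its actual code), a redirect page for a renamed post can be as simple as a meta-refresh stub pointing at the lower-cased path:

```javascript
// Illustrative only: generate a meta-refresh redirect page for a post
// whose path was renamed to all lower-case.
function redirectPage(oldPath) {
    var newPath = oldPath.toLowerCase();
    return '<!DOCTYPE html>\n' +
           '<html><head><meta http-equiv="refresh" content="0; url=' + newPath + '">' +
           '</head><body><a href="' + newPath + '">' + newPath + '</a></body></html>';
}
```

The feed churn happens because aggregators key items by URL, so each renamed page looks like a brand-new post even though only its path changed.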


Planet Linux AustraliaPia Waugh: Choose your own adventure – keynote

This is a blog version of the keynote I gave at 2017. Many thanks to everyone who gave such warm feedback, and I hope it helps spur people to think about systemic change and building the future. The speech can be watched at

I genuinely believe we are at a tipping point right now. A very important tipping point where we have at our disposal all the philosophical and technical means to invent whatever world we want, but we’re at risk of reinventing the past with shiny new things. This talk is about trying to make active choices about how we want to live in future and what tools we keep or discard to get there. Passive choices are still choices: they choose the status quo. We spend a lot of our time tinkering around the edges of life as it is, providing symptomatic relief for problems we find, but we need to take a broader systems-based view and understand what systemic change we can make to properly address those problems.

We evolved over hundreds of thousands of years using a cooperative competitive social structure that helped us work together to flourish in every habitat, rapidly and increasingly evolve and learn, and establish culture, language, trade and travel. We were constantly building on what came before and we built our tools as we went.

In recent millennia we invented systems of complex differentiated and interdependent skills, leading to increasingly rapid advancements in how we live and organise ourselves physically, politically, economically and socially, especially as we started building huge cities. Lots of people meant a lot of time to specialise, and with more of our basic needs taken care of, we had more time for philosophy and dreaming.

Great progress created great surplus, creating great power, which we generally centralised in our great cities under rulers that weren’t always so great. Of course, great power also created great inequalities so sometimes we burned down those great cities, just to level the playing field. We often took a symptomatic relief approach to bad leaders by replacing them, without fundamentally changing the system.

But in recent centuries we developed the novel idea that all people have inalienable rights and can be individually powerful. This paved the way for a massive culture shift and distribution of power combined with heightened expectations of individuals in playing a role in their own destiny, leading us to the world as we know it today. Inalienable rights paved the way for people thinking differently about their place in the world, the control they had over their lives and how much control they were happy to cede to others. This makes us, individually, the most powerful we have ever been, which changes the game moving forward.

You see, the internet was both a product and an amplifier of this philosophical transition, and of course it lies at the heart of our community. Technology has, in large part, only sped up the cooperative competitive models of adapting, evolving and flourishing we have always had. But the idea that anyone has a right to life and liberty started a decentralisation of power and introduced the need for legitimate governance based on the consent of citizens (thank you Locke).

Citizens have the powers of publishing, communications, monitoring, property, even enforcement. So in recent decades we have shifted fundamentally from kings in castles to nodes in a network, from scarcity to surplus or reuse models, from closed to open systems, and the rate of human progress only continues to grow towards an asymptotic climb we can’t even imagine.

To help capture this, I thought I’d make a handy change.log on human progress to date.

# Notable changes to homo sapiens – change.log
## [2.1.0] – 1990s CE “technology revolution & internet”
### Changed
– New comms protocol to distribute “rights”. Printing press patch unexpectedly useful for distributing resources. Moved from basic multi-core to clusters of independent processors with exponential growth in power distribution.

## [2.0.0] – 1789 CE “independence movements”
### Added
– Implemented new user permissions called “rights”, early prototype of multi-core processing with distributed power & comms.

## [1.2.0] – 1760 CE “industrial revolution”
### Changed
– Agricultural libraries replaced by industrial libraries, still single core but heaps faster.

## [1.1.1] – 1440 CE “gutenberg”
### Patched
– Printing press a minor patch for more efficient instructions distribution, wonder if it’d be more broadly useful?

## [1.1.0] – 2,000 BCE “cities era”
### Changed
– Switched rural for urban operating environment. Access to more resources but still on single core.

## [1.0.0] – 8,000 BCE “agricultural revolution”
### Added
– New agricultural libraries, likely will create surplus and population explosion. Heaps less resource intensive.

## [0.1.0] – 250,000 BCE “homo sapiens”
### Added
– Created fork from homo erectus, wasn’t confident in project direction though they may still submit contributions…

(For more information about human evolution, see

The point to this rapid and highly oversimplified historical introduction is threefold: 1) we are more powerful than ever before, 2) the rate of change is only increasing, and 3) we made all this up, and we can make it up again. It is important to recognise that we made all of this up. Intellectually we all understand this but it matters because we often assume things are how they are, and then limit ourselves to working within the constraints of the status quo. But what we invented, we can change, if we choose.

We can choose our own adventure, or we let others choose on our behalf. And if we unthinkingly implement the thinking, assumptions and outdated paradigms of the past, then we are choosing to reimplement the past.

Although we are more individually and collectively powerful than ever before, how often do you hear “but that’s just how we’ve always done it”, “but that’s not traditional”, or “change is too hard”. We are demonstrably and historically utter masters at change, but life has become so big, so fast, and so interrelated that change has become scary for many people, so you see them satisfied by either ignoring change or making iterative improvements to the status quo. But we can do better. We must do better.

I believe we are at a significant tipping point in history. The world and the very foundations our society were built on have changed, but we are still largely stuck in the past in how we think and plan for the future. If we don’t make some active decisions about how we live, think and act, then we will find ourselves subconsciously reinforcing the status quo at every turn and not in a position to genuinely create a better future for all.

So what could we do?

  • Solve poverty and hunger: distributed property through nanotechnology and 3D printing, universal education and income.
  • Work 2 days a week, automate the rest: see “Why the Future is Workless” by Tim Dunlop
  • Embrace and extend our selves: transhumanism, the Paralympics, “He was more than a dolphin, but from another dolphin’s point of view he might have seemed like something less.” — William Gibson, from Johnny Mnemonic. Why are we so conservative about what it means to be human? About our picture of self? Why do we get caught up on what is “natural” when almost nothing we do is natural?
  • Overcome the tyranny of distance: rockets for international travel, interstellar travel, the opportunity to have new systems of organising ourselves
  • Global citizens: Build a mighty global nation where everyone can flourish and have their rights represented beyond the narrow geopolitical nature of states: peer-to-peer economy, international rights, transparent government, digital democracy, overcoming state boundaries.
  • ?? What else ?? I’m just scratching the surface!

So how can we build a better world? Luckily, the human species has geeks. Geeks, all of us, are special because we are the pioneers of the modern age and we get to build the operating system for all our fellow humans. So it is our job to ensure what we do makes the world a better place.

rOml is going to talk more about future options for open source in the Friday keynote, but I want to explore how we can individually and collectively build for the future, not for the past.

I would suggest, given our role as creators, it is incumbent on us to ensure we build a great future world that supports all the freedoms we believe in. It means we need to be individually aware of our unconscious biases: what beliefs and assumptions we hold, who benefits from our work, whether diversity is reflected in our life and work, what impact we have on society, what we care about and the future we wish to see.

Collectively we need to be more aware of whether we are contributing to future or past models, whether belief systems are helping or hindering progress, how we treat others and what from the past we want to keep versus what we want to get rid of.

Right now we have a lot going on. On the one hand, we have many opportunities to improve things, and the tools and knowledge at our disposal to do so. On the other hand, we have locked up so much of our knowledge and tools, traditional institutions are struggling to maintain their authority and control, citizens are understandably frustrated and increasingly taking matters into their own hands, we have greater inequality than ever before, an obsession with work at the cost of living, and we are expected to sacrifice our humanity at the altar of economics.

Questions to ask yourself:

Who are/aren’t you building for?
What is the default position in society?
What does being human mean to you?
What do we value in society?
What assumptions and unconscious bias do you have?
How are you helping non-geeks help themselves?
What future do you want to see?

What should be the rights, responsibilities and roles of citizens, governments, companies and academia?

Finally, we must also help our fellow humans shift from being consumers to creators. We are all only as free as the tools we use, and though geeks will always be able to route around damage, be that technical or social, many of our fellow humans do not have the same freedoms we do.

Fundamental paradigm shifts we need to consider in building the future:

Scarcity → Surplus
Closed → Open
Centralised → Distributed
Belief → Rationalism
Win/lose → Cooperative competitiveness
Nationalism → Transnationalism
Normative humans → Formative humans

Open source is the best possible modern expression of cooperative competitiveness that also integrates our philosophical shift towards human rights and powerful citizens, so I know it will continue to thrive and win when pitted against closed models, broadly speaking.

But in inventing the future, we need to be so very careful that we don’t simply rebuild the past with new shiny tools. We need to keep one eye always on the future we want to build, on how what we are doing contributes to that future, and on having enough self-awareness and commitment to ensure we don’t accidentally embed in our efforts the outdated and oftentimes repressive habits of the past.

To paraphrase Gandhi, build the change you want to see. And build it today.

Thank you, and I hope you will join me in forging a better future.

TED“Humanity can rise to the challenge”: Yuval Harari in conversation at TED Dialogues

Yuval Harari (right) in conversation with TED's Chris Anderson at our New York theater. Photo: Dian Lofton / TED

In a wide-ranging conversation with Yuval Harari at TED’s theater, TED’s Chris Anderson (left) asked: How should we behave in this post-truth era? And Harari replied: “My basic reaction as a historian is: if this is the era of post-truth, when the hell was the era of truth?” Photo: Dian Lofton /TED

How to explain the stunning political upheaval of 2016 — Brexit in the UK and Donald Trump’s election to the presidency in the US — as well as the current and ongoing atmosphere of division, discontent and disquiet that fills many people’s lives? One simple answer: “We’ve lost our story,” says Hebrew University of Jerusalem historian Yuval Harari, in a conversation with TED curator Chris Anderson during the first of a series of TED Dialogues in NYC.

Humans “think in stories and make sense of the world by telling stories,” says Harari, the author of Sapiens and the new book Homo Deus. In the past few decades, many of us believed in the “simple and attractive story” that we existed in a world that was both politically liberal and economically global. At the same time, some people felt left out of — or didn’t believe — this story. By 2016, they voiced their discontent by supporting Brexit and Trump and Le Pen, retreating into the cozy confines of nationalism and nostalgia.

Right now, “almost everywhere around the globe, we see [politicians with a] retrograde vision.” Harari points to Trump’s efforts to “Make America Great Again,” Putin’s hearkening back to the Tsarist empire, and leadership in Israel, his home country, seeking to build temples. He views leaders — and citizens — a bit like lost children retracing their steps back to the place they once felt safety and security.

Unfortunately, taking refuge in nationalism will not help humanity tackle the huge and looming problems of climate change and technological disruption at global scale. While climate change occupies the worries of many people, Harari believes the general public is less informed about the latter problem: that in the next 20 to 30 years, hundreds of millions of people might be put out of work due to automation. “It’s not the Mexicans, it’s not the Chinese who are going to take jobs from people in Pennsylvania,” he says, “it’s the robots and algorithms. And we have to do something about it now. What we teach children in school and college now is completely irrelevant for what they will need in 2040.”

And nothing less than a concerted international solution — most likely, in the form of global governance — is needed to take on these planet-scale issues. “I don’t know what it would look like,” admits Harari. “But we need it because these situations are lose-lose situations.” When it comes to an area like trade, where both sides can benefit, it’s easy to get national governments to come together and negotiate an agreement. But with an issue like climate change in which all nations stand to lose, an overarching authority is needed in order to force them to act. Such a global government would “most likely look more like ancient China than modern Denmark,” he says.

Harari notes that most of today’s nationalist governments seem loath to address — or even acknowledge — global problems like climate change. “There’s a close correlation between nationalism and climate change denialism,” he says. “Nationalists are focused on their most immediate loyalties and commitments, to their people, to their country.” But, he asks, “Why can’t you be loyal to humankind as a whole?” So in our current political climate, Harari says anyone interested in global governance needs to make it clear to people that “it doesn’t replace local identities and communities.”

Harari concluded the conversation on a cautiously optimistic note. We should take heart in the fact that the world is much less violent than it was 100 years ago: “More people die from eating too much than eating too little; more people die from old age than infectious disease; more people commit suicide than are killed by war, terrorism and crime,” he says. Or, as he wryly sums up, “You are your own worst enemy.” He believes that Brexit could be seen as a statement of independence, and by those standards, “it’s the most peaceful war of independence in human history.” And he emphasizes that humanity has shown it can rise to major challenges. The best example is how we reacted to nuclear weapons. In the middle of the 20th century, many people believed that a nuclear war would inevitably lead the world into catastrophic destruction. “But instead, nuclear weapons caused humans all over the world to change the way they managed international politics and to reduce violence.”

He adds, “The problem is we have very little margin for error [now]. If we don’t get it right, we might not have a second option to try again.”

Yuval Harari says: "One of the most powerful forces in history is human stupidity. But another powerful force is human wisdom. We have both.” Photo: Jasmina Tomic / TED


Planet Linux AustraliaBinh Nguyen: Life in Sudan (technically North and South Sudan), Life in Somalia, and More

- The Kingdom of Kush was one of the first recorded instances of Sudan in history. Strong influence of the ancient Egyptian empire and its gods. Strong religious influence of Christianity and then Islam thereafter. Has struggled with internal conflict for a long time. -

Planet DebianDirk Eddelbuettel: RPushbullet 0.3.1

RPushbullet demo

A new release 0.3.1 of the RPushbullet package, following the recent 0.3.0 release, is now on CRAN. RPushbullet is interfacing the neat Pushbullet service for inter-device messaging, communication, and more. It lets you easily send alerts like the one shown to your browser, phone, tablet, ... -- or all at once.

This release owes once again a lot to Seth Wenchel who helped to update and extend a number of features. We fixed one more small bug stemming from the RJSONIO to jsonlite transition, and added a few more helpers. We also enabled Travis testing and with it covr-based coverage analysis using pretty much the same setup I described in this recent blog post.

Changes in version 0.3.1 (2017-02-17)

  • The target device designation was corrected (#39).

  • Three new (unexported) helper functions test the validity of the api key, device and channel (Seth in #41).

  • The summary method for the pbDevices class was corrected (Seth in #43).

  • New helper functions pbValidateConf, pbGetUser, pbGetChannelInfo were added (Seth in #44 closing #40).

  • New classes pbUser and pbChannelInfo were added (Seth in #44).

  • Travis CI tests (and covr coverage analysis) are now enabled via an encrypted config file (#45).

Courtesy of CRANberries, there is also a diffstat report for this release.

More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianIngo Juergensmann: Migrating from Owncloud 7 on Debian to Nextcloud 11

These days I got a mail from my hosting provider stating that my Owncloud instance is insecure, because the online scan from mailed them. However, the scan seemed quite bogus: it reported some issues that were listed as already solved in Debian's changelog file. But unfortunately the last entry in the changelog was on January 5th, 2016. So, there has been more than a whole year without security updates for Owncloud in Debian stable.

In a discussion with the Nextcloud team I complained a little bit that the scan/check is not appropriate. The Nextcloud team replied very helpfully with additional information, such as two bug reports in Debian clarifying that the Owncloud package will most likely be removed in the next release: #816376 and #822681.

So, as there is no nextcloud package in Debian unstable as of now, there was no way other than to manually upgrade & migrate to Nextcloud. This went fairly well:

ownCloud 7 -> ownCloud 8.0 -> ownCloud 8.1 -> ownCloud 8.2 -> ownCloud 9.0 -> ownCloud 9.1 -> Nextcloud 10 -> Nextcloud 11

There were some smaller caveats:

  1. When migrating from OC 9.0 to OC 9.1 you need to migrate your addressbooks and calendars as described in the OC 9.0 Release Notes.
  2. When migrating from OC 9.1 to Nextcloud 10, the OC 9.1 version is higher than expected by the Nextcloud upgrade script, so it warns that you can't downgrade your installation. The fix was simply to change the OC version in config.php.
  3. The Documents app of OC 7 is no longer available in Nextcloud 11 and is replaced by the Collabora app, which is way more complex to set up.
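Each hop in the upgrade chain follows the same manual pattern. A rough sketch of a single hop (the paths, version number and file names here are illustrative, assuming a standard installation where the occ tool is available):

```shell
# Enter maintenance mode before touching any files
sudo -u www-data php /var/www/owncloud/occ maintenance:mode --on

# Swap in the next release's code, keeping config.php
# (and the data/ directory, if it lives inside the installation directory)
cd /var/www
mv owncloud owncloud-old
tar xjf owncloud-8.0.16.tar.bz2          # version is illustrative
cp owncloud-old/config/config.php owncloud/config/

# Run the upgrade routine for this hop, then leave maintenance mode
sudo -u www-data php /var/www/owncloud/occ upgrade
sudo -u www-data php /var/www/owncloud/occ maintenance:mode --off
```

Repeating this for every release in the chain is tedious but mechanical; the caveats listed above are the only points where the routine needed manual intervention.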

The installation and setup of the Docker image for collabora/code was the main issue, because I wanted to be able to edit documents in my cloud. For some reason Nextcloud couldn't connect to my Docker installation. After some web searches I found "Can't connect to Collabora Online", which led me to the next entry in the Nextcloud support forum. But in the end it was this posting that finally made it work for me. So, in short, I needed to add...


to /etc/default/docker.
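For reference, /etc/default/docker passes options to the Docker daemon via a DOCKER_OPTS line; a purely hypothetical example of such an entry (these flags are placeholders for illustration, not the actual fix from the forum post):

```shell
# /etc/default/docker -- hypothetical illustration only;
# the real fix used different daemon flags
DOCKER_OPTS="--dns 192.168.1.1 --dns-search example.com"
```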

So, in the end everything worked out well and my cloud instance is secure again. :-)

UPDATE 2017-02-18 10:52:
Sadly with that working Collabora Online container from Docker I now face this issue of zombie processes for loolforkit inside of that container.


CryptogramFriday Squid Blogging: The Strawberry Squid's Lopsided Eyes

The evolutionary reasons why the strawberry squid has two different eyes. Additional articles.

Original paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Krebs on SecurityMen Who Sent Swat Team, Heroin to My Home Sentenced

It’s been a remarkable week for cyber justice. On Thursday, a Ukrainian man who hatched a plan in 2013 to send heroin to my home and then call the cops when the drugs arrived was sentenced to 41 months in prison for unrelated cybercrime charges. Separately, a 19-year-old American who admitted to being part of a hacker group that sent a heavily-armed police force to my home in 2013 was sentenced to three years probation.

Sergei "Fly" Vovnenko, in an undated photo. In a letter to this author following his arrest, Vovnenko said he forgave me for "doxing" him -- printing his real name and image -- on my site.


Sergey Vovnenko, a.k.a. “Fly,” “Flycracker” and “MUXACC1,” pleaded guilty last year to aggravated identity theft and conspiracy to commit wire fraud. Prosecutors said Vovnenko operated a network of more than 13,000 hacked computers, using them to harvest credit card numbers and other sensitive information.

When I first became acquainted with Vovnenko in 2013, I knew him only by his many hacker names, including “Fly” and “Flycracker,” among others. At the time, Fly was the administrator of the fraud forum “thecc[dot]bz,” an exclusive and closely guarded Russian language board dedicated to financial fraud and identity theft.

After I secretly gained access to his forum, I learned he’d hatched a plot to have heroin sent to my home and to have one of his forum lackeys call the police when the drugs arrived.

I explained this whole ordeal in great detail in 2015, when Vovnenko initially was extradited from Italy to face charges here in the United States. In short, the antics didn’t end when I foiled his plot to get me arrested for drug possession, and those antics likely contributed to his arrest and to this guilty plea.

Vovnenko contested his extradition from Italy, and in so doing spent roughly 15 months in arguably Italy’s worst prison. During that time, he seemed to have turned his life around, sending me postcards at Christmas time and even an apparently heartfelt apology letter.

Seasons greetings from my pen pal, Flycracker.


On Thursday, a judge in New Jersey sentenced Vovnenko to 41 months in prison, three years of supervised released and ordered him to pay restitution of $83,368.

Separately, a judge in Washington, D.C. handed down a sentence of three years’ probation to Eric Taylor, a hacker probably better known by his handle “Cosmo the God.”

Taylor was among several men involved in making a false report to my local police department at the time about a supposed hostage situation at our Virginia home. In response, a heavily-armed police force surrounded my home and put me in handcuffs at gunpoint before the police realized it was all a dangerous hoax known as “swatting.”

CosmoTheGod rocketed to Internet infamy in 2013 when he and a number of other hackers set up the Web site exposed[dot]su, which “doxed” dozens of public officials and celebrities by publishing the address, Social Security numbers and other personal information on the former First Lady Michelle Obama, the then-director of the FBI and the U.S. attorney general, among others. The group also swatted many of the people they doxed.

Exposed[dot]su was built with the help of identity information obtained and/or stolen from ssndob[dot]ru.


Taylor and his co-conspirators were able to dox so many celebrities and public officials because they hacked a Russian identity theft service called ssndob[dot]ru. That service in turn relied upon compromised user accounts at data broker giant LexisNexis to pull personal and financial data on millions of Americans.

At least two other young men connected to the exposed[dot]su conspiracy have already been sentenced to prison.

Eric "CosmoTheGod" Taylor.

Eric “CosmoTheGod” Taylor, in a recent selfie posted to his Twitter profile.

Among them was Mir Islam, a 22-year-old Brooklyn man who was sentenced last year to two years in prison for doxing and swatting, and for cyberstalking a young woman whom he also admitted to swatting. Because he served almost a year of detention prior to his sentencing, Islam was only expected to spend roughly a year in prison, although it appears he was released before even serving the entire year.

Hours after his sentencing, Taylor reached out to KrebsOnSecurity via Facetime to apologize for his actions. Taylor, a California native, said he is trying to turn his life around, and that he has even started his own cybersecurity consultancy.

“I live in New York City now, have a baby on the way and am really trying to get my shit together finally,” Taylor said.

If Taylor’s physical appearance is any indication, he is indeed turning over a new leaf. At the time he was involved in publishing exposed[dot]su, the six-foot, seven-inch CosmoTheGod was easily a hundred pounds heavier than he is now.

Unfortunately, not everyone in Taylor’s former crew is making changes for the better. According to Taylor, his former co-conspirator Islam was recently re-arrested after allegedly cyberstalking Taylor’s girlfriend. That stalking claim could not be independently confirmed, however court documents show that Islam was indeed re-arrested and incarcerated last month in New York.


Mir Islam, at his sentencing hearing last year. Sketches copyright by Hennessy /

CryptogramIoT Attack Against a University Network

Verizon's Data Breach Digest 2017 describes an attack against an unnamed university by attackers who hacked a variety of IoT devices and had them spam network targets and slow them down:

Analysis of the university firewall identified over 5,000 devices making hundreds of Domain Name Service (DNS) look-ups every 15 minutes, slowing the institution's entire network and restricting access to the majority of internet services.

In this instance, all of the DNS requests were attempting to look up seafood restaurants -- and it wasn't because thousands of students all had an overwhelming urge to eat fish -- but because devices on the network had been instructed to repeatedly carry out this request.

"We identified that this was coming from their IoT network, their vending machines and their light sensors were actually looking for seafood domains; 5,000 discreet systems and they were nearly all in the IoT infrastructure," says Laurance Dine, managing principal of investigative response at Verizon.

The actual Verizon document doesn't appear to be available online yet, but there is an advance version that only discusses the incident above, available here.

Sociological ImagesSTI Transmission: Wives, Whores, and the Invisible Man

Flashback Friday.

Monica C. sent along images of a pamphlet, from 1920, warning soldiers of the dangers of sexually transmitted infections (STIs). In the lower right hand corner (close up below), the text warns that “most” “prostitutes (whores) and easy women” “are diseased.” In contrast, in the upper left corner, we see imagery of the pure woman that a man’s good behavior is designed to protect (also below).  “For the sake of your family,” it reads, “learn the truth about venereal diseases.”

The contrast between those women who give men STIs (prostitutes and easy women) and those who receive them from men (wives) is a reproduction of the virgin/whore dichotomy (women come in only two kinds: good, pure, and worthy of respect, or bad, dirty, and deserving of abuse). It also does a great job of making invisible the fact that women with an STI likely got it from a man, and that women who have an STI, regardless of how they got one, can give it away. The men’s role in all this, that is, is erased in favor of demonizing “bad” girls.

See also these great examples of the demonization of the “good time Charlotte” during World War II (skull faces and all) and follow this post to a 1917 film urging Canadian soldiers to refrain from sex with prostitutes (no antibiotics back then, you know).

This post was originally shared in August 2010.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.


Planet DebianMichal Čihař: What's coming in Weblate 2.12

Weblate should be released by end of February, so it's now pretty much clear what will be there. So let's look at some of the upcoming features.

There were many improvements in search-related features. They got performance improvements (this is especially noticeable in site-wide search). Additionally, you can now search for strings within a translation project. On a related topic, search and replace is now available for component- or project-wide operations, which can help you in case of massive renaming in your translations.

We have worked on improving machine translations as well; this time we've added support for Yandex. In case you know some machine translation service which we do not yet support, please submit it to our issue tracker.

The biggest improvement so far comes for the visual context feature - it allows you to upload screenshots which are later shown to translators to give them a better idea where and in which context the translation is used. So far you had to manually upload a screenshot for every source string, which was far from easy to use. With Weblate 2.12 (and this is already available on Hosted Weblate right now) screenshot management got way better.

There is now a separate interface to manage screenshots (see the screenshots for Weblate as an example). You can assign every screenshot to multiple source strings, and you can also let Weblate automatically recognize text on the screenshots using OCR and suggest strings to assign. This can save you quite a lot of effort, especially with screenshots containing a lot of strings. This feature is still in an early phase, so the suggestions are not always 100% matching, but we're working to improve it further.

There will be some more features as well; you can look at our 2.12 milestone on GitHub to follow the progress.

Filed under: Debian English SUSE Weblate

Worse Than FailureError'd: Taking Things a Little Too Literally

"Web design pro-tip: If it takes a while to load data, just put an 'Animated loading spinner' on the screen," wrote Stuart L.


Jeremy W. writes, "Not what you'd expect to see on Microsoft's site and especially not what you want to see when trying to install an IIS extension."


"I think someone needs to fidget with their calculator instead," writes Al.


"Yeah, I logged into Credit Karma Tax's site around that time, but I'm pretty sure that it wasn't with the computer I had back in 1998," Shawn A. writes.


Jay C. wrote, "Apparently the special ingredient is HTML."


"When you change your logoff sound to 'Bohemian Rhapsody', you and your computer are going to wait a while to shut down," Geoff O. wrote.


"Gosh, I could play it safe and get the this drawer in the color 'small' to match my decor, or throw caution to the wind, live a little and order a 'medium' color instead!" writes Mike S.



CryptogramDuqu Malware Techniques Used by Cybercriminals

Duqu 2.0 is a really impressive piece of malware, related to Stuxnet and probably written by Israeli intelligence. One of its security features is that it stays resident in its host's memory without ever writing persistent files to the system's drives. Now, this same technique is being used by criminals:

Now, fileless malware is going mainstream, as financially motivated criminal hackers mimic their nation-sponsored counterparts. According to research Kaspersky Lab plans to publish Wednesday, networks belonging to at least 140 banks and other enterprises have been infected by malware that relies on the same in-memory design to remain nearly invisible. Because infections are so hard to spot, the actual number is likely much higher. Another trait that makes the infections hard to detect is the use of legitimate and widely used system administrative and security tools -- including PowerShell, Metasploit, and Mimikatz -- to inject the malware into computer memory.


The researchers first discovered the malware late last year, when a bank's security team found a copy of Meterpreter -- an in-memory component of Metasploit -- residing inside the physical memory of a Microsoft domain controller. After conducting a forensic analysis, the researchers found that the Meterpreter code was downloaded and injected into memory using PowerShell commands. The infected machine also used Microsoft's NETSH networking tool to transport data to attacker-controlled servers. To obtain the administrative privileges necessary to do these things, the attackers also relied on Mimikatz. To reduce the evidence left in logs or hard drives, the attackers stashed the PowerShell commands into the Windows registry.

BoingBoing post.

TEDA night to talk about redemption: TEDNYC Rebirth

Jon Ronson speaks at TEDNYC - Rebirth, February 15, 2017, New York, NY. Photo: Dian Lofton / TED

Journalist and documentary filmmaker Jon Ronson curated and hosted a night of talks about moving toward greater certainty and stable ground. (Photo: Dian Lofton / TED)

How do we make sense of the tumult around us? How can we grapple with the confusion and alarm so many of us are feeling? In a special session of talks curated and hosted by Jon Ronson at TED HQ on Wednesday night, six speakers looked not at the ruin that follows hardship but at the recovery. That’s why we called the session “Rebirth” — because it was a night to talk about redemption.

Whether it’s the crushing grief of losing a child, the manipulation of an electorate or the fear of public humiliation, each speaker has encountered trauma in one form or another. And as they shared their narratives, they offered useful mechanisms for getting a new purchase on reality.

First up was Mona Chalabi, data editor for Guardian US.

Mona Chalabi speaks at TEDNYC - Rebirth, February 15, 2017, New York, NY. (Photo: Jasmina Tomic / TED)

“We’re living in a world of alternative facts, where people don’t find statistics to be a common ground or a starting point for debate,” says data editor Mona Chalabi. “This is a problem.” (Photo: Jasmina Tomic / TED)

How to paint with numbers. In the current age of distrust and alternative facts, people have begun to question the reliability of data from even the most trusted institutions, like the Bureau of Labor Statistics. Once a source of common ground between individuals, government numbers now provide a starting point for contentious debate. There’s even a bill in Congress that argues against the collection of data related to racial inequality. Without this data, “how can we observe discrimination, let alone fix it?” asks Mona Chalabi. This isn’t just about discrimination: think about how much harder it would be to have a public debate about health care if we don’t have numbers on health and poverty. Or how hard it would be to legislate on immigration if we can’t agree on how many people are entering and leaving the country. In an illustrated talk full of her signature hand-drawn data visualizations, Chalabi offers advice on how to distinguish good numbers from bad ones. As she explains, if we give up on government numbers altogether, “we’ll be making public policy decisions in the dark, using nothing but private interests to guide us.”

A story of hope in the shadow of death. When writer/comedian Amy Green’s 12-month-old son was diagnosed with a rare brain tumor, she began to tell her children bedtime stories in order to teach them about cancer. What resulted was a video game called “That Dragon, Cancer,” in which a brave knight named Joel fights an evil dragon. In the game, the autobiographical story of Joel’s terminal illness, players discover that although they desperately want to win and want Joel to beat cancer, they never can. What do you value when you can’t win? In a beautiful talk about coping with loss, Green brings joy and play into tragedy. “We made a game that’s hard to play,” Green says. “People have to prepare themselves to invest emotionally in a story that they know will break their hearts, but when our hearts break they heal a little differently. My broken heart has healed with a new and deeper compassion.”

Emmy the Great speaks and performs at TEDNYC - Rebirth, February 15, 2017, New York, NY. Photo: Jasmina Tomic / TED

With speech and song, Emmy the Great shares her story about standing out, fitting in and finding her identity through music. (Photo: Jasmina Tomic / TED)

Where East meets West. Emmy the Great grew up wrestling the East and West within herself — the East of her Chinese mother, the city of Hong Kong where she was born, and the West of her English father, her British peers, and the UK, where she grew up. But her 30th birthday blessed her with a unique coming-of-age moment, and she finally decided to claim her intersectional identity. She plays two lulling songs on a quiet electric guitar, “Swimming Pool” and “Soho,” with lyrics that swing gently between English and Cantonese.

Finding certainty in an uncertain world. At a time when the world feels like it’s been turned upside down and the only constant is chaos, it’s easy to slip out of reality and question your sanity. This phenomenon has a name: gaslighting. It’s a tool of manipulation familiar to author Ariel Leve. Leve grew up in a Manhattan penthouse, the daughter of a glamorous poet and artist, surrounded by interesting and artistic people. Her mother’s raucous weeknight dinner parties were a mainstay of her childhood, as was a tendency for her mother to tell her that what she thought had happened hadn’t actually happened. Facts were routinely batted away, and Leve was sprayed with words of contempt, which her mother would invariably deny. “One of the most insidious things about gaslighting is the denial of reality, being denied what you have seen with your own eyes,” Leve says. “It can make you crazy. But you are not crazy.” Leve shares a few strategies, including remaining defiant, letting go of a wish for things to be different and writing things down, that helped her survive and validate her reality.

Megan Phelps-Roper speaks and performs at TEDNYC - Rebirth, February 15, 2017, New York, NY. (Photo: Jasmina Tomic / TED)

“Escalating disgust and intractable conflict are not what we want for ourselves, our country or our next generation,” says Megan Phelps-Roper. (Photo: Jasmina Tomic / TED)

End of the spiral of rage and blame. When she was five years old, Megan Phelps-Roper joined her family on the picket line for the first time, her tiny fists clutching a sign she couldn’t yet read: “Gays Are Worthy of Death.” As a member of Westboro Baptist Church, Phelps-Roper grew up trekking across the country with her family, from baseball games to military funerals, with neon protest signs in hand to tell others exactly how “unclean” they were, and why they were headed for damnation. In 2009, her zeal brought her to Twitter where, amid the digital brawl, she found a surprising thing: civil, sometimes even friendly conversation. Soon these conversations bled into the real world, as people she sparred with online came to visit and talk with her at protests. These conversations planted seeds of doubt, and in time she found that she could no longer justify Westboro’s actions — especially their cruel practice of protesting funerals and celebrating human tragedy. Phelps-Roper left Westboro in 2012, and after a period of turmoil she found herself letting go of the harsh judgments that instinctively ran through her mind. Now, she sees that same “us” vs. “them” impulse in our public discourse, where compromise of any kind has become anathema. “That isn’t who we want to be,” she says. “We can resist.” She offers four small, powerful steps to employ in difficult, disagreeable conversations: stop assuming ill motives in others, ask questions, stay calm in disagreement and make the case for your beliefs with generosity and compassion. “The end of the spiral of rage and blame begins with one person who refuses to give in to destructive, seductive impulses,” she says. “We just have to decide that it’s going to start with us.”

Planet DebianJoey Hess: Presenting at LibrePlanet 2017

I've gotten in the habit of going to the FSF's LibrePlanet conference in Boston. It's a very special conference, much wider ranging than a typical technology conference, solidly grounded in software freedom, and full of extraordinary people. (And the only conference I've ever taken my Mom to!)

After attending for four years, I finally thought it was time to perhaps speak at it.

Four keynote speakers will anchor the event. Kade Crockford, director of the Technology for Liberty program of the American Civil Liberties Union of Massachusetts, will kick things off on Saturday morning by sharing how technologists can enlist in the growing fight for civil liberties. On Saturday night, Free Software Foundation president Richard Stallman will present the  Free Software Awards and discuss pressing threats and important opportunities for software freedom.

Day two will begin with Cory Doctorow, science fiction author and special consultant to the Electronic Frontier Foundation, revealing how to eradicate all Digital Restrictions Management (DRM) in a decade. The conference will draw to a close with Sumana Harihareswara, leader, speaker, and advocate for free software and communities, giving a talk entitled "Lessons, Myths, and Lenses: What I Wish I'd Known in 1998."

That's not all. We'll hear about the GNU philosophy from Marianne Corvellec of the French free software organization April, Joey Hess will touch on encryption with a talk about backing up your GPG keys, and Denver Gingerich will update us on a crucial free software need: the mobile phone.

Others will look at ways to grow the free software movement: through cross-pollination with other activist movements, removal of barriers to free software use and contribution, and new ideas for free software as paid work.

-- Here's a sneak peek at LibrePlanet 2017: Register today!

I'll be giving some variant of the keysafe talk from Linux.Conf.Au. By the way, videos of my keysafe and propellor talks at Linux.Conf.Au are now available; see the talks page.

Planet DebianDirk Eddelbuettel: littler 0.3.2


The third release of littler as a CRAN package is now available, continuing the now more-than-ten-year history of a package started by Jeff in the summer of 2006 and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. It is, in my very biased eyes, better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).

This release brings several new example scripts to run package checks, use the extraordinary R Hub, download RStudio daily builds, and more -- see below for details. No internals were changed.

The NEWS file entry is below.

Changes in littler version 0.3.2 (2017-02-14)

  • Changes in examples

    • New scripts getRStudioServer.r and getRStudioDesktop.r to download RStudio daily builds; currently defaults to Ubuntu amd64

    • New script c4c.r calling rhub::check_for_cran()

    • New script rd2md.r to convert Rd to markdown.

    • New script build.r to create a source tarball.

    • The installGitHub.r script now uses the remotes package (PR #44, #46)

Courtesy of CRANberries, there is a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Google AdsenseDiscover the next generation of programmatic advertising technology

Forget what you think you know about programmatic advertising: these tools are game changers. For the ninth part of the #SuccessStack series, we’re revisiting the topic of programmatic advertising and looking at some of the latest technology that is starting to shape it.

1) Matched content provides a more intelligent user experience
Matched content units allow publishers to suggest further content that might interest the reader, based on what they have looked at already. This means that someone on a sports site reading an article about football can have further football content suggested to them. This can result in an increase in pageviews and time spent on the site.

AdSense users who are eligible to use matched content units for ads have an additional advantage, as they are also able to synchronize their ads by topic. This means that someone on a sports site reading an article about football can be shown native-styled ads most likely to appeal to football fans. This in turn can potentially increase ad engagement and quality clicks on the ads.

2) New programmatic deal structures redefine the space
Programmatic Direct is an innovative solution that bridges the gap between traditional ad sales and programmatic selling. It offers three flexible deal structures that bring the best of direct and programmatic together:
  1. Private Auctions: Negotiated minimum price. Invitation only auctions. Non-guaranteed volumes. 
  2. Preferred Deals: Fixed price. One-to-one deals. Non-guaranteed volumes.
  3. Guaranteed Deals: Fixed price. One-to-one deals. Guaranteed volumes. 
The technology combines the power of real-time bidding (RTB) infrastructure with access to brand-safe, reserved publisher inventory. This shortens the time it takes to book and execute high-quality, reservation-type deals.

Using this tool, you can lock in revenue through reservations, forecast against programmatic deals, and enjoy the ease of automated billing and collections. All that without the need to email tags, worry about creative controls, resolve discrepancies, or fax orders back and forth. Check out this story from Televisa and Unilever to see how it could work for your business.

3) Advanced inventory management tools allow greater flexibility
You no longer have to make a choice between selling your inventory programmatically or direct. Modern ad management tools like DoubleClick for Publishers make it very easy to do both at the same time, with less coding and manual work. They also give you a clear picture of your inventory availability in real time, so you are less likely to make conflicting bookings or leave inventory unsold.

See how German media company G&J doubled their programmatic revenue and increased their mobile revenue 10x with a smart implementation of DoubleClick for Publishers and DoubleClick AdExchange.

Or, if you’re already using DoubleClick for Publishers, watch this video from The Economist who integrated DoubleClick for Publishers with DoubleClick Bid Manager, surpassing all their revenue and growth goals for the campaign.

Next steps

We want to help you get the right tools in place so you’re set up for success and potential ad earnings growth. To check whether your setup suits your plans for your site/s, have a conversation with one of our experts. They can offer a personalized consultation to help you make smart choices for your business. Book a time to speak with an expert.

Worse Than FailureSoftware on the Rocks: Rolling for Dollars

Today, we present our second installment of Software on the Rocks, complete with new features, like an actually readable transcript done by a professional transcriber. Isn’t that amazing?

In today’s episode, Alex and Remy host a special guest, Justin Reese, founder of Code & Supply, one of the largest developer community organizations out there, with a nearly constant stream of events. They discuss what building a community is like, when it’s fair to really tear into bad code, and that time Alex made 10,000 people late for work.

This episode of Software on the Rocks is brought to you by Atalasoft.



Tune in in two weeks, when we’ll have Jane Bailey, one of our writers, to discuss working for the site and the perils of “Programmer Anarchy”. Follow future episodes here on the site, or subscribe to our podcast.


Alex: I guess this is another podcast we’re doing.

Remy: We are doing this again, and you know what? Not only are we doing this again, Alex, but we have brought a friend. This is Remy Porter, editor of The Daily WTF. We’ve got Alex.

Alex: Hello, everyone. Hello, Remy.

Remy: And we have with us Justin Reese. So, uh, Justin, why don’t you tell us a little bit about yourself?

Justin: Oh. Oh, no. Remy met me through Code & Supply, which is an organization that I started to kind of foster a strong software community here in Pittsburgh.

Alex: What sort of things do you do?

Justin: Well, so, Code & Supply started because there were lots of disparate meet-ups around the city and just holding events. And there were some really small ones that had great ideas and great people involved but they were really small and they were susceptible to one person failing to do something. And the whole community around a language would die in the city. So, I wanted to make a stronger organization, bring on sponsorship money, and pay to do really cool things, and it just kind of –

Alex: And so, this is all Pittsburgh-based, all Pittsburgh local. It sounds like a pretty interesting idea.

Justin: Yeah, along with something like 8 to 12 events every month, we held Abstractions this year. You know, it was a very, very large conference.

Alex: Oh, that was your conference?

Remy: That was his conference. I was an attendee. I just showed up. This was an amazing conference.

Justin: What’s really amazing is, like, we had a lot of people there. We’re really focused on creating connections between people, so we – The whole focus was building a bridge between, like, a design community, the ops community, the development community into one thing. But we ended up doing some really amazing things, like Larry Wall, the inventor of Perl, for the first time ever, meeting Joe Armstrong, the inventor of Erlang.

Alex: Now, Justin, is Abstractions a not-for-profit or – This whole Code & Supply thing, like, is this someone’s full-time job? I mean, it sounds almost hobby-like, but then, you know, a conference – that’s, like, gone beyond a hobby. You know, there’s some real risks you have to take on to do that.

Justin: That’s kind of the reason behind Code & Supply, though, is so that the risk is minimized a little bit. I signed contracts with venues and things that would have financially ruined me if something went wrong, but, you know, having Code & Supply as an organization at least protects me, personally, a little bit.

Alex: And just to be clear, when you say, “huge financial risks and liabilities,” you know, this is the thing with conferences that always amazes me, right, is that, you know, we’re talking, like, hundreds of thousands of dollars. And why take on all that liability for, effectively, a hobby?

Justin: Alex, you have a pretty good point. It is –You know, I mean, it is, to a point, fun, and I’d love to get to a point where I can make it my full-time job. It’s not quite there yet, but, you know, all the money we’ve made so far has been re-invested into our community. So, we took that Abstractions money that we made and we signed up for a really long lease for a community center so that we can have a place to hold events in perpetuity. And most of the time, I’m making decisions based on how to make the community more welcoming, ‘cause growth is kind of what makes Code & Supply awesome, is its incredible size to do really fantastic events. To grow, you got to be welcoming and get all the people involved.

Alex: Now, Justin, I’m actually in the midst of organizing my own conference here. Well, I shouldn’t say “my own.” It’s DevOpsDays Tokyo. And this is one of the same issues that we’re facing is this whole notion of being open and being welcoming. The things that I see a lot are the elitism.

Justin: Alex, I think that combating that feeling of elitism is kind of important. You know, we’re a polyglot community, and that means a lot of language, so people make choices for different reasons, and that “best tool for the job” mentality really goes a long way.

Remy: And so, Justin, what would you say to the – there’s this one kind of tribe of elitists I see. They’ll grab code samples from other people’s codebases, usually offered by a disgruntled co-worker, and then they’ll post these code samples on a website. They, like, have this “Code Sample of the Day,” and they do these very critical code reviews. I don’t know. I feel like these people could be part of the problem.

Justin: Remy, are you talking about something specific?

Remy: I’m talking about The Daily WTF.

Justin: Yeah, yeah, yeah, yeah. That’s –

Alex: Oh, hey. That’s –
[All laughing ]
That’s the podcast we’re doing today, isn’t it?

Remy: Yeah. Wow, yeah.

Alex: The pieces come together. Well, I guess –

Justin: I know it’s a podcast, but I’m pulling my collar nervously.

Alex: Well, well, um, there really is a difference between this hobbyist code – you know, the stuff you’re gonna go – now, obviously, it’ll be the stuff you post on GitHub or things like that, right? Code that you’re doing either for a hobby or trying to learn versus “This is your job and you are failing so bad at it that you should not be accepting money to do this.”

Justin: When you work at a company of a certain size, you kind of lose these excuses. I’ve seen shaming of open-source code, and that’s sad and never – never right. But I’ve seen production products from companies that are gigantic and have huge software teams on really simple ideas and core features to their product, and I have no problem calling that out, because if FedEx has a help link that leads to a 404 page, that’s a problem.

Remy: That’s actually a good segue into one of the things I wanted to bring up. This may come as a shock to everyone, but I am a bit of a giant nerd, and as a giant nerd, that does mean I play D&D-type campaigns, right, role-playing games of all the different stripes. And one of the groups I play with, we meet weekly online using a service called Roll20. Roll20 was a big Kickstarter success story. They wanted to build an online tabletop. They did a Kickstarter. They got umpty-ump millions of dollars to do this, and this is where, kind of, the wheels sort of come off, because it is an incredibly small team, from what I understand, and the team that they’ve built doesn’t really have the talent to do what they need to do. Just today, they actually did an announcement, and this is partially a piece of good news. They’ve enabled a new feature. They call it QuantumRoll.

Alex: Ooh, that sounds cool.

Remy: Ooh, isn’t that fancy? They now utilize a, quote, “true random” source of entropy. But it’s the thing that goes before that announcement that really got my attention, ‘cause we’re talking about certain things you don’t do and certain signs of bad software quality. And their announcement actually starts off, quote, “Rather than relying on client-side pseudo-random number generation,” client, in this case, being your web browser –

Alex: Oh, actually, that’s a nice system, because, now, what if you want to roll a natural 20?

Remy: That’s right.

Alex: What if you want to roll a 14? This is perfect. So, you know, and I think that’s a good differentiator. If this was just some hobby website where we can excuse it because it’s just, you know, their fun little side thing and who cares, right? This is literally their job. And, yeah, I think this is absolutely a case where you should call them out on it. Yeah, absolutely. That is – I don’t think, at all, it’s elitist to call that, you know, what it is – just crap software.
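[Editor's note: the underlying point is simple: if the client's browser generates the roll, the player controls the roll. The fix Roll20 describes is to generate rolls server-side from a stronger entropy source. Here is a minimal sketch of the server-side half in Python, using the standard library's `secrets` CSPRNG purely as an illustration — it is not what Roll20 actually runs.]

```python
import secrets

def roll_dice(sides: int = 20, count: int = 1) -> list[int]:
    """Roll `count` dice with `sides` faces on the server, using a
    cryptographically strong RNG, so the player's browser never
    generates (and thus can never forge) the result."""
    if sides < 1 or count < 1:
        raise ValueError("sides and count must be positive")
    # secrets.randbelow(sides) returns 0..sides-1 uniformly; shift to 1..sides.
    return [secrets.randbelow(sides) + 1 for _ in range(count)]

# Example: three d20 rolls, each uniformly distributed over 1..20.
print(roll_dice(20, 3))
```

The client would then only display the result the server sends back, rather than computing it locally with something like Math.random().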

Justin: The moment you start to take money for something is the moment that someone has the right to start complaining about that thing.

Alex: Now, Justin, that’s a great point. Now, just, I got to ask, though – garage-sale rule. When does that apply?

Justin: Hmm.

Remy: What’s the software equivalent of that, though? ‘Cause there’s no garage sale of software, except maybe Good Old Games, right?

Justin: I would say that open source is kind of the garage sale software.

Remy: Well, you could always go and, uh, start open-sourcing BuildMaster.

Alex: You know, so, that’s a great point, Remy. So, Inedo’s products – we have open-sourced all of our extensions. But our core product code is not open source, and there’s actually a very, very good reason for it that I don’t think is that obvious. When we started, there was no GitHub. There was no community portal. We have everything inside of this ancient, source-control tool called SourceGear Vault. SourceGear Vault, I believe, still today is marketed as a better replacement for SourceSafe.

Remy: That’s what I wanted – I was gonna ask that. I was like, “How does this compare to SourceSafe? ‘Cause it sounds like it’s similar.”

Alex: Yeah. Yeah. It is, but better, so…

Remy: Well, that’s not hard.

Alex: No, no. And to be fair, it really is a lot better than SourceSafe. That’s actually all I can say about it. But, you know, for what it was, it’s fine. But now just picture – We’ve got this giant legacy of code. We’re a small team. What week do we spend completely getting all of the code outside of SourceGear? It would be nice to do, but it just – Everything that we’re doing right now works, and there just hasn’t been enough of a demand to do it.

Justin: Uh, guys, I just Googled SourceSafe, so that shows you how much –

Remy: Oh, no! Oh, he’s so young, so naïve. Oh, my goodness.

Justin: The last stable release happened when I was in high school, so…

Remy: When I left PPG, my last job – I left them in 2014 – they had just, just gotten their last SourceSafes migrated into TFS.

Justin: You know what, Remy? Interesting fact about me – I am probably one of the few people that has been paid to do both COBOL and Ruby. So, you know, just talking about, you know, dead languages, some of these languages will never die.

Remy: COBOL will outlive everyone listening to this podcast, hands down.

Alex: That’s a really scary thought. But true. So, you might find this particularly interesting, Justin. So, currently, I’m in Japan, and COBOL is a surprising part of I.T. A lot of this not-wanting-to-change is endemic in their culture because the change has nothing to do with driving business requirements.

Remy: And that segues, really, into an interesting topic, because one of the things – There’s not just a cost of changing, right? There’s also a risk, because that change might fail.

Alex: Yeah, really, it comes down to an unknown amount of work. It’s a significant risk. And it would be awfully nice if there was an easier way for us to identify or sort of manage these sorts of risks.

Remy: We brought up, and the end of our last episode, that we wanted to talk about, you know, risk management, risk mitigation, and how that impacts working in I.T. in ways that I don’t think a lot of developers think about, and we had some homework to come up with some buzz words.

Justin: How about YoloOps?

Alex: Oh, my God. I love it.

Remy: Ooh.

Alex: Are we talking, like – Did you say YOLO or yellow?

Justin: Yeah, YOLO. YOLO.

Alex: Oh, my God. Those are actually both really good, because neither of them makes sense, kind of a lot like, you know, uh, DevOps, but I like it.

Remy: Yeah, ‘cause over –like, the first time you said it, I honestly thought you said YellowOps, and I actually was kind of excited about YellowOps, ‘cause, you know, you think about what does every manager want to see, right? They want that dashboard with a stop light on it that’s red, yellow, or green. Right? YellowOps, man. “We want to get out of the yellow and into the green. How do we do that? That’s what we need.”

Alex: Well, Justin, I think we may actually have a term now, and that could be YellowOps. Either “Yellow” or “YOLO.” Say it fast enough, doesn’t really matter. But I really like where you were going with it, Remy, of the red, yellow, green stop light. How could we work YOLO into Ops?

Justin: So, YoloOps – Hold on, hold on. YoloOps – There are three circles in YoloOps, and turn it sideways, boom.

Remy: Oh, my gosh. It’s the stoplight!

Alex: Whoa, whoa.

Remy: Oh, that is so good.

Justin: I just made a logo on a podcast.

Alex: And, you know, Remy, please tell me more about what YOLOPS or YoloOps is and how it can help my organization.

Remy: All right. So, you know how you’ve got an organization with a legacy project and they just can’t change it? Or when there’s a new process that you could employ, like, say, DevOps, and management just can’t get behind it? YoloOps understands that the reason these things happen isn’t because management is just hidebound. It isn’t because people are just lazy. It isn’t because people are afraid of new things. It’s because they’re trying to estimate how much risk they’re taking on and reacting based on that risk. Let’s say you’re working at a company with a craptacular process, right? It’s just awful. They’re changing code in production, and they got all of these problems. But they’ve been delivering software that way for the past five years. Well, folks, you shouldn’t be writing changes into production. You’re now proposing a change, and there is risk. There’s risk to making the change. There’s risk that it is the wrong change. And they have to now measure this against the process that they’re already doing, which, while it’s got its flaws, it works.

Alex: Okay. I’m starting to see this. I’m gonna rate that as a solid 6.5 out of 10 on the “It Made Me Excited About Learning About This Topic,” but what I feel is missing from the explanation is tying it back together to the absurd name.

Justin: So, Alex, I’m gonna cut straight to the point here. Um, you know, people really just drive fastest through the yellow lights anyway, so we’re really just trying to always live in the yellow lights. So, do you want your business to live on a red light, when everyone’s stopped, on a green light, when everyone’s just taking their time moseying around, or do you want to live on a yellow light, where everyone’s dashing through the intersection, trying not to die?

Alex: This is brilliant. No, I think YellowOps, YOLOPS, YoloOps – I still am not clear on what the term is, but you know what? As a manager, I generally don’t understand most of the terms you developers are throwing at me, so I like it. What I want to know is when is this gonna be the keynote speech at Abstractions, and how many developers do I send there to learn about this topic so we can implement it in our organization?

Justin: I mean, I still don’t really know what it is, but I am already selling consulting on it, so, um, I would say by the end of the month I will have a full conference on it, and you should send your entire team.
[ Laughter ]

Alex: You know, so it’s not quite traffic/driving related, but, you know, you might appreciate this thing. So, Remy, I don’t know if you’ve been to Japan. Justin, have you been to Japan before?

Justin: I have not.

Alex: Oh. Anyway, I strongly recommend it. Wonderful place. You know, everything you’ve seen about it, all the stereotypes, everything you’ve seen on TV – completely true. You know, I was out cycling the other day, and, you know, there’s, of course, railroads. You know railroad crossings where the little hand comes – not the hand. What the hell is that thing? A railroad crossing where the divider thing comes down and it’s like, “Oh, a train is coming.” I cycled up. The thing came down. “Oh, now it’s time to just take a quick break.” I lean my bike against the arm that’s sitting there. I notice a train is slowing down ahead of the intersection. Okay. Fine. “Oh, this train – Why is it slowing? I don’t get it.” So, then the train stops. Dude runs out of the train, in his uniform, right, apologizes to me, and then tells me to please take the bike off of the little arm, and then runs back to the train and starts it again. I – Mortified is not exactly a strong enough word. Traffic was lined up. I literally caused a train delay because I leaned my –

Remy: You made, like, 10,000 people late for work. I want to, first off, thank Justin for coming along with us on this little podcast adventure. Before we wrap up, you mentioned this community center.

Justin: Our events have gotten a bit too focused on lecture-style content over the past year, and I felt that there was a really strong need to make things a bit more social again, and what better way to do that than get space above a bar with 70 beers on tap? I’m really excited because it’s going to be something that we can really optimize for what we do. We can, you know, put a camera in the ceiling so we can have better live-streaming. We can keep programming books on hand, have lounges that are optimized for people showing off their code to some other people. You know, the permanency of it is just really its main advantage, and I’m really excited for being able to create this home. And, you know, during the day, we’re gonna actually rent out desk space to all the people that work remotely in the city and…

Alex: So, Justin, that sounds really interesting. So, basically, you know, if I had to sum up the Code & Supply space, right, it is like a co-working space, but primarily for coders, that happens to be above a really cool bar.

Justin: During the daytime.

Alex: During the daytime, of course.

Justin: During the daytime, yeah, yeah. But, you know, once it’s night, we’re gonna be this super awesome place for people to come learn for free with our community events.

Alex: Yeah, by and large, that sounds like a pretty interesting place. I mean, I’ve certainly seen the co-working spaces. I’ve certainly seen, you know, the companies that try to put on events here and there, but I really like the combination of the two. And, you know, based on all the stuff that you’re trying to do with getting this community built, I think it’s a step in the right direction.

Remy: I do want to give a shout-out to our sponsors – Atalasoft. Again, Atalasoft is sponsoring this podcast. They make an SDK for doing document management, document scanning, and things like that. You can go check them out. We’ve mentioned that this co-working space is above a bar. The bar in question is a Pittsburgh chain called the Sharp Edge. They specialize in Belgian beers. They have a huge draft list. We are not sponsored by Sharp Edge, but if they wanted to sponsor us by donating beer to Software on the Rocks, I would love that. I’m 100% in favor of that.

Justin: Or even just, you know, take a little bit off of my rent or something this month.

Remy: Well…

Alex: I would actually prefer the beer, to be perfectly honest.

Remy: Yeah.

Alex: Yeah, I think that would be a better system for all involved.

Automated Machine Transcription

Welcome to software on the rocks and a leader you to has brought you buy a possible.
So i guess what this is another part of gas we’re doing.
We are doing this again and you know that not only are we doing this again.
It’s but we have brought a friend this is room border at to the daily desert we’ve got alex and load every won’t.
So ready and we have with a just in reach so just in one it tells a little bit about yourself.
For me that to me through caused by which is an organization that i started to kind of foster strong software commanding here in pittsburgh he’ll sort of things to do.
So while kids by started because there were on onto.
The spread me no choice around the city and just holding the advanced there some really small ones that had great ideas and great people nepal.
But they are really small on it they were susceptible to one person failing to do some.
[BREATH] and then the whole commuting around a language but die in the cities i wanted to.
So stronger organization i’m in sponsorship me.
[UM] and paved to do really cool things and at risk.
So this is all hit surveys style expert local it sounds a pretty interesting idea.
You know along with hosting something like a the toilet months every month we have.
This options this share it was a very very large coffee and.
This was this so that was discovered this was as an attendee i know i just showed up this was an amazing calm rock and that.
And once what’s really amazing as we have a lot of people there but we are really folks.
They started creating connections between people and so the whole focus was getting.
For between one day desire to me the opposite community that development meaning to one thing but we ended up doing some really amazing things like.
Larry well the mineral for the first time ever meeting i’m showing the into her low.
Now just in his abstractions.
And Code & Supply is a not-for-profit, right? Because running a conference — I mean, how do you even take that on? It sounds almost impossible for a community group.

That's part of the reason behind Code & Supply: so that the risk is minimized a little bit. Somebody has to sign contracts with venues and things like that, and having an organization to do it at least protects me personally.

That's the thing about conferences that always amazes me: we're talking huge financial risks and liabilities — hundreds of thousands of dollars — and somebody is taking on all that liability for what is, effectively, a hobby.
You make a pretty good point, and I'd add two things. First, all the money we've made so far has been reinvested into our community — in fact, with the profits we made, we signed a lease on a space for our community, so that we have a place to hold events. Second, we don't make decisions based on how to make the conference more profitable. Because to grow these kinds of meetups, and to do really fantastic events, you've got to be welcoming to all kinds of people.
I'm actually in the midst of organizing my own conference here — well, I shouldn't say my own — it's DevOpsDays Tokyo. And one of the issues we're facing is this same notion of being open and welcoming versus the elitism you sometimes see. I think what combats that elitist feeling is being a polyglot community — one that spans a lot of languages. People make choices for different reasons, and that "best tool for the job" mentality really goes along with being welcoming.
So, Justin, what would you say about this? There's this one tribe of so-called leaders I see: they grab code samples from other people's code bases, usually offered up by a disgruntled coworker, and they post these code samples on a website. They have a code sample of the day, and they do these very critical code reviews. I don't know — I feel like these people could be part of the problem. I'm talking about The Daily WTF, of course.

Isn't that the podcast we're listening to?

Well, I guess this is that podcast.
You know, there really is a difference between the hobbyist stuff — the eighty-percent-done code you're writing for fun or to learn — versus this is your job and you're failing at it. The line for me is taking money: you should not be accepting money to do this badly. When you're working at a company of a certain size, you can't use those excuses.
I've seen shaming of open source code, and that should never, never happen. But I've also seen production products from companies that are gigantic, that have huge software teams, where the really simple core features of the product fall over — and when the core functionality that people pay for fails, that is a problem.
That actually segues into one of the things I wanted to bring up. This may shock everyone, but I am a bit of a giant nerd.

So I've heard. And what does being a giant nerd mean?

I play tabletop role-playing games of all different kinds, and one of the groups I play with meets weekly online using a tool called Roll20. Roll20 was a big Kickstarter success story: they wanted to build an online tabletop, they ran a Kickstarter, and they raised millions of dollars to do it. And this is where the wheels sort of come off, because it's an incredibly small team, from what I understand, and the team that they've built doesn't really have the talent to do what they need to do. Just today, they actually put out an announcement, and it's partly a piece of good news.
They shipped a new feature they call Quantum Roll: their dice can now use a true random source of entropy. But the part of the announcement that really got my attention — since we're talking about things you shouldn't do, and signs of bad software quality — is that it actually starts off, quote, "rather than relying on client-side pseudo-random number generation," client in this case being your web browser.

Oh, that's a nice system! What if you want to roll a natural twenty? What if you want to roll a fourteen? This is perfect.
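For what it's worth, the client-side versus true-entropy distinction they're describing can be sketched in a few lines of Python — `random.Random` here standing in for an in-browser PRNG and `secrets` for an OS entropy source. This is an illustration only, not Roll20's actual code:

```python
# Illustration only: Python's default generator plays the role of a
# client-side PRNG (deterministic given its seed), while `secrets`
# draws from the operating system's entropy pool.
import random
import secrets

# Seeded PRNG: anyone who knows (or controls) the seed knows every roll.
prng = random.Random(42)
rolls_a = [prng.randint(1, 20) for _ in range(5)]
prng.seed(42)
rolls_b = [prng.randint(1, 20) for _ in range(5)]
assert rolls_a == rolls_b  # identical sequence: fully predictable

# Entropy-backed roll: no seed to learn or replay.
d20 = secrets.randbelow(20) + 1
assert 1 <= d20 <= 20
```

That predictability is the whole joke: a dice roller whose randomness lives in the player's own browser is a dice roller the player can, in principle, steer.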
And I think that's the differentiator. If this were just some hobby website, you could excuse it — it's their fun little side thing, and who cares, right? But this is their shop, and yeah, I think this is absolutely a case where you should call them out.

Absolutely. I don't think it's at all elitist to call that out — you know what, it's just crap software.

The moment you start to take money for something is the moment that someone has the right to start complaining about it.
That's a great point — and it's not just sales, either. Although, what's the software equivalent of a garage sale? There's no garage sale of software — well, maybe Good Old Games.

You can always go open source and be the maintainer.

That's a great point. And speaking of that: with Inedo's products, we've open sourced all of our extensions, but the core product, BuildMaster, is not open source.
There's actually a very, very good reason for that, and I don't think it's obvious. When we started, there was no GitHub, there was no community portal; we had everything inside of this ancient source control tool called SourceGear Vault, which really, still today, is marketed as a better replacement for SourceSafe.

And here I am — when I first looked at it, I asked, "How does this compare to SourceSafe? Because it sounds like SourceSafe, but better." Well, that's not hard.

It really is a lot better than SourceSafe.

That's actually all we can say about it.

For what it was, it's fine. But the bigger picture is that we've got this giant legacy of code in it.
So we're a small team — what do we do? Do we spend months getting all the code out of there? It would be nice to do, but everything that we're doing right now works, and there just hasn't been enough demand to justify it.
OK, I just brought up SourceSafe to show how much —

You're so young! Oh my goodness. SourceSafe was really a thing when I was in high school.

My last SourceSafe migration was in 2014, and you'd be surprised — there are still lots of SourceSafe repositories out there being migrated.
Here's a worrying fun fact about me: I'm probably one of the few people who's been paid to write both COBOL and Ruby. You know, speaking of languages, some of these things just never die. COBOL will outlive everyone listening to this podcast, hands down.

That's a really scary thought.

It is. Enterprise IT is an interesting world, and COBOL is a surprisingly large part of it. In a lot of these shops, nothing ever changes — it's endemic in their culture, because the proposed change has nothing to do with driving business requirements.
That segues really nicely into an interesting topic, because there's not just a cost to change — there's also a risk, because the change might fail. It comes down to unknown unknowns. It's a significant risk, and it would be awfully nice if there were an easier way for us to identify and manage these sorts of risks. We brought up at the end of our last episode that we wanted to talk about risk management and risk mitigation, and how they impact working in IT in ways that I don't think a lot of developers think about. And we had some homework: Mark was to come up with some buzzwords.
"YellowOps." And yours was — "LOLOps."

Wait, why are we — did you just say "LOLOps"?

I had "LOLOps"; you had "YellowOps."

Those are actually both really good, because neither one makes any sense.
You won't believe it, but the first time you said it, I misheard you — you said "LOLOps" and I heard "YellowOps," and I got kind of excited about YellowOps. Because think about what a manager wants to see, right? They want that dashboard with a stoplight on it — red, yellow, or green. Yellow alarm! We want to get out of the yellow and into the green — how do we do that? That's what we need.
Wait — I think we may actually have something here.

It could be YellowOps: if you move fast enough, it doesn't really matter that the light is yellow. You just run the red-yellow-green stoplight — how would that even work?

So, on your LOLOps logo — there are three circles in your LOLOps logo, and it turns out —

Oh my gosh, it's a stoplight. I just made a logo!

I'm ready: please tell me more about what YellowOps is and how it can help my organization.
So, you know how you've got an organization with a legacy product, and they just can't change it? Or how there's a new process they could employ — like, say, DevOps — and management just can't get behind it? YellowOps understands that the reason these things happen isn't because management is stupid or bad. It isn't because people are lazy, and it isn't because people are afraid of new things. It's because they're trying to estimate how much risk they're taking on, and reacting based on that risk. Let's say you're working at a company with a crap-tastic release process. It's awful: people are changing code in production, and it's got all of these problems — but they've been delivering software that way for the past five years. Now you propose a change, and there is risk: there's risk in making the change, and there's risk that it's the wrong change. And they have to measure all of that against the process they're already following, which, while it's got its flaws, works OK for them.
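The calculus being described — weighing the risk of a change against a flawed-but-working status quo — can be sketched as a toy expected-cost comparison. Every number below is invented purely for illustration:

```python
def expected_cost(failure_probability: float, cost_of_failure: float,
                  baseline_cost: float) -> float:
    """Expected cost of a course of action: what it costs anyway,
    plus the chance of failure times what a failure would cost."""
    return baseline_cost + failure_probability * cost_of_failure

# Status quo: editing code in production "works OK", but incidents
# are frequent and expensive. (All numbers are made up.)
keep_current = expected_cost(failure_probability=0.25,
                             cost_of_failure=10_000,
                             baseline_cost=1_000)

# Proposed new process: failures become rarer, but adopting it has a
# real up-front cost -- and the adoption itself might go wrong.
adopt_change = expected_cost(failure_probability=0.05,
                             cost_of_failure=10_000,
                             baseline_cost=3_100)

# With these numbers the status quo is narrowly "rational".
assert keep_current < adopt_change
```

The point of the sketch is the transcript's point: "management just doesn't get it" is often really "management priced the risk differently," and with uncertain estimates the margin can be thin either way.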
I'm loving this. I'm going to rate that a solid six-point-five out of ten on the "it made me excited about learning about this topic" scale, but what I feel is missing from that excellent explanation is tying it back to the name itself.

Then let me drive straight to the point: people really just drive faster through the yellow lights anyway, so we want to always live in the yellow. You don't want your business living at a red light, where everyone is stopped, or at a green light, where everyone just takes their own sweet time moseying around. Put your business at a yellow light, where everyone races to actually get through the intersection. Try and tell me that isn't brilliant.
YellowOps! LOLOps! I'm still not clear what the term means, but as a manager, I generally don't understand most of the terms you developers present to me, and it excites me anyway. I like it. Who's going to give the keynote on this at Abstractions, and how many developers do I send there to learn about the topic so we can implement it in our organization?

I mean, I still don't really know what it is, but I'm already selling consulting on it. I'd say by the end of the month I will have a full conference on it, and you should send your entire team.
It's not quite traffic related, but you might enjoy this. I don't know if you've been to Japan — I recommend it, it's a wonderful place, and everything you've seen about it on TV is completely true. I was out cycling the other day, and of course there's a railroad crossing — a level crossing where the little divider arm comes down. So the arm is down, no train in sight, and I figure now is the time to just take a quick break, and I lean my bike against the arm that's sitting there. Then I notice this train rolling down the line toward the crossing — and slowing. I don't get it. Then the train stops, a man in uniform runs out of the train, apologizes to me, and asks me to please take the bike off of the little arm. Then he runs back to the train and starts it up again. I was mortified. Traffic was lined up, and I had literally caused a train delay.
As we wrap up, I want to first thank Justin for coming along with us on this little podcast adventure. Before we finish, tell us about this community center.

Our events have gotten too focused on lecture-style content over the past year, and we found that there was a really strong need to make things a bit more social again. And what better way to do that than a space of our own — above a bar with seventy beers on tap. I'm really excited, because it's going to be something that we can really optimize for what we do. We can have better live streaming, we can keep our library of books on hand, and we can have lounges that are optimized for people showing off their code to other people. That's a huge advantage, and I'm really excited about being able to create this. During the day, we're actually going to open it up as a co-working space for all the people who work remotely in the city.
Justin, that's interesting. So essentially it's the Code & Supply space: a co-working space, primarily for coders, that happens to be above a really cool bar.

Co-working in the daytime — but at night it's going to be this super welcoming place for people to come and learn, for free, from their own community.

That's an interesting combination. I've certainly seen co-working spaces, and I've seen companies that try to put on events here and there, but I really like the combination of the two. And based on all the stuff that you're trying to do with building this community, I think it's a step in the right direction.
Now I want to give a shout-out to our sponsor, Atalasoft. Again, Atalasoft is sponsoring this podcast; they make toolkits for document management, document scanning, and things like that. And as we mentioned, this co-working space is above a bar — the bar in question is a Pittsburgh chain called The Sharp Edge, which specializes in Belgian beers; they have a huge tap list. We are not sponsored by The Sharp Edge, but if they wanted to sponsor us by donating beer to Software on the Rocks...

I would love that. I'm one hundred percent in favor of that.

For you, Justin, maybe they could take a little bit off your rent or something.

We'd actually prefer beer. I think a beer-based barter system would work for all of us.
[inaudible]


Planet Linux AustraliaOpenSTEM: New Research on Our Little Cousins to the North!

Homo floresiensis

Last year, several research papers were published on the ongoing excavations and analysis of material from the island of Flores in Indonesia, where evidence of very small stature hominins was found in the cave of Liang Bua, in 2003. The initial dating placed these little people at between 50,000 and about 14,000 years ago, which would have meant that they lived side-by-side with anatomically modern humans in Indonesia, in the late Ice Age. The hominins, dubbed Homo floresiensis, after the island on which they were found, stood about 1m tall – smaller than any group of modern humans known. Their tiny size included a tiny brain – more in the range of 4-million-year-old Australopithecus than anything else. However, critical areas of higher order thinking in their brains were on par with modern humans.

Baffled by the seeming wealth of contradictions these little people raised, researchers returned to the island, and the cave of Liang Bua, determined to check all of their findings in even more detail. Last year, they reported that they had in fact made some mistakes the first time around. Very, very subtle changes in the sediments of the deposits revealed that the Homo floresiensis bones belonged to some remnant older deposits, which had been eroded away in other parts of the cave, and replaced by much younger layers. Despite the samples for dating having been taken from close to the hominin bones, as luck would have it, they were all in the younger deposits! New dates, run on the actual sediments containing the bones, gave ages of between 190,000 and 60,000 years. Dates from close to the stone tools found with the hominins gave dates down to 50,000 years ago, but no later.

Liang Bua. Image by Rosino

The researchers – demonstrating a high level of ethics and absolutely correct scientific procedure – published the amended stratigraphy and dates, showing how the errors had occurred. At another site, Mata Menge, they had also found some ancestral hominins – very similar in body type to the ones from Liang Bua, dated to 700,000 years ago. Palaeoanthropologists were able to find similarities linking these hominins to the early Homo erectus found on Java and dated to about 1.2 million years ago, leading researchers to suggest that Homo floresiensis was a parallel evolution to modern humans, out of early Homo erectus in Indonesia, making them a fairly distant cousin on the grand family tree.

Careful examination of the deposits has now also called into question whether Homo floresiensis could control fire. We know that they made stone tools – of a type pretty much unchanged over more than 600,000 years, and they used these tools to help them hunt Stegodon – an Ice Age dwarf elephant, which was as small as 1.5m at the shoulder. However, researchers now think that evidence of controlled fire is only in layers associated with modern humans. It is this cross-over between Homo floresiensis and modern humans, arriving about 60,000 – 50,000 years ago, that is a focus of current research – including that of teams working there now. At the moment, it looks as if Homo floresiensis disappears at about the same time that modern humans arrive, which sadly, is a not totally unlikely pattern.

Stegodon. Image by I, Vjdchauhan.

What does this have to do with Australia? Well, it’s always interesting to get information about our immediate neighbours and their history (and prehistory). But beyond that – we know that the ancestors of Aboriginal people (modern humans) were in Australia by about 60,000 – 50,000 years ago, so understanding how they arrived is part of understanding our own story. For more case studies on interesting topics in archaeology and palaeontology see our Archaeology Textbook resources for Year 11 students.

Planet DebianCraig Sanders: New D&D Cantrip

Name: Alternative Fact
Level: 0
School: EN
Time: 1 action
Range: global, contagious
Components: V, S, M (one racial, cultural or religious minority to blame)
Duration: Permanent (irrevocable)
Classes: Cleric, (Grand) Wizard, Con-man Politician

The caster can tell any lie, no matter how absurd or outrageous (in fact, the more outrageous the better), and anyone hearing it (or hearing about it later) with an INT of 10 or less will believe it instantly, with no saving throw. They will defend their new belief to the death – theirs or yours.

This belief can not be disbelieved, nor can it be defeated by any form of education, logic, evidence, or reason. It is completely incurable. Dispel Magic does not work against it, and Remove Curse is also ineffectual.

New D&D Cantrip is a post from: Errata

TEDA personal memory of Hans Rosling, from TED’s founding director of video

Hans Rosling. Credit: TED / James Duncan Davidson


I was there when Hans Rosling first shook the room at TED, and transformed tiresome medical statistics into an action-packed, live performance about real people’s lives on the line.

He’s since been namechecked by Bill Gates. And he outlasted Fidel Castro – twice. Not merely mortally. In an interview on the TED Blog, Hans recounts an all-night argument with the Cuban dictator that upended the country’s healthcare system.

At our second encounter, shortly after the launch of TED Talks, Hans pulled up a chair and sat down by my side to ask about the instant replay I’d inserted into his presentation. He listened to my answer attentively, then shared his greatest secret as a speaker, a secret he had refined over years of teaching with a sportscaster’s exuberance: “I must strike a delicate balance,” he explained, “Too much data, and I become boring. But too much humor, and I am a clown.” He drew diagrams.

His ideas spilled forth with lucidity, seemingly effortlessly, because he loved what he did, and he worked with the people he loved. He couldn’t wait to share his latest revelations with everybody at every opportunity: He evaded bribery in some of the more corrupt corners of the world by showing off printouts of his data, page by page, until local interlocutors would release him, either out of inspiration or sheer exhaustion — but never confusion.

Years after we first met, Hans showed up at my door one night with a toothbrush and a laptop full of data visualizations, announcing himself as my roommate, staying up all hours to work out the particulars of his latest presentation. And I told him that my favorite part of any of his TED Talks, the bit that gives me chills to this day, was something that could easily go unnoticed, from that very first speech comparing the developed world to the developing world, when, about four minutes in, he leans in to take us on our first journey through time, and he says, “Let’s see,” to an unsuspecting audience, “WE START THE WORLD.”
— Jason Wishnow

TED's founding director of film + video, Jason Wishnow, gives Hans Rosling some presentation tips backstage at TED in Long Beach. Image courtesy M ss ng P eces and Jason Wishnow



Planet Linux AustraliaChris Neugebauer: Two Weeks’ Notice

Last week, a rather heavy document envelope showed up in the mail.

Inside I found a heavy buff-coloured envelope, along with my passport — now containing a sticker featuring an impressive collection of words, numbers, and imagery of landmarks from the United States of America. I’m reliably informed that sticker is the valid US visa that I’ve spent the last few months applying for.

Having that visa issued has unblocked a fairly important step in my path to moving in with Josh (as well as eventually getting married, but that’s another story). I’m very very excited about making the move, though very sad to be leaving the city I’ve grown up in and come to love, for the time being.

Unrelatedly, I happened to have a trip planned to Montréal to attend ConFoo in March. Since I’ll already be in the area, I’m using that trip as my opportunity to move.

My last day in Hobart will be Thursday 2 March. Following that, I’ll be spending a day in Abu Dhabi (yes, there is a good reason for this), followed by a week in Montréal for ConFoo.

After that, I’ll be moving in with Josh in Petaluma, California on Saturday 11 March.

But until then, I definitely want to enjoy what remaining time I have in Hobart, and catch up with many many people.

Over the next two weeks I’ll be:

  • Attending, and presenting a talk at WD42 — my talk will be one of my pieces for ConFoo, and is entirely new material. Get along!
  • Having a farewell do, *probably* on Tuesday 28 February (but that’s not confirmed yet). I’ll post details about where and when that’ll be in the near future (once I’ve made plans)
  • Madly packing and making sure that I use up as close to 100% of my luggage allowance as possible

If you want to find some time to catch up over the next couple of weeks, before I disappear for quite some time, do let me know.


Harald WelteCellular re-broadcast over satellite

I've recently attended a seminar that (among other topics) also covered RF interference hunting. The speaker was talking about various real-world cases of RF interference and illustrating them in detail.

Of course everyone who has any interest in RF or cellular will know about fundamental issues of radio frequency interference. For the most part, you have

  • cells of the same operator interfering with each other due to too frequent frequency re-use, adjacent channel interference, etc.
  • cells of different operators interfering with each other due to intermodulation products and the like
  • cells interfering with cable TV, terrestrial TV
  • DECT interfering with cells
  • cells or microwave links interfering with SAT-TV reception
  • all types of general EMC problems

But what the speaker of this seminar covered was actually a cellular base-station being re-broadcast all over Europe via a commercial satellite (!).

It is a well-known fact that most satellites in the sky are basically just "bent pipes", i.e. they consist of a RF receiver on one frequency, a mixer to shift the frequency, and a power amplifier. So basically whatever is sent up on one frequency to the satellite gets re-transmitted back down to earth on another frequency. This is abused by "satellite hijacking" or "transponder hijacking" and has been covered for decades in various publications.
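The bent-pipe behavior is plain frequency translation: the transponder re-emits whatever it hears, shifted by a fixed amount, so a leaked terrestrial signal comes back down shifted along with everything else. A minimal sketch (the 2,300 MHz figure is a typical Ku-band translation; the uplink value is illustrative):

```python
def downlink_freq(uplink_hz: float, translation_hz: float) -> float:
    """A bent-pipe transponder re-emits whatever it receives, shifted
    down by its fixed translation (local oscillator) frequency."""
    return uplink_hz - translation_hz

# Illustrative Ku-band numbers: 14.25 GHz up, 2.3 GHz translation.
up = 14.25e9
down = downlink_freq(up, 2.3e9)
assert down == 11.95e9  # a signal leaked into the uplink reappears here
```

There is no demodulation step anywhere in that pipe, which is exactly why a stray GSM carrier mixed into the uplink survives the trip intact.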

Ok, but how does cellular relate to this? Well, apparently some people are running VSAT terminals (bi-directional satellite terminals) with improperly shielded or broken cables/connectors. In that case, the RF emitted from a nearby cellular base station leaks into that cable, and will get amplified + up-converted by the block up-converter of that VSAT terminal.

The bent-pipe satellite subsequently picks this signal up and re-transmits it all over its coverage area!

I've tried to find some public documents about this, and there's surprisingly little public information about this phenomenon.

However, I could find a slide set from SES, presented at a Satellite Interference Reduction Group: Identifying Rebroadcast (GSM)

It describes a surprisingly manual and low-tech approach at hunting down the source of the interference by using an old nokia net-monitor phone to display the MCC/MNC/LAC/CID of the cell. Even in 2011 there were already open source projects such as airprobe that could have done the job based on sampled IF data. And I'm not even starting to consider proprietary tools.

It should be relatively simple to have a SDR that you can tune to a given satellite transponder, and which then would look for any GSM/UMTS/LTE carrier within its spectrum and dump their identities in a fully automatic way.
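A toy version of that automatic scan might start with a plain energy detector over a captured slice of transponder spectrum. A real implementation would go on to hunt FCCH bursts and decode the BCCH to recover MCC/MNC/LAC/CID, as tools like airprobe did; this sketch (assuming numpy and complex baseband samples from the SDR, here synthesized) only flags where carriers sit:

```python
import numpy as np

def find_carriers(samples: np.ndarray, sample_rate: float,
                  threshold_db: float = 10.0) -> list[float]:
    """Crude energy detector: return the frequency offsets (Hz) whose
    power sits `threshold_db` above the median noise floor."""
    windowed = samples * np.hanning(len(samples))  # tame spectral leakage
    spectrum = np.fft.fftshift(np.fft.fft(windowed))
    power_db = 10 * np.log10(np.abs(spectrum) ** 2 + 1e-12)
    floor = np.median(power_db)
    freqs = np.fft.fftshift(np.fft.fftfreq(len(samples), d=1 / sample_rate))
    return [float(f) for f, p in zip(freqs, power_db)
            if p > floor + threshold_db]

# Self-test on synthetic data: one carrier at +50 kHz buried in noise.
rng = np.random.default_rng(0)
n, fs = 4096, 1_000_000
t = np.arange(n) / fs
samples = np.exp(2j * np.pi * 50_000 * t) + 0.01 * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n))
hits = find_carriers(samples, fs)
assert any(abs(h - 50_000) < 1_000 for h in hits)
```

Pointing something like this at each transponder and handing candidate offsets to a GSM decoder would get most of the way to the fully automatic identity dump described above.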

But then, maybe it really doesn't happen often enough after all to justify such a development...

Planet Linux AustraliaDavid Rowe: FreeDV 700C

Over the past month the FreeDV 700C mode has been developed, integrated into the FreeDV GUI program version 1.2, and tested. Windows versions (64 and 32 bit) of this program are available for download. Thanks, Richard Shaw, for all your hard work on the release and installers.

FreeDV 700C uses the Codec 2 700C vocoder with the COHPSK modem. Some early results:

  • The US test team report 700C contacts over 2500km at SNRs down to -2dB, in conditions where SSB cannot be heard.
  • My own experience: the 700C speech quality is not quite as good as FreeDV 1600, but usable for conversation. That’s OK – it’s very early days for the 700C codec, and hey, it’s half the bit rate of 1600. I’m actually quite excited that 700C can be used conversationally at this early stage! I experienced a low SNR channel where FreeDV 700C didn’t work but SSB did, however 700C certainly works at much lower SNRs than 1600.
  • Some testers in Europe report 700C falling over at relatively high SNRs (e.g. 8dB). I also experienced this on a 1500km contact. Suspect this is a bug or corner case we can fix, especially in light of the US teams results.

Tony, K2MO, has put together this fine video demonstrating the various FreeDV modes over a simulated HF channel:

It’s early days for 700C, and there are mixed reports. However it’s looking promising. My next steps are to further explore the real world operation of FreeDV 700C, and work on improving the low SNR performance further.

TEDTED and Star India greenlight “TED Talks India: Nayi Soch” TV series, hosted by Shah Rukh Khan

Today we confirmed some exciting news about TED’s most ambitious television project yet: a major network series in India hosted by megawatt Bollywood film star Shah Rukh Khan.

The program will air on Star India, one of India’s largest media conglomerates and our partner in production. TED Talks India: Nayi Soch, which translates to “new thinking,” marks the first time TED is collaborating with a major network to produce a TV series featuring original TED Talks in a language other than English—Hindi.

“It’s incredibly exciting to be bringing TED to India in this form,” TED curator Chris Anderson tells us. “The country is teeming with imagination and innovation, and we believe this series will tap into that spirit and bring insight and inspiration to many new minds.”

“The sheer size of Star TV’s audience, with more than 650 million viewers, makes this a significant milestone in TED’s ongoing effort to bring big ideas to curious minds,” added Juliet Blake, head of TV at TED and executive producer of the series. “Global television is opening up a new frontier for TED.”

Shah Rukh Khan says the show is a concept he “connected with instantly, as I believe that the media is perhaps the single most powerful vehicle to inspire change. I am looking forward to working with TED and Star India, and truly hope that together, we are able to inspire young minds across India and the world.”

More on this unique initiative will be announced at TED2017 in Vancouver and in the coming months. Stay tuned!

TEDWatch: Yuval Harari in conversation with TED’s Chris Anderson

In this one-hour video from TED headquarters, curator Chris Anderson sits down with historian and social critic Yuval Noah Harari to talk about this extraordinary moment in time — when nationalism is pitted against globalism, while the world is facing a job-loss crisis most of us are not even expecting. Rewatch this Facebook Live video:

Krebs on SecurityWho Ran

Late last month, multiple news outlets reported that unspecified law enforcement officials had seized the servers for, perhaps the largest online collection of usernames and passwords leaked or stolen in some of the worst data breaches — including billions of credentials for accounts at top sites like LinkedIn and Myspace.

In a development that could turn out to be deeply ironic, it seems that the real-life identity of LeakedSource’s principal owner may have been exposed by many of the same stolen databases he’s been peddling.

The now-defunct LeakedSource service.

LeakedSource in October 2015 began selling access to passwords stolen in high-profile breaches. Enter any email address on the site’s search page and it would tell you if it had a password corresponding to that address. However, users had to select a payment plan before viewing any passwords.

LeakedSource was a curiosity to many, and for some journalists a potential source of news about new breaches. But unlike services such as BreachAlarm and — which force users to verify that they can access a given account or inbox before the site displays whether it has found a password associated with the account in question — LeakedSource did nothing to validate users. This fact, critics charged, showed that the proprietors of LeakedSource were purely interested in making money and helping others pillage accounts.

I also was curious about LeakedSource, but for a different reason. I wanted to chase down something I’d heard from multiple sources: That one of the administrators of LeakedSource also was the admin of abusewith[dot]us, a site unabashedly dedicated to helping people hack email and online gaming accounts.

Abusewith[dot]us began in September 2013 as a forum for learning and teaching how to hack accounts at Runescape, a massively multiplayer online role-playing game (MMORPG) set in a medieval fantasy realm where players battle for kingdoms and riches.

The currency with which Runescape players buy and sell weapons, potions and other in-game items are virtual gold coins, and many of Abusewith[dot]us’s early members traded in a handful of commodities: Phishing kits and exploits that could be used to steal Runescape usernames and passwords from fellow players; virtual gold plundered from hacked accounts; and databases from hacked forums and Web sites related to Runescape and other online games.

The administrator of Abusewith[dot]us is a hacker who uses the nickname “Xerx3s.” The avatar attached to Xerx3s’s account suggests the name is taken from Xerxes the Great, a Persian king who lived during the fifth century BC.

Xerx3s the hacker appears to be especially good at breaking into discussion forums and accounts dedicated to Runescape and online gaming. Xerx3s also is a major seller of Runescape gold — often sold to other players at steep discounts and presumably harvested from hacked accounts.

Xerx3s’s administrator account profile at Abusewith[dot]us.

I didn’t start looking into who might be responsible for LeakedSource until July 2016, when I sought an interview by reaching out to the email address listed on the site. Soon after, I received a Jabber chat invite.

The entirety of that brief interview is archived here. I wanted to know whether the proprietors of the service believed they were doing anything wrong (we’ll explore more about the legal aspects of LeakedSource’s offerings later in this piece). I also wanted to learn whether the rumors of LeakedSource arising out of Abusewith[dot]us were true.

“After many of the big breaches of 2015, we noticed a common public trend…’Where can I search it to see if I was affected?’,” wrote the anonymous person hiding behind the account. “And thus, the idea was born to fill that need, not rising out of anything. We are however going to terminate the interview as it does seem to be more of a witch hunt instead of journalism. Thank you for your time.”

Nearly two weeks after that chat with the LeakedSource administrator, I got a note from a source who keeps fairly close tabs on the major players in the English-speaking cybercrime underground. My source told me he’d recently chatted with Xerx3s using a Jabber address Xerx3s had long used prior to the creation of LeakedSource.

Xerx3s told my source in great detail about my conversation with the LeakedSource administrator, suggesting that either Xerx3s was the same person I spoke with in my brief interview with LeakedSource, or that the LeakedSource admin had shared a transcript of our chat with Xerx3s.

Although his username on Abusewith[dot]us was Xerx3s, many of Xerx3s’s closest associates on the forum referred to him as “Wade” in their forum postings. This is in reference to a pseudonym Xerx3s frequently used, “Jeremy Wade.”

An associate of Xerx3s tells another abusewith[dot]us user that Xerx3s is the owner of LeakedSource. That comment was later deleted from the discussion thread pictured here.

The Jeremy Wade identity also used a pseudonymous email address that, according to a “reverse WHOIS” record search ordered through Domaintools, is tied to two domain names registered in 2015: abusing[dot]rs and cyberpay[dot]info. The original registration records for each site included the name “Secure Gaming LLC.” [Full disclosure: Domaintools is an advertiser on this blog].

The “Jeremy Wade” pseudonym shows up in a number of hacked forum databases that were posted to both Abusewith[dot]us and LeakedSource, including several other sites related to hacking and password abuse.

For example, the user database stolen and leaked from the DDoS-for-hire service “panic-stresser[dot]xyz” shows that a PayPal account paid $5 to cover a subscription for a user named “jeremywade.” The leaked Panicstresser database shows that the jeremywade account was created in July 2012.

The leaked Panicstresser database also showed that the first login for that jeremywade account came from a dynamic Internet address assigned to residential customers of Comcast Communications in Michigan.

According to a large number of forum postings, it appears that whoever used that address also created several variations on it.

That Gmail account was used to register at least four domain names in 2011. Two of those domains were originally registered to a “Nick Davros” at 3757 Dunes Parkway, Muskegon, Mich.; the other two were registered to a Nick or Alex Davros at 868 W. Hile Rd., Muskegon, Mich. All four domain registration records included the phone number +12313430295.

I ran an Internet search on the address that the leaked Panicstresser database said was tied to the jeremywade account. The address turned up in yet another compromised hacker forum database — this time in the leaked user database for sinister[dot]ly, ironically another site where users frequently post databases plundered from other sites and forums.

The leaked sinister[dot]ly forum database shows that a user by the name of “Jwade,” who registered under the email address trpkisaiah@gmail.com, first logged into the forum from the same Comcast Internet address tied to the jeremywade account at Panicstresser.

I also checked that Michigan Comcast address with Farsight Security, a security firm that runs a paid service tracking the historic linkages between Internet addresses and domain names. Farsight reported that between 2012 and 2014, the Internet address was tied to a popular “dynamic IP” service. Dynamic IP services are usually free services that allow users to have Web sites hosted on servers that frequently change their Internet addresses. This type of service is useful for people who want to host a Web site on a home-based Internet address that may change from time to time, because a dynamic IP service can be used to easily map the domain name to the user’s new Internet address whenever it happens to change.

Unfortunately, these dynamic IP providers are extremely popular in the attacker community, because they allow bad guys to keep their malware and scam sites up even when researchers manage to track the attacking IP address and convince the ISP responsible for that address to disconnect the malefactor. In such cases, dynamic IP services allow the owner of the attacking domain to simply re-route the attack site to another Internet address that he controls.

Farsight reports that the address maps back to three different dynamic IP domains. One of those dynamic addresses was included among several hundred others in a list published by the Federal Bureau of Investigation as tied to the distribution of Blackshades, a popular malware strain that was used as a password-stealing trojan by hundreds of paying customers prior to May 2014.


In January 2017, when news of the alleged raid on LeakedSource began circulating in the media, I began going through my notes and emails searching for key accounts known to be tied to Xerx3s and the administrator of Abusewith[dot]us.

Somehow, in the previous three months I’d managed to overlook an anonymous message I received in mid-September from a reader who claimed to have hacked one of the several email addresses my research suggested were tied to Xerx3s.

The anonymous source didn’t say exactly how he hacked this account, but judging from the passwords tied to Xerx3s’s other known accounts that were included in the various forum database leaks listed above, it may well have been because Xerx3s in some cases re-used the same password across multiple accounts.

My anonymous source shared almost a dozen screenshots of his access to the account, which indicate the name attached to it was “Alex Davros.” The screenshots also show this user received thousands of dollars in PayPal payments over a fairly short period in 2015.

The screenshots also showed that the hacked account was tied to a PayPal account assigned to a Secured Gaming LLC. Recall that this is essentially the same company name included in the domain registration records mentioned earlier.

A screenshot shared with me in Sept. 2016 by an anonymous source who said he’d hacked one of the Gmail accounts tied to Xerx3s.

In addition, the screenshot above and others shared by my source indicate that the same PayPal account was habitually used to pay a monthly bill from a company that provides DDoS protection and hosting, and which has long been the provider used by Abusewith[dot]us.

Finally, the anonymous hacker shared screenshots suggesting he had also hacked into an email account apparently connected to a young lady in Michigan named Desi Parker. The screenshots for Ms. Parker suggest her hacked Gmail account was tied to an Apple iTunes account billed to a MasterCard ending in 7055 and issued to an Alexander Davros at 868 W. Hile, Muskegon, Mich.

The screenshots show the address is associated with an Instagram account for a woman by the same name from Muskegon, Mich. (note that the address given in the WHOIS records for Alex Davros’s domains also was in Muskegon, Mich.).

Desi Parker’s Instagram lists her “spouse” as Alex Davros, and says her phone number is 231-343-0295. Recall that this is the same phone number included in the Alex Davros domain registration records mentioned above. That phone number is currently not in service.

Desi Parker’s Facebook account indeed says she is currently in a relationship with Alexander Marcus Davros, and the page links to this Facebook account for Alex Davros.

Alex’s Facebook profile is fairly sparse (at least the public version of it), but there is a singular notation in his entire profile that stands out: Beneath the “Other Names” heading under the “Details about Alexander” tab, Alex lists “TheKing.” Parker’s Instagram account includes a photo of an illustration she made including her beau’s first name with a crown on top.

Interestingly, two email addresses connected to domains associated with the Jeremy Wade alias are tied to Facebook accounts for Michigan residents who both list Alex Davros among their Facebook friends.

Below is a rough mind map I created which attempts to show the connections between the various aliases, email addresses, phone numbers and Internet addresses mentioned above. At a minimum, they strongly indicate that Xerx3s is indeed an administrator of LeakedSource.

I managed to reach Davros through Twitter, and asked him to follow me so that we could exchange direct messages. Within maybe 60 seconds of my sending that tweet, Davros followed me on Twitter and politely requested via direct message that I remove my public Twitter messages asking him to follow me.

After I did as requested, Davros’s only response initially was, “Wow, impressive but I can honestly tell you I am not behind the service.” However, when pressed to be more specific, he admitted to being Xerx3s but claimed he had no involvement in LeakedSource.

“I am xer yes but LS no,” Davros said. He stopped answering my questions after that, saying he was busy “doing a couple things IRL.” IRL is Internet slang for “in real life.” Presumably these other things he was doing while I was firing off more questions had nothing to do with activities like deleting profiles or contacting an attorney.

Even if Davros is telling the truth, the preponderance of clues here and the myriad connections between them suggest that he at least has close ties to some of those who are involved in running LeakedSource.

A "mind map" I created to illustrate the apparent relationships between various addresses and pseudonyms referenced in this story.

A “mind map” I created to illustrate the apparent relationships between various addresses and pseudonyms referenced in this story.


On the surface, the rationale that LeakedSource’s proprietors have used to justify their service may seem somewhat reasonable: the service merely catalogs information that has already been stolen from companies and leaked in some form online.

But legal experts I spoke with saw things differently, saying LeakedSource’s owners could face criminal charges if prosecutors could show LeakedSource intended for the passwords that are for sale on the site to be used in the furtherance of a crime.

Orin Kerr, director of the Cybersecurity Law Initiative at The George Washington University, said trafficking in passwords is clearly a crime under the Computer Fraud and Abuse Act (CFAA).

Specifically, Section A6 of the CFAA makes it a crime to “knowingly and with intent to defraud traffic in any password or similar information through which a computer may be accessed without authorization, if…such trafficking affects interstate or foreign commerce.”

“CFAA quite clearly punishes password trafficking,” Kerr said. “The statute says the [accused] must be trafficking in passwords knowingly and with intent to defraud, or trying to further unauthorized access.”

Judith Germano, a senior fellow at the Center on Law and Security at New York University’s School of Law, said LeakedSource might have a veneer of legitimacy if it made an effort to check whether users already have access to the accounts for which they’re seeking passwords.

“If they’re not properly verifying that when the user goes to the site to get passwords then I think that’s where their mask of credibility falls,” Germano said.

LeakedSource may also be culpable because at one point the site offered to crack hashed or encrypted passwords for a fee. In addition, it seems clear that the people who ran the service advocated the use of stolen passwords for financial gain.

Planet DebianAntoine Beaupré: A look at password managers

As we noted in an earlier article, passwords are a liability and we'd prefer to get rid of them, but the current reality is that we do use a plethora of passwords in our daily lives. This problem is especially acute for technology professionals, particularly system administrators, who have to manage a lot of different machines. But it also affects regular users who still use a large number of passwords, from their online bank to their favorite social-networking site. Despite the remarkable memory capacity of the human brain, humans are actually terrible at recalling even short sets of arbitrary characters with the precision needed for passwords.

Therefore humans reuse passwords, make them trivial or guessable, write them down on little paper notes and stick them on their screens, or just reset them by email every time. Our memory is undeniably failing us and we need help, which is where password managers come in. Password managers allow users to store an arbitrary number of passwords and just remember a single password to unlock them all.

But there is a large variety of password managers out there, so which one should we be using? At my previous job, an inventory was done of about 40 different free-software password managers in different stages of development and of varying quality. So, obviously, this article will not be exhaustive, but instead focus on a smaller set of some well-known options that may be interesting to readers.

KeePass: the popular alternative

The most commonly used password-manager design pattern is to store passwords in a file that is encrypted and password-protected. The most popular free-software password manager of this kind is probably KeePass.
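
As a toy illustration of that pattern (and emphatically not a replacement for a real password manager), the core idea can be sketched with the openssl command-line tool. This assumes OpenSSL 1.1.1 or later for the -pbkdf2 option; the file names and master password are made up for the example:

```shell
# Sketch of the "encrypted, password-protected file" pattern.
# Real managers add integrity checks, memory hygiene, and tunable
# key-derivation costs; do not use this for actual secrets.
printf 'lwn.net: hunter2\n' > store.txt

# "Lock" the store: derive a key from the master password with
# PBKDF2, then encrypt the file with AES-256.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:master-password -in store.txt -out store.enc
rm store.txt

# "Unlock": the single master password recovers every entry.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:master-password -in store.enc
```

KeePass follows the same basic structure with a much richer file format (entries, groups, attachments) and configurable key-derivation settings.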

An important feature of KeePass is the ability to auto-type passwords in forms, most notably in web browsers. This feature makes KeePass really easy to use, especially considering it also supports global key bindings to access passwords. KeePass databases are designed for simultaneous access by multiple users, for example, using a shared network drive.

KeePass has a graphical interface written in C#, so it uses the Mono framework on Linux. A separate project, called KeePassX, is a clean-room implementation written in C++ using the Qt framework. Both support the AES and Twofish encryption algorithms, although KeePass recently added support for the ChaCha20 cipher. AES key derivation is used to generate the actual encryption key for the database, but the latest release of KeePass also added support for Argon2, the winner of the July 2015 password-hashing competition. Both programs are more or less equivalent, although the original KeePass seems to have more features in general.

The KeePassX project has recently been forked into another project, now called KeePassXC, that implements a set of new features that are present in KeePass but missing from KeePassX, such as:

  • auto-type on Linux, Mac OS, and Windows
  • database merging — which allows multi-user support
  • using the web site's favicon in the interface

So far, the maintainers of KeePassXC seem to be open to re-merging the project "if the original maintainer of KeePassX in the future will be more active and will accept our merge and changes". I can confirm that, at the time of writing, the original KeePassX project has 79 pending pull requests, and only one pull request has been merged since the last release, 2.0.3 in September 2016.

While KeePass and derivatives allow multiple users to access the same database through the merging process, they do not support giving different users access to different subsets of a single database. This may be a limiting factor for larger organizations, where you may need, for example, a different password set for different technical-support team levels. The solution in this case is to use separate databases for each team, with each team using a different shared secret.

Pass: the standard password manager?

I am currently using password-store, or pass, as a password manager. It aims to be "the standard Unix password manager". Pass is a GnuPG-based password manager that offers a surprising number of features given its small size:

  • copy-paste support
  • Git integration
  • multi-user/group support
  • pluggable extensions (in the upcoming 1.7 release)

The command-line interface is simple and intuitive. The following will, for example, create a pass repository and a 20-character password for your LWN account, then copy it to the clipboard:

    $ pass init <your-gpg-id>
    $ pass generate -c lwn 20
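
The kind of password that generate produces can be approximated with standard tools by filtering the kernel's random stream down to an allowed character set. This is only a sketch of the general technique, not necessarily pass's exact implementation:

```shell
# Draw random bytes from the CSPRNG, keep only characters from the
# allowed set, and stop after 20 of them.
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20; echo
```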

The main issue with pass is that it doesn't encrypt the name of those entries: if someone were to compromise my machine, they could easily see which sites I have access to simply by listing the passwords stored in ~/.password-store. This is a deliberate design decision by the upstream project, as stated by a mailing list participant, Allan Odgaard:

Using a single file per item has the advantage of shell completion, using version control, browse, move and rename the items in a file browser, edit them in a regular editor (that does GPG, or manually run GPG first), etc.

Odgaard goes on to point out that there are alternatives that do encrypt the entire database (including the site names) if users really need that feature.

Furthermore, there is a tomb plugin for pass that encrypts the password store in a LUKS container (called a "tomb"), although it requires explicitly opening and closing the container, which makes it only marginally better than using full-disk encryption system-wide. One could also argue that password file names hold no secret information beyond, perhaps, the site name and username, which don't require secrecy. I do believe those should be kept secret, however, as they could be used to discover (or prove) which sites you have access to and then used to perform other attacks. One could draw a parallel with the SSH known_hosts file, which used to be plain text but is now hashed so that hosts are more difficult to discover.

Also, sharing a database for multi-user support will require some sort of file-sharing mechanism. Given the integrated Git support, this will likely involve setting up a private Git repository for your team, something that may not be accessible to the average Linux user. Nothing keeps you, however, from sharing the ~/.password-store directory through another file-sharing mechanism like (say) Syncthing or Dropbox.

You can use multiple distinct databases easily using the PASSWORD_STORE_DIR environment variable. For example, you could have a shell alias to use a different repository for your work passwords with:

    alias work-pass="PASSWORD_STORE_DIR=~/work-passwords pass"

Group support comes from a clever use of the GnuPG multiple-recipient encryption support. You simply have to specify multiple OpenPGP identities when initializing the repository, which also works in subdirectories:

    $ pass init -p Ateam alice@example.com bob@example.com
    mkdir: created directory '/home/me/.password-store/Ateam'
    Password store initialized for alice@example.com, bob@example.com
    [master 0e3dbe7] Set GPG id to alice@example.com, bob@example.com.
     1 file changed, 2 insertions(+)
     create mode 100644 Ateam/.gpg-id

The above will configure pass to encrypt the passwords in the Ateam directory for alice@example.com and bob@example.com. Pass depends on GnuPG to do the right thing when encrypting files, and how those identities are treated is entirely delegated to GnuPG's default configuration. This could lead to problems if arbitrary keys can be injected into your key ring, which could confuse GnuPG. I would therefore recommend using full key fingerprints instead of user identifiers.

Regarding the actual encryption algorithms used, in my tests, GnuPG 1.4.18 and 2.1.18 seemed to default to 256-bit AES for encryption, but that has not always been the case. The chosen encryption algorithm actually depends on the recipient's key preferences, which may vary wildly: older keys and versions may use anything from 128-bit AES to CAST5 or Triple DES. To figure out which algorithm GnuPG chose, you may want to try this pipeline:

    $ echo test | gpg -e -r you@example.com | gpg -d -v
    gpg: encrypted with 2048-bit RSA key, ID XXXXXXX, created XXXXX
      "You Person You <you@example.com>"
    gpg: AES256 encrypted data
    gpg: original file name=''

As you can see, pass is primarily a command-line application, which may make it less accessible to regular users. The community has produced different graphical interfaces that are either using pass directly or operate on the storage with their own GnuPG integration. I personally use pass in combination with Rofi to get quick access to my passwords, but less savvy users may want to try the QtPass interface, which should be more user-friendly. QtPass doesn't actually depend on pass and can use GnuPG directly to interact with the pass database; it is available for Linux, BSD, OS X, and Windows.

Browser password managers

Most users are probably already using a password manager through their web browser's "remember password" functionality. For example, Chromium will ask if you want it to remember passwords and encrypt them with your operating system's facilities. For Windows, this encrypts the passwords with your login password and, for GNOME, it will store the passwords in the gnome-keyring storage. If you synchronize your Chromium settings with your Google account, Chromium will store those passwords on Google's servers, encrypted with a key that is stored in the Google Account itself. So your passwords are then only as safe as your Google account. Note that this was covered here in 2010, although back then Chromium didn't synchronize with the Google cloud or encrypt with the system-level key rings. That facility was only added in 2013.

In Firefox, there's an optional, profile-specific master password that unlocks all passwords. In this case, the issue is that browsers are generally always open, so the vault is always unlocked. And this is for users who actually do pick a master password; users are often completely unaware that they should set one.

The unlocking mechanism is a typical convenience-security trade-off: either users need to constantly input their master passwords to log in or they don't, and the passwords are available in the clear. In this case, Chromium's approach of actually asking users to unlock their vault seems preferable, even though the developers actually refused to implement the feature for years.

Overall, I would recommend against using a browser-based password manager. Even if it is not used for critical sites, you will end up with hundreds of such passwords that are vulnerable while the browser is running (in the case of Firefox) or at the whim of Google (in the case of Chromium). Furthermore, the "auto-fill" feature that is often coupled with browser-based password managers is often vulnerable to serious attacks, which is mentioned below.

Finally, because browser-based managers generally lack a proper password generator, users may fail to use properly generated passwords, which can then be easily broken. A password generator has been requested for Firefox in a feature request opened as far back as 2007, and there is a password generator in Chrome, but it is disabled by default and hidden in the mysterious chrome://flags URL.

Other notable password managers

Another alternative password manager, briefly mentioned in the previous article, is the minimalistic Assword password manager that, despite its questionable name, is also interesting. Its main advantage over pass is that it uses a single encrypted JSON file for storage, and therefore doesn't leak the name of the entries by default. In addition to copy/paste, Assword also supports automatically entering passphrases in fields using the xdo library. Like pass, it uses GnuPG to encrypt passphrases. According to Assword maintainer Daniel Kahn Gillmor in email, the main issue with Assword is "interaction between generated passwords and insane password policies". He gave the example of the Time-Warner Cable registration form that requires, among other things, "letters and numbers, between 8 and 16 characters and not repeat the same characters 3 times in a row".

Another well-known password manager is the commercial LastPass service which released a free-software command-line client called lastpass-cli about three years ago. Unfortunately, the server software of the service is still proprietary. And given that LastPass has had at least two serious security breaches since that release, one could legitimately question whether this is a viable solution for storing important secrets.

In general, web-based password managers expose a whole new attack surface that is not present in regular password managers. A 2014 study by University of California researchers showed that, of the five password managers studied, every one was vulnerable to at least one of the attacks examined. LastPass was, in particular, vulnerable to a cross-site request forgery (CSRF) attack that allowed an attacker to bypass account authentication and access the encrypted database.

Problems with password managers

When you share a password database within a team, how do you remove access to a member of the team? While you can, for example, re-encrypt a pass database with new keys (thereby removing or adding certain accesses) or change the password on a KeePass database, a hostile party could have made a backup of the database before the revocation. Indeed, in the case of pass, older entries are still in the Git history. So access revocation is a problematic issue found with all shared password managers, as it may actually mean going through every password and changing them online.
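
The Git-history point is easy to demonstrate with a throwaway repository; the plain-text file here is a stand-in for an encrypted entry, and the repository name is made up for the example:

```shell
# Deleting a secret from a Git-tracked store does not scrub history.
git init -q demo-store && cd demo-store
git config user.email demo@example.com && git config user.name Demo
echo 'hunter2' > lwn.gpg            # stand-in for an encrypted entry
git add lwn.gpg && git commit -qm 'add lwn entry'
git rm -q lwn.gpg && git commit -qm 'remove lwn entry'
# The "removed" entry is still one command away:
git show HEAD~1:lwn.gpg             # prints: hunter2
```

This is why revoking someone's access ultimately means rotating the passwords themselves, not merely re-encrypting the store.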

This fundamental problem with shared secrets can be better addressed with a tool like Vault or SFLvault. These tools aim to provide teams with easy ways to store dynamic tokens like API keys or service passwords, sharing them not only with other humans but also making them accessible to machines. The general idea of those projects is to store secrets in a central server and send them directly to relevant services without human intervention. This way, passwords are not actually shared anymore, which is similar in spirit to the approach taken by centralized authentication systems like Kerberos. If you are looking at password management for teams, those projects may be worth a look.

Furthermore, some password managers that support auto-typing were found to be vulnerable to HTML injection attacks: if some third-party ad or content is able to hijack the parent DOM content, it can masquerade as a login form and fool auto-typing software, as demonstrated by a paper presented at USENIX Security 2014. Fortunately, KeePass was not vulnerable according to the security researchers, but LastPass, again, was.

Future of password managers?

All of the solutions discussed here assume you have a trusted computer you regularly have access to, which is a usage pattern that seems to be disappearing with a majority of the population. You could consider your phone to be that trusted device, yet a phone can be lost or stolen more easily than a traditional workstation or even a laptop. And while KeePass has Android and iOS ports, those do not resolve the question of how to share the password storage among those devices or how to back them up.

Password managers are fundamentally file-based, and the "file" concept seems to be quickly disappearing, faster than we technologists sometimes like to admit. Looking at some relatives' use of computers, I notice it is less about "files" than images, videos, recipes, and various abstract objects that are stored in the "cloud". They do not use local storage so much anymore. In that environment, password managers lose their primary advantage, which is a local, somewhat offline file storage that is not directly accessible to attackers. Therefore certain password managers are specifically designed for the cloud, like LastPass or web browser profile synchronization features, without necessarily addressing the inherent issues with cloud storage and opening up huge privacy and security issues that we absolutely need to address.

This is where the "password hasher" design comes in. Also known as "stateless" or "deterministic" password managers, password hashers are emerging as a convenient solution that could possibly replace traditional password managers as users switch from generic computing platforms to cloud-based infrastructure. We will cover password hashers and the major security challenges they pose in a future article.

Note: this article first appeared in the Linux Weekly News.

Planet DebianAntoine Beaupré: A look at password managers

As we noted in an earlier article, passwords are a liability and we'd prefer to get rid of them, but the current reality is that we do use a plethora of passwords in our daily lives. This problem is especially acute for technology professionals, particularly system administrators, who have to manage a lot of different machines. But it also affects regular users who still use a large number of passwords, from their online bank to their favorite social-networking site. Despite the remarkable memory capacity of the human brain, humans are actually terrible at recalling even short sets of arbitrary characters with the precision needed for passwords.

Therefore humans reuse passwords, make them trivial or guessable, write them down on little paper notes and stick them on their screens, or just reset them by email every time. Our memory is undeniably failing us and we need help, which is where password managers come in. Password managers allow users to store an arbitrary number of passwords and just remember a single password to unlock them all.

But there is a large variety of password managers out there, so which one should we be using? At my previous job, an inventory was done of about 40 different free-software password managers in different stages of development and of varying quality. So, obviously, this article will not be exhaustive, but instead focus on a smaller set of some well-known options that may be interesting to readers.

KeePass: the popular alternative

The most commonly used password-manager design pattern is to store passwords in a file that is encrypted and password-protected. The most popular free-software password manager of this kind is probably KeePass.

An important feature of KeePass is the ability to auto-type passwords in forms, most notably in web browsers. This feature makes KeePass really easy to use, especially considering it also supports global key bindings to access passwords. KeePass databases are designed for simultaneous access by multiple users, for example, using a shared network drive.

KeePass has a graphical interface written in C#, so it uses the Mono framework on Linux. A separate project, called KeePassX, is a clean-room implementation written in C++ using the Qt framework. Both support the AES and Twofish encryption algorithms, although KeePass recently added support for the ChaCha20 cipher. AES key derivation is used to generate the actual encryption key for the database, but the latest release of KeePass also added support for Argon2, which was the winner of the July 2015 password-hashing competition. Both programs are more or less equivalent, although the original KeePass seems to have more features in general.

The KeePassX project has recently been forked into another project, now called KeePassXC, that implements a set of new features that are present in KeePass but missing from KeePassX, like:

  • auto-type on Linux, Mac OS, and Windows
  • database merging — which allows multi-user support
  • using the web site's favicon in the interface

So far, the maintainers of KeePassXC seem to be open to re-merging the project "if the original maintainer of KeePassX in the future will be more active and will accept our merge and changes". I can confirm that, at the time of writing, the original KeePassX project has 79 pending pull requests and that only one pull request has been merged since the last release, 2.0.3 in September 2016.

While KeePass and derivatives allow multiple users to access the same database through the merging process, they do not support multi-party access to a single database. This may be a limiting factor for larger organizations, where you may need, for example, a different password set for different technical support team levels. The solution in this case is to use separate databases for each team, with each team using a different shared secret.

Pass: the standard password manager?

I am currently using password-store, or pass, as a password manager. It aims to be "the standard Unix password manager". Pass is a GnuPG-based password manager that boasts a surprising number of features given its small size:

  • copy-paste support
  • Git integration
  • multi-user/group support
  • pluggable extensions (in the upcoming 1.7 release)

The command-line interface is simple and intuitive. The following commands, for example, will create a pass repository, generate a 20-character password for your LWN account, and copy it to the clipboard:

    $ pass init
    $ pass generate -c lwn 20

The main issue with pass is that it doesn't encrypt the names of the entries: if someone were to compromise my machine, they could easily see which sites I have access to simply by listing the passwords stored in ~/.password-store. This is a deliberate design decision by the upstream project, as stated by a mailing list participant, Allan Odgaard:

Using a single file per item has the advantage of shell completion, using version control, browse, move and rename the items in a file browser, edit them in a regular editor (that does GPG, or manually run GPG first), etc.

Odgaard goes on to point out that there are alternatives that do encrypt the entire database (including the site names) if users really need that feature.

Furthermore, there is a tomb plugin for pass that encrypts the password store in a LUKS container (called a "tomb"), although it requires explicitly opening and closing the container, which makes it only marginally better than using full disk encryption system-wide. One could also argue that password file names do not hold secret information, only the site name and username, perhaps, and that doesn't require secrecy. I do believe those should be kept secret, however, as they could be used to discover (or prove) which sites you have access to and then used to perform other attacks. One could draw a parallel with the SSH known_hosts file, which used to be plain text but is now hashed so that hosts are more difficult to discover.
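To make the known_hosts parallel concrete, here is a small Python sketch of how OpenSSH's HashKnownHosts option works: each entry stores a random salt and an HMAC-SHA1 of the hostname, so a guessed hostname can be confirmed against an entry, but the stored names cannot simply be read out. The `|1|salt|hash` layout below follows OpenSSH's on-disk format; the hostnames are placeholders.

```python
import base64
import hashlib
import hmac
import os

def hash_hostname(hostname, salt=None):
    """Hash a hostname the way OpenSSH's HashKnownHosts option does:
    HMAC-SHA1 over the hostname, keyed with a random 20-byte salt."""
    if salt is None:
        salt = os.urandom(20)
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|%s|%s" % (base64.b64encode(salt).decode(),
                         base64.b64encode(digest).decode())

def matches(entry, hostname):
    """Check a hashed entry against a candidate hostname: a guess can be
    confirmed, but the stored names cannot simply be listed."""
    _, _, salt_b64, digest_b64 = entry.split("|")
    digest = hmac.new(base64.b64decode(salt_b64), hostname.encode(),
                      hashlib.sha1).digest()
    return hmac.compare_digest(digest, base64.b64decode(digest_b64))

entry = hash_hostname("git.example.com")
print(matches(entry, "git.example.com"))      # True
print(matches(entry, "unknown.example.org"))  # False
```

The same trade-off would apply to password entry names: hashing them would hide the site list from a casual attacker while still letting the manager look up a known site.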

Also, sharing a database for multi-user support will require some sort of file-sharing mechanism. Given the integrated Git support, this will likely involve setting up a private Git repository for your team, something which may not be accessible to the average Linux user. Nothing keeps you, however, from sharing the ~/.password-store directory through another file sharing mechanism like (say) Syncthing or Dropbox.

You can use multiple distinct databases easily using the PASSWORD_STORE_DIR environment variable. For example, you could have a shell alias to use a different repository for your work passwords with:

    alias work-pass="PASSWORD_STORE_DIR=~/work-passwords pass"

Group support comes from a clever use of the GnuPG multiple-recipient encryption support. You simply have to specify multiple OpenPGP identities when initializing the repository, which also works in subdirectories:

    $ pass init -p Ateam
    mkdir: created directory '/home/me/.password-store/Ateam'
    Password store initialized for,
    [master 0e3dbe7] Set GPG id to,
     1 file changed, 2 insertions(+)
     create mode 100644 Ateam/.gpg-id

The above will configure pass to encrypt the passwords in the Ateam directory for the specified identities. Pass depends on GnuPG to do the right thing when encrypting files, and how those identities are treated is entirely delegated to GnuPG's default configuration. This could lead to problems if arbitrary keys can be injected into your key ring, which could confuse GnuPG. I would therefore recommend using full key fingerprints instead of user identifiers.

Regarding the actual encryption algorithms used, in my tests, GnuPG 1.4.18 and 2.1.18 seemed to default to 256-bit AES for encryption, but that has not always been the case. The chosen encryption algorithm actually depends on the recipient's key preferences, which may vary wildly: older keys and versions may use anything from 128-bit AES to CAST5 or Triple DES. To figure out which algorithm GnuPG chose, you may want to try this pipeline:

    $ echo test | gpg -e -r | gpg -d -v
    gpg: encrypted with 2048-bit RSA key, ID XXXXXXX, created XXXXX
      "You Person You <>"
    gpg: AES256 encrypted data
    gpg: original file name=''

As you can see, pass is primarily a command-line application, which may make it less accessible to regular users. The community has produced different graphical interfaces that either use pass directly or operate on the storage with their own GnuPG integration. I personally use pass in combination with Rofi to get quick access to my passwords, but less savvy users may want to try the QtPass interface, which should be more user-friendly. QtPass doesn't actually depend on pass and can use GnuPG directly to interact with the pass database; it is available for Linux, BSD, OS X, and Windows.

Browser password managers

Most users are probably already using a password manager through their web browser's "remember password" functionality. For example, Chromium will ask if you want it to remember passwords and encrypt them with your operating system's facilities. For Windows, this encrypts the passwords with your login password and, for GNOME, it will store the passwords in the gnome-keyring storage. If you synchronize your Chromium settings with your Google account, Chromium will store those passwords on Google's servers, encrypted with a key that is stored in the Google Account itself. So your passwords are then only as safe as your Google account. Note that this was covered here in 2010, although back then Chromium didn't synchronize with the Google cloud or encrypt with the system-level key rings. That facility was only added in 2013.

In Firefox, there's an optional, profile-specific master password that unlocks all passwords. In this case, the issue is that browsers are generally always open, so the vault is always unlocked. And this is for users that actually do pick a master password; users are often completely unaware that they should set one.

The unlocking mechanism is a typical convenience-security trade-off: either users need to constantly input their master passwords to login or they don't, and the passwords are available in the clear. In this case, Chromium's approach of actually asking users to unlock their vault seems preferable, even though the developers actually refused to implement the feature for years.

Overall, I would recommend against using a browser-based password manager. Even if it is not used for critical sites, you will end up with hundreds of such passwords that are vulnerable while the browser is running (in the case of Firefox) or at the whim of Google (in the case of Chromium). Furthermore, the "auto-fill" feature that is often coupled with browser-based password managers is often vulnerable to serious attacks, as mentioned below.

Finally, because browser-based managers generally lack a proper password generator, users may fail to use properly generated passwords, which can then be easily broken. A password generator has been requested for Firefox, according to this feature request opened in 2007, and there is a password generator in Chrome, but it is disabled by default and hidden in the mysterious chrome://flags URL.

Other notable password managers

Another alternative, briefly mentioned in the previous article, is the minimalistic Assword password manager that, despite its questionable name, is also interesting. Its main advantage over pass is that it uses a single encrypted JSON file for storage, and therefore doesn't leak the names of the entries by default. In addition to copy/paste, Assword also supports automatically entering passphrases in fields using the xdo library. Like pass, it uses GnuPG to encrypt passphrases. According to Assword maintainer Daniel Kahn Gillmor in an email, the main issue with Assword is "interaction between generated passwords and insane password policies". He gave the example of the Time-Warner Cable registration form that requires, among other things, "letters and numbers, between 8 and 16 characters and not repeat the same characters 3 times in a row".
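One way a manager could accommodate such a policy is rejection sampling: generate a random candidate and keep only those that satisfy every rule. The sketch below encodes the Time-Warner policy quoted above; the function name and default length are my own invention, not anything from Assword itself.

```python
import secrets
import string

# Letters and numbers only, per the quoted policy.
ALPHABET = string.ascii_letters + string.digits

def site_password(length=16):
    """Generate a password fitting a policy like the one quoted:
    letters and numbers, 8 to 16 characters, and no character
    repeated three times in a row. Sample until all rules hold."""
    if not 8 <= length <= 16:
        raise ValueError("this policy allows 8 to 16 characters")
    while True:
        pw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        has_letter = any(c.isalpha() for c in pw)
        has_digit = any(c.isdigit() for c in pw)
        no_triple = all(not (pw[i] == pw[i + 1] == pw[i + 2])
                        for i in range(length - 2))
        if has_letter and has_digit and no_triple:
            return pw

print(site_password(12))  # a random 12-character policy-compliant password
```

The catch, of course, is that every site has a different policy, so the generator needs per-site configuration, which is exactly the friction Gillmor describes.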

Another well-known password manager is the commercial LastPass service, which released a free-software command-line client called lastpass-cli about three years ago. Unfortunately, the server software of the service is still proprietary. And given that LastPass has had at least two serious security breaches since that release, one could legitimately question whether this is a viable solution for storing important secrets.

In general, web-based password managers expose a whole new attack surface that is not present in regular password managers. A 2014 study by University of California researchers showed that, out of five password managers studied, every one was vulnerable to at least one of the attacks examined. LastPass was, in particular, vulnerable to a cross-site request forgery (CSRF) attack that allowed an attacker to bypass account authentication and access the encrypted database.

Problems with password managers

When you share a password database within a team, how do you revoke a team member's access? While you can, for example, re-encrypt a pass database with new keys (thereby removing or adding access) or change the password on a KeePass database, a hostile party could have made a backup of the database before the revocation. Indeed, in the case of pass, older entries are still in the Git history. So access revocation is a problem with all shared password managers, as it may actually mean going through every password and changing it online.

This fundamental problem with shared secrets can be better addressed with a tool like Vault or SFLvault. Those tools aim to provide teams with easy ways to store dynamic tokens like API keys or service passwords and share them not only with other humans, but also make them accessible to machines. The general idea of those projects is to store secrets in a central server and send them directly to relevant services without human intervention. This way, passwords are not actually shared anymore, which is similar in spirit to the approach taken by centralized authentication systems like Kerberos. If you are looking at password management for teams, those projects may be worth a look.
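As a rough illustration of the central-server model, a service can fetch a secret from Vault over its HTTP API instead of holding a copy of a shared password file. The sketch below only builds the request, using Vault's documented /v1/<path> URL shape and X-Vault-Token header; the address, token, and secret path are placeholders.

```python
import urllib.request

def vault_read_request(vault_addr, token, secret_path):
    """Build a read request against HashiCorp Vault's HTTP API.
    The /v1/<path> URL and the X-Vault-Token header follow Vault's
    documented API; all arguments here are placeholder values."""
    req = urllib.request.Request(
        "%s/v1/%s" % (vault_addr.rstrip("/"), secret_path))
    req.add_header("X-Vault-Token", token)
    return req

# A service would then fetch and parse the JSON response, e.g.:
# resp = urllib.request.urlopen(vault_read_request(
#     "http://127.0.0.1:8200", "s.placeholder-token", "secret/ateam/db"))
```

Because each service authenticates with its own token, revoking one consumer means revoking one token, rather than rotating a secret shared by everyone.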

Furthermore, some password managers that support auto-typing were found to be vulnerable to HTML injection attacks: if some third-party ad or content is able to successfully hijack the parent DOM content, it can masquerade as a form and fool auto-typing software, as demonstrated by this paper submitted at USENIX 2014. Fortunately, KeePass was not vulnerable according to the security researchers, but LastPass was, again, vulnerable.

Future of password managers?

All of the solutions discussed here assume you have a trusted computer you regularly have access to, which is a usage pattern that seems to be disappearing with a majority of the population. You could consider your phone to be that trusted device, yet a phone can be lost or stolen more easily than a traditional workstation or even a laptop. And while KeePass has Android and iOS ports, those do not resolve the question of how to share the password storage among those devices or how to back them up.

Password managers are fundamentally file-based, and the "file" concept seems to be quickly disappearing, faster than we technologists sometimes like to admit. Looking at some relatives' use of computers, I notice it is less about "files" than images, videos, recipes, and various abstract objects that are stored in the "cloud". They do not use local storage so much anymore. In that environment, password managers lose their primary advantage, which is a local, somewhat offline file storage that is not directly accessible to attackers. Therefore certain password managers are specifically designed for the cloud, like LastPass or web browser profile synchronization features, without necessarily addressing the inherent issues with cloud storage and opening up huge privacy and security issues that we absolutely need to address.

This is where the "password hasher" design comes in. Also known as "stateless" or "deterministic" password managers, password hashers are emerging as a convenient solution that could possibly replace traditional password managers as users switch from generic computing platforms to cloud-based infrastructure. We will cover password hashers and the major security challenges they pose in a future article.

Note: this article first appeared in the Linux Weekly News.

TED‘Armchair archaeologists’ search 5 million tiles of Peru

Morning clouds reveal Machu Picchu, ancient city of the Incas. Peru is home to many archaeological sites — and citizen scientists are mapping the country with GlobalXplorer. Photo: Design Pics Inc./National Geographic Creative

GlobalXplorer, the citizen science platform for archaeology, launched two weeks ago. It’s the culmination of Sarah Parcak’s TED Prize wish and, already, more than 32,000 curious minds from around the world have started their training, learning to spot signs of ancient sites threatened by looters. Working together, the GlobalXplorer community has just finished searching the 5 millionth tile in Peru, the first country the platform is mapping.

“I’m thrilled,” said Parcak. “I had no idea we’d complete this many tiles so soon.”

“Expedition Peru” has users searching more than 250,000 square kilometers of highlands and desert, captured in high-resolution satellite imagery provided by DigitalGlobe. This large search area has been divided into nearly 120 million tiles, each about the size of a few city blocks. Users look at tiles one at a time, and mark whether they see anything in the image that could be a looting pit. When 5–6 users flag a site as containing potential looting, Parcak’s team will step in to study it in more detail. “So far, the community has flagged numerous potential looting sites,” said Parcak. “We’ll be taking a look at each one and further investigating.”

GlobalXplorer volunteers are searching Peru, one tile at a time, looking for signs of looting. Each tile shows an area the size of a few city blocks. Photo: Courtesy of GlobalXplorer


When GlobalXplorer launched, The Guardian described its users as “armchair archaeologists.” As this growing community searches for signs of looting, it’s unlocking articles and videos from National Geographic’s archives that give greater context to the expedition. So far, four chapters are available — including one on the explorers whose work has shed light on the mysteries of Peru, and one on the Chavín culture known for its psychedelic religious rituals.

“Everyone will find things on GlobalXplorer,” said Parcak. “All users are making a real difference. I’ve had photos from my friends showing their kids working together to find sites, and emails from retirees who always wanted to be archaeologists but never could. It’s really heartwarming to see this work.”

Expedition Peru draws to a close on March 15, 2017. Start searching »

Planet Linux AustraliaBinh Nguyen: Life in Libya, Going off the Grid, and More

On Libya: - Berber tribes are indigenous people. For most of its history, Libya has been subjected to varying degrees of foreign control, from Europe, Asia, and Africa. The modern history of independent Libya began in 1951. The history of Libya comprises six distinct periods: Ancient Libya, the Roman era, the Islamic era, Ottoman rule, Italian rule, and the Modern era. Very small population of

CryptogramResearch into the Root Causes of Terrorism

Interesting article in Science discussing field research on how people are radicalized to become terrorists.

The potential for research that can overcome existing constraints can be seen in recent advances in understanding violent extremism and, partly, in interdiction and prevention. Most notable is waning interest in simplistic root-cause explanations of why individuals become violent extremists (e.g., poverty, lack of education, marginalization, foreign occupation, and religious fervor), which cannot accommodate the richness and diversity of situations that breed terrorism or support meaningful interventions. A more tractable line of inquiry is how people actually become involved in terror networks (e.g., how they radicalize and are recruited, move to action, or come to abandon cause and comrades).

Reports from The Soufan Group, the International Center for the Study of Radicalisation (King's College London), and the Combating Terrorism Center (U.S. Military Academy) indicate that approximately three-fourths of those who join the Islamic State or al-Qaeda do so in groups. These groups often involve preexisting social networks and typically cluster in particular towns and neighborhoods. This suggests that much recruitment does not need direct personal appeals by organization agents or individual exposure to social media (which would entail a more dispersed recruitment pattern). Fieldwork is needed to identify the specific conditions under which these processes play out. Natural growth models of terrorist networks then might be based on an epidemiology of radical ideas in host social networks rather than built in the abstract then fitted to data and would allow for a public health, rather than strictly criminal, approach to violent extremism.

Such considerations have implications for countering terrorist recruitment. The present USG focus is on "counternarratives," intended as alternative to the "ideologies" held to motivate terrorists. This strategy treats ideas as disembodied from the human conditions in which they are embedded and given life as animators of social groups. In their stead, research and policy might better focus on personalized "counterengagement," addressing and harnessing the fellowship, passion, and purpose of people within specific social contexts, as ISIS and al-Qaeda often do. This focus stands in sharp contrast to reliance on negative mass messaging and sting operations to dissuade young people in doubt through entrapment and punishment (the most common practice used in U.S. law enforcement) rather than through positive persuasion and channeling into productive life paths. At the very least, we need field research in communities that is capable of capturing evidence to reveal which strategies are working, failing, or backfiring.

Sociological ImagesWhy did millions march? A view from the many women’s marches

Why did people march on January 21, 2017? As a team of sociologists interested in social movements, we know there are many possible answers to this seemingly simple question.

As a team of sociologists, we have developed a multi-method, multi-site research project, Mobilizing Millions: Engendering Protest Across the Globe.* We want to understand why people participate in a march of this scale, at a critical historical juncture in our political landscape. Within weeks of discussion of the first march, there were already "sister" march pages nationally and internationally. While it is beyond the scope of this post to discuss all of the project findings thus far (the predictability of the racial tensions visible in social media, the role of men, local opportunities and challenges), we do offer some early findings.

In the project’s first phase, we had team members on the ground in Austin, TX; Boston, MA; Los Angeles, CA; New York, NY; Philadelphia, PA;  Portland, OR; Santa Barbara, CA and St. Louis, MO. We are currently conducting a survey about the motivations and experiences that brought millions of people to the marches worldwide. We recruited respondents from marches in the aforementioned cities, and online. This has resulted in responses from around the world. Our preliminary findings from the observations and survey highlight that 1) there were a range of reasons people attended marches and 2) across and within sites, there were varying experiences of “the” march in any location.

One striking similarity we observed across sites was the limited visible presence of social movement organizations (SMOs). For sure, SMOs became visible in social media leading up to the event (particularly for the DC march). Unlike at social movement gatherings such as the US Social Forum or conservative equivalents, the sheer number of unaffiliated people dwarfed any delegations or representatives from SMOs. Of our almost 60-member nationwide team across sites, only a handful had encountered anyone handing out organizational material, as we would see at other protests. This is perhaps what brought many people to the march—an opportunity to be an individual connecting with other individuals. However, this is an empirical question, as is what this means for the future of social movement organizing. We hope others join us in answering.

Second, while the energy was palpable at all of the marches, so was the confusion. As various media sources reported, attendance at all sites far exceeded projections, sometimes by 10 times. Consequently, the physical presence of the crowds expanded beyond organizers' expectations, which in many places required schedules to shift. At all marches there were points where participants in central areas could not move, and most people could not hear scheduled speakers even if they were physically close to a stage. Across the sites, we also observed how this challenge stimulated different responses. In multiple locations, people spontaneously created their own sub-marches out of excitement, as happened in DC when a band started playing on Madison street and people followed. Or, while waiting, participants chanted "march, march." Still, in many locations, once the official march started, people created sub-marches out of necessity because the pre-planned march route was impassable. When faced with standing for an hour to wait their "turn" to walk or creating an alternative, they chose the latter.

Creativity was visible in artistic forms as well. While there were professionally printed signs (and T-shirts), there was a wealth of handmade signs at the marches. As expected, a slew referenced phrases the president-elect had said, noting, for example, "this pussy grabs back." Yet there was also a range of other signs, from simple text to complicated storyboards (see below).

Across sites, we also saw many differences, including which types of organizations sponsored (or "supported" or "were affiliated with") each march.

At the Austin, Texas march, marchers’ signs and chants reflected a wide variety of concerns, including women’s reproductive health care, Black Lives Matter, and environmental justice. The emotional tenor was frequently celebratory, though it varied from one point in the march to another across a crowd reported to be more than 40,000. Many speeches at the rally immediately following the march connected the actions of the Texas state legislature–on whose front steps the march began and ended–to the broader national context.

Photo of Austin, TX by Anna Chatillon-Reed.

Numbers from the Los Angeles march suggest it exceeded DC participation. There was a noticeable presence of signs about immigration and in Spanish, which is not surprising considering the local and state demographics.

Photo of Los Angeles by Fátima Suarez.

The Philadelphia, PA march was close to the bigger marches in New York and DC. Some participants noted that, due to the location, it was "competing" for marchers.

Philadelphia photo by Alex Kulick.

The Portland, OR protest also exceeded attendance expectations as marchers withstood hours of pouring rain. Holding the "sister" marches on the same day worldwide emphasized the magnitude of the event and assisted in building collective identity. Yet it also meant organizers in different locations faced vastly different challenges, such as weather, that might not have existed if they had been scheduling based solely on local norms and contexts.

Portland photo by Kelsy Kretschmer.

To help provide a preliminary sense of the motivations and continued engagement of marchers, we examined a sample of the ~40,000 tweets posted over two months. The analysis continues.

In the coming month, we are launching a separate survey to better understand a group social movement scholars are sometimes less inclined to study: people who did not participate in the marches on January 21 (there are exceptions to this, of course). As social movement scholars know, mobilization is actually a rare occurrence when we consider the range of grievances present in any society at any given moment. For a second phase of the project, we will conduct interviews with select survey participants.

Understanding the range of responses to grievances is critical as we move into this new era. If the first month of Trump’s presidency is any indication of the years to come, scholars and activists across the political spectrum will have many opportunities to engage these questions.


*The team's faculty collaborators are Zakiya Luna, PhD (Principal Investigator; California, DC, LA, PH, and TX coordinator); Kristen Barber, PhD (St. Louis Lead); Selina Gallo-Cruz, PhD (Boston Lead); Kelsy Kretschmer, PhD (Portland Lead). Site leadership was provided by Anna Chatillon (Austin, TX); Fátima Suarez (Los Angeles, CA); Alex Kulick (Philadelphia, PA & social media); Chandra Russo, PhD (DC co-lead). We are also grateful to many volunteer research assistants.

Dr. Zakiya Luna is an Assistant Professor of Sociology at University of California, Santa Barbara. Her research focuses on social movements, human rights, and reproduction, with an emphasis on the effects of intersecting inequalities within and across these sites. She has published multiple articles on activism, feminism and reproductive justice. For more information on her research and teaching, see

Alex Kulick, MA, is a doctoral student in sociology at the University of California, Santa Barbara and trainee in the National Science Foundation network science IGERT program. Their research investigates social processes of inequality and resistance with an emphasis on sexuality, gender, and race.

Anna Chatillon-Reed is a doctoral student in sociology at the University of California, Santa Barbara. She is currently completing her MA, which investigates the relationship between the Black Lives Matter movement and feminist organizations.


Worse Than FailureCodeSOD: Notted Up

There’s an old saying, that if your code is so unclear it needs comments to explain it, you should probably rewrite it. Dan found this code in a production system, which invents a bizarre inversion of that principle:

static BOOLEAN UpdateFileStoreTemplates ()
{
  BOOLEAN NotResult = FALSE;

  NotResult |= !UpdateFileStoreTemplate (DC_EMAIL_TEMPLATE); // Not-ing a fail makes it true, so if Not result is True we've had a failure
  NotResult |= !UpdateFileStoreTemplate (DC_TABLE_HEADER_TEMPLATE); // Not-ing a fail makes it true, so if Not result is True we've had a failure
  NotResult |= !UpdateFileStoreTemplate (DC_TABLE_ROW_TEMPLATE); // Not-ing a fail makes it true, so if Not result is True we've had a failure
  NotResult |= !UpdateFileStoreTemplate (DC_TABLE_FOOTER_TEMPLATE); // Not-ing a fail makes it true, so if Not result is True we've had a failure
  NotResult |= !UpdateFileStoreTemplate (WS_EMAIL_TEMPLATE); // Not-ing a fail makes it true, so if Not result is True we've had a failure
  NotResult |= !UpdateFileStoreTemplate (WS_TABLE_HEADER_TEMPLATE); // Not-ing a fail makes it true, so if Not result is True we've had a failure
  NotResult |= !UpdateFileStoreTemplate (WS_TABLE_ROW_TEMPLATE); // Not-ing a fail makes it true, so if Not result is True we've had a failure
  NotResult |= !UpdateFileStoreTemplate (WS_TABLE_FOOTER_TEMPLATE); // Not-ing a fail makes it true, so if Not result is True we've had a failure

  return !NotResult;
}

Here, the code is clear enough that I don’t need comments, but the comments are so unclear I’m glad the code is there to explain them.

Not-ing a fail certainly does not make a win.


Planet DebianHolger Levsen: Debian has installer images with non-free firmware included

Even though they are impossible to find without using a search engine or bookmarks, they exist.

Bookmark them now. Or use a search engine later ;-)

Planet DebianJamie McClelland: Re-thinking Web App Security

An organizer friend interested in activating a rapid response network to counter Trump-era ICE raids on immigrants asked me about any existing simple and easy tools that could send out emergency SMS/text message alerts.

I thought about it and ended up writing my first pouchdb web application to accomplish the task. For the curious, you can see it in action and browse the source code. To use it to send SMS, you have to register for a Twilio account - you can get a free account that has very restricted SMS sending capability or pay for full functionality.

The project is unlike anything I have done before.

I chose pouchdb because it stores all of your contacts in your browser, not on a server somewhere in the so-called cloud. (You can also choose to sync to a server, a feature I have not yet implemented.)

The implications of storing your data locally are quite profound.

Classic Web App

Let's first consider the more common web application: You visit a web site (the same web site that your colleagues visit or, in the case of a massive application, the same web site that everyone in the world visits). Then you log in with your own unique username and password, which grants you access to the portion of the database that you are supposed to have access to.

For most use-cases, this model is fairly ideal:

  • If you have a technically competent host, your data is backed up regularly and the database is available nearly 100% of the time
  • If you have a politically trust-worthy host, your host will notify you and put up a fight before turning any of your data over to a government agent
  • If you drop your phone in the toilet you can always login from another computer to access your data
  • If you save your password in your browser and your laptop is stolen, you can always change your password to prevent the thief from accessing your data
  • You can easily share your data with others by creating new usernames and passwords

However, there are some downsides:

  • If your host is not technically competent or politically trust-worthy, you could lose all of your data to a hard drive crash or subpoena
  • Even if your host is competent, all of your data is one previously undiscovered vulnerability away from being hacked
  • Even if your host is politically trust-worthy, you cannot always stop a subpoena, particularly given the legal escalations of tools like national security letters

pouchdb no sync

Assuming you are accessing your database on a device with an encrypted disk and you manage your own backups, pouchdb without synchronizing provides the most privacy and autonomy. You have complete control of your data and you are not dependent on any server operator.

However, the trade-offs are harsh:

  • Availability: if you lose your device, you would need to restore from backup - which is much more difficult than simply logging in from another device
  • Collaboration: you simply can't share this data with anyone else

It seems this model is fairly useless except in very tight corner cases.

pouchdb that synchronizes to a server

With this model, you avoid the trade-offs of the no sync model (hooray!). However, you also lose all of the privacy benefits, and it's even worse: your data can be compromised either via a server breach or via a compromise of any of the devices you are using. If any of these devices lack encrypted disks, then it's borderline reckless.

On the other hand, you gain a huge benefit in terms of reliability. If the server goes down, loses your data, fails to backup or is taken offline by a legal order, you can still function perfectly well and can optionally choose to sync to a different host.


Ultimately, we need to better evaluate the trade-offs between privacy and availability for each given use of a database and try to make the best decision.

And... keep working on new models. For example, it seems an ideal middle ground would be to sync in a peer-to-peer fashion with our colleagues (see PeerPouch) or sync to a server under your control in your office.

Planet Linux AustraliaDavid Rowe: Modems for HF Digital Voice Part 2

In the previous post I argued that pushing bits through a HF channel involves much wailing and gnashing of teeth. Now we shall apply numbers and graphs to the problem, which is – in a nutshell – Engineering.

QPSK Modem Simulation

I have worked up a GNU Octave modem simulation called hf_modem_curves.m. This operates at 1 sample/symbol, i.e. the sample rate is the symbol rate. So we take some random bits, map them to QPSK symbols, add noise, then turn the noisy symbols back into bits and count errors:
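The core loop can be sketched in a few lines of plain Python (a toy stand-in for the Octave simulation; the function name and defaults here are mine, not from hf_modem_curves.m):

```python
import math
import random

def qpsk_ber(ebno_db, nbits=200_000, seed=1):
    """Simulate Gray-coded QPSK over AWGN at 1 sample/symbol; return the bit error rate."""
    rng = random.Random(seed)
    ebno = 10 ** (ebno_db / 10)
    # Unit-energy symbols (Es = 1) carry 2 bits each, so Eb = 1/2 and the
    # per-dimension noise variance No/2 works out to 1 / (4 * Eb/No).
    sigma = math.sqrt(1 / (4 * ebno))
    amp = 1 / math.sqrt(2)
    errors = 0
    for _ in range(nbits // 2):
        b0, b1 = rng.randint(0, 1), rng.randint(0, 1)
        i = (1 - 2 * b0) * amp + rng.gauss(0, sigma)  # in-phase axis
        q = (1 - 2 * b1) * amp + rng.gauss(0, sigma)  # quadrature axis
        errors += (i < 0) != b0                       # hard decision per axis:
        errors += (q < 0) != b1                       # wrong quadrant = bit error
    return errors / nbits
```

At 4dB Eb/No this lands near the 1E-2 point on the theoretical AWGN curve discussed below.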

The simulation ignores a few real world details like timing and phase synchronisation, so is a best case model. That’s OK for now. QPSK uses symbols that each carry 2 bits of information, here is the symbol set or “constellation”:

Four different points, each representing a different 2 bit combination. For example the bits ’00’ would be the cross at 45 degrees, ’10’ at 135 degrees etc. The plot above shows all possible symbols, but we just send one at a time. However it’s useful to plot all of the received symbols like this, as an indication of received signal quality. If the channel is playing nice, we receive something like this:

Each cross is now a fuzzy dot, as noise has been added by the channel. No bit errors yet – a bit error happens when we get enough noise to move received symbols into another quadrant. This sort of channel is called Additive White Gaussian Noise (AWGN). Line of sight UHF radio is a good example of a real world AWGN channel – all you have to worry about is additive noise.

With a fading or multipath channel like HF we end up with something like:

In a fading channel the received symbol amplitudes bounce up and down as the channel fades in and out. Sometimes the symbols dip down into the noise and we get lots of bit errors. Sometimes the signal is reinforced, and the symbol amplitude gets bigger.

The simulation used for the multipath or HF channel uses a two path model, with additive noise as per the AWGN simulation:

Graphs and Modem Performance

Turns out there are some surprisingly good models to help us work out the expected Bit Error Rate (BER) for a modem. By “model” I mean people have worked out the maths that describe the BER for a QPSK modem. This graph shows us how to work out the BER for QPSK (and BPSK):

So the red line shows us the BER given Eb/No (E-B on N-naught), which is a normalised form of Signal to Noise Ratio (SNR). Think about Eb/No as a modem running at 1 bit per second, with the noise power measured in 1 Hz of bandwidth. It’s a useful scale for comparing modems and modulation schemes.

Looking at the black lines, we can see that for an Eb/No of 4dB, we can expect a BER of 1E-2 (0.01), i.e. 1% of our bits will be received in error over an AWGN channel. This curve is for QPSK or BPSK; different curves would be used for other modems like FSK.
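That curve is the standard closed-form result for coherent QPSK/BPSK on AWGN, Pb = 0.5 * erfc(sqrt(Eb/No)) with Eb/No in linear units. A quick sketch (the function name is mine):

```python
import math

def qpsk_awgn_ber(ebno_db):
    """Theoretical bit error rate for coherent QPSK/BPSK on an AWGN channel."""
    ebno = 10 ** (ebno_db / 10)          # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebno))
```

qpsk_awgn_ber(4.0) comes out at roughly 0.0125, matching the ~1E-2 point read off the graph at 4dB.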

Given Eb/No you can work out the SNR if you know the bit rate and noise bandwidth:

    SNR = S/N = (Eb * Rb) / (No * B)

or in dB:

    SNR(dB) = Eb/No(dB) + 10log10(Rb/B)

For example at Rb = 1600 bit/s and a noise bandwidth B = 3000 Hz:

    SNR(dB) = 4 + 10log10(1600/3000) = 1.27 dB
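In code, the dB-domain conversion is a one-liner (a small helper sketch; the name and the default noise bandwidth are mine):

```python
import math

def ebno_to_snr_db(ebno_db, rb, b=3000):
    """Convert Eb/No (dB) to SNR (dB) for bit rate rb in bit/s and noise bandwidth b in Hz."""
    return ebno_db + 10 * math.log10(rb / b)
```

ebno_to_snr_db(4, 1600) reproduces the 1.27 dB worked example above.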

OK, so that was for ideal QPSK. Let's add a few more curves to our graph:

We have added the experimental results for our QPSK simulation (green), and for Differential QPSK (DQPSK – blue). Our QPSK modem simulation (green) is right on top of the theoretical QPSK curve (red) – this is good and shows our simulation is working really well.

DQPSK was discussed in Part 1. Phase differences are sent, which helps with phase errors in the channels but costs us extra bit errors. This is evident on the curves – at the 1E-2 BER line, DQPSK requires 7dB Eb/No, 3dB more (double the power) than QPSK.

Now let's look at modem performance for HF (multipath) channels, on this rather busy graph (click for larger version):

Wow, HF sucks. Looking at the theoretical HF QPSK performance (straight red line) to achieve a BER of 1E-2, we need 14dB of Eb/No. That’s 10dB worse than QPSK on the AWGN channel. With DQPSK, we need about 16dB.

For HF, a lot of extra power is required to make a small difference in BER.

Some of the kinks in the HF curves (e.g. green QPSK HF simulated just under red QPSK HF theory) are due to not enough simulation points – it’s not actually possible to do better than theory!

Estimated Performance of FreeDV Modes

Now we have the tools to estimate the performance of FreeDV modes. FreeDV 1600 uses Codec 2 at 1300 bit/s, plus a little FEC at 300 bit/s to give a total of 1600 bit/s. With the FEC, let's say we can get reasonable voice quality at 4% BER. FreeDV 1600 uses a DQPSK modem.

On an AWGN channel, that’s an Eb/No of 4.4dB for DQPSK, and a SNR of:

    SNR(dB) = 4.4 + 10log10(1600/3000) = 1.7 dB

On a multipath channel, that’s an Eb/No of 11dB for DQPSK, and a SNR of:

    SNR(dB) = 11 + 10log10(1600/3000) = 8.3 dB

As discussed in Part 1, FreeDV 700C uses diversity and coherent QPSK, and has a multipath (HF) performance curve plotted in cyan above, and close to ideal QPSK on AWGN channels. The payload data rate is 700 bit/s, however we have an overhead of two pilot symbols for every 4 data symbols. This means we effectively need a bit rate of Rb = 700*(4+2)/4 = 1050 bit/s to pump 700 bits/s through the channel. It doesn't have any FEC (yet, anyway), so we need a BER a little lower than FreeDV 1600, about 2%. Running the numbers:

On an AWGN channel, for 2% BER we need an Eb/No of 3dB for QPSK, and a SNR of:

    SNR(dB) = 3 + 10log10(1050/3000) = -1.5 dB

On a multipath channel, diversity (cyan line) helps a lot, that’s an Eb/No of 8dB, and a SNR of:

    SNR(dB) = 8 + 10log10(1050/3000) = 3.4 dB

The diversity model in the simulation uses two carriers. The amplitudes of each carrier after passing through the multipath model are plotted below:

Often when one carrier is faded, the other is not faded, so when we recombine them at the receiver we get an average that is closer to AWGN performance. However diversity is not perfect, occasionally both carriers are wiped out at the same time by a fade.

So we can see FreeDV 700C is about 4 dB in front of FreeDV 1600, which matches the best reports from early adopters. I've had reports of FreeDV 700C operating at as low as -2dB, which is presumably on channels that don't have heavy fading and are more like AWGN. Also some reports of 700C falling over at high SNRs (around 8dB)! However that is probably a bug, e.g. a sync issue or something else we can track down in time.

Real world channels can vary. The multipath model above doesn't take into account fast or slow fading, it just calculates the average bit error rate. In practice, slow fading is hard to handle in digital voice applications, as the whole channel might be wiped out for a few seconds.

Now that we have a reasonable 700 bit/s codec – we can also consider other schemes, such as a more powerful FEC code rather than diversity. Like diversity, FEC codes provide “coding gain”, moving our operating point to the left. Really good codes operate at 10% BER, right over on the Eb/No = 2dB region of the curve. No free lunch of course – such codes may require long latency (seconds) or be expensive to decode.

Next Steps

I’d like to “instrument” FreeDV 700C and work with the 700C early adopters to find out how well it’s working, why and how it falls over, and work through any obvious bugs. Then start experimenting with ways to make it operate at lower SNRs, such as more powerful FEC codes or even non-redundant techniques like Trellis decoding.

Now that we have shown Codec 2 700C has sufficient quality for conversations over the air, I'm planning another iteration of the Codec 2 700C vocoder design to see if we can improve speech quality.


Modems for HF Digital Voice Part 1.

More Eb/No to SNR worked examples.

Similar modem calculations were used to develop a 100 kbit/s telemetry system to send HD images from High Altitude Balloons.

TEDComment(s) of the week, Jan. 4, 2017: Let’s talk about faith

This week’s comments were posted on Rabbi Sharon Brous’ talk, which has sparked quite the conversation.

The first poster is Paul Watson, who is exactly the type of community member I’d hoped to highlight when we began this project. Paul’s comment is thoughtful, speaking from his particular area of interest/expertise, and looking at the larger picture. I think it’s this zoomed-out view that I’m most intrigued by. He takes a broader look at religion — across time, through the lens of biology, evolutionarily — than many other commenters have, understandably so. When I listen to a talk like Sharon’s, I tend to think about myself, my views on the topic, and the people in my life and their views. Reading Paul’s comment reminds me to check for my own blind spots; to revisit Rabbi Brous’ talk with a wide open mind.

Paul Watson writes: “We need to deeply understand the evolutionary psychology of religiosity (instinctual) / religion (cultural) to use it for more than it has been routinely used for in the past…”

The second poster is Allan Hayes, another community member who, like Paul, is entirely worthy of being highlighted. He’s also expressed a larger commitment to TED, in both the personal and TEDx organizer kind of way, so I’m excited to encourage his continued participation.

Allan Hayes writes: “State schools in the UK are required to teach RE (Religious Education). I am the BHA (British Humanist Association) representative on the committee that sets the “agreed syllabus” for Leicester … I visit schools along with Muslim, Hindu, Sikh, Christian, Buddhist, Baha’i, Jain, Pagan . . . representatives. We set up our individual displays and talk to kids from 5 to 18, and to teachers, governors, and sometimes to parents. My aim in these visits is not so much to explain Humanism as to talk about Humanity, how we have learned to live together and how religions have come about -– to stimulate curiosity. …”

Allan’s comment is great because I’ve learned so much from reading it. I now know a little more about how religion is taught in UK schools, and that its teaching is a requirement. I’ve also learned a great way to discuss my religious beliefs with others — no matter what they are — in a way that doesn’t imply superiority of any kind; “we get on well together” :) Lastly, like any good lesson, it made me think. Particularly, it made me think about how something like that would look in the country I live in now, and the one I lived in before. How would the parents, and children, in these communities respond? What changes could be made to tailor the lessons Allan is teaching to these communities? How can we best “tell our children a story that brings us together and that they can feel part of”?

I hope you enjoyed Paul and Allan’s comments as much as I did!

TED“Could threat data be anonymized?”: Comment of the week, Feb. 1, 2017

This week’s comment comes from Joe, who has enough understanding of the topic of Caleb Barlow’s talk, “Where is cybercrime really coming from?” to pose a great question to the speaker. Particularly on topics I’m less familiar with, it’s great to come to the comments and hear from people who have much more experience with the ideas presented.

Joe asks: “Could the threat data be anonymized? That is, could companies and individuals send this data through an ‘opaque’ slot to one or more aggregation sites for meta-analysis (along the lines of what I assume exists for whistleblowers contacting sites like wikileaks)?
Caleb’s reply: “Joe – that’s an excellent point and actually something that is being widely discussed. The challenge is that you need to maintain the REPUTATION of the source will also hiding the true identity. Without also maintaining the reputation you run the risk of bad buys flooding you with bogus data. There are several emerging theories on how this could be accomplished.”

At the end of a talk like Caleb’s, I find myself wondering how likely it is that the solutions he mentioned will be implemented, or how quickly they could be. However, it’s tough to answer my own questions because I know very little about the liabilities involved, or the logistical feats that are likely required to pull this off. Where I’d normally be quite discouraged by my own “ignorance overload,” both Joe’s comment and Caleb’s reply give me a push in the right direction. Joe asks a question I wish I’d known to ask myself, and with the information shared in the thread — anonymizing data, establishing a source’s reputation, and the fact that this method is already being discussed — I have somewhere to start.
It can be hard to have a deep understanding of the solutions in more complicated and technical talks, but with a comment community you can ask questions of, and a starting line to walk towards, hopefully we all will keep trying. The nuances of implementing the idea can be just as fascinating as the ideas themselves, and if something sparks your interest, I’d say you owe it to yourself to keep digging. Or at least to scroll down to the comments :)
Thanks for asking, Joe!

TED“I cried as you told your story”: Comment of the week, Feb. 8, 2017

This week’s comment was posted on Sue Klebold’s talk, “My son was a Columbine shooter. This is my story.” Many times, a comment section represents the worst of our collective thoughts, but in this instance, on this platform, there is so much compassion. I was impressed with the level of respect and understanding shown to Sue by our entire community, but Heidi’s comment stands out because of her personal experience with the Columbine shooting.
Heidi writes: “I was in the library during the Columbine shooting. I cried as you told your story, and my heart really just ached. I admire your courage to stand up and speak about this and have found healing in your words.”

Heidi’s sentiments are mirrored in her community members’ comments, but her proximity to the shooting adds a specific weight to her words. As an outsider, it’s comforting to read Heidi’s comments and, if the upvotes are any indication, I’d imagine I’m not the only person who feels this way.
Sue’s bravery in both her talk and advocacy work is so important, and I hope the comments from the community can stand to show just how impactful it is too.
Here’s to another week of powerful community interactions!

Planet DebianClint Adams: Tom's birthday happens every year

“Sure,” she said, while having a drink for breakfast at the post office.

Posted on 2017-02-15
Tags: mintings

Planet DebianDaniel Stender: APT programming snippets for Debian system maintenance

The Python API for the Debian package manager APT is useful for writing practical system maintenance scripts that go beyond shell scripting capabilities. There are Python2 and Python3 libraries for that available as packages, as well as documentation in the package python-apt-doc. If that's also installed, the documentation can be found in /usr/share/doc/python-apt-doc/html/index.html, and there are also a couple of example scripts shipped in /usr/share/doc/python-apt-doc/examples. The libraries mainly consist of Python bindings for the libapt-inst and libapt-pkg C++ core libraries of the APT package manager, which makes processing very fast. Debugging symbols are also available as packages (python{,3}-apt-dbg). The module apt_inst provides features like reading from binary packages, while apt_pkg resembles the functions of the package manager. There is also the apt abstraction layer which provides more convenient access to the libraries; for example, apt.cache.Cache() can be used to behave like apt-get:

from apt.cache import Cache
mycache = Cache()
mycache.update()                   # apt-get update                        # re-open the cache to read the new package lists
mycache.upgrade(dist_upgrade=True) # apt-get dist-upgrade
mycache.commit()                   # apply

boil out selections

As widely known, there is a feature of dpkg which helps to move a package inventory from one installation to another by just using a text file with a list of installed packages. A selections list containing all installed packages can easily be generated with $ dpkg --get-selections > selections.txt. The resulting file then looks something like this:

$ cat selections.txt
0ad                                 install
0ad-data                            install
0ad-data-common                     install
a2ps                                install
abi-compliance-checker              install
abi-dumper                          install
abigail-tools                       install
accountsservice                     install
acl                                 install
acpi                                install

The counterpart for this operation (--set-selections) can be used to reinstall (add) the complete package inventory on another installation or computer (this needs superuser rights), as explained in the manpage dpkg(1). No problem so far.

The problem is, if that list contains a package which couldn’t be found in any of the package inventories which are set up in /etc/apt/sources.list(.d/) on the target system, dpkg stops the whole process:

# dpkg --set-selections < selections.txt
dpkg: warning: package not in database at line 524: google-chrome-beta
dpkg: warning: found unknown packages; this might mean the available database
is outdated, and needs to be updated through a frontend method

Thus, manually downloaded and installed “wild” packages from unofficial package sources are problematic for this approach, because the package installer simply doesn’t know where to get them.

Luckily, dpkg puts out the relevant package names, so instead of removing them manually with an editor, this little Python script for python3-apt deletes any of these packages from a selections file automatically:

#!/usr/bin/env python3
import sys
import apt_pkg

apt_pkg.init()
cache = apt_pkg.Cache()

infile = open(sys.argv[1])
outfile_name = sys.argv[1] + '.boiled'
outfile = open(outfile_name, "w")

for line in infile:
    package = line.split()[0]
    if package in cache:
        outfile.write(line)

infile.close()
outfile.close()

The script takes one argument: the name of the selections file which has been generated by dpkg. The low level module apt_pkg first has to be initialized with apt_pkg.init(). Then apt_pkg.Cache() can be used to instantiate a cache object (here: cache). That object supports membership tests, so it's easy to skip a package from the list if it can't be found in the database: its line simply isn't copied into the outfile (.boiled), while the others are.

The result then looks something like this:

$ diff selections.txt selections.txt.boiled 
< python-timemachine   install
< wlan-supercracker    install

That script might also be useful for moving from one distribution or derivative to another (like from Ubuntu to Debian). For productive use, the open() calls should of course be guarded against FileNotFoundError and IOError to prevent program crashes on such events.

purge rc-s

As is also widely known, deinstalled packages leave stuff like configuration files, maintainer scripts and logs on the computer, so that it is preserved in case the package gets reinstalled at some point in the future. That happens if dpkg has been used with -r/--remove instead of -P/--purge, which also removes these files that are otherwise left behind.

These packages are then marked as rc in the package database, like:

$ dpkg -l | grep ^rc
rc  firebird2.5-common   amd64   common files for firebird 2.5 servers and clients
rc  firebird2.5-server-common   amd64   common files for firebird 2.5 servers
rc  firebird3.0-common   all     common files for firebird 3.0 server, client and utilities
rc  imagemagick-common          8:    all     image manipulation programs -- infrastructure dummy package

These can be purged afterwards to completely remove them from the system. There are several shell snippets to be found on the net for completing this job automatically, like this one here:

dpkg -l | grep "^rc" | sed -e "s/^rc //" -e "s/ .*$//" | \
xargs dpkg --purge

The first thing needed to handle this in a Python script is the information that in apt_pkg, the package state rc is represented by the code 5 by default:

>>> testpackage = cache['firebird2.5-common']
>>> testpackage.current_state
5

For changing things in the database, apt_pkg.DepCache() can be docked onto a cache object to manipulate the installation state of a package within, like marking it to be removed or purged:

>>> mydepcache = apt_pkg.DepCache(mycache)
>>> mydepcache.mark_delete(testpackage, True) # True = purge
>>> mydepcache.marked_delete(testpackage)
True

That’s basically all that is needed for an old-package purging maintenance script in Python 3; add another loop as package filter and there you go:

#!/usr/bin/env python3
import sys
import apt_pkg

from apt.progress.text import AcquireProgress
from apt.progress.base import InstallProgress
acquire = AcquireProgress()
install = InstallProgress()

cache = apt_pkg.Cache()
depcache = apt_pkg.DepCache(cache)

for paket in cache.packages:
    if paket.current_state == 5:
        depcache.mark_delete(paket, True)

depcache.commit(acquire, install)

The method DepCache.commit() applies the changes to the package database at the end, and it needs the progress objects from apt.progress to run.

Of course this script needs superuser rights to run. It then returns something like this:

$ sudo ./rc-purge 
Reading package lists... Done
Building dependency tree
Reading state information... Done
Fetched 0 B in 0s (0 B/s)
custom fork found
got pid: 17984
got pid: 0
got fd: 4
(Reading database ... 701434 files and directories currently installed.)
Purging configuration files for libmimic0:amd64 (1.0.4-2.3) ...
Purging configuration files for libadns1 (1.5.0~rc1-1) ...
Purging configuration files for libreoffice-sdbc-firebird (1:5.2.2~rc2-2) ...
Purging configuration files for vlc-nox (2.2.4-7) ...
Purging configuration files for librlog5v5 (1.4-4) ...
Purging configuration files for firebird3.0-common ( ...
Purging configuration files for imagemagick-common (8: ...
Purging configuration files for firebird2.5-server-common (

It’s not yet production ready (for example, there is an infinite loop if dpkg returns error code 1, e.g. from “can’t remove non-empty folder”). But generally, ATTENTION: be very careful with typos and other mistakes if you want to use that code snippet; a faulty script performing changes on the package database might destroy the integrity of your system, and you don’t want that to happen.

detect “wild” packages

As said above, installed Debian packages might be called “wild” if they have been downloaded from somewhere on the net and installed manually, as is done from time to time on many systems. If you want to remove that whole class of packages again for any reason, the question is how to detect them. A characteristic element is that there is no source connected to such a package, and that can be detected with Python scripting, again using the bindings for the APT libraries.

The package object doesn’t have an associated method to query its source, because the origin is always connected to a specific package version, like some specific version might have come from security updates for example. The candidate version of a package can be queried with DepCache.get_candidate_ver(), which returns a complex apt_pkg.Version object:

>>> import apt_pkg
>>> apt_pkg.init()
>>> mycache = apt_pkg.Cache()
Reading package lists... Done
Building dependency tree
Reading state information... Done
>>> mydepcache = apt_pkg.DepCache(mycache)
>>> testpackage = mydepcache.get_candidate_ver(mycache['nano'])
>>> testpackage
<apt_pkg.Version object: Pkg:'nano' Ver:'2.7.4-1' Section:'editors'  Arch:'amd64' Size:484790 ISize:2092032 Hash:33578 ID:31706 Priority:2>

For version objects there is the method file_list available, which returns a list containing PackageFile() objects:

>>> testpackage.file_list
[(<apt_pkg.PackageFile object: filename:'/var/lib/apt/lists/httpredir.debian.org_debian_dists_testing_main_binary-amd64_Packages'  a=testing,c=main,v=,o=Debian,l=Debian arch='amd64' site='' IndexType='Debian Package Index' Size=38943764 ID:0>, 669901L)]

These file objects contain the index files which are associated with a specific package source (a downloaded package index), which could be read out easily (using a for-loop because there could be multiple file objects):

>>> for files in testpackage.file_list:
...     print(files[0].filename)
... 
/var/lib/apt/lists/httpredir.debian.org_debian_dists_testing_main_binary-amd64_Packages

That explains itself: the nano binary package on this amd64 computer comes from testing main. If a package is “wild” that means it was installed manually, so there is no associated index file to be found, but only /var/lib/dpkg/status (libcudnn5 is not in the official package archives but distributed by Nvidia as a .deb package):

>>> testpackage2 = mydepcache.get_candidate_ver(mycache['libcudnn5'])
>>> for files in testpackage2.file_list:
...     print(files[0].filename)
... 
/var/lib/dpkg/status

The simple trick now is to find all packages which have only /var/lib/dpkg/status as associated system file (this doesn’t refer to the files a package contains), and not an index file representing a package source. There’s a little pitfall: that’s true also for virtual packages. But virtual packages commonly don’t have an associated version (python-apt docs: “to check whether a package is virtual; that is, it has no versions and is provided at least once”), and that can be queried with Package.has_versions. A filter to find any packages that aren’t virtual packages, are solely associated with one system file, and where that file is /var/lib/dpkg/status, then goes like this:

for package in cache.packages:
    if package.has_versions:
        version = mydepcache.get_candidate_ver(package)
        if len(version.file_list) == 1:
            if 'dpkg/status' in version.file_list[0][0].filename:

On my Debian testing system this puts out a quite interesting list. It lists all the wild packages like libcudnn5, but also packages which are currently not in testing because they have been temporarily removed by AUTORM due to release critical bugs. Then there’s all the obsolete stuff which has been installed from the package archives once and then forgotten, like old kernel header packages (“obsolete packages” in dselect). So this snippet brings up other stuff, too; it should be considered experimental so far.


Planet DebianJulian Andres Klode: moved / backing up

In the past two days, I moved my main web site (and from a very old contract at STRATO over to something else: The domains are registered with INWX and the hosting is handled by Encryption is provided by Let’s Encrypt.

I requested the domain transfer from STRATO on Monday at 16:23, received the auth codes at 20:10, and the .de domain was transferred completely at 20:36 (about 20 minutes if you count my overhead). The .org domain I had to ACK, which I did at 20:46, and at 03:00 I received the notification that the transfer was successful (I think there was some registrar ACKing involved there). So the whole transfer took about 10 1/2 hours, or 7 hours counting from when I retrieved the auth code. I think that’s quite a good time 🙂

And, for those of you who don’t know: uberspace is a shared hoster that basically just gives you an SSH shell account, directories for you to drop files in for the http server, and various tools to add subdomains, certificates, virtual users to the mailserver. You can also run your own custom build software and open ports in their firewall. That’s quite cool.

I’m considering migrating the blog away from wordpress at some point in the future – having a more integrated experience is a bit nicer than having my web presence split over two sites. I’m unsure if I shouldn’t add something like cloudflare there – I don’t want to overload the servers (but I only serve static pages, so how much load is this really going to get?).

in other news: off-site backups

I also recently started doing offsite backups via borg to a server operated by the wonderful For those of you who do not know You basically get SSH to a server where you can upload your backups via common tools like rsync, scp, or you can go crazy and use git-annex, borg, attic; or you could even just plain zfs send your stuff there.

The normal price is $0.08 per GB per month, but there is a special borg price of $0.03 (that price does not include snapshotting or support, really). You can also get a discounted normal account for $0.04 if you find the correct code on Hacker News, or other discounts for open source developers, students, etc. – you just have to send them an email.
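At those rates the difference adds up quickly. A quick back-of-the-envelope comparison (prices as quoted above; the repository size is an assumption for illustration):

```python
# Prices per GB per month, as quoted above.
RATES = {"normal": 0.08, "discounted": 0.04, "borg": 0.03}

repo_gb = 250  # assumed backup size for illustration

for name, rate in sorted(RATES.items(), key=lambda kv: kv[1]):
    monthly = repo_gb * rate
    print("%-10s $%6.2f/month  $%7.2f/year" % (name, monthly, monthly * 12))
```

For a 250 GB repository the borg price works out to less than half the normal rate over a year.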

Finally, I must say that uberspace and feel similar in spirit. Both heavily emphasise the command line, and don’t really have any fancy click stuff. I like that.

Filed under: General

Planet DebianSteve McIntyre: Start the fans please!

This probably won't mean much to people outside the UK, I'm guessing. Sorry! :-)

The Crystal Maze was an awesome fun game show on TV in the UK in the 1990s. Teams would travel through differently-themed zones, taking on challenges to earn crystals for later rewards in the Crystal Dome. I really enjoyed it, as did just about everybody my age that I know of...

A group have started up a new Crystal Maze attraction in London and Manchester, giving some of us a chance of indulging our nostalgia directly in a replica of the show's setup! Neil McGovern booked a load of tickets and arranged for a large group of people to go along this weekend.

It was amazing! (Sorry!) I ended up captaining one of the 4 teams, and our team ("Failure is always an option!") scored highest in the final game - catching bits of gold foil flying around in the Dome. It was really, really fun and I'd heartily recommend it to other folks who like action games and puzzle solving.

I just missed the biting scorn of the original show presenter, Richard O'Brien, but our "Maze Master" Boudica was great fun and got us all pumped up and working together.

Worse Than FailureAnnouncements: Hired: Salary Trends

You may remember our new sponsor, Hired. To help them match up talent with employers, they’ve created their own proprietary dataset about salary and hiring trends, and have published their annual report about what they’ve found.

There are a few key things in this report. First, as we all know, you don’t need to go to Silicon Valley for a good job in the tech sector. Silicon Valley salaries are among the highest at an average of $134K a year, but once cost of living is factored in, even the notoriously expensive New York and LA can give you an advantage in purchasing power.

Hired's map of salaries by region

If you are thinking of a move, the hot cities are Austin, Singapore and London. Non-local candidates there are getting more offers at higher salaries than anywhere else. Even if you don’t want to go to one of those cities, 12 of Hired’s 16 markets are offering better salaries to relocators.

Age and race still matter. While African-American candidates are actually more likely to get hired, that may be because they’re being hired at a much lower salary than white candidates. Latino and Asian candidates ask for salaries comparable with white candidates, and are both less likely to get hired.

If you’re between 25 and 30, you’re much more likely to get an “average” job offer for your experience level, but past 45, you’ll start to see a decline.

There’s a lot more in here, including lots of global data. Read the white paper yourself, and if you think you could take advantage of these trends, get on Hired today and get some offers.

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

CryptogramSurvey Data on Americans and Cybersecurity

Pew Research just published their latest research data on Americans and their views on cybersecurity:

This survey finds that a majority of Americans have directly experienced some form of data theft or fraud, that a sizeable share of the public thinks that their personal data have become less secure in recent years, and that many lack confidence in various institutions to keep their personal data safe from misuse. In addition, many Americans are failing to follow digital security best practices in their own personal lives, and a substantial majority expects that major cyberattacks will be a fact of life in the future.

Here's the full report.

Worse Than FailureWhat Bugs Beneath


During the interview, everything was gravy. Celestino Inc. was doing better business than ever before, and they were ready to expand their development center. They had all the keywords Gigi was looking for: Java 8, serving up a SPA for a hot new streaming product, outsourced from a company with money to burn but no developers of their own. She loved the people she'd interviewed with; they were smart people with great ideas. It'd been a long, grueling job hunt, but it'd been worth it. Gigi was eager to work with the technology, not to mention having plenty of budget and a greenfield to work with.

By the time Gigi started work, however, the deal had fallen through. The big name with the budget had gone elsewhere, leaving Celestino in the dust. Gigi's boss seemed surprised to see her. He assigned her to work on bug duty for their 7 year-old Java product instead.

A few minutes of settling in, and Gigi found the Jira for the product. She clicked through to "Unresolved issues" ... and nearly fell out of her chair. There were 8,800 bugs.

That can't be right! she thought. Maybe they're not setting a resolution when they close them?

But there was no such luck. Clicking into a dozen or so of the older ones revealed that they'd been filed years ago and never opened.

Where do I even start? she wondered.

"I'll give you an easy one," said the manager, when asked. "Just fix this one thing, without touching anything else, and we'll get it rolled out to prod in the weekly release. Be careful not to touch anything else! We've had guys in the past who broke stuff they didn't realize was there."

The bug did look easy, at least: if one chose a red background in the app, the text "looked funny". Gigi had no trouble reproducing the effect 100% of the time. All she had to do was find the offending line of code and tweak it. No risk of introducing extra bugs.

And it was a snap—of her patience, not her fingers. The codebase had 133,000 source files implementing 3,600 classes, all just for the backend part of the application. She found a surprising number of places calling TextOut() that could've produced the text she saw, and the code was, of course, awful to read.

Gigi gave up understanding it and tried the old standby: a binary search of the codebase by commenting out half the instances and seeing if the "funny" text still appeared. Six comments were inserted, and the code recompiled, taking 20 minutes. Gigi had been warned that if she tried to run make all, it'd take six hours, but that the build script was reasonably reliable—"reasonably" being the operative word.

Still, it compiled, and she booted up the app. In theory, if the problem was in the six output statements she'd commented out, she'd see no text; if it was in the six she'd left alone, it'd still appear.

She never anticipated a third outcome. The red background was gone, indicating that the output was in the commented half ... but instead of nothing, she saw two copies of the text that'd been hidden behind the red background, both reading the same thing, but in different fonts and different positioning.

Gigi stared at the screen, a sinking feeling in the pit of her stomach. A vivid daydream flashed across her eyes: she envisioned packing up her purse, walking out the doors into the sunshine, and never returning. Maybe, if she were feeling generous, she'd call and explain she'd been deported to Mars or had suffered a violent allergic reaction to the application or something.

It was her first day, after all. Nobody would miss her. Nobody depended on her yet. Nobody ...

She flipped open her cell phone, staring at the lock screen: a picture of her husband and year-old son. They were counting on her. She needed to stick it out at least long enough to look good on her résumé and help her get the next job. She could do this for her family. She had to be brave.

Taking a deep breath, Gigi resumed her herculean task. She commented out a few more text outputs, hoping to figure out which of them was causing the effect. After another build and run, there was no text—but the text box twitched and flashed alarmingly.

There's at least three layers of bug here! How am I meant to solve this without touching any unrelated code?!

Gigi turned back to the Jira, realizing now why so many of these defects were ancient and still unopened as she set the bug back to "todo" status and went to find the manager.

The next defect she was able to resolve, putting her back on solid ground. Not that it'd been easy—the problem was in a button event handler, spread across six source code files. The first file created an object and a thread; the next file scheduled the thread in a private queue. The third class hooked the thread up to the button, while the fourth class caught button clicks and scheduled another thread to handle the event. There were a few more layers of passing data around between files, until finally, another thread was queued to destroy the object. But once she got her head around the architecture, the bug was a simple typo. Easy enough to resolve.

A week later, an email came through: some contractor in Siberia had resolved the first bug ... by upgrading the video drivers. Apparently there was a known issue with layering visible text on top of invisible text, with or without the red background.

Why was the invisible text there? What did it have to do with a red background? What unearthly combination of branches created that particular display?

Gigi didn't know or care. She was too busy filling out applications on Résumé be damned, she needed to escape. Maybe she'd just leave this one entirely off.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Sam VargheseBangladesh should never have got Test status

After Monday’s loss to India in a one-off Test, Bangladesh has now played 98 Tests and won just eight, after being given full Test status in the year 2000.

That is a rather dismal record for any team. They have only beaten Zimbabwe (five times), the West Indies (twice) and England (once). You’d have to ask: why were they ever given Test status?

The answer is rather simple. At that time, the late Jagmohan Dalmiya, a Bengali (from the Indian state of West Bengal), was the chairman of the International Cricket Council. He was the man responsible for the current state of cricket, where meaningless matches are played month after month, ensuring that quantity triumphs over quality.

Dalmiya would never have had any chance of influencing the fortunes of the game had not India won the World Cup in 1983, beating the West Indies in the final. That gave one-day cricket a big fillip in the country, and the very next World Cup was held in the subcontinent, with India and Pakistan jointly hosting the tournament.

In terms of numbers, in terms of fanatical interest, in terms of ensuring crowds for even lowly games, no place is better than the Indian subcontinent. Dalmiya could only press for being given hosting rights after India’s win because with that he could boldly say that there was sufficient interest in the one-day game in his part of the world. Until then, India had rarely been given a chance in the shorter format; one of the more memorable innings by an Indian in one-day cricket was played by Sunil Gavaskar who batted through 60 overs to make 36 not out in a World Cup game.

But after 1983, you could not stop the rise of one-day cricket, with India and Pakistan being pitted against each other whenever possible. This rivalry draws on the historic enmity between the two countries after the partition of the subcontinent in 1947. It is cynical to exploit such feelings, but then Dalmiya was only interested in money.

After Australia hosted the Cup in 1992, the subcontinent got the tournament again in 1996, with Sri Lanka joining to make up a third host. It was after this that the ICC decided that Test teams would play each other in order to be able to declare one team or the other as the top playing nation. Dalmiya was able to push his idea through because with two successful World Cups behind him, he had shown the rest how to really capitalise on the game.

And he could also get a few items on his own agenda through. Bangladesh is East Bengal; it was formerly a part of Pakistan when partition took place. In 1971, Bangladesh became a separate country after a war of liberation. The country has no cricket culture; the game that people there are crazy about is football.

Bangladeshi cricket officials had good connections to Dalmiya. Hence when it was decided to expand the number of cricket-playing nations with Test status to 10, Bangladesh got the nod ahead of Kenya.

The African nation at that time had a much better team than Bangladesh. And if it had been promoted, many players from South Africa who did not make it to the top would, no doubt, have come over, qualified and played for the country as has happened with Zimbabwe. There are plenty of expatriate Indians in Kenya too.

But ethnic connections take precedence in cricket which had the stink of colonialism for a long, long time. And so Bangladesh made the grade and began to lose Test matches.

To get an idea of the relative merits of teams, look at Sri Lanka. The country was given Test status in 1981. By 1996, it had won the World Cup. That’s because it has a cricketing history, even though it was only a junior member of the cricketing nations. People there are crazy about the game and it is the country’s national sport.

Zimbabwe has fared worse than Bangladesh since it was given Test status in 1992 but then it has suffered badly due to the political instability caused by the dictatorship of Robert Mugabe. In 101 Tests, Zimbabwe has 11 wins and 64 losses; in 98 Tests, Bangladesh has won eight and lost 74.

Cronyism produces mediocrity. The case of Bangladesh is a very good case in point.

Planet DebianSven Hoexter: moto g falcon up and running with LineageOS 14.1 nightly

After a few weeks of running Exodus on my moto g falcon, I've now done a full wipe again and moved on to the LineageOS nightly from 20170213, though that build is no longer online at the moment. It's running smooth so far for me, but according to Reddit there was an issue with the Google Play edition of the phone. Since I don't use gapps anyway, I don't care.

The only issue I see so far is that I can not reach the flash menu in the camera app. It's hidden behind a grey bar. Not nice but not a show stopper for me either.

Planet DebianArturo Borrero González: About process limits


The other day I had to deal with an outage in one of our LDAP servers, which is running the old Debian Wheezy (yeah, I know, we should update it).

We are running openldap, the slapd daemon. And after searching the log files, the cause of the outage was obvious:

slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files

[Please read “About process limits, round 2” for updated info on this issue]

I couldn’t believe that openldap is using tcp_wrappers (or libwrap), an ancient piece of software that hasn’t been updated for years and has been replaced in many ways by more powerful tools (like nftables). I was blinded by this and ran to open a Debian bug against openldap: #854436 (openldap: please don’t use tcp-wrappers with slapd).

The reply from Steve Langasek was clear:

If people are hitting open file limits trying to open two extra files,
disabling features in the codebase is not the correct solution.

Obviously, the problem was somewhere else.

I started investigating system limits, which seem to have 2 main components:

  • system-wide limits (you tune these via sysctl, they live in the kernel)
  • user/group/process limits (via limits.conf, ulimit and prlimit)

According to my research, my slapd daemon was being hit by the latter. I reviewed the default system-wide limits and they seemed OK. So, let’s change the other limits.

Most of the documentation around the internet points you to the /etc/security/limits.conf file, which is then read by pam_limits. You can check current limits using the ulimit bash builtin.

In the case of my slapd:

arturo@debian:~% sudo su openldap -s /bin/bash
openldap@debian:~% ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7915
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 2000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

This seems to suggest that the openldap user is constrained to 1024 open files (plus some more if we check the hard limit). The 1024 limit seems low for a rather busy service.
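The same soft/hard pair that ulimit reports can also be read programmatically from Python's standard resource module; a minimal sketch:

```python
import resource

# RLIMIT_NOFILE corresponds to the "open files" row in `ulimit -a`.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d hard=%d" % (soft, hard))
```

On the system described above this would show a soft limit of 1024 with a somewhat higher hard limit.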

According to most of the internet docs, I’m supposed to put this in /etc/security/limits.conf:

#<domain>      <type>  <item>         <value>
openldap	soft	nofile		1000000
openldap	hard	nofile		1000000

I should check as well that pam_limits is loaded, in /etc/pam.d/other:

session		required

After reloading the openldap session, you can check that the limits are indeed changed, as reported by ulimit. But at some point, the slapd daemon starts to drop connections again. Things start to turn weird here.

The changes we made until now don’t work, probably because when the slapd daemon is spawned at bootup (by root, sysvinit in this case) no pam mechanisms are triggered.

So, I was forced to learn a new thing: process limits.

You can check the limits for a given process this way:

arturo@debian:~% cat /proc/$(pgrep slapd)/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             16000                16000                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16000                16000                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Good, seems we have some more limits attached to our slapd daemon process.

If we search the internet to know how to change process limits, most of the docs points to a tool known as prlimit. According to the manpage, this is a tool to get and set process resource limits, which is just what I was looking for.

According to the docs, the prlimit system call is supported since Linux 2.6.36, and I’m running 3.2, so no problem there. Things look promising. But no, more problems: the prlimit tool is not included in the Debian Wheezy release.

A simple call to a single system call was not going to stop me now, so I searched the web some more until I found this useful manpage: getrlimit(2).

There is a sample C code included in the manpage, in which we only need to replace RLIMIT_CPU with RLIMIT_NOFILE:

#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

#define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); \
                        } while (0)

int
main(int argc, char *argv[])
{
    struct rlimit old, new;
    struct rlimit *newp;
    pid_t pid;

    if (!(argc == 2 || argc == 4)) {
        fprintf(stderr, "Usage: %s <pid> [<new-soft-limit> "
                "<new-hard-limit>]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    pid = atoi(argv[1]);        /* PID of target process */

    newp = NULL;
    if (argc == 4) {
        new.rlim_cur = atoi(argv[2]);
        new.rlim_max = atoi(argv[3]);
        newp = &new;
    }

    /* Set open file limit of target process; retrieve and display
       previous limit */

    if (prlimit(pid, RLIMIT_NOFILE, newp, &old) == -1)
        errExit("prlimit-1");
    printf("Previous limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);

    /* Retrieve and display new open file limit */

    if (prlimit(pid, RLIMIT_NOFILE, NULL, &old) == -1)
        errExit("prlimit-2");
    printf("New limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);

    exit(EXIT_SUCCESS);
}

And then compile it like this:

arturo@debian:~% gcc limits.c -o limits

We can then call this new binary like this:

arturo@debian:~% sudo ./limits $(pgrep slapd) 1000000 1000000

Finally, the limit seems OK:

arturo@debian:~% cat /proc/$(pgrep slapd)/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             16000                16000                processes
Max open files            1000000              1000000              files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16000                16000                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Don’t forget to apply this change every time the slapd daemon starts.
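For completeness: on systems with Python 3.4 or later, the same get/set can be done without any C, because the resource module exposes the prlimit system call directly. A sketch, using our own PID here rather than slapd's, and only raising the soft limit up to the existing hard limit (raising the hard limit of another process needs root, just like the C tool):

```python
import os
import resource

pid = os.getpid()  # for the article's case, substitute the slapd PID

# Get the current limits without changing them (the "get" form of prlimit(2)).
soft, hard = resource.prlimit(pid, resource.RLIMIT_NOFILE)
print("Previous limits: soft=%d; hard=%d" % (soft, hard))

# Raise the soft limit to the hard limit; an unprivileged process may do
# this to itself, it just cannot raise the hard limit.
resource.prlimit(pid, resource.RLIMIT_NOFILE, (hard, hard))
print("New limits: soft=%d; hard=%d" % (hard, hard))
```

resource.prlimit is Linux-only, so on Wheezy's Python this wasn't available either, which is why the C detour above was necessary.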

Nobody found this issue before? Really?

Planet Linux AustraliaStewart Smith: j-core + Numato Spartan 6 board + Fedora 25

A couple of changes to made it easy for me to get going:

  • In order to make ModemManager not try to think it’s a “modem”, create /etc/udev/rules.d/52-numato.rules with the following content:
    # Make ModemManager ignore Numato FPGA board
    ATTRS{idVendor}=="2a19", ATTRS{idProduct}=="1002", ENV{ID_MM_DEVICE_IGNORE}="1"
  • You will need to install python3-pyserial and minicom
  • The minicom command line i used was:
    sudo stty -F /dev/ttyACM0 -crtscts && minicom -b 115200 -D /dev/ttyACM0

and along with the instructions on, I got it to load a known good build.

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 1, week 3

This is a global week in HASS for primary students. Our youngest students are marking countries around the world where they have family members, slightly older students are examining the Mayan calendar, while older students get nearer to Australia, examining how people reached Australia and encountered its unique wildlife.

Foundation to Year 3

Mayan date

Foundation students doing the Me and My Global Family unit (F.1) are working with the world map this week, marking countries where they have family members with coloured sticky dots. Those doing the My Global Family unit (F.6), and students in Years 1 to 3 (Units 1.1; 2.1 and 3.1), are examining the Mayan calendar this week. The Mayan calendar is a good example of an alternative type of calendar, because it is made up of different parts, some of which do not track the seasons, and is cyclical, based on nested circles. The students learn about the 2 main calendars used by the Mayans – a secular and a celebratory sacred calendar, as well as how the Mayans divided time into circles running at different scales – from the day to the millennium and beyond. And no, in case anyone is still wondering – they did not predict the end of the world in 2012, merely the end of one particular long-range cycle, and hence, the beginning of a new one…

Years 3 to 6

Lake Mungo, where people lived at least 40,000 years ago.

Students doing the Exploring Climates unit (3.6), and those in Years 4 to 6 (Units 4.1, 5.1 and 6.1), are examining how people reached Australia during the Ice Age, and what Australia was like when they arrived. People had to cross at least 90km of open sea to reach Australia, even during the height of the Ice Age, and this sea gap led to the relative isolation of animals in Australia from others in Asia. This phenomenon was first recorded by Alfred Wallace, who drew a line on a map marking the change in fauna. This line became known as the Wallace line, as a result. Students will also examine the archaeological evidence, and sites of the first people in Australia, ancestors of Aboriginal people. The range of sites across Australia, with increasingly early dates, amply demonstrate the depth of antiquity of Aboriginal knowledge and experience in Australia.

Planet DebianReproducible builds folks: Reproducible Builds: week 94 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday February 5 and Saturday February 11 2017:

Upcoming events

Patches sent upstream

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Daniel Shahaf:

"Z. Ren":

Reviews of unreproducible packages

83 package reviews have been added, 8 have been updated and 32 have been removed in this week, adding to our knowledge about identified issues.

5 issue types have been added:

1 issue type has been updated:

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris Lamb (7)
  • gregory bahde (1)

diffoscope development

diffoscope versions 71, 72, 73, 74 & 75 were uploaded to unstable by Chris Lamb:

strip-nondeterminism development

strip-nondeterminism 0.030-1 was uploaded to unstable by Chris Lamb: development

reproducible-website development


This week's edition was written by Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianElizabeth Ferdman: 10 Week Progress Update for PGP Clean Room

This Valentine’s Day I’m giving everyone the gift of GIFs! Because who wants to stare at a bunch of code? Or read words?! I’ll make this short and snappy since I’m sure you’re looking forward to a romantic night with your terminal.

A script called create-raid already exists in the main repository so I decided to add an activity for that in the main menu.

Here’s what the default activity for creating the master and subkeys will look like:

This activity should make key generation faster and more convenient for the user. The dialog allows the user to enter additional UIDs at the same time as she initially creates the keys (there’s another activity for adding UIDs later). The dialog won’t ask for a comment in the UID, just name and email.

The input boxes come with some defaults that were outlined in the wiki for this project, such as rsa4096 for the master and 1y for the expiry. However the user can still enter her own values for fields like algo and expiry. The user won’t customize usage here, though. There should be separate activities for creating a custom primary and custom subkeys. Here, the user creates a master key [SC], an encryption key [E], and optionally an additional signing [SC], encryption [E], and authentication key [A].

The last three weeks of the internship will consist of implementing more of the frontend dialogs for the activities in the main menu, validating user input, and testing.

Thanks for reading <3


Planet DebianVincent Sanders: The minority yields to the majority!

Deng Xiaoping (who succeeded Mao) expounded this view and obviously did not depend on a minority to succeed. In open source software projects we often find ourselves implementing features of interest to a minority of users to keep our software relevant to a larger audience.

As previously mentioned I contribute to the NetSurf project and the browser natively supports numerous toolkits for numerous platforms. This produces many challenges in development to obtain the benefits of a more diverse user base. As part of the recent NetSurf developer weekend we took the opportunity to review all the frontends to make a decision on their future sustainability.

Each of the nine frontend toolkits was reviewed in turn and the results of that discussion published. This task was greatly eased because we were able to hold the discussion face to face; over time I have come to the conclusion that some tasks in open source projects greatly benefit from this form of interaction.

Netsurf running on windows showing this blog post
Coding, and the day to day discussions around it, can be easily accommodated via IRC and email. Decisions affecting a large area of code are much easier with the subtleties of direct interpersonal communication. An example of this is our decision to abandon the cocoa frontend (the toolkit used on Mac OS X) while keeping the windows frontend.

The cocoa frontend was implemented by Sven Weidauer in 2011. Unfortunately Sven did not continue contributing to this frontend afterwards, and it has become the responsibility of the core team to maintain. Because NetSurf has a comprehensive CI system that compiles the master branch on every commit, any changes that negatively affected the cocoa frontend were immediately obvious.

Thus compilation issues were fixed promptly, but these fixes were only ever compile tested, and at some point the Mac OS X build environments changed, resulting in an application that crashes when used. Despite repeated requests for assistance to fix the cocoa frontend over the last eighteen months, no one has come forward.

When the topic was discussed amongst the developers it quickly became apparent that no one had any objections to removing cocoa support. In contrast, we decided to keep the Windows frontend despite it having many similar issues to cocoa. There was almost immediate consensus on both decisions, even though no individual had advocated a position beforehand.

This was a single example, but it highlights the benefit of a distributed development team holding a physical meeting from time to time. However, that was not the main point I wanted to discuss: the incident also highlights that supporting a feature useful only to a minority of users can have a disproportionate cost.

The cost of a feature for an open source project is usually a collection of several factors:
Developer time
Arguably the greatest resource of a project is the time its developers can devote to it. Unless it is a very large, well supported project like the Linux kernel or LibreOffice, almost all developer time is voluntary.
Developer focus
Any given developer is likely to work on an area of code that interests them in preference to one that does not. This means that if a developer must do work which does not interest them, they may lose focus and not work on the project at all.
Developer skillset
A given developer may not have the skillset necessary to work on a feature. This is especially acute for minority platforms, which often have very few skilled developers available.
Developer access
It should be obvious that software requiring only commodity hardware and software to develop is much cheaper than software that needs something special. To use our earlier example, the cocoa frontend required an Apple computer running Mac OS X to compile and test; this resource was very limited and the project only had access to two such systems via remote desktop. These systems also had to serve as CI builders and required physical system administration, as they could not be virtualized.
Once a project releases useful software it generally gains users outside of the developers. Supporting users consumes developer time and generally causes them to focus on things other than code that interests them.

While most developers have enough pride in what they produce to fix bugs, users must always remember that the main freedom they get from OSS is that they received the code and can change it themselves; there is no requirement for a developer to do anything for them.
A project requires a website, code repository, wiki, CI systems etc., which must all be paid for. NetSurf, for example, is fortunate to have Pepperfish look after our website hosting at favourable rates, Mythic Beasts provide exceptionally good rates for the CI system virtual machine along with hardware donations (our Apple Macs were donated by them), and Collabora provide physical hosting for our virtual machine server.

Despite these incredibly good deals the project still spends around 200 GBP (roughly 250 USD) a year on overheads. These services obviously benefit the whole project, including minority platforms, but are generally donated by users of the more popular platforms.
The benefits of a feature are similarly varied:
Developer learning
A developer may implement a feature to allow them to learn a new technology or skill.
Project diversity
A feature may mean the project gets built in a new environment which reveals issues or opportunities in unconnected code. For example the Debian OS is built on a variety of hardware platforms and sometimes reveals issues in software by compiling it on big endian systems. These issues are often underlying bugs that are causing errors which are simply not observed on a little endian platform.
More users
Gaining users of the software is often a benefit, and although most OSS developers are contributing for personal reasons, having their work appreciated by others is often a factor. This might be seen as the other side of the support cost.
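The big-endian point in the "Project diversity" item can be made concrete with a short Python sketch (my own illustration, not from the post): code that relies on the host's native byte order produces different bytes on big- and little-endian builders, while explicit byte orders are portable.

```python
import struct

n = 0x01020304

# Explicit byte orders give the same bytes on every platform.
little = struct.pack('<I', n)  # least-significant byte first
big = struct.pack('>I', n)     # most-significant byte first

assert little == b'\x04\x03\x02\x01'
assert big == b'\x01\x02\x03\x04'

# struct.pack('=I', n) instead uses the host's native order, so code that
# writes native-order bytes to disk or the network "works" on a
# little-endian machine yet breaks when rebuilt on a big-endian one;
# exactly the class of latent bug a big-endian Debian build can expose.
```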

In the end the maintainers of a project often have to consider all of these factors and more to arrive at a decision about a feature, especially one only useful to a minority of users. Such decisions are rarely taken lightly, as they often remove another developer's work, and the question is often: what would I think about my contributions being discarded?

As a postscript, if anyone is willing to pay the costs to maintain the NetSurf cocoa frontend I have not removed the code just yet.

Planet DebianPetter Reinholdtsen: Ruling ignored our objections to the seizure of (#domstolkontroll)

A few days ago, we received the ruling from my day in court. The case in question is a challenge of the seizure of the DNS domain. The ruling simply did not mention most of our arguments, and seemed to take everything ØKOKRIM said at face value, ignoring our demonstration and explanations. But it is hard to tell for sure, as we still have not seen most of the documents in the case and thus were unprepared and unable to contradict several of the claims made in court by the opposition. We are considering an appeal, but it is partly a question of funding, as it is costing us quite a bit to pay for our lawyer. If you want to help, please donate to the NUUG defense fund.

The details of the case, as far as we know them, are available in Norwegian from the NUUG blog. This also includes the ruling itself.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2017

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 159 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly thanks to Exonet joining us.

The security tracker currently lists 37 packages with a known CVE and the dla-needed.txt file 36. The situation is roughly similar to last month even though the number of open issues increased slightly.

Thanks to our sponsors

New sponsors are in bold.


Planet DebianBen Hutchings: Debian LTS work, January 2017

I was assigned 12.75 hours of work by Freexian's Debian LTS initiative and carried over 5.5 from December. I worked only 3 hours, so I carry over 15.25 hours - but I will probably give up some of those to the general pool.

I spent some time finishing off the linux security update mentioned in my December entry. I also backported the current version of wireless-regdb - not a security update, but an important one anyway - and issued DLA 785-1.

CryptogramHacking Back

There's a really interesting paper from George Washington University on hacking back: "Into the Gray Zone: The Private Sector and Active Defense against Cyber Threats."

I've never been a fan of hacking back. There's a reason we no longer issue letters of marque or allow private entities to commit crimes, and hacking back is a form of vigilante justice. But the paper makes a lot of good points.

Here are three older papers on the topic.

Planet DebianDirk Eddelbuettel: RcppTOML 0.1.1

Following up on the somewhat important RcppTOML 0.1.0 release, which brought RcppTOML to Windows, we have a first minor update 0.1.1. Two things changed: we once again updated the upstream code from Chase Geigle's cpptoml, which now supports Date types too, and we added the ability to parse TOML from strings as opposed to only from files.

TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML -- though sadly these may be too ubiquitous now.
TOML is making good inroads with newer and more flexible projects such as the Hugo static blog compiler, or the Cargo system of Crates (aka "packages") for the Rust language.
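As a small illustration (my own example, not taken from the package documentation), a TOML fragment pairs human-readable syntax with strong typing, including first-class dates:

```toml
title = "Example configuration"

[owner]
name = "Jane Doe"            # string
dob = 1979-05-27T07:32:00Z   # first-class datetime

[build]
jobs = 8                     # integer
verbose = false              # boolean
flags = [ "-O2", "-Wall" ]   # array of strings
```

A typo such as `jobs = "8,"` is a parse error here, not silently corrupted data.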

Changes in version 0.1.1 (2017-xx-yy)

  • Synchronized multiple times with cpptoml upstream, adding support for local datetime and local date and more (PR #9, #10, PR #11)

  • Dates are now first class types, and some support for local versus UTC times was added (though it may be advisable to stick with UTC)

  • Parsing from (R) character variables is now supported as well

  • Output from print.toml no longer prints extra newlines

Courtesy of CRANberries, there is a diffstat report for this release.

More information and examples are on the RcppTOML page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sociological ImagesThe Problem with Femvertising (or ‘feminist’ advertising)

The 2017 Super Bowl was an intense competition full of unexpected winners and high entertainment value. Alright, I didn’t actually watch the game, nor do I even know what teams were playing. I’m referring to the Super Bowl’s secondary contest, that of advertising. The Super Bowl is when many companies will roll out their most expensive and innovative advertisements. And this year there was a noticeable trend of socially aware advertising. Companies like Budweiser and 84 Lumber made statements on immigration. Airbnb and Coca-Cola celebrated American diversity. This socially conscious advertising is following the current political climate and riding the wave of increasing social movements. However, it also follows the industry’s movement towards social responsibility and activism.

One social activism Super Bowl commercial that created a significant buzz on social media this year was Audi:

The advertisement shows a young girl soapbox racing and a voiceover of her father wondering how to tell her about the difficulties she is bound to face just for being female. This commercial belongs to a form of socially responsible advertising often referred to as femvertising. Femvertising is a term used to describe mainstream commercial advertising that attempts to promote female empowerment or challenge gender stereotypes.

Despite the Internet’s response to this advertisement with a sexist pushback against feminism, this commercial is not exactly feminist. While at its core this advertisement is sending a fundamentally feminist argument of gender equality and fair wages, it feels disempowering to have a man explain sexism. It feels a little like “mansplaining”, with moments reminiscent of the male “savior” trope. There is also a very timid relationship between the fight for gender equality and the product being sold. The advertisement is attempting to associate Audi with feminist ideals, but the reality is that with no female board members Audi is not exactly practicing what they preach. There are many reasons why ‘femvertising’ in general is problematic (not including the contested relationship between feminism and capitalism). Here I will point out three problems with this new trend of socially ‘responsible’ femvertising.

1. The industry

The advertising industry is not known for its diversity, nor is it known for its accurate representation of women. So right away the industry doesn’t instill confidence in those hoping for more socially aware and diverse advertising. The way advertising works is to promote a brand identity by drawing on social symbols that make products like Chanel perfume a signifier of French sophistication and Marlboro cigarettes an icon of American rugged masculinity. Companies are therefore selling an identity just as much as the product itself, and corporations that employ feminist advertising are appropriating feminist ideologies.

They are appropriating not social signifiers of an idealized lifestyle, but rather the whole historical baggage and gendered experiences of women. This appropriation at its core is not for social progress and empowerment, but to sell a product. The whole industry functions by using these identities for material gain. As feminism becomes more popular with young women, it becomes a profitable and desirable identity to deploy. The whole concept runs against feminist ideology, because feminism is not for sale. Using feminist arguments to sell products may be better than perpetuating gender stereotypes, but it is still using these ideologies like a new style of dress that can be taken off at night, rather than embodying the messages of feminism being borrowed. This brings us to the next point.

2. Sometimes it’s the Wrong Solution

Let’s consider Dove, the toiletries company that has gained a fair amount of notoriety for its social advertising and small-scale outreach programs for women and girls. Their advertisements are famous for endorsing the body positive movement. But in general the connection between female empowerment and what they actually sell is weak, which makes it feel insincere and a lot like pandering. Why doesn’t Dove just make products that are more aligned with feminist ideologies in the first place? If feminist consumers are what they want, then make feminist products. Don’t try to apply feminist concepts as an afterthought in hopes of increasing consumer sales.

Dove is a beauty company that is benefiting from products aimed at promoting a very gendered ideal of beauty. The company itself is part of the problem, so its femvertising makes me feel like Dove (and their parent company Unilever) is trying to deny that they play a huge role in creating the very stereotypes they claim to be challenging. If you want to really empower women then don’t just do it in your branding: start with the products you are making, examine your business model, and challenge the industry as a whole. Feminist concepts should run through the entire core of your business before you try to sell it to your consumers. We don’t need feminist advertising; we need a system that is not actively continuing to increase a gender divide where women are meant to be beautiful and expected to purchase the beauty products that Dove sells. Feminist-inspired advertising doesn’t solve this underlying core problem, it just masks it. Femvertising therefore is often the wrong solution, or really not even a solution at all.

3. Femvertising shouldn’t have to be a term

We really shouldn’t be in a situation where all advertising is so un-feminist and so degrading towards women that there is a term for advertising that simply depicts women as powerful. When the bar is set so low we shouldn’t praise companies for doing the minimum required to represent women both accurately and positively. We should be holding our advertising, media, and all other forms of visual representation to much higher standards. Femvertising shouldn’t be a thing because we shouldn’t have to give a term to what responsible advertising agencies should be aiming for when they represent women.

So as not to leave you on a depressing and negative note, here are three advertisements that should be acknowledged for actively challenging the norms:


Nichole Fernández is a PhD candidate in sociology at the University of Edinburgh specializing in visual sociology. Her PhD research explores the representation of the nation in tourism advertisements and can be found at Follow Nichole on twitter here.


Planet Linux AustraliaBinh Nguyen: Life in Syria, Why the JSF isn't Worth It, and More

Given what has happened, I wanted to see what has been happening inside Syria: - complicated colonial history with both the British and the French. Has had limited conflict with some of its neighbours including

Worse Than FailureCodeSOD: A Sample of Heck

An email from Andrea Ci arrived in our inbox, with nothing more than some code and a very simple subject line: “VB Conversion: a year in hell”.

A lot of people have that experience when encountering Visual Basic code, especially when it’s VB6, not VB.Net. Even so, could it really be that bad? Well, let’s look at the sample Andrea provided.

Public Sub sort(maxItems As Integer)
If m_Col.Count > maxItems Then
     Dim colNew As Collection: Set colNew = New Collection
     Dim modulo As Integer: modulo = m_Col.Count / maxItems
        Dim I As Integer
     For I = 1 To m_Col.Count
        If I Mod modulo = 0 Then
           colNew.Add m_Col(I)
        End If
     Next I
    Set m_Col = colNew
End If
End Sub

This subroutine is actually not too bad, though thanks to a weird logical approach and a terrible name, it took a solid five minutes of staring at this code before I got what was going on here. First off, this shouldn’t be called sort. That much, I think, is obvious. Try sample or skip. That’s all this does: if you have a collection with 9 items in it, and you want a collection with only 3 items in it, this will take every third item of the original collection.

In the end, there are three key problems with this code that lead to it being posted here. First, is the function name. Talk about misleading. Second is creating a variable called modulo and initializing it by division. Sure, it’s used as part of a modulus expression later, but still.

The real source of confusion, at least for me, arises from what I believe is a lack of language knowledge: they didn’t know how to change the increment on the For loop. Instead of the If I Mod modulo = 0 expression, they could have simply written the for loop thus: For I = 1 To m_Col.Count Step modulo.
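For what it's worth, the intended behaviour is easy to state in a few lines of Python (the function name and structure are mine; this is a sketch of the logic, not a translation of the VB6):

```python
def sample(items, max_items):
    """Thin a collection to at most max_items elements by keeping
    every n-th element, like the misnamed VB6 'sort' routine."""
    if len(items) <= max_items:
        return list(items)
    step = len(items) // max_items
    # Equivalent of "For I = 1 To Count Step step": start at the
    # step-th element (1-based) and keep every step-th element after it.
    return list(items)[step - 1::step]

print(sample([10, 20, 30, 40, 50, 60, 70, 80, 90], 3))  # [30, 60, 90]
```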

So, Andrea, I’m sorry, but this isn’t Hell. A codebase full of code like this is like a third-ring suburb of Hell. Sure, you’ve got an 85-minute commute into one of the first-ring suburbs for work, the only source of entertainment within twenty miles is the shopping mall, and your HOA just fined you for planting tomatoes in your garden, but at least the schools are probably decent.


Planet Linux Australiasthbrx - a POWER technical blog: High Power Lustre

(Most of the hard work here was done by fellow blogger Rashmica - I just verified her instructions and wrote up this post.)

Lustre is a high-performance clustered file system. Traditionally the Lustre client and server have run on x86, but both the server and client will also work on Power. Here's how to get them running.


Lustre normally requires a patched 'enterprise' kernel - normally an old RHEL, CentOS or SUSE kernel. We tested with a CentOS 7.3 kernel. We tried to follow the Intel instructions for building the kernel as much as possible - any deviations we had to make are listed below.

Setup quirks

We are told to edit ~/kernel/rpmbuild/SPEC/kernel.spec. This doesn't exist because the directory is SPECS not SPEC: you need to edit ~/kernel/rpmbuild/SPECS/kernel.spec.

I also found there was an extra quote mark in the supplied patch script after -lustre.patch. I removed that and ran this instead:

for patch in $(<"3.10-rhel7.series"); do \
      patch_file="$HOME/lustre-release/lustre/kernel_patches/patches/${patch}"; \
      cat "${patch_file}" >> "$HOME/lustre-kernel-x86_64-lustre.patch"; \
done
The fact that there is 'x86_64' in the patch name doesn't matter as you're about to copy it under a different name to a place where it will be included by the spec file.

Building for ppc64le

Building for ppc64le was reasonably straight-forward. I had one small issue:

[build@dja-centos-guest rpmbuild]$ rpmbuild -bp --target=`uname -m` ./SPECS/kernel.spec
Building target platforms: ppc64le
Building for target ppc64le
error: Failed build dependencies:
       net-tools is needed by kernel-3.10.0-327.36.3.el7.ppc64le

Fixing this was as simple as a yum install net-tools.

This was sufficient to build the kernel RPMs. I installed them and booted to my patched kernel - so far so good!

Building the client packages: CentOS

I then tried to build and install the RPMs from lustre-release. This repository provides the sources required to build the client and utility binaries.

./configure and make succeeded, but when I went to install the packages with rpm, I found I was missing some dependencies:

error: Failed dependencies:
        ldiskfsprogs >= 1.42.7.wc1 is needed by kmod-lustre-osd-ldiskfs-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le
    sg3_utils is needed by lustre-iokit-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le
        attr is needed by lustre-tests-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le
        lsof is needed by lustre-tests-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le

I was able to install sg3_utils, attr and lsof, but I was still missing ldiskfsprogs.

It seems we need the lustre-patched version of e2fsprogs - I found a mailing list post to that effect.

So, following the instructions on the walkthrough, I grabbed the SRPM and installed the dependencies: yum install -y texinfo libblkid-devel libuuid-devel

I then tried rpmbuild -ba SPECS/e2fsprogs-RHEL-7.spec. This built but failed tests. Some failed because I ran out of disk space - they were using 10s of gigabytes. I found that there were some comments in the spec file about this with suggested tests to disable, so I did that. Even with that fix, I was still failing two tests:

  • f_pgsize_gt_blksize: Intel added this to their fork, and no equivalent exists in the master e2fsprogs branches. This relates to Intel specific assumptions about page sizes which don't hold on Power.
  • f_eofblocks: This may need fixing for large page sizes, see this bug.

I disabled the tests by adding the following two lines to the spec file, just before make %{?_smp_mflags} check.

rm -rf tests/f_pgsize_gt_blksize
rm -rf tests/f_eofblocks

With those tests disabled I was able to build the packages successfully. I installed them with yum localinstall *1.42.13.wc5* (I needed that rather weird pattern to pick up important RPMs that didn't fit the e2fs* pattern - things like libcom_err and libss)

Following that I went back to the lustre-release build products and was able to successfully run yum localinstall *ppc64le.rpm!

Testing the server

After disabling SELinux and rebooting, I ran the test script:

sudo /usr/lib64/lustre/tests/

This spat out one scary warning:

mount.lustre FATAL: unhandled/unloaded fs type 0 'ext3'

The test did seem to succeed overall, and it would seem that this is a known problem, so I pressed on undeterred.

I then attached a couple of virtual harddrives for the metadata and object store volumes, and having set them up, proceeded to try to mount my freshly minted lustre volume from some clients.

Testing with a ppc64le client

My first step was to test whether another ppc64le machine would work as a client.

I tried with an existing Ubuntu 16.04 VM that I use for much of my day to day development.

A quick google suggested that I could grab the lustre-release repository and run make debs to get Debian packages for my system.

I needed the following dependencies:

sudo apt install module-assistant debhelper dpatch libsnmp-dev quilt

With those the packages built successfully, and could be easily installed:

dpkg -i lustre-client-modules-4.4.0-57-generic_2.9.52-60-g1d2fbad-dirty-1_ppc64el.deb lustre-utils_2.9.52-60-g1d2fbad-dirty-1_ppc64el.deb

I tried to connect to the server:

sudo mount -t lustre $SERVER_IP@tcp:/lustre /lustre/

Initially I wasn't able to connect to the server at all. I remembered that (unlike Ubuntu), CentOS comes with quite an aggressive firewall by default. I ran the following on the server:

systemctl stop firewalld

And voila! I was able to connect, mount the lustre volume, and successfully read and write to it. This is very much an over-the-top hack - I should have poked holes in the firewall to allow just the ports lustre needed. This is left as an exercise for the reader.

Testing with an x86_64 client

I then tried to run make debs on my Ubuntu 16.10 x86_64 laptop.

This did not go well - I got the following error:

liblustreapi.c: In function ‘llapi_get_poollist’:
liblustreapi.c:1201:3: error: ‘readdir_r’ is deprecated [-Werror=deprecated-declarations]

This looks like one of the new errors introduced in recent GCC versions, and is a known bug. To work around it, I found the following stanza in a lustre/autoconf/lustre-core.m4, and removed the -Werror:

AS_IF([test $target_cpu == "i686" -o $target_cpu == "x86_64"],
        [CFLAGS="$CFLAGS -Wall -Werror"])

Even this wasn't enough: I got the following errors:

/home/dja/dev/lustre-release/debian/tmp/modules-deb/usr_src/modules/lustre/lustre/llite/dcache.c:387:22: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
         .d_compare = ll_dcompare,
/home/dja/dev/lustre-release/debian/tmp/modules-deb/usr_src/modules/lustre/lustre/llite/dcache.c:387:22: note: (near initialization for ‘ll_d_ops.d_compare’)

I figured this was probably because Ubuntu 16.10 has a 4.8 kernel, and Ubuntu 16.04 has a 4.4 kernel. Work on supporting 4.8 is ongoing.

Sure enough, when I fired up a 16.04 x86_64 VM with a 4.4 kernel, I was able to build and install fine.

Connecting didn't work first time - the guest failed to mount, but I did get the following helpful error on the server:

LNetError: 2595:0:(acceptor.c:406:lnet_acceptor()) Refusing connection from insecure port 1024

Refusing insecure port 1024 made me think that perhaps the NATing that qemu was performing for me was interfering - perhaps the server expected to get a connection where the source port was privileged, and qemu wouldn't be able to do that with NAT.

Sure enough, switching NAT to bridging was enough to get the x86 VM to talk to the ppc64le server. I verified that ls, reading and writing all succeeded.

Next steps

The obvious next steps are following up the disabled tests in e2fsprogs, and doing a lot of internal performance and functionality testing.

Happily, it looks like Lustre might be in the mainline kernel before too long - parts have already started to go in to staging. This will make our lives a lot easier: for example, the breakage between 4.4 and 4.8 would probably have already been picked up and fixed if it was the main kernel tree rather than an out-of-tree patch set.

In the long run, we'd like to make Lustre on Power just as easy as Lustre on x86. (And, of course, more performant!) We'll keep you up to date!

(Thanks to fellow bloggers Daniel Black and Andrew Donnellan for useful feedback on this post.)


Harald WelteTowards a real SIGTRAN/SS7 stack in libosmo-sigtran

In the good old days, ever since the late 1980ies - and to a surprising extent even today - telecom signaling traffic has been carried over circuit-switched SS7 with its TDM lines as the physical layer, and not over an IP/Ethernet based transport.

When Holger first created OsmoBSC, the BSC-only version of OpenBSC, some 7-8 years ago, he needed to implement a minimal subset of SCCP wrapped in TCP, called SCCP Lite. This was due to the simple fact that the MSC it had to operate against implemented this non-standard protocol stacking, which was developed and deployed before the IETF SIGTRAN WG specified M3UA or SUA. But even after those were specified in 2004, 3GPP didn't specify how to carry A over IP in a standard way until the end of 2008, when a first A-interface-over-IP study was released.

As time passes, more modern MSCs of course still implement classic circuit-switched SS7, but appear to have dropped SCCPlite in favor of real AoIP as specified by 3GPP in the meantime. So it's time to add this to the Osmocom universe and OsmoBSC.

A couple of years ago (2010-2013) I implemented both classic SS7 (MTP2/MTP3/SCCP) as well as SIGTRAN stackings (M2PA/M2UA/M3UA/SUA) in Erlang. The result has been used in some production deployments, but only with a relatively limited feature set. Unfortunately, this code has not received any contributions since, and I have to say that as an open source community project it has failed. Also, while Erlang might be fine for core network equipment, running it on a BSC really is overkill. Keep in mind that we often run OpenBSC on really small ARM926EJ-S based embedded systems, much more resource constrained than any smartphone of the last decade.

In the meantime (2015/2016) we also implemented some minimal SUA support for interfacing with UMTS femto/small cells via Iuh (see OsmoHNBGW).

So in order to proceed to implement the required SCCP-over-M3UA-over-SCTP stacking, I originally thought: well, take Holger's old SCCP code, remove it from the IPA multiplex below, and stack it on top of a new M3UA codebase that is copied partially from SUA.

However, this falls short of the goals in several ways:

  • The application shouldn't care whether it runs on top of SUA or SCCP; it should use a unified interface towards the SCCP provider. OsmoHNBGW and the SUA code already introduce such an interface based on the SCCP-User-SAP implemented using Osmocom primitives (osmo_prim). However, the old OsmoBSC/SCCPlite code doesn't have such an abstraction.
  • The code should be modular and reusable for other SIGTRAN stackings as required in the future

So I found myself sketching out what needs to be done and I ended up pretty much with a re-implementation of large parts. Not quite fun, but definitely worth it.

The strategy is:

And then finally stack all those bits on top of each other, rendering a fairly clean and modern implementation that can be used with the IuCS of the virtually unmodified OsmoHNBGW, OsmoCSCN and OsmoSGSN for testing.

Next steps in the direction of the AoIP are:

  • Implementation of the MTP-SAP based on the IPA transport
  • Binding the new SCCP code on top of that
  • Converting OsmoBSC code base to use the SCCP-User-SAP for its signaling connection

From that point onwards, OsmoBSC doesn't care anymore whether it transports the BSSAP/BSSMAP messages of the A interface over SCCP/IPA/TCP/IP (SCCPlite), SCCP/M3UA/SCTP/IP (3GPP AoIP), or even something like SUA/SCTP/IP.

However, the 3GPP AoIP specs (unlike SCCPlite) actually modify the BSSAP/BSSMAP payload. Rather than using Circuit Identifier Codes and then mapping the CICs to UDP ports based on some secret conventions, they actually encapsulate the IP address and UDP port information for the RTP streams. This is of course the cleaner and more flexible approach, but it means we'll have to do some further changes inside the actual BSC code to accommodate this.

Planet DebianShirish Agarwal: Density and accessibility

Around 2 decades back and a bit more, I was introduced to computers. The top of the line was the 386DX, which was mainly used as a fat server, and some lucky institutions had the 386SX, where if we were lucky we could play some games on it. I was pretty bad at Prince of Persia and most of the other games of the era, as most of them depended on lightning reflexes which I didn’t possess. Then 1997 happened and I was introduced to GNU/Linux, but my love of/for games still continued even though I was bad at most of them. The only saving grace was turn-based RPGs (role-playing games), which didn’t have permadeath, so you could plan your next move. Sometimes a wrong decision would lead to a place from which it was impossible to move further. As that decision might have been taken far, far back, the only recourse was to replay the game, which eventually led to giving up on most games of that kind.

Then, in around 2000, Maxis came out with The Sims. This was when I bought my first Pentium. I had never played a game which had you building and designing stuff, with no violence; the whole idea was to balance the priorities of getting new stuff, going to work, maintaining relationships, and still making sure you were eating, sleeping and having a good time. While I probably spent something close to 500 odd hours in the game, or even more, I spent the least amount of time building the house. It would be 4×4 when starting (you don't have much in-game money, and there is other stuff you want to buy as well), growing to 8×8 or, at the very grandest, 12×12. Only the first time did I spend any time figuring out where the bathroom, the kitchen and the bedroom should be; after that I could do it blindfolded. The idea behind my house design was simplicity and efficiency (for my character). I used to see other people's grand creations of their houses and couldn't understand why they made such big houses.

Now, a few days back, I saw a few episodes of a game called 'Stranded Deep'. The story and plot are simple. You, the player, are travelling in a chartered plane when suddenly lightning strikes (a game trope, as today's aircraft are much better able to deal with lightning strikes) and our hero or heroine washes up on a beach on a raft with the barest of possessions. The whole game is about trying to survive; once you get the hang of the basic mechanics and know what needs to be done, you can do it. The only thing the game doesn't have is farming, but as it has an unlimited procedural world, you just paddle, or go island-hopping with a boat motor, and take whatever you need.

What was interesting to me was seeing a gamer putting so much time and passion in making a house.

Watching that video, I realized that maybe because I live in a dense environment, even the designs we make, whether of houses or anything else, try to pack in more and more people rather than making sure that people are happy, which leads to my next point.

A couple of days back, I read Virali Modi's account of how she was molested three times while trying to use Indian Railways, and the petition she has made about it.

I do, of course, condemn the molestation as an affront to individual rights, freedom, liberty, free movement and dignity.

She points out a few of the root causes: for instance, the inability or unwillingness to let differently-abled people board first, and to wait until everybody has boarded properly before starting the train, are the most minimal steps that Indian Railways could take without spending even a paisa. The same could be said about sensitizing staff, although I have an idea of why Indian Railways does not employ women porters or women attendants for precisely this job.

I accompanied a blind gentleman friend a few times on Indian Railways, and let me tell you, it was one of the most unpleasant experiences. The bogie given to them is similar to, or even worse than, what you see in unreserved compartments. The toilets were and are smelly, and the gap between the platform and the train was and is considerable for everybody: blind people, differently-abled people and elderly people alike. This is one of the causes of the accidents which happen almost every day on Indian Railways. I also learnt that blind people in particular listen for a sort of low-frequency whistle which tells them where along the platform the disabled coach will stop. On a platform which could have anywhere between 1,500 and 2,000 people, navigating wouldn't be easy. I don't know about other places, but Indian railway stations need to learn a lot to become spaces that the differently abled can navigate by themselves.

Pune Station handles (originating or passing through) around 200 odd trains a day, and adding all the specials and weekly trains that ply through would probably put another 5-10 trains into the mix. Each train carries anywhere between 750 and 1,000 odd people, so roughly 150,000-200,000 people pass through Pune Railway Station daily on the trains alone. Taking a broader estimate of everyone commuting through the station, a figure of around 750,000 people daily would not be far off. Pune Railway Station has 6 platforms, and if I spread them equally it comes to around 100,000 people per platform in 24 hours. Divide that by 24 hours and it comes to around 4,160 people per hour.
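The back-of-the-envelope arithmetic can be checked in a few lines of Python (all figures are rough estimates, as in the text):

```python
# Rough sanity check of the platform-load figures.
trains_per_day = 200                 # regular trains through Pune Station
passengers_per_train = (750, 1000)   # low and high estimates
platforms = 6

daily_on_trains = tuple(trains_per_day * p for p in passengers_per_train)
print(daily_on_trains)               # (150000, 200000)

# Using the post's broader figure of ~750,000 people at the station daily:
per_platform_per_day = 750_000 // platforms
per_platform_per_hour = 100_000 // 24   # after rounding down to 100,000
print(per_platform_per_day, per_platform_per_hour)   # 125000 4166
```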

Now take those figures and you see that the Pune platforms are under severe pressure. I have normalized many figures: for instance, just as at airports, there are specific windows when more trains come and go. From 0500 hrs in the morning to 2300 hrs at night there are a lot of trains, whereas the graveyard shift has windows where maintenance of tracks takes place and personnel change over.

I dunno if people can comprehend 4,000 odd people on a platform. Add to that the fact that you usually arrive at least an hour or two before your train departs, even if you are a healthy person, as Indian Railways has a habit of changing trains' platforms at the last minute.

So if you are a differently-abled person with some luggage for a long-distance train, the problems just multiply.

See, for instance, the drag accidents caused by the gap between railway bogies and platforms.

The width of the entrance to the bogie is probably between 30 and 40 inches, but the design is such that 5-10 inches are taken up on each side. I remembered that last year our current Prime Minister, Mr. Narendra Modi, launched the Accessible India Campaign with great fanfare, and we didn't hear much after that.

Unfortunately, the campaign site itself has latency and accessibility issues, besides not giving any real advice even if a person wants to know which building norms to follow to make an area accessible. This was easily seen in last year's audit in Delhi as well as other places. A couple of web searches later, I landed on a Canadian site which gave some idea of the width of a wheelchair itself as well as the additional room needed to manoeuvre.

Unfortunately, the best or the most modern coaches/bogies that Indian Railways has to offer are the LHB Bogies/Coaches.

Now, while these coaches/bogies are by themselves a big improvement on the ICF coaches which we still have and use, if you read the advice and directions shared on the Canadian site, they are far from satisfactory for people who are wheelchair-bound. According to the Government's own census records, 0.6% of the population have movement issues; taking all differently-abled people together, it comes to between 2 and 2.5% of the population, which is quite a bit. If 2-3 people out of every 100 are differently-abled, then we need to figure out something for them. While I don't have any ideas as to how we could improve the surroundings, it is clear that we need the change.

While I was thinking about and trying to understand some of these nuances, my memories shifted to my 'toilet' experiences at the Mumbai and Doha airports. As I had shared then, I was pleasantly surprised to see that at both airports the toilets were pretty wide; a part of me was happy, and a part of me saw the added space as wastefulness. With an understanding of the needs of differently-abled people, it starts to make a whole lot of sense. I am left wondering, though, where they go for the loo in an aircraft. The regular toilets are a tight fit even for obese people; I am guessing aircraft have toilets for differently-abled people as well.

Looking back at last year's conference, we had 2-3 differently-abled people, and I am guessing it wouldn't have been a pleasant experience for them. For instance, where we were staying at UCT had stairs and no lifts, so by default they were probably on the ground floor. The buildings where most of the talks happened were a few hundred metres away, and the shortest routes were by stairs; the roundabout way was by road, but with vehicles around it was probably not safe either. I am guessing they had to depend on other people to figure things out. There were so many places with stairs and no ramps, and even where there were ramps they were probably a bit steeper than the 1:12 gradient which is the advice given.

I have heard that this year's venue is also a bit challenging in terms of accessibility for differently-abled people. I am clueless as to how accessible differently-abled people found DebConf16. A related query: if a DebConf's final report mentions issues with accessibility, do the venues make any changes so that at some future date differently-abled people would have a better time? I know of Indian institutions' reluctance to change or to spend; I don't know how western countries do it. Any ideas or comments are welcome.

Filed under: Miscellaneous Tagged: #386, #accessibility, #air-travel, #Computers, #differently-abled, #Railways, gaming

Planet DebianDirk Eddelbuettel: Letting Travis keep a secret

More and more packages, be it for R or another language, now interface different application programming interfaces (APIs) exposed on the web. And many of these may require an API key, token, or account and password.

This traditionally poses a problem for automated tests such as those running on the popular Travis CI service, which integrates so well with GitHub. A case in point is the RPushbullet package, where Seth Wenchel and I have been making a few recent changes and additions.

And yesterday morning, I finally looked more closely into providing Travis CI with the required API key so that we could in fact run continuous integration with unit tests following each commit. It turns out that this is both easy and quick to do, and yet another great showcase for ad-hoc Docker use.

The rest of this post will give a quick minimal run-down, this time using the gtrendsR package by Philippe Massicotte and myself. Start by glancing at the 'encrypting files' HOWTO from Travis itself.

We assume you have Docker installed and a suitable base image. We will need Ruby, so any base Linux image will do. In what follows I use Ubuntu 14.04, but many other Debian, Ubuntu, Fedora, ... flavours could be used, provided you know how to pick the relevant packages. What is shown here should work on any recent Debian or Ubuntu flavour 'as is'.

We start by firing off the Docker engine in the repo directory for which we want to create an encrypted file. The -v $(pwd):/mnt switch mounts the current directory as /mnt in the Docker instance:

edd@max:~/git/gtrendsr(master)$ docker run --rm -ti -v $(pwd):/mnt ubuntu:trusty
root@38b478356439:/# apt-get update    ## this takes a minute or two
Ign trusty InRelease
Get:1 trusty-updates InRelease [65.9 kB]
Get:2 trusty-security InRelease [65.9 kB]
# ... a dozen+ lines omitted ...
Get:21 trusty/restricted amd64 Packages [16.0 kB]    
Get:22 trusty/universe amd64 Packages [7589 kB]      
Fetched 22.4 MB in 6min 40s (55.8 kB/s)                                        
Reading package lists... Done

We then install what is needed to actually install the travis (Ruby) gem, as well as git which is used by it:

root@38b478356439:/# apt-get install -y ruby ruby-dev gem build-essential git
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
# ... lots of output omitted ...
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for sgml-base (1.26+nmu4ubuntu1) ...

This too may take a few minutes, depending on the networking bandwidth and other factors, and should in general succeed without the need for any intervention. Once it has concluded, we can use the now-complete infrastructure to install the travis command-line client:

root@38b478356439:/# gem install travis
Fetching: multipart-post-2.0.0.gem (100%)
Fetching: faraday-0.11.0.gem (100%)
Fetching: faraday_middleware- (100%)
Fetching: highline-1.7.8.gem (100%)
Fetching: backports-3.6.8.gem (100%)
Fetching: multi_json-1.12.1.gem (100%)
# ... many lines omitted ...
Installing RDoc documentation for websocket-1.2.4...
Installing RDoc documentation for json-2.0.3...
Installing RDoc documentation for pusher-client-0.6.2...
Installing RDoc documentation for travis-1.8.6...

This in turn will take a moment.

Once done, we can use the travis client to log in to GitHub. In my case this requires a password and a two-factor authentication code. Also note that we switch directories first, to be in the actual repo we had mounted when launching Docker.

root@38b478356439:/# cd /mnt/    ## change to repo directory
root@38b478356439:/mnt# travis --login
Shell completion not installed. Would you like to install it now? |y| y
We need your GitHub login to identify you.
This information will not be sent to Travis CI, only to GitHub.
The password will not be displayed.

Try running with --github-token or --auto if you don't want to enter your password anyway.

Username: eddelbuettel
Password for eddelbuettel: ****************
Two-factor authentication code for eddelbuettel: xxxxxx
Successfully logged in as eddelbuettel!

Now to the actual work of encrypting. For this particular package, we need a file .Rprofile containing a short options() segment setting a user-id and password:

root@38b478356439:/mnt# travis encrypt-file .Rprofile
Detected repository as PMassicotte/gtrendsR, is this correct? |yes| 
encrypting .Rprofile for PMassicotte/gtrendsR
storing result as .Rprofile.enc
storing secure env variables for decryption

Please add the following to your build script (before_install stage in your .travis.yml, for instance):

    openssl aes-256-cbc -K $encrypted_988d19a907a0_key -iv $encrypted_988d19a907a0_iv -in .Rprofile.enc -out .Rprofile -d

Pro Tip: You can add it automatically by running with --add.

Make sure to add .Rprofile.enc to the git repository.
Make sure not to add .Rprofile to the git repository.
Commit all changes to your .travis.yml.

That's it. Now we just need to follow through as indicated: commit the .Rprofile.enc file, make sure not to commit its input file .Rprofile, and add the proper openssl invocation, with the keys known only to Travis, to the file .travis.yml.
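Under the hood, the travis client is doing nothing more exotic than symmetric OpenSSL encryption. Here is a minimal sketch of the round trip with a throwaway key and iv; the real pair is generated by the client and stored as secure variables on Travis, and the .Rprofile content below is a stand-in:

```shell
# Create a stand-in secrets file, then round-trip it through AES-256-CBC.
echo 'options(myUser="demo", myPass="hunter2")' > .Rprofile

key=$(openssl rand -hex 32)   # 256-bit key, hex-encoded
iv=$(openssl rand -hex 16)    # 128-bit initialisation vector

# Encrypt: .Rprofile.enc is what you would commit to the repository.
openssl aes-256-cbc -K "$key" -iv "$iv" -in .Rprofile -out .Rprofile.enc -e

# Decrypt: this is the step Travis runs in before_install.
openssl aes-256-cbc -K "$key" -iv "$iv" -in .Rprofile.enc -out .Rprofile.dec -d
cmp .Rprofile .Rprofile.dec && echo "round trip OK"
```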

Planet DebianStefano Zacchiroli: Opening the Software Heritage archive

... one API (and one FOSDEM) at a time

[ originally posted on the Software Heritage blog, reposted here with minor adaptations ]

Last Saturday at FOSDEM we opened up the public API of Software Heritage, allowing everyone to programmatically browse its archive.

We posted this while I was keynoting with Roberto at FOSDEM 2017, discussing the role Software Heritage plays in preserving the Free Software commons. To accompany the talk we released our first public API, which allows navigating the entire content of the Software Heritage archive as a graph of connected development objects (e.g., blobs, directories, commits, releases, etc.).

Over the past months we have been busy working on getting source code (with full development history) into the archive, to minimize the risk that important bits of Free/Open Source Software that are publicly available today disappear forever from the net, for whatever reason --- crashes, black hat hacking, business decisions, you name it. As a result, our archive is already one of the largest collections of source code in existence, spanning a GitHub mirror, injections of important Free Software collections such as Debian and GNU, and an ongoing import of all Google Code and Gitorious repositories.

Up to now, however, the archive was deposit-only; there was no way for the public to access its content. While there is a lot of value in archival per se, our mission is to Collect, Preserve, and Share all the material we collect with everybody. Plus, we totally get that a deposit-only library is much less exciting than a store-and-retrieve one! Last Saturday we took a first important step towards providing full access to the content of our archive: we released version 1 of our public API, which allows navigating the Software Heritage archive programmatically.

You can have a look at the API documentation for full details about how it works. But to briefly recap: conceptually, our archive is a giant Merkle DAG connecting together all development-related objects we encounter while crawling public VCS repositories, source code releases, and GNU/Linux distribution packages. Examples of the objects we store are: file contents, directories, commits, releases; as well as their metadata, such as: log messages, author information, permission bits, etc.

The API we have just released allows pointwise navigation of this huge graph. Using the API you can look up individual objects by their IDs, retrieve their metadata, and jump from one object to another following links --- e.g., from a commit to the corresponding directory or parent commits, or from a release to the annotated commit. Additionally, you can retrieve crawling-related information, such as the software origins we track (usually as VCS clone/checkout URLs) and the full list of visits we have made to any known software origin. This allows you, for instance, to know when we took snapshots of a Git repository you care about and, for each visit, where each branch of the repo was pointing at that time.
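To give a flavour of what pointwise navigation looks like, here is a sketch of how a client might construct lookup URLs. The "/api/1/&lt;object-type&gt;/&lt;id&gt;/" pattern follows the v1 API documentation, but treat the details (and the example revision id) as illustrative rather than authoritative:

```python
# Build lookup URLs for objects in the Software Heritage Merkle DAG.
API_ROOT = "https://archive.softwareheritage.org/api/1"

def url_for(object_type: str, object_id: str) -> str:
    """Return the lookup URL for one object (revision, directory, ...)."""
    return f"{API_ROOT}/{object_type}/{object_id}/"

# A walk from a commit to its tree would then be two GETs:
#   1. GET url_for("revision", rev_id)   -> JSON including a directory id
#   2. GET url_for("directory", dir_id)  -> JSON list of directory entries
print(url_for("revision", "aafb16d69fd30ff58afdd69036a26047f3aebdc6"))
```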

Our resources for offering the API as a public service are still quite limited, which is why you will encounter a couple of limitations. First, no download of the actual content of the files we have stored is possible yet --- you can retrieve all content-related metadata (e.g., checksums, detected file types and languages, etc.), but not the actual content as a byte sequence. Second, some pretty severe rate limits apply: API access is entirely anonymous, users are identified by their IP address, and each "user" will be able to make a little more than 100 requests/hour. This is to keep our infrastructure sane while we grow in capacity and focus our attention on developing other archive features.

If you're interested in having rate limits lifted for a specific use case or experiment, please contact us and we will see what we can do to help.

If you'd like to contribute to increase our resource pool, have a look at our sponsorship program!

Planet Linux AustraliaBinh Nguyen: Life in Egypt, Life in Saudi Arabia, and More

On Egypt: the thing that we mostly know Egypt for is the ancient Egyptian empire. Once upon a time the Middle East was effectively the global centre of knowledge, culture, etc. The pyramids were so spectacular for their age that there have been rumours throughout time of aliens being in contact with their builders; it would actually make a lot of the stories in the Holy Scriptures make more sense as well.

Planet Linux AustraliaBen Martin: Printer bracket fix

As in many 3D printer designs, many of the parts on this 3D printer are plastic. Where the Z-axis meets the Y-axis is held in place by two top brackets (near the gear on the stepper is a bolt into the Z alloy extrusion) and a bottom bracket. One flaw here is that there are no bolts into the Z-axis on the bottom bracket. It was also cracked in two places, so the structural support was low and the X-axis would droop over time. Not so handy.

The plastic is about 12mm thick and smells like a 2.5D job done by a 3D printer 'just because'. So after a quick tinker in Fusion 360, the half-inch-thick flatland part was born. After removing the hold-down tabs and flapping the remains away, three M6 bolt holes were hand drilled. Notice the subtle shift on the inside of the part where the extrusion and stepper motor differ in size.

It was quicker to just do that rather than try to remount and register the part on the CNC, and it might not even have worked with the limited Z range of the machine.

The image below has only two of the three bolts in place. With the addition of the new bolt heading into the Z-axis, the rigidity of the machine went right up. The shaft that the Z-axis is mounted onto goes into the 12mm empty hole in the part.

This does open up thoughts of how many other parts would be better served by not being made out of plastic.

Planet DebianElena 'valhalla' Grandi: Mobile-ish devices as freedom respecting working environments

On Planet FSFE, a conversation is starting on using tablets / Android as the main working platform.

It started with the article by Henri Bergius which nicely covers all practical points, but is quite light on the issues of freedom.

This was rectified by the article by David Boddie which makes an apt comparison of Android to “the platform it is replacing in many areas of work and life: Microsoft Windows” and criticises its lack of effective freedom, even when the OS was supposed to be under a free license.

I fully agree that lightweight/low-powered hardware can be an excellent work environment, especially when on the go, and even for many kinds of software development; but I'd very much rather have that hardware run an environment that I can trust, like Debian (or another traditional GNU/Linux distribution), rather than the phone-based ones where, among other problems, there is no clear distinction between what is local and trustable and what is remote and under somebody else's control.

In theory, it would be perfectly possible to run Debian on most tablet and tablet-like hardware, and have such an environment; in practice this is hard for a number of reasons including the lack of mainline kernel support for most hardware and the way actually booting a different OS on it usually ranges from the quite hard to the downright impossible.

Luckily, there is some niche hardware that uses tablet/phone SoCs but is sold with a GNU/Linux distribution and can be used as a freedom-respecting work environment on the go: my current setup includes an OpenPandora (running Angstrom plus a Debian chroot) and an Efika MX Smartbook, but both are showing their age badly: they have little RAM (especially the Pandora), and they aren't fully supported by a mainline kernel, which means you're stuck on an old kernel and dependent on the producer for updates (which for the Efika ended quite early; the Pandora at least is still somewhat supported, if only with bugfixes).

Right now I'm looking forward to two devices as replacements: the DragonBox Pyra (still under preorders) and the THERES-I laptop kit (hopefully available for sale "in a few months", and with no current mainline support for the SoC, but there is hope to see it from the sunxi community).

As for software, the laptop/clamshell design means that using a regular Desktop Environment (or, in my case, Window Manager) works just fine; I do hope that the availability of the Pyra (with its touchscreen and 4G/"phone" chip) will help give a bit of life back to the efforts to improve mobile software on Debian.

Hopefully, more such devices will continue to be available, and also hopefully the trend for more openness of the hardware itself will continue; sadly I don't see this getting outside of a niche market in the next few years, but I think that this niche will remain strong enough to be sustainable.

P.S. from nitpicker-me: David Boddie mentions the ability to easily download sources for any component with apt-get source: the big difference IMHO is made by apt-get build-dep, which also installs every dependency needed to actually build the code you have just downloaded.

P.S.2: I also agree with David Boddie that supporting Conservancy is very important, and there are still a few hours left to have your contribution count twice.


Harald WelteTesting (not only) telecom protocols

When implementing any kind of communication protocol, one always dreams of an existing test suite that one can simply run against the implementation, to check that it performs correctly in at least those use cases that matter to the given application.

Of course, in the real world there rarely are protocols where this is true. If test specifications exist at all, they are often just very abstract texts for human consumption, which you as the reader are expected to implement yourself.

For some (though by far not all) of the protocols found in cellular networks, I have every so often seen formal/abstract, machine-parseable test specifications. Sometimes it was TTCN-2, and sometimes TTCN-3.

If you haven't heard about TTCN-3, it is basically a way to describe functional tests in an abstract notation (textual + graphical), and then compile that into an actual executable test suite that you can run against the implementation under test.

However, when I last did some research into this several years ago, I couldn't find any Free / Open Source tools to actually use those formally specified test suites. This is not a big surprise, as even much more fundamental tools for many telecom protocols are missing, such as good/complete ASN.1 compilers, or even CSN.1 compilers.

To my big surprise, I have now discovered that Ericsson has released their (formerly internal) TITAN TTCN-3 toolset as Free / Open Source Software under the EPL 1.0. The project is even part of the Eclipse Foundation. Now I'm certainly no friend of Java or Eclipse by any means, but well, for running tests I'd certainly not complain.

The project also doesn't seem like a one-time code drop, but looks very active, with many repositories on GitHub. For example, the core module, titan.core, shows plenty of activity on an almost daily basis. Also, binary releases are made available for a variety of distributions. They even have a video showing the installation ;)

If you're curious about TTCN-3 and TITAN, Ericsson has also made available a great 200+ page slide set about them.

I haven't yet had time to play with it, but it definitely is rather high on my TODO list to try.

ETSI provides a couple of test suites in TTCN-3 for protocols like DIAMETER, GTP2-C, DMR, IPv6, S1AP, LTE-NAS, 6LoWPAN, SIP and others. (It's also the first time I've seen that ETSI has an SVN server. Everyone else is using git these days, but yes, a revision control system rather than periodic ZIP files is definitely big progress. They should do that for their reference codecs and ASN.1 files, too.)

I'm not sure when I'll get around to it. Sadly, there is no TTCN-3 for SCCP, SUA, M3UA or any SIGTRAN-related stuff, otherwise I would want to try it right away. But it definitely seems like a very interesting technology (and tool).

Cory DoctorowNow in the UK! Pre-order signed copies of the first edition hardcover of Walkaway, my first adult novel since Makers

The UK’s Forbidden Planet is now offering signed hardcovers of Walkaway, my first novel for adults since 2009 — this is in addition to the signed US hardcovers being sold by Barnes and Noble.

Walkaway has scored starred reviews in Booklist (“memorable and engaging” and “ultimately suffused with hope”) and Kirkus (“A truly visionary techno-thriller that not only depicts how we might live tomorrow, but asks why we don’t already”).

Edward Snowden said the book was “a reminder that the world we choose to build is the one we’ll inhabit”; Kim Stanley Robinson called it “a utopia both more thought-provoking and more fun than a dystopia”; and Neal Stephenson called it “the Bhagavad Gita of hacker/maker/burner/open source/git/gnu/wiki/99%/adjunct faculty/Anonymous/shareware/thingiverse/cypherpunk/LGTBQIA*/squatter/upcycling culture, zipped down into a pretty damned tight techno-thriller with a lot of sex in it.” Yochai Benkler said it was “a beautifully-done utopia, just far enough off normal to be science fiction, and just near enough to the near-plausible, on both the utopian and dystopian elements, to be eerie as almost programmatic… a sheer delight.”

The book comes out on April 25; I’m touring 20 cities in the USA, plus major Canadian cities and a week-long tour of the UK — details TBA.

Pre-order signed Walkaway in the UK [Forbidden Planet]

Pre-order signed Walkaway in the USA [Barnes and Noble]

Planet Linux AustraliaLev Lafayette: Multicore World 2017

The 6th Multicore World will be held from Monday 20th to Wednesday 22nd of February 2017 at Shed 6 on the Wellington (NZ) waterfront. Nicolás Erdödy (Open Parallel) has once again done an amazing job of finding some of the most significant speakers in the world in parallel programming and multicore systems. Although a short and not enormous conference, the technical quality is always extremely high, dealing with some of the most fundamental problems and recent experiences in these fields.

read more

Planet Linux AustraliaOpenSTEM: Librarians take up arms against fake news | Seattle Times

Librarians are stepping into the breach to help students become smarter evaluators of the information that floods into their lives. That’s increasingly necessary in an era in which fake news is a constant.

Spotting fake news – by librarian Janelle Hagen – Lakeside School Seattle

Read more:

Planet Linux AustraliaDonna Benjamin: 5 Tech Non Profs you should support right now!


Join 'em, support 'em, donate, promote... whatever. They all do good work. Really good work. And we should all support them as much as we can. Help me help them by following them, amplifying their voices, donating, or even better, joining them! And if all you've got is gratitude for the work they do, then drop 'em a line and just say a simple thank you :)


Software Freedom Conservancy Logo

Software Freedom Conservancy

Follow: @conservancy




Open Source Initiative

Follow: @OpenSourceOrg





Drupal Association

Follow: @drupalassoc





Internet Archive

Follow: @internetarchive


Join: as above, just choose monthly sustaining member



Wikimedia Foundation

Follow: @Wikimedia




Harald WelteFOSDEM 2017

Last weekend I had the pleasure of attending FOSDEM 2017, which for many years has been probably the most exciting annual event devoted exclusively to Free Software.

My personal highlights (next to meeting plenty of old and new friends) in terms of the talks were:

I attended, but was less excited by, Georg Greve's OpenPOWER talk. It was a great talk on an important topic, but the engineer in me would have hoped for some actual beefy technical content. But well, I was just not the right audience; I had heard about OpenPOWER quite some time ago and have been following it from a distance.

The LoRaWAN talk couldn't have been less technical, despite "technical, political and cultural" appearing in its title. But then, just recently at 33C3 there was the most exciting LoRa PHY reverse engineering talk, by Matt Knight.

Other talks whose recordings I still want to watch one of these days:

CryptogramFriday Squid Blogging: Squid Communication through Skin Patterns

Interesting research. (Popular article here.)

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

CryptogramCSIS's Cybersecurity Agenda

The Center for Strategic and International Studies (CSIS) published "From Awareness to Action: A Cybersecurity Agenda for the 45th President" (press release here). There's a lot I agree with -- and some things I don't -- but these paragraphs struck me as particularly insightful:

The Obama administration made significant progress but suffered from two conceptual problems in its cybersecurity efforts. The first was a belief that the private sector would spontaneously generate the solutions needed for cybersecurity and minimize the need for government action. The obvious counter to this is that our problems haven't been solved. There is no technological solution to the problem of cybersecurity, at least any time soon, so turning to technologists was unproductive. The larger national debate over the role of government made it difficult to balance public and private-sector responsibility and created a sense of hesitancy, even timidity, in executive branch actions.

The second was a misunderstanding of how the federal government works. All White Houses tend to float above the bureaucracy, but this one compounded the problem with its desire to bring high-profile business executives into government. These efforts ran counter to what is needed to manage a complex bureaucracy where greatly differing rules, relationships, and procedures determine the success of any initiative. Unlike the private sector, government decisionmaking is more collective, shaped by external pressures both bureaucratic and political, and rife with assorted strictures on resources and personnel.

CryptogramDe-Anonymizing Browser History Using Social-Network Data

Interesting research: "De-anonymizing Web Browsing Data with Social Networks":

Abstract: Can online trackers and network adversaries de-anonymize web browsing data readily available to them? We show -- theoretically, via simulation, and through experiments on real user data -- that de-identified web browsing histories can be linked to social media profiles using only publicly available data. Our approach is based on a simple observation: each person has a distinctive social network, and thus the set of links appearing in one's feed is unique. Assuming users visit links in their feed with higher probability than a random user, browsing histories contain tell-tale marks of identity. We formalize this intuition by specifying a model of web browsing behavior and then deriving the maximum likelihood estimate of a user's social profile. We evaluate this strategy on simulated browsing histories, and show that given a history with 30 links originating from Twitter, we can deduce the corresponding Twitter profile more than 50% of the time. To gauge the real-world effectiveness of this approach, we recruited nearly 400 people to donate their web browsing histories, and we were able to correctly identify more than 70% of them. We further show that several online trackers are embedded on sufficiently many websites to carry out this attack with high accuracy. Our theoretical contribution applies to any type of transactional data and is robust to noisy observations, generalizing a wide range of previous de-anonymization attacks. Finally, since our attack attempts to find the correct Twitter profile out of over 300 million candidates, it is -- to our knowledge -- the largest scale demonstrated de-anonymization to date.
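The maximum-likelihood idea in the abstract can be illustrated with a toy scorer. This is my own sketch, not the paper's actual estimator: the feed sets, the visit probabilities, and the `deanonymize` helper are all invented for illustration.

```python
import math

def deanonymize(history, candidate_feeds, p_feed=0.5, p_background=0.01):
    """Toy maximum-likelihood matcher: score each candidate profile by the
    log-likelihood that the observed browsing history came from its feed.
    Links in a candidate's feed are assumed visited with probability p_feed,
    all other links with a small background probability p_background."""
    best, best_score = None, float("-inf")
    for name, feed in candidate_feeds.items():
        score = sum(
            math.log(p_feed if link in feed else p_background)
            for link in history
        )
        if score > best_score:
            best, best_score = name, score
    return best

# A history drawn mostly from alice's feed should point back at alice.
feeds = {
    "alice": {"a.com/1", "a.com/2", "shared.com/x"},
    "bob":   {"b.com/1", "b.com/2", "shared.com/x"},
}
history = ["a.com/1", "shared.com/x", "a.com/2"]
print(deanonymize(history, feeds))  # alice
```

The real attack does this against hundreds of millions of Twitter profiles with a proper behavioral model, but the core observation is the same: links unique to one person's feed dominate the likelihood.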

Sociological ImagesThe Unbearable Whiteness of Being Human

Flashback Friday.

A Google image search for the phrase “evolution” returns many versions of the iconic image of human development over time. The whiteness of these images — the fact that, unless they are silhouettes or sketches, the individuals pictured have light skin associated with white people — often goes unnoticed. For our purposes, I would like us to notice:

The whiteness in these images is just one example of a long history of discourse relating whiteness and humanity, an association that has its roots in racial science and ethical justifications of colonialism, slavery, and genocide. It matters in this context, above and beyond the general vast overrepresentation of whites in the media and as allegedly race-neutral “humans,” because the context here is one explicitly about defining what is human, what separates humans from animals, and about evolution as a civilizing process.

By presenting whites as the quintessential humans who possess the bodies and behaviors taken to be deeply meaningful human traits, whites justified, and continue to justify, white supremacy. This is what white privilege looks like: being constantly told by experts that you and people like you represent the height of evolution and everything that it means to be that incredible piece of work that is man (irony fully intended).

Originally posted in 2010.

Benjamin Eleanor Adam is a graduate student at the CUNY Graduate Center, where he studies the American history of gender and sexuality.


Worse Than FailureError'd: The Reason is False

"Thanks for the explanation because thanks for the explanation because false!" writes Paul N.


"The strangest part is that only one of those functions actually works, the other gives 'undefined symbol' when you try to compile it," wrote Gus.


Awn U. writes, "The thing I hate about getting a new cell number is that sometimes, I get strange texts from numbers that I don't know."


"I can't help but imagine there's an admin who was wondering, 'I wonder which bus stop signs in the county support HTML? Let's see' before pushing some button before going home for the weekend," writes David.


Raphael wrote, "Apple only uses the ol' tactic when you're really close to the expiration date."


Kushagra B. wrote, "I've been working very hard lately and and it shows!"


"Steam?! What's that?!" Matt writes.



Planet Linux AustraliaOpenSTEM: This Week in HASS: term 1 week 2

OpenSTEM A0 world map: Country Outlines and Ice Age Coastline

Foundation to Year 3

Our standalone Foundation (Prep/Kindy etc) students are introduced to the World Map this week, as they start putting stickers on it, showing where in the world they and their families come from – the origin of the title of this unit (Me and My Global Family). This helps students to feel connected with each other and to start to understand both the notion of the ‘global family’, as well as the idea that places can be represented by pictures (maps). Of course, we don’t expect most 5 year olds to understand the world map, but the sooner they start working with it, the deeper the familiarity and understanding later on.

Students building Stonehenge with blocks (Year 1-3 Building Stonehenge Activity, OpenSTEM History/Geography program for Primary Schools)

All the other younger students are learning about movements of celestial bodies (the Earth and Moon, as they go around the Sun and each other) and that people have measured time in the past with reference to both the Sun and the Moon – Solar and Lunar calendars. To make these ideas more concrete, students study ancient calendars, such as Stonehenge, Newgrange and Abu Simbel, and take part in an activity building a model of Stonehenge from boxes or blocks.

Years 3 to 6

Demon Duck of Doom

Our older primary students are going back into the Ice Age (and who wouldn’t want to, in this weather!), as they explore the routes of modern humans leaving Africa, as part of understanding how people reached Australia. Aboriginal people arrived in Australia as part of the waves of modern humans spreading across the world. However, the Australia they encountered was very different from today. It was cold, dry and very dusty, inhabited by giant Ice Age animals (the Demon Duck of Doom is always a hot favourite with the students!) and overall, a pretty dangerous place. We challenge students to imagine life in those times, and thereby start to understand the basis for some of the Dreamtime stories, as well as the long and intricate relationship between Aboriginal people and the Australian environment.

Planet Linux AustraliaOpenSTEM: This Week in HASS: term 1 week 1

We thought it would be fun to track what’s happening in schools using our primary HASS program, on a weekly basis. Now we know that some of you are doing different units and some will start in different weeks, depending on what state you’re in, what term dates you have etc, but we will run these posts based off those schools which are implementing the units in numerical order and starting in the week beginning 30 January, 2017.

Week 1 is an introductory week for all units, and usually sets some foundations for the rest of the unit.

Foundation to Year 3

Our youngest students are still finding their feet in the new big world of school! We have 2 units for Term 1, depending on whether the class is standalone, or integrating with some Year 1 students. This week standalone classes will be starting a discussion about their families – geared towards making our newest students feel welcome and comfortable at school.

Those integrating with Year 1 or possibly Year 2, as well, will start working with their teachers on a Class Calendar, marking terms and holidays, as well as celebrations such as birthdays and public holidays. This helps younger students start to map out the coming year, as well as provide a platform for discussions about how they spent the holidays. Year 2 and 3 students may choose to focus more on discussing which season we are in now, and what the weather’s like at the moment (I’m sure most of you are in agreement that it’s too hot!). Students can track the weather on the calendar as well.

Years 3 to 6

Some Year 3 students may be in classes integrating with Year 4 students, rather than Year 2. Standalone Year 3 classes have a choice of doing either unit. These older students will be undertaking the Timeline Activity and getting a physical sense of history and spans of time. Students love an excuse to get outdoors, even when it’s hot, and this activity gives them a preview of material they will be covering later in the year, as well as giving them a hands-on understanding of how time has passed and how where we are compares to past events. This activity can even reinforce the concept of a number line from Maths, in a very kinaesthetic way.


Planet Linux AustraliaDavid Rowe: Modems for HF Digital Voice Part 1

The newly released FreeDV 700C mode uses the Coherent PSK (COHPSK) modem which I developed in 2015. This post describes the challenges of building HF modems for DV, and how the COHPSK modem evolved from the FDMDV modem used for FreeDV 1600.

HF channels are tough. You need a lot of SNR to push bits through them. There are several problems to contend with:

When the transmit signal is reflected off the ionosphere, two or more copies arrive at the receiver antenna a few ms apart. These echoes confuse the demodulator, just like a room with bad echo can confuse a listener.

Here is a plot of a BPSK baseband signal (top). Let's say we receive two copies of this signal, from two paths. The first is identical to what we sent (top), but the second is delayed a few samples and at half the amplitude (middle). When you add them together at the receiver input (bottom), it's a mess:

The multiple paths combine to effectively form a comb filter, notching out chunks of the modem signal. Losing chunks of the modem spectrum is bad. Here is the magnitude and phase frequency response of a channel with the two paths used for the time domain example above:
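The comb-filter response of a two-path channel is easy to compute. Here is a sketch, assuming (as an illustration, not the exact values behind the plots) an echo at half amplitude arriving 1 ms after the direct path:

```python
import cmath

delay = 0.001   # echo arrives 1 ms after the direct path (assumed)
gain = 0.5      # echo at half the amplitude of the direct path

def channel_response(f):
    """Frequency response of a two-path channel: the direct path plus
    a delayed, attenuated echo. H(f) = 1 + gain * e^(-j*2*pi*f*delay)."""
    return 1.0 + gain * cmath.exp(-2j * cmath.pi * f * delay)

# Where the two paths add in phase, the channel gain peaks;
# where the echo arrives half a cycle late, it partially cancels.
peak  = abs(channel_response(0.0))     # 1 + 0.5 = 1.5
notch = abs(channel_response(500.0))   # 500 Hz * 1 ms = half a cycle: 1 - 0.5 = 0.5
print(peak, notch)
```

With these values the notches repeat every 1/delay = 1000 Hz, so several of them land inside a 2000 Hz SSB passband, and they sweep around as the ionosphere changes.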

Note that comb filtering also means the phase of the channel is all over the place. As we are using Phase Shift Keying (PSK) to carry our precious bits, strange phase shifts are more bad news.

All of these impairments are time varying, so the echoes/notches and phase shifts drift as the ionosphere wiggles about. As well as the multipath, the modem must deal with noise, operate at SNRs of around 0dB, and handle frequency offsets between the transmitter and receiver of, say, +/- 100 Hz.

If commodity sound cards are used for the ADC and DAC, the modem must also handle large sample clock offsets of +/-1000 ppm. For example the transmitter DAC sample clock might be 7996 Hz and the receiver ADC 8004 Hz, instead of the nominal 8000 Hz.
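The +/-1000 ppm figure follows directly from the example clocks; a quick sanity check of the arithmetic:

```python
nominal = 8000.0   # nominal sample rate, Hz
tx_dac = 7996.0    # transmitter DAC clock from the example
rx_adc = 8004.0    # receiver ADC clock from the example

# Each clock is about 500 ppm off nominal, in opposite directions,
# so the demodulator sees a combined offset of about 1000 ppm.
tx_ppm = (tx_dac - nominal) / nominal * 1e6   # about -500 ppm
rx_ppm = (rx_adc - nominal) / nominal * 1e6   # about +500 ppm
total_ppm = rx_ppm - tx_ppm                   # about 1000 ppm
print(tx_ppm, rx_ppm, total_ppm)
```

A 1000 ppm offset means the receiver gains or loses a full sample every 1000 samples, i.e. eight times a second at 8 kHz, so the demodulator's timing estimate must track continuously.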

As the application is Push to Talk (PTT) Digital Voice, the modem must sync up quickly, on the order of 100ms, even with all the challenges above thrown at it. Processing delay should be around 100ms too. We can’t wait seconds for it to train like a data modem, or put up with several seconds of delay in the receive speech due to processing.

Using standard SSB radio sets we are limited to around 2000 Hz of RF bandwidth. This bandwidth puts a limit on the bit rate we can get through the channel. The amplitude and phase distortion caused by typical SSB radio crystal filters is another challenge.

Designing a modem for HF Digital Voice is not easy!


In 2012, the FDMDV modem was developed as our first attempt at a modem for HF digital voice. This is more or less a direct copy of the FDMDV waveform which was developed by Francesco Lanza, HB9TLK and Peter Martinez G3PLX. The modem software was written in GNU Octave and C, carefully tested and tuned, and most importantly – is open source software.

This modem uses many parallel carriers or tones. We are using Differential QPSK, so every symbol contains 2 bits encoded as one of 4 phases.

Let's say we want to send 1600 bits/s over the channel. We could do this with a single QPSK carrier at Rs = 800 symbols a second. Eight hundred symbols/s times two bits/symbol for QPSK is 1600 bit/s. The symbol period Ts = 1/Rs = 1/800 = 1.25ms. Alternatively, we could use 16 carriers running at 50 symbols/s (symbol period Ts = 20ms). If the multipath channel has echoes 1ms apart it will make a big mess of the single carrier system, but the parallel tone system will do much better, as 1ms of delay spread won’t upset a 20ms symbol much:
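The single-carrier versus parallel-tone arithmetic above can be checked in a few lines:

```python
bit_rate = 1600        # target bits per second
bits_per_symbol = 2    # QPSK carries 2 bits per symbol

# Single-carrier option: one fast carrier.
rs_single = bit_rate / bits_per_symbol   # 800 symbols/s
ts_single = 1.0 / rs_single              # 1.25 ms symbol period

# Parallel option: spread the same bit rate across 16 slow carriers.
n_carriers = 16
rs_parallel = rs_single / n_carriers     # 50 symbols/s per carrier
ts_parallel = 1.0 / rs_parallel          # 20 ms symbol period

# A 1 ms echo smears most of a 1.25 ms symbol,
# but only 5% of a 20 ms symbol.
print(ts_single * 1000, ts_parallel * 1000)
```

That 1 ms / 20 ms = 5% figure is why the parallel-tone design tolerates the delay spread that wrecks the single-carrier one.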

We handle the time-varying phase of the channel using Differential PSK (DPSK). We actually send and receive phase differences. Now the phase of the channel changes over time, but can be considered roughly constant over the duration of a few symbols. So when we take a difference between two successive symbols the unknown phase of the channel is removed.

Here is an example of DPSK for the BPSK case. The first figure shows the BPSK signal (top), and the corresponding DBPSK signal (bottom). When the BPSK signal changes, we get a +1 DBPSK value; when it stays the same, we get a -1 DBPSK value.

The next figure shows the received DBPSK signal (top). The phase shift of the channel is a constant 180 degrees, so the signal has been inverted. In the bottom subplot the recovered BPSK signal after differential decoding is shown. Despite the 180 degree phase shift of the channel it’s the same as the original Tx BPSK signal in the first plot above.

This is a trivial example, in practice the phase shift of the channel will vary slowly over time, and won’t be a nice neat number like 180 degrees.
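The differential trick can be sketched in code. This is one common formulation (a running product of ±1 symbols, not necessarily the exact mapping used in the figures), showing that a constant channel phase cancels out in the decode:

```python
def dbpsk_encode(bits):
    """Differential BPSK: start from a known reference symbol, then send
    the running product, so each data bit is the *change* between symbols."""
    symbols = [1]                     # reference symbol
    for b in bits:                    # bits are +1 / -1
        symbols.append(b * symbols[-1])
    return symbols

def dbpsk_decode(rx):
    """Multiply each received symbol by the previous one; a constant
    channel phase multiplies both and so cancels in the product."""
    return [rx[i] * rx[i - 1] for i in range(1, len(rx))]

bits = [1, -1, -1, 1, -1]
tx = dbpsk_encode(bits)
rx = [-s for s in tx]                 # 180-degree channel: everything inverted
print(dbpsk_decode(rx) == bits)       # True: the inversion cancels
```

You can also see the "two for one" bit error deal here: flip a single received symbol and both products involving it go wrong, corrupting two decoded bits.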

DPSK is a neat trick, but has an impact on the modem Bit Error Rate (BER) – if you get one symbol wrong, the next one tends to be corrupted as well. It’s a two for one deal on bit errors, which means crappier performance for a given SNR than regular (coherent) PSK.

To combat frequency selective fading we use a little Forward Error Correction (FEC) on the FreeDV 1600 waveform. So if one carrier gets notched out, we can use bits in the other carriers to recover the missing bits. Unfortunately we don’t have the bandwidth available to protect all bits, and the PTT delay requirement means we have to use a short FEC code. Short FEC codes don’t work as well as long ones.


Over the next few years I spent some time thinking about different modem designs and trying a bunch of different ideas, most of which failed. Research and disappointment. You just have to learn from your mistakes, talk to smart people, and keep trying. Then, towards the end of 2014, a few ideas started to come together, and the COHPSK modem was running in real time in mid 2015.

The major innovations of the COHPSK modem are:

  1. The use of diversity to help combat frequency selective fading. The baseline modem has 7 carriers. A copy of these are made, and sent at a higher frequency to make 14 tones in total. Turns out the HF channel giveth and taketh away. When one tone is notched out another is enhanced (an anti-fade). So we send each carrier twice and add them back together at the demodulator, averaging out the effect of frequency selective fades:
  2. To use diversity we need enough bandwidth to fit a copy of the baseline modem carriers. This implies the need for a vocoder bit rate of much less than 1600 bit/s – hence several iterations of a 700 bit/s speech codec – a completely different skill set – and another 18 months of my life to develop Codec 2 700C.
  3. Coherent QPSK detection is used instead of differential detection, which halves the number of bit errors compared to differential detection. This requires us to estimate the phase of the channel on the fly. Two known symbols are sent followed by 4 data symbols. These known, or Pilot symbols, allow us to measure and correct for the current phase of each carrier. As the pilot symbols are sent regularly, we can quickly acquire – then track – the phase of the channel as it evolves.
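The pilot-aided phase estimation in point 3 can be sketched as follows. This mirrors the two-known-symbols-then-data layout described above, but the helper name and the simple averaging are my illustration, not the actual cohpsk implementation:

```python
import cmath

def correct_phase(rx_frame, pilots):
    """Pilot-aided phase correction for one carrier of one frame: the
    first len(pilots) received symbols are known, so averaging
    rx * conj(known) estimates the channel phase, which is then
    removed from the remaining data symbols."""
    est = sum(r * p.conjugate() for r, p in zip(rx_frame, pilots))
    rotation = cmath.exp(-1j * cmath.phase(est))
    return [r * rotation for r in rx_frame[len(pilots):]]

# Two known pilot symbols followed by four QPSK data symbols,
# all spun by an unknown channel phase.
pilots = [1 + 0j, 1 + 0j]
data = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
channel = cmath.exp(1j * 0.7)          # unknown phase rotation
rx = [s * channel for s in pilots + data]

corrected = correct_phase(rx, pilots)
print(all(abs(c - d) < 1e-9 for c, d in zip(corrected, data)))  # True
```

In the real modem the phase also drifts between frames, so the estimate is interpolated and tracked over time rather than assumed constant, but the pilot-multiply-conjugate idea is the heart of it.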

Here is a figure that shows how the pilot and data symbols are distributed across one frame of the COHPSK modem. More information on the frame design is available in the cohpsk frame design spreadsheet, including performance calculations which I’ll explain in the next blog post in this series.

Coming Next

In the next post I’ll show how reading a few graphs and adding a few dBs together can help us estimate the performance of the FDMDV and COHPSK modems on HF channels.


Modems for HF Digital Voice Part 2

cohpsk_plots.m Octave script used to generate plots for this post.

FDMDV Modem Page

FreeDV Robustness Part 1

FreeDV Robustness Part 2

FreeDV Robustness Part 3

CryptogramPacemaker Data Used in Arson Conviction

Here's a story about data from a pacemaker being used as evidence in an arson conviction.

EDITED TO ADD: Another news article. BoingBoing post.

EDITED TO ADD (2/9): Another article.

Krebs on SecurityFast Food Chain Arby’s Acknowledges Breach

Sources at nearly a half-dozen banks and credit unions independently reached out over the past 48 hours to inquire if I’d heard anything about a data breach at Arby’s fast-food restaurants. Asked about the rumors, Arby’s told KrebsOnSecurity that it recently remediated a breach involving malicious software installed on payment card systems at hundreds of its restaurant locations nationwide.

A spokesperson for Atlanta, Ga.-based Arby’s said the company was first notified by industry partners in mid-January about a breach at some stores, but that it had not gone public about the incident at the request of the FBI.

“Arby’s Restaurant Group, Inc. (ARG) was recently provided with information that prompted it to launch an investigation of its payment card systems,” the company said in a written statement provided to KrebsOnSecurity.

“Upon learning of the incident, ARG immediately notified law enforcement and enlisted the expertise of leading security experts, including Mandiant,” their statement continued. “While the investigation is ongoing, ARG quickly took measures to contain this incident and eradicate the malware from systems at restaurants that were impacted.”

Arby’s said the breach involved malware placed on payment systems inside Arby’s corporate stores, and that Arby’s franchised restaurant locations were not impacted.

Arby’s has more than 3,330 stores in the United States, and roughly one-third of those are corporate-owned. The remaining stores are franchises. However, this distinction is likely to be lost on Arby’s customers until the company releases more information about individual restaurant locations affected by the breach.

“Although there are over 1,000 corporate Arby’s restaurants, not all of the corporate restaurants were affected,” said Christopher Fuller, Arby’s senior vice president of communications. “But this is the most important point: That we have fully contained and eradicated the malware that was on our point-of-sale systems.”

The first clues about a possible breach at the sandwich chain came in a non-public alert issued by PSCU, a service organization that serves more than 800 credit unions.

The alert sent to PSCU member banks advised that PSCU had just received very long lists of compromised card numbers from both Visa and MasterCard. The alerts stated that a breach at an unnamed retailer compromised more than 355,000 credit and debit cards issued by PSCU member banks.

“PSCU believes the alerts are associated with a large fast food restaurant chain, yet to be announced to the public,” reads the alert, which was sent only to PSCU member banks.

Arby’s declined to say how long the malware was thought to have stolen credit and debit card data from infected corporate payment systems. But the PSCU notice said the breach is estimated to have occurred between Oct. 25, 2016 and January 19, 2017.

Such a large alert from the card associations is generally a sign of a sizable nationwide breach, as this is likely just the first of many alerts Visa and MasterCard will send to card-issuing banks regarding accounts that were compromised in the intrusion. If history is any lesson, some financial institutions will respond by re-issuing thousands of customer cards, while other (likely larger) institutions will focus on managing fraud losses on the compromised cards.

The breach at Arby’s comes as many credit unions and smaller banks are still feeling the financial pain from fraud related to a similar breach at the fast food chain Wendy’s. KrebsOnSecurity broke the news of that breach in January 2016, but the company didn’t announce it had fully removed the malware from its systems until May 2016. But two months after that the company was forced to admit that many Wendy’s locations were still compromised.

B. Dan Berger, president and CEO of the National Association of Federal Credit Unions, said the number of cards that PSCU told member banks were likely exposed in this breach is roughly in line with the numbers released not long after news of the Wendy’s breach broke.

“Hundreds of thousands of cards is a big number, and with the Wendy’s breach, the alerts we were getting from Visa and MasterCard were in the six-digit ranges for sure,” Berger said. “That’s probably one of the biggest numbers I’ve heard.”

Berger said the Wendy’s breach was especially painful because the company was re-compromised after it scrubbed its payment systems of malicious software. Many banks and credit unions ended up re-issuing customer cards several times throughout last year after loyal Wendy’s customers re-compromised their brand new cards again and again because they routinely ate at multiple Wendy’s locations throughout the month.

“We had institutions that stopped approving debit and credit transactions through Wendy’s when they were still dealing with that breach,” Berger said. “Our member credit unions were eating the costs of fraud on compromised cards, and on top of that having to re-issue the same cards over and over.”

Point-of-sale malware has driven most of the major retail industry credit card breaches over the past two years, including intrusions at Target and Home Depot, as well as breaches at a slew of point-of-sale vendors. The malware sometimes is installed via hacked remote administration tools like LogMeIn; in other cases the malware is relayed via “spear-phishing” attacks that target company employees. Once the attackers have their malware loaded onto the point-of-sale devices, they can remotely capture data from each card swiped at that cash register.

Thieves can then sell that data to crooks who specialize in encoding the stolen data onto any card with a magnetic stripe, and using the cards to purchase high-priced electronics and gift cards from big-box stores like Target and Best Buy.

Readers should remember that they’re not liable for fraudulent charges on their credit or debit cards, but they still have to report the unauthorized transactions. There is no substitute for keeping a close eye on your card statements. Also, consider using credit cards instead of debit cards; having your checking account emptied of cash while your bank sorts out the situation can be a hassle and lead to secondary problems (bounced checks, for instance).

CryptogramFriday Squid Blogging: Whale Mistakes Plastic Bags for Squid

A whale recently died in Norway because there were thirty plastic bags in its stomach.

Researchers believe it may have mistaken the plastic bags for squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

CryptogramSecurity and Privacy Guidelines for the Internet of Things

Lately, I have been collecting IoT security and privacy guidelines. Here's everything I've found:

  1. "Internet of Things (IoT) Security and Privacy Recommendations," Broadband Internet Technical Advisory Group, Nov 2016.

  2. "IoT Security Guidance," Open Web Application Security Project (OWASP), May 2016.

  3. "Strategic Principles for Securing the Internet of Things (IoT)," US Department of Homeland Security, Nov 2016.

  4. "Security," OneM2M Technical Specification, Aug 2016.

  5. "Security Solutions," OneM2M Technical Specification, Aug 2016.

  6. "IoT Security Guidelines Overview Document," GSM Alliance, Feb 2016.

  7. "IoT Security Guidelines For Service Ecosystems," GSM Alliance, Feb 2016.

  8. "IoT Security Guidelines for Endpoint Ecosystems," GSM Alliance, Feb 2016.

  9. "IoT Security Guidelines for Network Operators," GSM Alliance, Feb 2016.

  10. "Establishing Principles for Internet of Things Security," IoT Security Foundation, undated.

  11. "IoT Design Manifesto," May 2015.

  12. "NYC Guidelines for the Internet of Things," City of New York, undated.

  13. "IoT Security Compliance Framework," IoT Security Foundation, 2016.

  14. "Principles, Practices and a Prescription for Responsible IoT and Embedded Systems Development," IoTIAP, Nov 2016.

  15. "IoT Trust Framework," Online Trust Alliance, Jan 2017.

  16. "Five Star Automotive Cyber Safety Framework," I am the Cavalry, Feb 2015.

  17. "Hippocratic Oath for Connected Medical Devices," I am the Cavalry, Jan 2016.

  18. "Industrial Internet of Things Volume G4: Security Framework," Industrial Internet Consortium, 2016.

  19. "Future-proofing the Connected World: 13 Steps to Developing Secure IoT Products," Cloud Security Alliance, 2016.

Other, related, items:

  1. "We All Live in the Computer Now," The Netgain Partnership, Oct 2016.

  2. "Comments of EPIC to the FTC on the Privacy and Security Implications of the Internet of Things," Electronic Privacy Information Center, Jun 2013.

  3. "Internet of Things Software Update Workshop (IoTSU)," Internet Architecture Board, Jun 2016.

  4. "Multistakeholder Process; Internet of Things (IoT) Security Upgradability and Patching," National Telecommunications & Information Administration, Jan 2017.

They all largely say the same things: avoid known vulnerabilities, don't have insecure defaults, make your systems patchable, and so on.

My guess is that everyone knows that IoT regulation is coming, and is either trying to impose self-regulation to forestall government action or establish principles to influence government action. It'll be interesting to see how the next few years unfold.

If there are any IoT security or privacy guideline documents that I'm missing, please tell me in the comments.

EDITED TO ADD: Documents added to the list, above.

Worse Than FailureThe Second Factor

Famed placeholder company Initech is named for its hometown, Initown. Initech recruits heavily from their hometown school, the University of Initown. UoI, like most universities, is a hidebound and bureaucratic institution, but in Initown, that’s creating a problem. Initown has recently seen a minor boom in the tech sector, and now the School of Sciences is setting IT policy for the entire university.

Derek manages the Business School’s IT support team, and thus his days are spent hand-holding MBA students through how to copy files over to a thumb drive, and babysitting professors who want to fax an email to the department chair. He’s allowed to hire student workers, but cannot fire them. He’s allowed to purchase consumables like paper and toner, but has to beg permission for capital assets like mice and keyboards. He can set direction and provide input to software purchase decisions, but he also has to continue to support the DOS version of WordPerfect because one professor writes all their papers using it.

A YubiKey in its holder, along with an instruction card describing its use.

One day, to his surprise, he received a notification from the Technology Council, the administrative board that set IT policy across the entire University. “We now support Two-Factor Authentication”. Derek, being both technologically savvy and security conscious, was one of the first people to sign up, and he pulled his entire staff along with him. It made sense: they were all young, technologically competent, and had smartphones that could run the school’s 2FA app. He encouraged their other customers to join them, but given that at least three professors didn’t use email and instead had the department secretary print out emails, there were some battles that simply weren’t worth fighting.

Three months went by, which is an eyeblink in University Time™. There was no further direction from the Technology Council. Within the Business School, very little happened with 2FA. A few faculty members, especially the ones fresh from the private sector, signed up. Very few tenured professors did.

And then Derek received this email:

To: AllITSManagers
Subject: Two-Factor Authentication
Effective two weeks from today, we will be requiring 2FA to be enabled on
all* accounts on the network, including student accounts. Please see attached, and communicate the changes to your customers.

Rolling out a change of this scale in two weeks would be a daunting task in any environment. Trying to get University faculty to change anything in a two week period was doomed to fail. Adding students to the mix promised to be a disaster. Derek read the attached “Transition Plan” document, hoping to see a cunning plan to manage the rollout. It was 15 pages of “Two-Factor Authentication(2FA) is more secure, and is an industry best practice,” and “The University President wants to see this change happen”.

Derek compiled a list of all of his concerns- it was a long list- and raised it to his boss. His boss shrugged: “Those are the orders”. Derek escalated up through the business school administration, and after two days of frantic emails and, “Has anyone actually thought this through?” Derek was promised 5 minutes at the end of the next Technology Council meeting… which was one week before the deadline.

The Technology Council met in one of the administrative conference rooms in a recently constructed building named after the rich alumnus who paid for it. The room was shiny and packed with teleconferencing equipment that had never been properly configured, and thus was useless. It also had a top-of-the-line SmartBoard display, which was in the same unusable state.

When Derek was finally acknowledged by the council, he started with his questions. “So, I’ve read through the Transition Plan document,” he said, “but I don’t see anything about how we’re going to on-board new customers to this process. How is everyone going to use it?”

“They’ll just use the smartphone app,” the Chair said. “We’re making things more secure by using two-factor.”

“Right, but over in the Business School, we’ve got a lot of faculty that don’t have smartphones.”

Administrator #2, seated to the Chair’s left, chimed in, “They can just receive a text. This is making things more secure.”

“Okay,” Derek said, “but we’ve still got faculty without cellphones. Or even desk phones. Or even desks for that matter. Adjunct professors don’t get offices, but they still need their email.”

There was a beat of silence as the Chair and Administrators considered this. Administrator #1 triumphantly pounded the conference table and declared, “They can use a hardware token! This will make our network more secure!”

Administrator #2 winced. “Ah… this project doesn’t have a budget for hardware tokens. It’s a capital expense, you see…”

“Well,” the Chair said, “it can come out of their department’s budget. That seems fair, and it will make our network more secure.”

“And you expect those orders to go through in one week?” Derek asked.

“You had two weeks to prepare,” Administrator #1 scolded.

“And what about our faculty abroad? A lot of them don’t have a stable address, and I’m not going to be able to guarantee that they get their token within our timeline. Look, I agree, 2FA is definitely great for security- I’m a big advocate for our customers, but you can’t just say, let’s do this without actually having a plan in place! ‘It’s more secure’ isn’t a plan!”

“Well,” the Chair said, harrumphing their displeasure at Derek’s outburst. “That’s well and good, but you should have raised these objections sooner.”

“I’m raising these objections before the public announcement,” Derek said. “I only just found out about this last week.”

“Ah, yes, you see, about that… we made the public announcement right before this meeting.”

“You what?”

“Yes. We sent a broadcast email to all faculty, staff and students, announcing the new mandated 2FA, as well as a link to activate 2FA on their account. They just have to click the link, and 2FA will be enabled on their account.”

“Even if they have no way to receive the token?” Derek asked.

“Well, it does ask them if they have a way to receive a token…”

By the time Derek got back to the helpdesk, the inbox was swamped with messages demanding to know what was going on, what this change meant, and half a dozen messages from professors who saw “mandatory” and “click this link” and followed instructions- leaving them unable to access their accounts because they didn’t have any way to get their 2FA token.

Over the next few days, the Technology Council tried to round up a circular firing squad to blame someone for the botched roll-out. For a beat, it looked like they were going to put Derek in the center of their sights, but it wasn’t just the Business School that saw a disaster with the 2FA rollout- every school in the university had similar issues, including the School of Sciences, which had been pushing the change in the first place.

In the end, the only roll-back strategy they had was to disable 2FA organization-wide. Even the accounts which had 2FA previously had it disabled. Over the following months, the Technology Council changed its tone on 2FA from, “it makes our network more secure” to, “it just doesn’t work here.”


Planet Linux AustraliaBinh Nguyen: Life in Iran, Examining Prophets/Pre-Cogs 6/Hyperspace Travel, and More

Wanted to take a look inside Iran given how much trouble it seems to be in: - ancient

Planet Linux AustraliaCraige McWhirter: Adding a Docker Runner to GitLab

In my particular scenario, I need to run both docker and docker-compose to test and build our changes. The first step to achieving this is to add an appropriate GitLab runner.

We especially need to run a privileged runner to make this happen.

Assuming that GitLab Runner has already been successfully installed, head to Admin -> Runner in the web UI of your GitLab instance and note your Registration token.

From a suitable account on your GitLab instance register a shared runner:

% sudo /usr/bin/gitlab-ci-multi-runner register --docker-privileged \
    --url GITLAB_URL \
    --registration-token REGISTRATION_TOKEN \
    --executor docker \
    --description "My Docker Runner" \
    --docker-image "docker:latest"

Your shared runner should now be ready to run.

This applies to self-hosting a GitLab instance. If you are using the hosted services, a suitable runner is already supplied.

There are many types of executors for runners, suiting a variety of scenarios. This example's scenario is that both GitLab and the desired runner are on the same instance.
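With the privileged runner in place, a job can use the Docker-in-Docker service to get a daemon to talk to. The following `.gitlab-ci.yml` is a sketch only: the job names, image tag, and test script are invented for illustration, and installing docker-compose via pip assumes the Alpine-based `docker:latest` job image.

```yaml
image: docker:latest

services:
  - docker:dind            # provides the Docker daemon the jobs will talk to

stages:
  - build
  - test

build:
  stage: build
  script:
    - docker info          # sanity-check that the dind daemon is reachable
    - docker build -t myapp:latest .

test:
  stage: test
  script:
    # docker:latest is Alpine-based; pull in docker-compose via pip
    - apk add --no-cache py-pip && pip install docker-compose
    - docker-compose up -d
    - docker-compose run myapp ./run-tests.sh
```

Because the runner was registered with `--docker-privileged`, the `docker:dind` service is allowed to start its own daemon inside the job's containers.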


LongNowRichard Feynman and The Connection Machine

One of the most popular pieces of writing on our site is Long Now co-founder Danny Hillis’ remembrance of building an experimental computer with theoretical physicist Richard Feynman. It’s easy to see why: Hillis’ reminiscences about Feynman’s final years as they worked together on the Connection Machine are at once illuminating and poignant, and paint a picture of a man who was beloved as much for his eccentricity as for his genius.

Photo by Faustin Bray

Richard Feynman and The Connection Machine

by W. Daniel Hillis for Physics Today

Reprinted with permission from Phys. Today 42(2), 78 (01989). Copyright 01989, American Institute of Physics.

One day when I was having lunch with Richard Feynman, I mentioned to him that I was planning to start a company to build a parallel computer with a million processors. His reaction was unequivocal, “That is positively the dopiest idea I ever heard.” For Richard a crazy idea was an opportunity to either prove it wrong or prove it right. Either way, he was interested. By the end of lunch he had agreed to spend the summer working at the company.

Richard’s interest in computing went back to his days at Los Alamos, where he supervised the “computers,” that is, the people who operated the mechanical calculators. There he was instrumental in setting up some of the first plug-programmable tabulating machines for physical simulation. His interest in the field was heightened in the late 1970’s when his son, Carl, began studying computers at MIT.

I got to know Richard through his son. I was a graduate student at the MIT Artificial Intelligence Lab and Carl was one of the undergraduates helping me with my thesis project. I was trying to design a computer fast enough to solve common sense reasoning problems. The machine, as we envisioned it, would contain a million tiny computers, all connected by a communications network. We called it a “Connection Machine.” Richard, always interested in his son’s activities, followed the project closely. He was skeptical about the idea, but whenever we met at a conference or I visited CalTech, we would stay up until the early hours of the morning discussing details of the planned machine. The first time he ever seemed to believe that we were really going to try to build it was the lunchtime meeting.

Richard arrived in Boston the day after the company was incorporated. We had been busy raising the money, finding a place to rent, issuing stock, etc. We set up in an old mansion just outside of the city, and when Richard showed up we were still recovering from the shock of having the first few million dollars in the bank. No one had thought about anything technical for several months. We were arguing about what the name of the company should be when Richard walked in, saluted, and said, “Richard Feynman reporting for duty. OK, boss, what’s my assignment?” The assembled group of not-quite-graduated MIT students was astounded.

After a hurried private discussion (“I don’t know, you hired him…”), we informed Richard that his assignment would be to advise on the application of parallel processing to scientific problems.

“That sounds like a bunch of baloney,” he said. “Give me something real to do.”

So we sent him out to buy some office supplies. While he was gone, we decided that the part of the machine that we were most worried about was the router that delivered messages from one processor to another. We were not sure that our design was going to work. When Richard returned from buying pencils, we gave him the assignment of analyzing the router.

The Machine

The router of the Connection Machine was the part of the hardware that allowed the processors to communicate. It was a complicated device; by comparison, the processors themselves were simple. Connecting a separate communication wire between each pair of processors was impractical since a million processors would require $10^{12}$ wires. Instead, we planned to connect the processors in a 20-dimensional hypercube so that each processor would only need to talk to 20 others directly. Because many processors had to communicate simultaneously, many messages would contend for the same wires. The router’s job was to find a free path through this 20-dimensional traffic jam or, if it couldn’t, to hold onto the message in a buffer until a path became free. Our question to Richard Feynman was whether we had allowed enough buffers for the router to operate efficiently.
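The hypercube wiring is easy to sketch: give each processor a 20-bit address, wire it to the 20 addresses that differ from its own in exactly one bit, and a message can always reach its destination by correcting one differing bit per hop. A small illustration in Python (the function names are ours, not from the original design):

```python
def hypercube_neighbors(node, dim=20):
    """Direct neighbors of a processor: flip each of the dim address bits."""
    return [node ^ (1 << k) for k in range(dim)]

def greedy_route(src, dst):
    """One shortest path: fix the differing address bits one hop at a time."""
    path = [src]
    diff = src ^ dst
    for k in range(diff.bit_length()):
        if diff >> k & 1:
            path.append(path[-1] ^ (1 << k))
    return path
```

The hop count of such a path is the number of bits in which the two addresses differ, so no two processors are ever more than 20 hops apart.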

During those first few months, Richard began studying the router circuit diagrams as if they were objects of nature. He was willing to listen to explanations of how and why things worked, but fundamentally he preferred to figure out everything himself by simulating the action of each of the circuits with pencil and paper.

In the meantime, the rest of us, happy to have found something to keep Richard occupied, went about the business of ordering the furniture and computers, hiring the first engineers, and arranging for the Defense Advanced Research Projects Agency (DARPA) to pay for the development of the first prototype. Richard did a remarkable job of focusing on his “assignment,” stopping only occasionally to help wire the computer room, set up the machine shop, shake hands with the investors, install the telephones, and cheerfully remind us of how crazy we all were. When we finally picked the name of the company, Thinking Machines Corporation, Richard was delighted. “That’s good. Now I don’t have to explain to people that I work with a bunch of loonies. I can just tell them the name of the company.”

The technical side of the project was definitely stretching our capacities. We had decided to simplify things by starting with only 64,000 processors, but even then the amount of work to do was overwhelming. We had to design our own silicon integrated circuits, with processors and a router. We also had to invent packaging and cooling mechanisms, write compilers and assemblers, devise ways of testing processors simultaneously, and so on. Even simple problems like wiring the boards together took on a whole new meaning when working with tens of thousands of processors. In retrospect, if we had had any understanding of how complicated the project was going to be, we never would have started.

‘Get These Guys Organized’

I had never managed a large group before and I was clearly in over my head. Richard volunteered to help out. “We’ve got to get these guys organized,” he told me. “Let me tell you how we did it at Los Alamos.”

Every great man that I have known has had a certain time and place in their life that they use as a reference point; a time when things worked as they were supposed to and great things were accomplished. For Richard, that time was at Los Alamos during the Manhattan Project. Whenever things got “cockeyed,” Richard would look back and try to understand how now was different than then. Using this approach, Richard decided we should pick an expert in each area of importance in the machine, such as software or packaging or electronics, to become the “group leader” in this area, analogous to the group leaders at Los Alamos.

Part Two of Feynman’s “Let’s Get Organized” campaign was that we should begin a regular seminar series of invited speakers who might have interesting things to do with our machine. Richard’s idea was that we should concentrate on people with new applications, because they would be less conservative about what kind of computer they would use. For our first seminar he invited John Hopfield, a friend of his from CalTech, to give us a talk on his scheme for building neural networks. In 1983, studying neural networks was about as fashionable as studying ESP, so some people considered John Hopfield a little bit crazy. Richard was certain he would fit right in at Thinking Machines Corporation.

What Hopfield had invented was a way of constructing an associative memory, a device for remembering patterns. To use an associative memory, one trains it on a series of patterns, such as pictures of the letters of the alphabet. Later, when the memory is shown a new pattern it is able to recall a similar pattern that it has seen in the past. A new picture of the letter “A” will “remind” the memory of another “A” that it has seen previously. Hopfield had figured out how such a memory could be built from devices that were similar to biological neurons.

Not only did Hopfield’s method seem to work, but it seemed to work well on the Connection Machine. Feynman figured out the details of how to use one processor to simulate each of Hopfield’s neurons, with the strength of the connections represented as numbers in the processors’ memory. Because of the parallel nature of Hopfield’s algorithm, all of the processors could be used concurrently with 100% efficiency, so the Connection Machine would be hundreds of times faster than any conventional computer.
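The scheme can be illustrated in miniature. A Hopfield memory stores ±1 patterns in a symmetric weight matrix via a Hebbian rule, and recall repeatedly updates each unit to the sign of its weighted input until the state stops changing. The sketch below is a generic textbook version, not the Connection Machine program; the sizes and names are illustrative.

```python
import numpy as np

def train(patterns):
    """Hebbian weight matrix for ±1 patterns (one pattern per row)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W / n

def recall(W, probe, steps=20):
    """Update every unit to the sign of its input until the state is stable."""
    s = probe.copy()
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s
```

On the Connection Machine, as the text describes, each simulated neuron (each row of this update) ran on its own processor, so every unit could be updated at once.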

An Algorithm For Logarithms

Feynman worked out the program for computing Hopfield’s network on the Connection Machine in some detail. The part that he was proudest of was the subroutine for computing logarithms. I mention it here not only because it is a clever algorithm, but also because it is a specific contribution Richard made to the mainstream of computer science. He invented it at Los Alamos.

Consider the problem of finding the logarithm of a fractional number between 1.0 and 2.0 (the algorithm can be generalized without too much difficulty). Feynman observed that any such number can be uniquely represented as a product of numbers of the form $1 + 2^{-k}$, where $k$ is an integer. Testing each of these factors in a binary number representation is simply a matter of a shift and a subtraction. Once the factors are determined, the logarithm can be computed by adding together the precomputed logarithms of the factors. The algorithm fit especially well on the Connection Machine, since the small table of the logarithms of $1 + 2^{-k}$ could be shared by all the processors. The entire computation took less time than division.
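The factoring can be sketched in a few lines. Here floating-point division and `math.log` stand in for the fixed-point shift, subtraction, and precomputed table that the text describes; the greedy factor test is the heart of the method:

```python
import math

def feynman_log(x, precision=40):
    """Logarithm of x in [1.0, 2.0) via the factor-into-(1 + 2^-k) trick.

    Greedily factor x as a product of terms (1 + 2^-k); the log is then
    the sum of the logs of the factors used.
    """
    if not 1.0 <= x < 2.0:
        raise ValueError("x must lie in [1.0, 2.0)")
    result = 0.0
    for k in range(1, precision + 1):
        factor = 1.0 + 2.0 ** -k
        if x >= factor:
            x /= factor                  # a shift and a subtract in fixed point
            result += math.log(factor)   # a table lookup in the real algorithm
    return result
```

After the loop the residual x is below $1 + 2^{-\text{precision}}$, so the accumulated sum approximates the logarithm to roughly that many bits.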

Concentrating on the algorithm for a basic arithmetic operation was typical of Richard’s approach. He loved the details. In studying the router, he paid attention to the action of each individual gate and in writing a program he insisted on understanding the implementation of every instruction. He distrusted abstractions that could not be directly related to the facts. When several years later I wrote a general interest article on the Connection Machine for Scientific American, he was disappointed that it left out too many details. He asked, “How is anyone supposed to know that this isn’t just a bunch of crap?”

Feynman’s insistence on looking at the details helped us discover the potential of the machine for numerical computing and physical simulation. We had convinced ourselves at the time that the Connection Machine would not be efficient at “number-crunching,” because the first prototype had no special hardware for vectors or floating point arithmetic. Both of these were “known” to be requirements for number-crunching. Feynman decided to test this assumption on a problem that he was familiar with in detail: quantum chromodynamics.

Quantum chromodynamics is a theory of the internal workings of atomic particles such as protons. Using this theory it is possible, in principle, to compute the values of measurable physical quantities, such as a proton’s mass. In practice, such a computation requires so much arithmetic that it could keep the fastest computers in the world busy for years. One way to do this calculation is to use a discrete four-dimensional lattice to model a section of space-time. Finding the solution involves adding up the contributions of all of the possible configurations of certain matrices on the links of the lattice, or at least some large representative sample. (This is essentially a Feynman path integral.) The thing that makes this so difficult is that calculating the contribution of even a single configuration involves multiplying the matrices around every little loop in the lattice, and the number of loops grows as the fourth power of the lattice size. Since all of these multiplications can take place concurrently, there is plenty of opportunity to keep all 64,000 processors busy.

To find out how well this would work in practice, Feynman had to write a computer program for QCD. Since the only computer language Richard was really familiar with was Basic, he made up a parallel version of Basic in which he wrote the program and then simulated it by hand to estimate how fast it would run on the Connection Machine.

He was excited by the results. “Hey Danny, you’re not going to believe this, but that machine of yours can actually do something useful!” According to Feynman’s calculations, the Connection Machine, even without any special hardware for floating point arithmetic, would outperform a machine that CalTech was building for doing QCD calculations. From that point on, Richard pushed us more and more toward looking at numerical applications of the machine.

By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman’s router equations were in terms of variables representing continuous quantities such as “the average number of 1 bits in a message address.” I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of “the number of 1’s” with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman’s equations suggested that we only needed five. We decided to play it safe and ignore Feynman.

The decision to ignore Feynman’s analysis was made in September, but by next spring we were up against a wall. The chips that we had designed were slightly too big to manufacture and the only way to solve the problem was to cut the number of buffers per chip back to five. Since Feynman’s equations claimed we could do this safely, his unconventional methods of analysis started looking better and better to us. We decided to go ahead and make the chips with the smaller number of buffers.

Fortunately, he was right. When we put together the chips the machine worked. The first program run on the machine in April of 1985 was Conway’s game of Life.
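Life itself is a simple cellular automaton, which is part of what made it a fitting first program: each cell of a grid lives or dies by counting its eight neighbors. A compact sketch of one generation (our own illustration, not the 1985 Connection Machine code):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life; live is a set of (x, y) coordinates."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step with exactly 3 live neighbors,
    # or with 2 if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

On the Connection Machine the same rule could run with one processor per cell, every cell updating simultaneously.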

Cellular Automata

The game of Life is an example of a class of computations that interested Feynman called cellular automata. Like many physicists who had spent their lives going to successively lower and lower levels of atomic detail, Feynman often wondered what was at the bottom. One possible answer was a cellular automaton. The notion is that the “continuum” might, at its lowest levels, be discrete in both space and time, and that the laws of physics might simply be a macro-consequence of the average behavior of tiny cells. Each cell could be a simple automaton that obeys a small set of rules and communicates only with its nearest neighbors, like the lattice calculation for QCD. If the universe in fact worked this way, then it presumably would have testable consequences, such as an upper limit on the density of information per cubic meter of space.

The notion of cellular automata goes back to von Neumann and Ulam, whom Feynman had known at Los Alamos. Richard’s recent interest in the subject was motivated by his friends Ed Fredkin and Stephen Wolfram, both of whom were fascinated by cellular automata models of physics. Feynman was always quick to point out to them that he considered their specific models “kooky,” but like the Connection Machine, he considered the subject sufficiently crazy to put some energy into.

There are many potential problems with cellular automata as a model of physical space and time; for example, finding a set of rules that obeys special relativity. One of the simplest problems is just making the physics so that things look the same in every direction. The most obvious patterns of cellular automata, such as a fixed three-dimensional grid, have preferred directions along the axes of the grid. Is it possible to implement even Newtonian physics on a fixed lattice of automata?

Feynman had a proposed solution to the anisotropy problem which he attempted (without success) to work out in detail. His notion was that the underlying automata, rather than being connected in a regular lattice like a grid or a pattern of hexagons, might be randomly connected. Waves propagating through this medium would, on the average, propagate at the same rate in every direction.

Cellular automata started getting attention at Thinking Machines when Stephen Wolfram, who was also spending time at the company, suggested that we should use such automata not as a model of physics, but as a practical method of simulating physical systems. Specifically, we could use one processor to simulate each cell and rules that were chosen to model something useful, like fluid dynamics. For two-dimensional problems there was a neat solution to the anisotropy problem since Frisch, Hasslacher, and Pomeau had shown that a hexagonal lattice with a simple set of rules produced isotropic behavior at the macro scale. Wolfram used this method on the Connection Machine to produce a beautiful movie of a turbulent fluid flow in two dimensions. Watching the movie got all of us, especially Feynman, excited about physical simulation. We all started planning additions to the hardware, such as support of floating point arithmetic that would make it possible for us to perform and display a variety of simulations in real time.

Feynman the Explainer

In the meantime, we were having a lot of trouble explaining to people what we were doing with cellular automata. Eyes tended to glaze over when we started talking about state transition diagrams and finite state machines. Finally Feynman told us to explain it like this,

“We have noticed in nature that the behavior of a fluid depends very little on the nature of the individual particles in that fluid. For example, the flow of sand is very similar to the flow of water or the flow of a pile of ball bearings. We have therefore taken advantage of this fact to invent a type of imaginary particle that is especially simple for us to simulate. This particle is a perfect ball bearing that can move at a single speed in one of six directions. The flow of these particles on a large enough scale is very similar to the flow of natural fluids.”

This was a typical Richard Feynman explanation. On the one hand, it infuriated the experts who had worked on the problem because it neglected to even mention all of the clever problems that they had solved. On the other hand, it delighted the listeners since they could walk away from it with a real understanding of the phenomenon and how it was connected to physical reality.

We tried to take advantage of Richard’s talent for clarity by getting him to critique the technical presentations that we made in our product introductions. Before the commercial announcement of the Connection Machine CM-1 and all of our future products, Richard would give a sentence-by-sentence critique of the planned presentation. “Don’t say ‘reflected acoustic wave.’ Say echo.” Or, “Forget all that ‘local minima’ stuff. Just say there’s a bubble caught in the crystal and you have to shake it out.” Nothing made him angrier than making something simple sound complicated.

Getting Richard to give advice like that was sometimes tricky. He pretended not to like working on any problem that was outside his claimed area of expertise. Often, at Thinking Machines when he was asked for advice he would gruffly refuse with “That’s not my department.” I could never figure out just what his department was, but it did not matter anyway, since he spent most of his time working on those “not-my-department” problems. Sometimes he really would give up, but more often than not he would come back a few days after his refusal and remark, “I’ve been thinking about what you asked the other day and it seems to me…” This worked best if you were careful not to expect it.

I do not mean to imply that Richard was hesitant to do the “dirty work.” In fact, he was always volunteering for it. Many a visitor at Thinking Machines was shocked to see that we had a Nobel Laureate soldering circuit boards or painting walls. But what Richard hated, or at least pretended to hate, was being asked to give advice. So why were people always asking him for it? Because even when Richard didn’t understand, he always seemed to understand better than the rest of us. And whatever he understood, he could make others understand as well. Richard made people feel like a child does, when a grown-up first treats him as an adult. He was never afraid of telling the truth, and however foolish your question was, he never made you feel like a fool.

The charming side of Richard helped people forgive him for his uncharming characteristics. For example, in many ways Richard was a sexist. Whenever it came time for his daily bowl of soup he would look around for the nearest “girl” and ask if she would fetch it to him. It did not matter if she was the cook, an engineer, or the president of the company. I once asked a female engineer who had just been a victim of this if it bothered her. “Yes, it really annoys me,” she said. “On the other hand, he is the only one who ever explained quantum mechanics to me as if I could understand it.” That was the essence of Richard’s charm.

A Kind Of Game

Richard worked at the company on and off for the next five years. Floating point hardware was eventually added to the machine, and as the machine and its successors went into commercial production, they were being used more and more for the kind of numerical simulation problems that Richard had pioneered with his QCD program. Richard’s interest shifted from the construction of the machine to its applications. As it turned out, building a big computer is a good excuse to talk to people who are working on some of the most exciting problems in science. We started working with physicists, astronomers, geologists, biologists, chemists — every one of them trying to solve some problem that it had never been possible to solve before. Figuring out how to do these calculations on a parallel machine requires understanding of the details of the application, which was exactly the kind of thing that Richard loved to do.

For Richard, figuring out these problems was a kind of a game. He always started by asking very basic questions like, “What is the simplest example?” or “How can you tell if the answer is right?” He asked questions until he reduced the problem to some essential puzzle that he thought he would be able to solve. Then he would set to work, scribbling on a pad of paper and staring at the results. While he was in the middle of this kind of puzzle solving he was impossible to interrupt. “Don’t bug me. I’m busy,” he would say without even looking up. Eventually he would either decide the problem was too hard (in which case he lost interest), or he would find a solution (in which case he spent the next day or two explaining it to anyone who listened). In this way he worked on problems in database searches, geophysical modeling, protein folding, analyzing images, and reading insurance forms.

The last project that I worked on with Richard was in simulated evolution. I had written a program that simulated the evolution of populations of sexually reproducing creatures over hundreds of thousands of generations. The results were surprising in that the fitness of the population made progress in sudden leaps rather than by the expected steady improvement. The fossil record shows some evidence that real biological evolution might also exhibit such “punctuated equilibrium,” so Richard and I decided to look more closely at why it happened. He was feeling ill by that time, so I went out and spent the week with him in Pasadena, and we worked out a model of evolution of finite populations based on the Fokker-Planck equations. When I got back to Boston I went to the library and discovered a book by Kimura on the subject, and much to my disappointment, all of our “discoveries” were covered in the first few pages. When I called back and told Richard what I had found, he was elated. “Hey, we got it right!” he said. “Not bad for amateurs.”

In retrospect I realize that in almost everything that we worked on together, we were both amateurs. In digital physics, neural networks, even parallel computing, we never really knew what we were doing. But the things that we studied were so new that no one else knew exactly what they were doing either. It was amateurs who made the progress.

Telling The Good Stuff You Know

Actually, I doubt that it was “progress” that most interested Richard. He was always searching for patterns, for connections, for a new way of looking at something, but I suspect his motivation was not so much to understand the world as it was to find new ideas to explain. The act of discovery was not complete for him until he had taught it to someone else.

I remember a conversation we had a year or so before his death, walking in the hills above Pasadena. We were exploring an unfamiliar trail and Richard, recovering from a major operation for the cancer, was walking more slowly than usual. He was telling a long and funny story about how he had been reading up on his disease and surprising his doctors by predicting their diagnosis and his chances of survival. I was hearing for the first time how far his cancer had progressed, so the jokes did not seem so funny. He must have noticed my mood, because he suddenly stopped the story and asked, “Hey, what’s the matter?”

I hesitated. “I’m sad because you’re going to die.”

“Yeah,” he sighed, “that bugs me sometimes too. But not so much as you think.” And after a few more steps, “When you get as old as I am, you start to realize that you’ve told most of the good stuff you know to other people anyway.”

We walked along in silence for a few minutes. Then we came to a place where another trail crossed and Richard stopped to look around at the surroundings. Suddenly a grin lit up his face. “Hey,” he said, all trace of sadness forgotten, “I bet I can show you a better way home.”

And so he did.

LongNowThe 10,000-Year Genealogy of Myths

The “Shaft Scene” from the Paleolithic cave paintings in Lascaux, France.


ONE OF THE MOST FAMOUS SCENES in the Paleolithic cave paintings in Lascaux, France depicts a confrontation between a man and a bison. The bison appears fixed in place, stabbed by a spear. The man has a bird’s head and is lying prone on the ground. Scholars have long puzzled over the pictograph’s meaning, as the narrative scene it depicts is one of the most complex yet discovered in Paleolithic art.

To understand what is going on in these scenes, some scholars have started to re-examine myths passed down through oral traditions, which some evidence suggests may be far older than previously thought. Myths still hold relevance today by allowing us to frame our actions at a civilizational level as part of a larger story, something that we hope to be able to accomplish with the idea of the “Long Now.”

Historian Julien d’Huy recently proposed an intriguing hypothesis [subscription required]: the cave painting of the man & bison could be telling the tale of the Cosmic Hunt, a myth that has surfaced with the same basic story structure in cultures across the world, from the Chukchi of Siberia to the Iroquois of the Northeastern United States. D’Huy uses comparative mythology combined with new computational modeling technologies to reconstruct a version of the myth that predates humans’ migration across the Bering Strait. If d’Huy is correct, the Lascaux painting would be one of the earliest depictions of the myth, dating back an estimated 20,000 years.

The Greek telling of the Cosmic Hunt is likely most familiar to today’s audiences. It recounts how the Gods transformed the chaste and beautiful Callisto into a bear, and later, into the constellation Ursa Major. D’Huy suggests that in the Lascaux painting, the bison isn’t fixed in place because it has been killed, as many experts have proposed, but because it is a constellation.

Comparative mythologists have spilled much ink over how myths like the Cosmic Hunt can recur in civilizations separated by thousands of miles and thousands of years with many aspects of their stories intact. D’Huy’s analysis is based on the work of anthropologist Claude Levi-Strauss, who posited that these myths are similar because they have a common origin. Levi-Strauss traced the evolution of myths by applying the same techniques that linguists used to trace the evolution of words. D’Huy provides new evidence for this approach by borrowing recently developed computational statistical tools from evolutionary biology. The method, called phylogenetic analysis, constructs a family tree of a myth’s discrete elements, or “mythemes,” and its evolution over time:

Mythical stories are excellent targets for such analysis because, like biological species, they evolve gradually, with new parts of a core story added and others lost over time as it spreads from region to region.  […] Like genes, mythemes are heritable characteristics of “species” of stories, which pass from one generation to the next and change slowly.

A phylogenetic tree of the Cosmic Hunt shows its evolution over time

This new evidence suggests that the Cosmic Hunt has followed the migration of humans across the world. The Cosmic Hunt’s phylogenetic tree shows that the myth arrived in the Americas at different times over the course of several millennia:

One branch of the tree connects Greek and Algonquin versions of the myth. Another branch indicates passage through the Bering Strait, which then continued into Eskimo country and to the northeastern Americas, possibly in two different waves. Other branches suggest that some versions of the myth spread later than the others from Asia toward Africa and the Americas.

Myths may evolve gradually like biological species, but can also be subject to the same sudden bursts of evolutionary change, or punctuated equilibrium. Two structurally similar myths can diverge rapidly, d’Huy found, because of “migration bottlenecks, challenges from rival populations, or new environmental and cultural inputs.”

Neil Gaiman

Neil Gaiman, in his talk “How Stories Last” at Long Now in 02015, imagined stories in similarly biological terms—as living things that evolve over time and across mediums. The ones that persist are the ones that outcompete other stories by changing:

Do stories grow? Pretty obviously — anybody who has ever heard a joke being passed on from one person to another knows that they can grow, they can change. Can stories reproduce? Well, yes. Not spontaneously, obviously — they tend to need people as vectors. We are the media in which they reproduce; we are their petri dishes… Stories grow, sometimes they shrink. And they reproduce — they inspire other stories. And, of course, if they do not change, stories die.

Throughout human history, myths functioned to transmit important cultural information from generation to generation about shared beliefs and knowledge. “They teach us how the world is put together,” said Gaiman, “and the rules of living in the world.” If the information isn’t clothed in a compelling narrative garb—a tale of unrequited love, say, or a cunning escape from powerful monsters— the story won’t last, and the shared knowledge dies along with it. The stories that last “come in an attractive enough package that we take pleasure from them and want them to propagate,” said Gaiman.

Sometimes, these stories serve as warnings to future generations about calamitous events. Along Australia’s south coast, a myth persists in an aboriginal community about an enraged ancestor called Ngurunderi who chased his wives on foot to what is today known as Kangaroo Island. In his anger, Ngurunderi made the sea levels rise and turned his wives into rocks.

Kangaroo Island, Australia

Linguist Nicholas Reid and geologist Patrick Nunn believe this myth refers to a shift in sea levels that occurred thousands of years ago. By scientifically reconstructing prehistoric sea levels, Reid and Nunn dated the myth to 9,800 to 10,650 years ago, when a post-glacial event caused sea levels to rise 100 feet and submerged the land bridge to Kangaroo Island.

“It’s quite gobsmacking to think that a story could be told for 10,000 years,” Reid said. “It’s almost unimaginable that people would transmit stories about things like islands that are currently underwater accurately across 400 generations.”

Gaiman thinks that this process of transmitting stories is what fundamentally allows humanity to advance:

Without the mass of human knowledge accumulated over millennia to buoy us up, we are in big trouble; with it, we are warm, fed, we have popcorn, we are sitting in comfortable seats, and we are capable of arguing with each other about really stupid things on the internet.

Atlantic national correspondent James Fallows, in his talk “Civilization’s Infrastructure” at Long Now in 02015, said such stories remain essential today. In Fallows’ view, effective infrastructure is what enables civilizations to thrive. Some of America’s most ambitious infrastructure projects, such as the expansion of railroads across the continent, or landing on the moon, were spurred by stories like Manifest Destiny and the Space Race. Such myths inspired Americans to look past their own immediate financial interests and time horizons to commit to something beyond themselves. They fostered, in short, long-term thinking.

James Fallows, left, speaking with Stewart Brand at Long Now

For Fallows, the reason Americans haven’t taken on grand and necessary projects of infrastructural renewal in recent times is because they struggle to take the long view. In Fallows’ eyes, there’s a lot to be optimistic about, and a great story to be told:

The story is an America that is not in its final throes, but is going through the latest version in its reinvention in which all the things that are dire now can be, if not solved, addressed and buffered by individual talents across the country but also by the exceptional tools that the tech industry is creating. There’s a different story we can tell which includes the bad parts but also —as most of our political discussion does not—includes the promising things that are beginning too.

A view of the underground site of The Clock looking up at the spiral stairs currently being cut

When Danny Hillis proposed building a 10,000 year clock, he wanted to create a myth that stood the test of time. Writing in 01998, Long Now co-founder Stewart Brand noted the trend of short-term thinking taking hold in civilization, and proposed the myth of the Clock of the Long Now:

Civilization is revving itself into a pathologically short attention span. The trend might be coming from the acceleration of technology, the short-horizon perspective of market-driven economics, the next-election perspective of democracies, or the distractions of personal multi-tasking. All are on the increase. Some sort of balancing corrective to the short-sightedness is needed—some mechanism or myth which encourages the long view and the taking of long-term responsibility, where ‘long-term’ is measured at least in centuries. Long Now proposes both a mechanism and a myth.

CryptogramDo-It-Yourself Online Privacy/Safety Guide

This online safety guide was written for people concerned about being tracked and stalked online. It's a good resource.

Krebs on Security‘Top 10 Spammer’ Indicted for Wire Fraud

Michael A. Persaud, a California man profiled in a Nov. 2014 KrebsOnSecurity story about a junk email purveyor tagged as one of the World’s Top 10 Worst Spammers, was indicted this week on federal wire fraud charges tied to an alleged spamming operation.

According to an indictment returned in federal court in Chicago, Persaud used multiple Internet addresses and domains – a technique known as “snowshoe spamming” – to transmit spam emails over at least nine networks.


The Justice Department says Persaud sent well over a million spam emails to recipients in the United States and abroad. Prosecutors charge that Persaud often used false names to register the domains, and he created fraudulent “From:” address fields to conceal that he was the true sender of the emails. The government also accuses Persaud of “illegally transferring and selling millions of email addresses for the purpose of transmitting spam.”

Persaud is currently listed as #8 on the World’s 10 Worst Spammers list maintained by Spamhaus, an anti-spam organization. In 1998, Persaud was sued by AOL, which charged that he committed fraud by using various names to send millions of get-rich-quick spam messages to America Online customers. Persaud did not contest the charges and was ordered to pay more than a half-million dollars in restitution and damages.

In 2001, the San Diego District Attorney’s office filed criminal charges against Persaud, alleging that he and an accomplice crashed a company’s email server after routing their spam through the company’s servers.

Persaud declined to comment for this story. But he maintains that his email marketing business is legitimate and complies with the CAN-SPAM Act, the main anti-spam law in the United States. The law prohibits the sending of spam that spoofs the sender’s address or does not give recipients an easy way to opt out of receiving future such emails from that sender.

Persaud told FBI agents who raided his home last year that he currently conducts internet marketing from his residence by sending a million emails in under 15 minutes from various domains and Internet addresses.

The indictment charges Persaud with ten counts of wire fraud and seeks the forfeiture of four computers. Each count of wire fraud is punishable by up to 20 years in prison. If Persaud is convicted, the court must impose a reasonable sentence under federal statutes and sentencing guidelines.

Persaud was charged in Chicago because at least two of the servers he allegedly used to conduct snowshoe spamming operations were located there (named only as victims “B” and “F” in the government’s indictment). A copy of the indictment against Persaud is here (PDF).

For more on how spam allegedly sent by Persaud was traced back to his companies, see my story from November 2014, Still Spamming After All These Years. For a deeper understanding of why and how spam is the engine that drives virtually all other forms of cybercrime, check out my book — Spam Nation: The Inside Story of Organized Cybercrime.

CryptogramPredicting a Slot Machine's PRNG

Wired is reporting on a new slot machine hack. A Russian group has reverse-engineered a particular brand of slot machine -- from Austrian company Novomatic -- and can simulate and predict the pseudo-random number generator.

The cell phones from Pechanga, combined with intelligence from investigations in Missouri and Europe, revealed key details. According to Willy Allison, a Las Vegas­-based casino security consultant who has been tracking the Russian scam for years, the operatives use their phones to record about two dozen spins on a game they aim to cheat. They upload that footage to a technical staff in St. Petersburg, who analyze the video and calculate the machine's pattern based on what they know about the model's pseudorandom number generator. Finally, the St. Petersburg team transmits a list of timing markers to a custom app on the operative's phone; those markers cause the handset to vibrate roughly 0.25 seconds before the operative should press the spin button.

"The normal reaction time for a human is about a quarter of a second, which is why they do that," says Allison, who is also the founder of the annual World Game Protection Conference. The timed spins are not always successful, but they result in far more payouts than a machine normally awards: Individual scammers typically win more than $10,000 per day. (Allison notes that those operatives try to keep their winnings on each machine to less than $1,000, to avoid arousing suspicion.) A four-person team working multiple casinos can earn upwards of $250,000 in a single week.

The easy solution is to use a random-number generator that accepts local entropy, like Fortuna. But there's probably no way to easily reprogram those old machines.
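The underlying weakness is easy to demonstrate: any deterministic PRNG replays its entire output stream once you know its seed or internal state. A toy illustration using bash's own $RANDOM generator (not the slot machine's, obviously):

```shell
# Re-seeding a deterministic PRNG replays the identical "random" sequence --
# the property the St. Petersburg team exploits after recovering the state.
RANDOM=1234
first="$RANDOM $RANDOM $RANDOM"
RANDOM=1234
second="$RANDOM $RANDOM $RANDOM"
[ "$first" = "$second" ] && echo "fully predictable: $first"

# A generator continuously re-keyed with fresh OS entropy (the Fortuna
# approach) cannot be replayed from past outputs this way:
head -c 8 /dev/urandom | od -An -tx1
```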

Sociological ImagesPossibly the most exhaustive study of “manspreading” ever conducted

“Manspreading” is a relatively new term.  According to Google Trends (below), the concept wasn’t really used before the end of 2014.  But the idea it’s describing is not new at all.  The notion that men occupy more space than women is one small piece of what Raewyn Connell refers to as the patriarchal dividend–the collection of accumulated advantages men collectively receive in androcentric patriarchal societies (e.g., wages, respect, authority, safety).  Our bodies are differently disciplined to the systems of inequality in our societies depending upon our status within social hierarchies.  And one seemingly small form of privilege from which many men benefit is the idea that men require (and are allowed) more space.

It’s not uncommon to see advertisements on all manner of public transportation today condemning the practice of occupying “too much” space while others around you “keep to themselves.” PSAs like these are aimed at a very specific offender: some guy who’s sitting in a seat with his legs spread wide enough in a kind of V-shaped slump such that he is effectively occupying the seats around him as well.

I recently discovered what has got to be one of the most exhaustive treatments of the practice ever produced.  It’s not the work of a sociologist; it’s the work of a German feminist photographer, Marianne Wex.  In Wex’s treatment of the topic, Let’s Take Back Our Space: Female and Male Body Language as a Result of Patriarchal Structures (1984, translated from the German edition, published in 1979), she examines just shy of 5,000 photographs of men and women exhibiting body language that results from and plays a role in reproducing unequal gender relations.

The collection is organized by a laudable number of features of the various bodily positions. Interestingly, it was published in precisely the same year that Erving Goffman undertook a similar sociological study of what he referred to as “gender display” in his book, Gender Advertisements–though Goffman’s analysis utilized advertisements as the data under consideration.

Like Goffman, Wex examined the various details that made up bodily postures that seem to exude gender, addressing the ways our bodies are disciplined by society. Wex paired images according to the position of feet and legs, whether the body was situated to put weight on one or two legs, hand and arm positions, and much, much more. And through this project, Wex also developed an astonishing vocabulary for body positions that she situates as the embodied manifestations of patriarchal social structures. The whole book organizes this incredible collection of (primarily) photographs she took between 1972 and 1977 by theme. On every page, men are depicted above women (as the above image illustrates)–a fact Wex saw as symbolizing the patriarchal structure of the society she sought to catalog so scrupulously. She even went so far as to examine bodily depiction throughout history as depicted in art to address the ways the patterns she discovered can be understood over time.

If you’re interested, you can watch the YouTube video of the entire book.

Tristan Bridges, PhD, is a professor at The College at Brockport, SUNY. He is the co-editor of Exploring Masculinities: Identity, Inequality, Continuity, and Change with C.J. Pascoe and studies gender and sexual identity and inequality. You can follow him on Twitter here. Tristan also blogs regularly at Inequality by (Interior) Design.


Worse Than FailureCodeSOD: Clean Up Your Act


Artie S. writes:

As part of a contract with a customer, we have to maintain some of their legacy applications. I found this today in a custom controller for a very old OpenCart site and am still laughing about it and thought you guys might enjoy it. I suppose it's NSFW, but it's code, right?

I threw in some blocks ( █ ) to make things safer for work. The other symbols (!, @) are part of the original sub.

public function isClean($body)
{
        $bad_words = array('sh█t','sh█tty','sh█ttiest','sh!t','sh!tty','sh!ttiest','f█ck','f█cking','f█cked','f█cker','cunt','piss','pissed','crap','crappy','crappier','crappiest','cr@p','cr@ppy','cr@ppier','cr@ppiest','cock','tits','titties');
        for ($i = 0; $i < count($bad_words); $i++) {
                $pos = strpos($body, $bad_words[$i]);
                if ($pos !== false) {
                        return false;
                }
        }
        return true;
}


Planet Linux AustraliaChris Smart: Manage Intel Turbo Boost with systemd

If you have a little laptop with an Intel CPU that supports turbo boost, you might find that it’s getting a little hot when you’re using it on your lap.

For example, taking a look at my CPU:
lscpu |egrep "Model name|MHz"

We can see that it’s a 2.7GHz CPU with turbo boost taking it up to 3.5GHz.

Model name: Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
CPU MHz: 524.633
CPU max MHz: 3500.0000
CPU min MHz: 400.0000

Here’s a way that you can enable and disable turbo boost with a systemd service, which lets you hook it into other services or disable it on boot.

By default, turbo boost is on, so starting our service will disable it.
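You can confirm the current state directly first. This assumes the intel_pstate driver, which exposes a no_turbo flag in sysfs:

```shell
# Read the intel_pstate turbo flag: 0 means turbo boost is allowed,
# 1 means it has been disabled
cat /sys/devices/system/cpu/intel_pstate/no_turbo
```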

Create the service (the sysfs path below assumes the intel_pstate driver).
cat << EOF | sudo tee \
/etc/systemd/system/disable-turbo-boost.service
[Unit]
Description=Disable Turbo Boost on Intel CPU

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c "/usr/bin/echo 1 > \
/sys/devices/system/cpu/intel_pstate/no_turbo"
ExecStop=/bin/sh -c "/usr/bin/echo 0 > \
/sys/devices/system/cpu/intel_pstate/no_turbo"

[Install]
WantedBy=sysinit.target
EOF

Reload systemd manager configuration.
sudo systemctl daemon-reload

Test it by running something CPU intensive and watching the current running MHz.

cat /dev/urandom > /dev/null &
lscpu |grep "CPU MHz"

CPU MHz: 3499.859

Now disable turbo boost and check the CPU speed again.
sudo systemctl start disable-turbo-boost
lscpu |grep "CPU MHz"

CPU MHz: 2699.987

Don’t forget to kill the CPU intensive process 🙂

kill %1

If you want to disable turbo boost on boot by default, just enable the service.

sudo systemctl enable disable-turbo-boost

Krebs on SecurityHouse Passes Long-Sought Email Privacy Bill

The U.S. House of Representatives on Monday approved a bill that would update the nation’s email surveillance laws so that federal investigators are required to obtain a court-ordered warrant for access to older stored emails. Under the current law, U.S. authorities can legally obtain stored emails older than 180 days using only a subpoena issued by a prosecutor or FBI agent without the approval of a judge.

The House passed the Email Privacy Act (HR 387) by a voice vote. The bill amends the Electronic Communications Privacy Act (ECPA), a 1986 statute that was originally designed to protect Americans from Big Brother and from government overreach. Unfortunately, the law is now so outdated that it actually provides legal cover for the very sort of overreach it was designed to prevent.

Online messaging was something of a novelty when lawmakers were crafting ECPA, which gave email moving over the network essentially the same protection as a phone call or postal letter. In short, it required the government to obtain a court-approved warrant to gain access to that information.

But the U.S. Justice Department wanted different treatment for stored electronic communications. Congress struck a compromise, decreeing that after 180 days email would no longer be protected by the warrant standard and instead would be available to the government with an administrative subpoena and without requiring the approval of a judge.

HR 387’s sponsor Kevin Yoder (R-Kan.) explained in a speech on the House floor Monday that back when ECPA was passed, hardly anybody stored their personal correspondence “in the cloud.” He said the thinking at the time was that “if an individual was leaving an email on a third-party server it was akin to that person leaving their paper mail in a garbage can at the end of their driveway.”

“Thus, that individual had no reasonable expectation of privacy in regards to that email under the Fourth Amendment,” Yoder said.

Lee Tien, a senior staff attorney with the Electronic Frontier Foundation (EFF), said a simple subpoena also can get law enforcement the following information about communications records (in addition to the content of emails stored at a service provider for more than 180 days):

-local and long distance telephone connection records, or records of session times and durations;
-length of service (including start date) and types of service utilized;
-telephone or instrument number or other subscriber number or identity, including any temporarily assigned network address; and
-means and source of payment for such service (including any credit card or bank account number), of a subscriber to or customer of such service when the governmental entity uses an administrative subpoena authorized by a Federal or State statute or a Federal or State grand jury or trial subpoena.

The Email Privacy Act does not force investigators to jump through any additional hoops for accessing so-called “metadata” messaging information about stored communications, such as the Internet address or email address of a message sender. Under ECPA, the “transactional” data associated with communications — such as dialing information showing what numbers you are calling — was treated as less sensitive. ECPA allows the government to use something less than a warrant to obtain this routing and signaling information.

The rules are slightly different in California, thanks to the passage of CalECPA, a law that went into effect in 2016. CalECPA not only requires California government entities to obtain a search warrant before obtaining or accessing electronic information, it also requires a warrant for metadata.

Activists who’ve championed ECPA reform for years are cheering the House vote, but some are concerned that the bill may once again get hung up in the Senate. Last year, the House passed the bill in a unanimous 419-0 vote, but the measure stalled in the Senate.

The EFF’s Tien said he’s worried that the bill heading to the Senate may not have the support of the Trump administration, which could hinder its chances in a Republican-controlled chamber.

“The Senate is a very different story, and it was a different story last year when Democrats had more votes,” Tien said.

Whether the bill even gets considered by the Senate at all is bound to be an issue again this year.

“I feel a little wounded because it’s been a hard fight,” Tien said. “It hasn’t been an easy fight to get this far.”

The U.S. government is not in the habit of publishing data about subpoenas it has requested and received, but several companies that are frequently on the receiving end of such requests do release aggregate numbers. For example, Apple, Facebook, Google, Microsoft and Twitter all publish transparency reports. They’re worth a read.

For a primer on protecting your communications from prying eyes and some tools to help preserve your privacy, check out the EFF’s Surveillance Self-Defense guide.


LongNowLong Business: A Family’s Secret to a Millennium of Sake-Making

The Sudo family has been making sake for almost 900 years in Japan’s oldest brewery. Genuemon Sudo, who is the 55th generation of his family to carry on the tradition, said that at the root of Sudo’s longevity is a commitment to protecting the natural environment:

Sake is made from rice. Good rice comes from good soil. Good soil comes from fresh and high-quality water. Such water comes from protecting our trees. Protecting the natural environment makes excellent sake.

The natural environment of the Sudo brewery was tested as never before during the 02011 earthquake and subsequent nuclear meltdown. The ancient trees surrounding the brewery absorbed the quake’s impact, saving it from destruction. The water in the wells, which the Sudo family feared was poisoned by nuclear radiation, was deemed safe after radioactive analysis.

Damaged by the quake but not undone, the Sudo brewery continues a family tradition almost a millennium in the making, with the trees, as Genuemon Sudo put it, “supporting us every step of the way.”

In looking at the list of the world’s longest-lived institutions, it is hard to ignore that many of them provide tangible goods to people, such as a room to sleep in, or a libation to drink. Studying places like the Sudo brewery was part of the inspiration for creating The Interval, our own space that inspires long-term thinking.

CryptogramProfile of Citizen Lab and Ron Deibert

Here's a nice profile of Citizen Lab and its director, Ron Deibert.

Citizen Lab is a jewel. There should be more of them.

Google Adsense5 steps to improve Page Speed and boost page performance

The eighth installment of the #SuccessStack takes a second look at page speed, specifically tips you can implement that may improve your metrics.

Last week the #SuccessStack illustrated lots of reasons why mobile Page Speed is critically important to the ongoing success of your publishing business. Now you can explore what you can do that could improve this metric and boost your overall page performance as a result.

Step 1: See how much more you could earn

Before you put time and effort into improving your mobile speed, you want to see what it’s worth to you. This useful tool will help you calculate how much more you could earn with a faster mobile experience. Keep in mind, however, that it does not account for user experience or user loyalty, both of which are also impacted by mobile speed.

Step 2: Look at how you measure up

Using tools to measure different aspects of your site will help you identify areas for improvement more easily than if you were to just estimate. Here are a few of our favorites:
  • Page Speed Insights analyzes your site performance, scoring its speed and user experience and identifies issues to fix. The best practice is a score of 85 or above.1
  • Webpagetest provides a Speed Index that indicates the average time at which visible parts of the page are displayed. Aim for a Speed Index of 3,000 or less and load time of 3 seconds or less — ideally 1-3 seconds.2
  • Chrome DevTools is a versatile real-time tool for evaluating your website’s performance right in the browser. You can simulate network and CPU speeds, examine network loading details and see how your site’s code is impacting your page.
  • Mobile-Friendly Test is designed specifically for mobile sites. This tool analyzes exactly how mobile-friendly the site is, and focuses on elements beyond speed as well. 
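For a rough command-line spot check alongside these tools, curl's write-out timers report server latency for a single fetch (example.com is a placeholder URL; this measures only the raw HTML response, not full page rendering):

```shell
# Time-to-first-byte and total transfer time for one page fetch
curl -o /dev/null -s \
  -w "ttfb: %{time_starttransfer}s  total: %{time_total}s\n" \
  "https://example.com/"
```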

Step 3: Have a clear out
Reduce the size of your pages.
  • Target 50 or fewer requests and 1,000 or fewer bytes to optimize load time. 
  • Compress and select efficient images, and prioritize download of visible content.
Assess the ads and trackers running on your page.
  • Use a tool to measure the bandwidth and latency impact of pixels and other elements on your pages (e.g., Ghostery). Evaluate if trackers are needed and used, and if they provide enough benefit.
  • Review latency of your ad partners, especially those delivering video ads, and remove low performing monetization partners.
Step 4: Prioritize the order your page loads in

It sounds obvious, but prioritizing loading of the elements that are visible above the fold will enhance your user experience, even if your net page loading speed doesn’t change.
  • Prioritize loading elements that are visible above the fold first: Minimize the amount of pieces that show above the fold of visible content. Load styling, javascript logic and images that are only accessed after direct interaction later. 
  • Enable HTTPS and HTTP/2: Support modern HTTPS to provide site integrity, encryption, authentication, and better user experience. More than 1-in-3 of the top 100 sites run on modern HTTPS, and a quarter of them use HTTPS by default.
  • Limit server requests where possible: Each mobile page makes an average of 214 server requests,3 some of which happen simultaneously and some that can only happen one after the other. Review each request on your site to understand the benefit it provides.  
Step 5: Measure, test, repeat

As the shift to mobile continues to grow, so will users’ expectations of lightning-fast experiences across the web. This means that improving your mobile speed isn’t a one-off job; you need to have a process in place to regularly evaluate and improve it. Follow the steps outlined above at regular intervals and record the results of the adjustments you make to refer back to when deciding on new optimization techniques in the future.
  • Continually assess your ad-related calls to remove low performing monetization partners.
  • Pick third-party ad-tech partners with lower latency.
  • Remove or reduce any bulky content.
  • Consolidate data and analytics tags.
  • Investigate open-source tools such as Accelerated Mobile Pages (AMP) and Progressive Web Apps (PWA). 
Implementing the strategies outlined in this article could have a serious positive impact on your business. Check out these inspirational stories from Sinclair News and What to Expect to see how significant shifts in mobile speed were achieved with a few technical tweaks.

Next steps
From your interest in Page Speed, you’re clearly committed to doing all you can to improve the performance of your site and grow your publishing business. With this in mind, you may benefit from a chat with one of our experts. They can offer a personalized consultation to help you make the right technology choices to support your business growth. Book a time.


1. Google Developers
2. Google and kissmetrics

TEDRemembering Hans Rosling

Bounding up on stage with the energy of 1,000 suns and his special extra-long pointer, Swedish professor Hans Rosling became a data rock star, dedicated to giving his audience a truer picture of the world. Photo: Asa Mathat

Is the world getting worse every day in every way, as some news media would have you believe? No. In fact, the most reliable data shows that in meaningful ways — such as child mortality rate, literacy rate, human lifespan — the world is actually, slowly and measurably, getting better.

Hans Rosling dedicated the latter part of his distinguished career to making sure the world knew that. And in his 10 TED Talks — the most TED Talks by a single person ever posted — he hammered the point home again and again. As he told us once: “You see, it is very easy to be an evidence-based professor lecturing about global theory, because many people get stuck in wrong ideas.”

Using custom software (or sometimes, just using a few rocks), he and his team ingested data from sources like the World Bank (fun story: their data was once locked away until Hans’ efforts helped open it to the world) and turned it into bright, compelling movable graphs that showed the complex story of global progress over time, while tweaking everyone’s expectations and challenging us to think and to learn.

We’re devastated to announce that Hans passed away this morning, surrounded by family. As his children announced on their shared website, Gapminder: “Across the world, millions of people use our tools and share our vision of a fact-based worldview that everyone can understand. We know that many will be saddened by this message. Hans is no longer alive, but he will always be with us and his dream of a fact-based worldview, we will never let die!”

Google AdsenseMore control over the ads you serve on your site(s)

We’ve heard the feedback that publishers like you have shared about wanting more ways to control the ads showing on your site(s). Today, we’re excited to share some recent updates to General Category Blocking that give you more controls to opt in to or filter out certain categories of ads -- putting you in charge.

General Category Blocking was introduced in some countries in 2010 to help you scalably prevent competitors’ ads and ads that may not fit your audience from appearing on your site, by letting you opt out of showing ads from certain categories and subcategories such as “Finance,” “Apparel,” and “Tourism.” Over the last few weeks we’ve increased the number of categories and subcategories from 250 to 470. This update lets you be more granular and block more specific categories without overblocking and having an unwanted negative impact on earnings. For example, instead of blocking the category “Apparel,” you can now pick any of the new subcategories “Sunglasses,” “Handbags,” or “Watches.” With this update we’ve also expanded our supported languages to include Chinese (simplified), Dutch, Polish, Russian, and Turkish and made this feature available in every country supported by AdSense.

With more options available it’s important to carefully consider the impact of blocking, or allowing, a category. Blocking a general category could lower your potential earnings as the affected advertisers’ bids are excluded from the auction. On the other hand, with more subcategories available you may have the opportunity to be more granular and pick the subcategory that most closely matches the type of ad you want to prevent -- minimizing the negative impact on your earnings.

We know how important it is to have control over the ads that appear on your site, and this update adds to the set of controls already available in AdSense. We’d love to hear what you think about these updates, so please leave your feedback in the comments field below.
Posted by: Konkona Kundu, Product Manager

CryptogramCryptkeeper Bug

The Linux encryption app Cryptkeeper has a rather stunning security bug: the single-character decryption key "p" decrypts everything:

The flawed version is in Debian 9 (Stretch), currently in testing, but not in Debian 8 (Jessie). The bug appears to be a result of a bad interaction with the encfs encrypted filesystem's command line interface: Cryptkeeper invokes encfs and attempts to enter paranoia mode with a simulated 'p' keypress -- instead, it sets passwords for folders to just that letter.
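The underlying failure mode is worth spelling out: Cryptkeeper drives encfs's interactive prompts blindly, feeding a fixed script of keystrokes. When the order or presence of prompts changes, an input meant for one question silently answers another. Here is a minimal toy model of that mismatch — the prompt sequences and function names are illustrative, not the real encfs interface:

```python
# Toy model of scripting an interactive CLI by feeding canned answers.
# The prompt order differs between "versions", so the same canned input
# means different things -- the mechanism behind the Cryptkeeper bug.

def encfs_old(lines):
    """Old-style dialogue: first a mode prompt, then a password prompt."""
    it = iter(lines)
    mode = next(it)          # 'p' selects paranoia mode
    password = next(it)      # the actual password
    return {"mode": mode, "password": password}

def encfs_new(lines):
    """New-style dialogue: no mode prompt; the first line is the password."""
    it = iter(lines)
    password = next(it)      # the scripted 'p' lands here
    return {"mode": "standard", "password": password}

# The wrapper blindly scripts the old dialogue:
scripted_input = ["p", "hunter2"]

print(encfs_old(scripted_input))  # password is 'hunter2', as intended
print(encfs_new(scripted_input))  # password silently becomes 'p'
```

The robust alternative is to match on the actual prompt text before answering (as expect-style tools do), or to avoid driving interactive prompts entirely and pass secrets through a dedicated non-interactive channel such as a file descriptor.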

In 2013, I wrote an essay about how an organization might go about designing a perfect backdoor. This one seems much more like a bad mistake than deliberate action. It's just too dumb, and too obvious. If anyone actually used Cryptkeeper, it would have been discovered long ago.

TEDA Q&A with Thordis Elva and Tom Stranger

In October 2016, a group gathered in San Francisco for the TEDWomen 2016 conference, this year themed around the idea of time. One talk was given by Thordis Elva and Tom Stranger, who took the stage to share a story that took place in 1996, when Stranger raped Elva, then his girlfriend. The talk had a powerful effect on the audience in the room, and is now available online. We had some follow-up questions.

This is an incredibly personal topic and talk. What made you decide to go so public with your story?
Thordis Elva:
I’d be lying if I said that it was an easy decision, and questions about how it’ll be received have entered my mind regularly on this journey. But most importantly, I know in my heart that hearing a story like the one I share with Tom would have made a world of difference to me when I was younger. As a survivor, it would have helped me realize that the shame wasn’t mine to carry, and that there is hope of finding happiness in life even after a shattering experience like rape. Also, if hearing our story could help potential perpetrators realize how imperative it is to get consent for all sexual activity, ultimately lessening the likelihood of them abusing other people, that would be a goal worth striving for.

Tom Stranger: The road from that dire night in 1996 until now has been long and fraught with uncertainty. We acknowledge that the choices and pains in our past are not unique to us, and are situated within a deeply pervasive societal issue. In going public with our TED Talk and book, and speaking to the 20-year path of reconciliation and dialogue behind us, we are not seeking to offer a manual or methodology to other survivors or perpetrators of rape, but to simply offer a story — a personal communication that can possibly give hope to others, and add our voices to the public discussion that is now seeking to better comprehend and address this multifarious issue.

How did you prepare for the talk?
TS: It took considered writing, online discussions from different time zones, significant editing and frequent rehearsal, and wouldn’t have been possible without the guidance of our two incredible coaches.

The chronology of our history was divided into periods, and we each committed to talking to these stages honestly. We did our best to keep our diverse audience in the fore of our minds, and selected the language that would be suitably considerate but also do justice to our individual parts. The intense period prior to the talk was dedicated to rehearsing, and this saw the talk repeated to the point of knowing that we retained the words well enough to invest in them the authentic feelings they deserved.

TE: After we laid this groundwork, we worked individually on committing the talk to memory, sometimes in very odd circumstances. I admit that for a while there, I was the strange lady who was talking to herself on the bus.

Tom, you show obvious contrition in the talk. Nonetheless, it will almost certainly be difficult for some people to see you take center stage like this. What would you say to someone who feels like you’re trying to portray yourself as some kind of hero for speaking up?

TS: I recognize this as valid questioning, and wouldn’t offer any argument to counter such a response. As much as I can convey, I recognize that just me being up on stage, and for people to see and hear me, would be challenging and triggering for some.

I acknowledge that my past choices invalidate any suitability for present or future praise.

The risk of me receiving any commendation for being on a TED stage with Thordis, and speaking to our history, is something that both of us have endeavored to avoid. I believe owning one’s past choices should be viewed as neither brave nor heroic in any way, but instead a necessary obligation and acknowledgement of individual culpability.

I’m also deeply invested in learning any ways to better the approach I use to share my part in our history.

Thordis, did you ever hesitate to give the man who raped you a platform like this?

TE: Yes, I did. I understand those who are inclined to criticize me as someone who enabled a perpetrator to have a voice in this discussion. But I believe that a lot can be learned by listening to those who have been a part of the problem — if they’re willing to become part of the solution — about what ideas and attitudes drove their violent actions, so we can work on uprooting them effectively. After having dedicated my career to preventing sexual violence, attending conferences on this subject around the world for over a decade now, it’s come to my attention how it’s often perceived as a women’s issue when it’s really a human issue that affects every country on the face of the planet. It’s my belief that all of us are needed when it comes to preventing violence and creating safer communities.

Tom, when did you tell your friends and family about the rape? How did they react?
I first sat down with my family in 2011. Since then I have gradually told my inner circles about my history and past choices. I have now had many honest conversations, and feel I’m better at voicing the words that accurately explain my actions that night and the consequences, for both Thordis, and myself.

I am blessed with a loving, understanding and supportive network of friends and family, who have, for the most part, seen me as more than my actions. Primarily, the reactions I’ve received have been receptive, quiet and thoughtful. Confusion and rumination have been common, as I kept this a dark secret for many years.

Understandably, my relationships with some people close to me have been, and will be, affected. I hold neither judgement nor altered opinion of friends or family who see me differently once they have learned about my occasion of perpetrating rape.

How did people respond after you gave the talk at TED Women?
I was caught off guard by the gratitude received from women and men who came up to me and voiced their support. It was immensely strengthening to have our story received in that way. Being at a conference that focused on issues pertaining to women’s current experiences all around the world, I was honored to be able to speak at such a forum, and to such a community. To receive any confirmation that attendees saw value and importance in the sharing of our story was a truly humbling and profound experience.

In saying this, I also received some precious and hugely valuable critical feedback. I still carry with me the warm responses we received, but the constructive and personal appraisals and reactions enabled me to better understand the positions and experiences of others, and will help me talk to our story with more sensitivity and awareness in the future.

TE: In short, the response was warmer than I had dared to hope and ever so humbling. Among other things, I received tight hugs, kind emails and encouraging words that will forever be dear to my heart, and serve as an inspiration to continue with this work. It’ll be interesting to see what the global response will be like, and I’m expecting a wide range of reactions.

How would you like the public to respond to your story? What do you believe is the “idea worth spreading” within it?
Sexual violence is one of the biggest threats to the lives of women and children around the world, and it also affects the lives of many men too. A problem of this size and magnitude calls for necessary shifts in attitude, one being that those who perpetrate violence shoulder the responsibility for it, as opposed to those who are subjected to it. Far too often, survivors are wrongly blamed and shamed, or in extreme cases even killed for having been abused. This fosters a culture of silence, which enables further violence, so I believe it to be a vicious circle. The strongest countermeasure to silence is raising your voice.

That said, Tom and I are not presenting ourselves as role models whose actions should be championed in any way. We’re simply sharing our story in the hope that it can be of use to people who share a similar experience, and serve as a reminder to everyone about the importance of sexual consent.

TS: I don’t believe that as a perpetrator, I personally have any right to expect or predict how individuals respond to our story.

I can only hope that if listening to our story evokes a difficult or painful response, that the individual knows there is help out there, and has access to this support.

If the public engages with our story, I hope that we’ve illustrated that silence and denial can be toxic, that it’s erroneous to address sexual violence as solely a women’s issue, and that we each can have a role in finding solutions to this global issue.